The first law of thermodynamics states that energy can be converted from one form to another, but cannot be created or destroyed. Life itself revolves around energy: over the course of a single day, a person uses energy in all sorts of ways, and whether driving a car or eating lunch, consuming some form of energy is unavoidable. While it may seem that energy is created for our purposes and destroyed in the process, there is in fact no change in the total amount of energy in the world. Taking this a step further, the entirety of the energy in the universe is constant; energy is merely converted from one form into another.
System and Surroundings
To obtain a better understanding of the workings of energy within the universe, it is helpful to divide it into two distinct parts: the energy of a specific system, $E_{sys}$, and the energy of everything outside the system, which we label the energy of the surroundings, $E_{surr}$. Since these two parts together make up the total energy of the universe, $E_{univ}$, it can be concluded that
$E_{univ} = E_{sys} + E_{surr} \label{1}$
Now, since we stated previously that the total amount of energy within the universe does not change, any change in the energy of the system and the surroundings must satisfy
$ΔE_{sys} + ΔE_{surr} = 0 \label{2}$
A simple rearrangement of Equation \ref{2} leads to the following conclusion
$ΔE_{sys} = -ΔE_{surr} \label{3}$
Equation \ref{3} expresses a very important premise of energy conservation: any change in the energy of a system is accompanied by an equal but opposite change in the surroundings. This essentially summarizes the First Law of Thermodynamics, which states that energy cannot be created or destroyed.
Figure: if 15 joules of energy flows out of the system, an equal 15 joules is added to the surroundings.
Types of Energy
Now that the conservation of energy has been defined, one can study the different energies of a system. Within a system, there are three main types of energy: kinetic (the energy of motion), potential (energy stored within a system as a result of placement or configuration), and internal (energy associated with electronic and intramolecular forces). Thus, the following equation can be given
$E_{total} = KE + PE + U \label{4}$
where KE is the kinetic energy, PE is the potential energy, U is the internal energy, and $E_{total}$ is the total energy of the system. While all forms of energy are very important, the internal energy, U, is what will receive the remainder of the focus.
Internal Energy, U
As stated previously, U is the energy associated with electronic and intramolecular forces. Despite the abundance of forces and interactions that may be occurring within a system, it is nearly impossible to calculate its absolute internal energy; instead, the change in internal energy, ΔU, is measured. The change in U is governed by two distinct quantities: heat, q, and work, w. Heat refers to the total amount of energy transferred to or from a system as a result of thermal contact. Work refers to the total amount of energy transferred to or from a system as a result of changes in the external parameters (volume, pressure). Applying this, the following equation can be given
$ΔU = q + w \label{5}$
For an infinitesimal change, Equation \ref{5} becomes
$dU = dq + dw \label{6}$
Within this equation it should be noted that U is a state function and therefore independent of pathways while $q$ and $w$ are not.
Having defined heat and work, it becomes necessary to establish when a process exhibits positive or negative values of q and w. Table 1 gives the sign conventions for both.
| Process | Sign |
|---|---|
| Work done by the system on the surroundings | - |
| Work done on the system by the surroundings | + |
| Heat absorbed by the system from the surroundings (endothermic) | + |
| Heat absorbed by the surroundings from the system (exothermic) | - |

Table 1. Sign Conventions for Work and Heat
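To make the sign conventions concrete, here is a minimal Python sketch that applies Equation \ref{5} with the conventions of Table 1 (the function name and the sample numbers are illustrative, not taken from the text):

```python
def internal_energy_change(q, w):
    """Return dU = q + w (Equation 5).

    Sign conventions (Table 1):
      q > 0: heat absorbed by the system (endothermic)
      q < 0: heat released to the surroundings (exothermic)
      w > 0: work done on the system by the surroundings
      w < 0: work done by the system on the surroundings
    """
    return q + w

# A system absorbs 50 J of heat while doing 20 J of work on the surroundings:
dU = internal_energy_change(q=50.0, w=-20.0)
print(f"dU = {dU} J")  # dU = 30.0 J
```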
Contributors and Attributions
• Daniel Haywood (Hope College)
• David Todd (Hope College)
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/The_Four_Laws_of_Thermodynamics/First_Law_of_Thermodynamics/First_Law_of_Thermodynamics.txt
The Second Law of Thermodynamics states that the entropy of the universe, treated as an isolated system, always increases over time; equivalently, the change in the entropy of the universe can never be negative.
Introduction
Why is it that when you leave an ice cube at room temperature, it begins to melt? Why do we get older and never younger? And why is it that whenever rooms are cleaned, they become messy again in the future? Certain things happen in one direction and not the other; this is called the "arrow of time," and it encompasses every area of science. The thermodynamic arrow of time is entropy, a measure of disorder within a system. Denoted $\Delta S$, the change in entropy suggests that time itself is asymmetric with respect to the order of an isolated system, meaning a system becomes more disordered as time increases.
Major players in developing the Second Law
• Nicolas Léonard Sadi Carnot was a French physicist who is considered the "father of thermodynamics," for he is responsible for the origins of the Second Law of Thermodynamics, as well as various other concepts. The current form of the second law uses entropy rather than caloric, the quantity Sadi Carnot used to describe the law. Caloric relates to heat, and Sadi Carnot came to realize that some caloric is always lost in the working cycle; thus, perfect thermodynamic reversibility is unattainable, and every real system that performs work involves some irreversibility.
• Rudolf Clausius was a German physicist, and he developed the Clausius statement, which says "Heat generally cannot flow spontaneously from a material at a lower temperature to a material at a higher temperature."
• William Thomson, also known as Lord Kelvin, formulated the Kelvin statement, which states: "It is impossible to convert heat completely into work in a cyclic process." This means there is no way to convert all the energy of a system into work without losing some energy.
• Constantin Carathéodory, a Greek mathematician, created his own statement of the second law, arguing that "In the neighborhood of any initial state, there are states which cannot be approached arbitrarily close through adiabatic changes of state."
Probabilities
If a given state can be accomplished in more ways, then it is more probable than a state that can be accomplished in only one or a few ways.
Suppose the pieces of a jigsaw puzzle are jumbled in their box. The probability that any given piece lands away from where it fits perfectly is very high; almost every piece will land somewhere other than its ideal position. The probability of a piece landing correctly in its position is very low, since that can happen in only one way. Thus, the misplaced jigsaw pieces have a much higher multiplicity than the correctly placed pieces, and we can correctly assume the misplaced pieces represent a higher entropy.
Derivation and Explanation
To understand why entropy increases and decreases, it is important to recognize that two entropy changes have to be considered at all times: the entropy change of the surroundings and the entropy change of the system itself. The entropy change of the universe is the sum of the changes in entropy of the system and the surroundings:
$\Delta S_{univ}=\Delta S_{sys}+\Delta S_{surr}=\dfrac{q_{sys}}{T}+\dfrac{q_{surr}}{T} \label{1}$
In an isothermal reversible expansion, the heat q absorbed by the system from the surroundings is
$q_{rev}=nRT\ln\frac{V_{2}}{V_{1}}\label{2}$
Since the heat absorbed by the system is the amount lost by the surroundings, $q_{sys}=-q_{surr}$. Therefore, for a truly reversible process, the entropy change is
$\Delta S_{univ}=\dfrac{nRT\ln\frac{V_{2}}{V_{1}}}{T}+\dfrac{-nRT\ln\frac{V_{2}}{V_{1}}}{T}=0 \label{3}$
If the process is irreversible, however, the entropy change is
$\Delta S_{univ}=\frac{nRT\ln \frac{V_{2}}{V_{1}}}{T}>0 \label{4}$
Putting the two expressions for $\Delta S_{univ}$ together for both types of processes, we are left with the second law of thermodynamics,
$\Delta S_{univ}=\Delta S_{sys}+\Delta S_{surr}\geq0 \label{5}$
where $\Delta S_{univ}$ equals zero for a truly reversible process and is greater than zero for an irreversible process. In reality, however, truly reversible processes never happen (or would take an infinitely long time to happen), so it is safe to say that all thermodynamic processes we encounter every day are irreversible in the direction they occur.
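To see Equations \ref{3} and \ref{4} numerically, here is a short Python sketch for one mole of an ideal gas doubling its volume isothermally (the numbers are illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dS_gas(n, V1, V2):
    """Entropy change of an ideal gas in an isothermal volume change:
    dS = q_rev / T = nR ln(V2/V1)."""
    return n * R * math.log(V2 / V1)

n, V1, V2 = 1.0, 1.0, 2.0
dS_sys = dS_gas(n, V1, V2)

# Reversible path: the surroundings lose exactly the entropy the system gains.
print(f"reversible:   dS_univ = {dS_sys + (-dS_sys):.3f} J/K")  # 0.000

# Irreversible free expansion into vacuum: no heat flows, so dS_surr = 0.
print(f"irreversible: dS_univ = {dS_sys + 0.0:.3f} J/K")        # 5.763 > 0
```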
The second law of thermodynamics can also be stated that "all spontaneous processes produce an increase in the entropy of the universe".
Gibbs Free Energy
Given another equation:
$\Delta S_{total}=\Delta S_{univ}=\Delta S_{surr}+\Delta S_{sys} \label{6}$
At constant temperature and pressure, the entropy change of the surroundings is $\Delta S_{surr}=-\Delta H_{sys}/T$, since the heat released by the system is absorbed by the surroundings. Substituting this into the previous formula and multiplying through by $-T$ results in the following.
$-T \, \Delta S_{univ}=\Delta H_{sys}-T\, \Delta S_{sys} \label{7}$
If the left side of the equation is defined as $\Delta G$, where $G$ is known as the Gibbs energy or free energy, the equation becomes
$\Delta G=\Delta H-T\Delta S \label{8}$
Now it is much simpler to determine whether a process is spontaneous, nonspontaneous, or at equilibrium.
• $\Delta H$ refers to the heat change for a reaction. A positive $\Delta H$ means that heat is taken from the environment (endothermic). A negative $\Delta H$ means that heat is emitted or given to the environment (exothermic).
• $\Delta G$ is a measure for the change of a system's free energy in which a reaction takes place at constant pressure ($P$) and temperature ($T$).
According to the equation, when the entropy decreases and the enthalpy increases, the free energy change, $\Delta G$, is positive and the process is not spontaneous, regardless of the temperature. Temperature comes into play when the entropy and enthalpy both increase or both decrease. When both are positive, the reaction is nonspontaneous at low temperatures and spontaneous at high temperatures. When both are negative, the reaction is spontaneous at low temperatures and nonspontaneous at high temperatures. Because all spontaneous reactions increase the entropy of the universe, one can relate the entropy change to the spontaneous nature of the reaction (Equation \ref{8}).

Table 1: Matrix of Conditions Dictating Spontaneity
| Case | $\Delta H$ | $\Delta S$ | $\Delta G$ | Answer |
|---|---|---|---|---|
| 1: high temperature | - | + | - | Spontaneous |
| 2: low temperature | - | + | - | Spontaneous |
| 3: high temperature | - | - | + | Nonspontaneous |
| 4: low temperature | - | - | - | Spontaneous |
| 5: high temperature | + | + | - | Spontaneous |
| 6: low temperature | + | + | + | Nonspontaneous |
| 7: high temperature | + | - | + | Nonspontaneous |
| 8: low temperature | + | - | + | Nonspontaneous |
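The logic of the table can be captured in a small Python sketch (the function and the sample values are illustrative, not from the text):

```python
def classify(dH_kJ, dS_JK, T_K):
    """Classify a process by dG = dH - T*dS (Equation 8).
    dH_kJ in kJ, dS_JK in J/K, T_K in kelvin."""
    dG = dH_kJ - T_K * dS_JK / 1000.0  # convert J/K to kJ/K
    if dG < 0:
        return dG, "spontaneous"
    if dG > 0:
        return dG, "nonspontaneous"
    return dG, "at equilibrium"

# Cases 5 and 6 of the table: with dH > 0 and dS > 0, the verdict
# flips with temperature.
for T in (100.0, 2000.0):
    dG, verdict = classify(dH_kJ=50.0, dS_JK=100.0, T_K=T)
    print(f"T = {T:6.0f} K: dG = {dG:7.1f} kJ -> {verdict}")
```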
Example $1$
Let's start with an easy reaction:
$2 H_{2(g)}+O_{2(g)} \rightarrow 2 H_2O_{(g)}$
The enthalpy, $\Delta H$, for this reaction is -241.82 kJ, and the entropy, $\Delta S$, of this reaction is -233.7 J/K. If the temperature is 25 °C, then there is enough information to calculate the standard free energy change, $\Delta G$.
The first step is to convert the temperature to kelvin: add 273.15 to 25 to get 298.15 K. Next, plug $\Delta H$, $\Delta S$, and the temperature into $\Delta G=\Delta H-T\Delta S$.
$\Delta G$ = -241.8 kJ - (298.15 K)(-233.7 J/K)
= -241.8 kJ + 69.68 kJ (don't forget to convert joules to kilojoules)
= -172.1 kJ
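The arithmetic can be checked in a few lines of Python, watching the joule-to-kilojoule conversion and the sign of the $-T\Delta S$ term:

```python
dH = -241.8          # kJ
dS = -233.7 / 1000   # convert J/K to kJ/K
T = 25 + 273.15      # K

dG = dH - T * dS
print(f"dG = {dG:.1f} kJ")  # dG = -172.1 kJ
```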
Example $2$
Here is a little more complex reaction:
$2 ZnO_{(s)}+2 C_{(s)} \rightarrow 2 Zn_{(s)}+2 CO_{(g)}$
This reaction occurs at room temperature (25 °C), and the enthalpy, $\Delta H$, and standard free energy, $\Delta G$, are given as -957.8 kJ and -935.3 kJ, respectively. One must work backwards, using the same equation from Example 1, since the free energy is given.
-935.3 kJ = -957.8 kJ - (298.15 K)($\Delta S$)
-(298.15 K)($\Delta S$) = 22.5 kJ (add 957.8 kJ to both sides)
$\Delta S$ = -0.0755 kJ/K (divide both sides by -298.15 K)
Multiply the entropy by 1000 to convert the answer to joules: $\Delta S$ = -75.5 J/K.
Example $3$
For the following dissociation reaction
$O_{2(g)} \rightarrow 2 O_{(g)}$
under what temperature conditions will it occur spontaneously?
Solution
Simply by viewing the reaction, one can see that the number of moles of gas increases, so the entropy increases. Now all one has to do is figure out the enthalpy of the reaction. The enthalpy is positive, because covalent bonds are broken, and breaking covalent bonds absorbs energy. Another way to determine the sign of the enthalpy is to use formation data: subtract the enthalpy of the reactants from the enthalpy of the products to calculate the total enthalpy. If the temperature is low, it is probable that $\Delta H$ is larger than $T\Delta S$, which means the reaction is not spontaneous. If the temperature is high, then $T\Delta S$ will be larger than the enthalpy, which means the reaction is spontaneous.
Example $4$
The following reaction
$CO_{(g)} + H_2O_{(g)} \rightleftharpoons CO_{2(g)} + H_{2(g)}$
occurs spontaneously under what temperature conditions? The enthalpy of the reaction is -40 kJ.
Solution
One may have to calculate the enthalpy of the reaction, but in this case it is given. A negative enthalpy means the reaction is exothermic. Now one must determine the sign of the entropy to answer the question. Using the entropy of formation data and the enthalpy of formation data, one finds that the entropy of the reaction is -42.1 J/K and the enthalpy is -41.2 kJ. Because both enthalpy and entropy are negative, the spontaneous nature varies with the temperature of the reaction. (The temperature would also determine the spontaneous nature of a reaction if both enthalpy and entropy were positive.) When the reaction occurs at low temperature, the free energy change is negative and the reaction is spontaneous. However, at high temperature the reaction becomes nonspontaneous: the $-T\Delta S$ term becomes positive and larger in magnitude than the negative enthalpy, so the free energy change turns positive.
Example $5$
Under what temperature conditions does the following reaction occur spontaneously?
$H_{2(g)} + I_{2(g)} \rightleftharpoons 2 HI_{(g)}$
Solution
Only after calculating the enthalpy and entropy of the reaction can one answer the question. The enthalpy of the reaction is calculated to be -53.84 kJ, and the entropy of the reaction is 101.7 J/K. Unlike the previous two examples, the temperature has no effect on the spontaneous nature of the reaction: if the reaction occurs at high temperature, the free energy change is negative, and $\Delta G$ is still negative if the temperature is low. Looking at the formula for the free energy change, one can easily come to the same conclusion, for there is no possible way for it to be positive. Hence, the reaction is spontaneous at all temperatures.
Application of the Second Law
The second law operates all around us all of the time; it is among the most general and powerful ideas in all of science.
Explanation of Earth's Age
When scientists tried to determine the age of the Earth during the 1800s, they failed to come close to the value accepted today, and they could not explain how the Earth had changed over time. Lord Kelvin, who was mentioned earlier, hypothesized that the Earth's surface had once been extremely hot, similar to the surface of the sun, and had been cooling at a slow pace ever since. Using this information, Kelvin applied thermodynamics to conclude that the Earth was at least twenty million years old, for it would take about that long for the Earth to cool to its current state. Twenty million years is not close to the actual age of the Earth, but this is because scientists in Kelvin's time were not aware of radioactivity, which supplies additional heat. Even though Kelvin was incorrect about the age of the planet, his use of the second law allowed him to predict a more accurate value than the other scientists of his time.
Evolution and the Second Law
Some critics claim that evolution violates the Second Law of Thermodynamics, because organization and complexity increase in evolution. However, this law refers to isolated systems only, and the Earth is not an isolated or closed system: it constantly receives energy from the sun. Order on Earth can therefore become more organized while the universe as a whole becomes more disordered, because the sun releases energy and grows more disordered in the process. This is also how the second law and cosmology are related.
Problems
1. Predict the entropy change for the conversion of SO2 to SO3: $2 SO_{2(g)} + O_{2(g)} \rightarrow 2 SO_{3(g)}$
2. True/False: If $\Delta G$ > 0, the process is spontaneous.
3. State the conditions under which a process is nonspontaneous.
4. True/False: A nonspontaneous process cannot occur without external intervention.
Answers
1. Entropy decreases
2. False
3. Case 3, Case 6, Case 7, Case 8 (Table above)
4. True
Contributors and Attributions
• Konstantin Malley, Ravneet Singh (UCD), Tianyu Duan (UCD)
Third Law of Thermodynamics
The 3rd law of thermodynamics allows us to quantify the absolute entropy of a substance. It says that a totally perfect (100% pure) crystalline structure at absolute zero (0 K) has no entropy (S = 0). Note that if the structure in question were not totally crystalline, it would retain a small amount of disorder (entropy) in space, so we could not say it had zero entropy. Even at 0 K some atomic motion remains (zero-point motion), but a perfect crystal has only one possible arrangement, so its entropy is taken to be exactly zero.
Introduction
From physics we know that the change in entropy \( \Delta S \) equals the area under the graph of heat capacity divided by temperature, C/T, over the temperature range of interest. We can extend this reasoning when trying to make sense of absolute entropies as well.
Entropy at an absolute temperature
First off, since absolute entropy depends on pressure, we must define a standard pressure; it is conventional to choose a standard pressure of 1 bar. From now on, when you see "S" we mean the absolute molar entropy at one bar of pressure. We know that \( \Delta S = S_{T=final} - S_{T=0} \); by the 3rd law, \( S_{T=0} = 0 \), so this equation becomes \( \Delta S = S_{T=final} \).
In principle, we can calculate the absolute entropy by extrapolating measured heat capacities all the way down toward zero kelvin (not exactly zero, but as close as we can possibly get). Because it is so hard to measure heat capacities at such low temperatures, we must resort to a different, much simpler approach.
Debye's Law
Debye's T³ law says that near absolute zero the heat capacity of most substances (it does not apply to metals) is \(C = bT^3\). The constant \(b\) can be found by fitting Debye's equation to experimental measurements of heat capacities extremely close to absolute zero (T = 0 K); remember that \(b\) depends on the type of substance. Debye's law can be used to calculate the molar entropy at temperatures very close to absolute zero: integrating, \( S(T) = \int_0^T (bT'^3/T')\,dT' = bT^3/3 = (1/3)C(T) \). Note that \(C\) is the molar, constant-volume heat capacity.
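A minimal Python sketch of this low-temperature entropy estimate (the value of b below is purely illustrative, not a measured constant):

```python
def debye_entropy(b, T):
    """Molar entropy from Debye's T^3 law: with C(T) = b*T**3,
    S(T) = integral of C(T')/T' dT' from 0 to T = b*T**3/3 = C(T)/3."""
    return b * T**3 / 3.0

b = 2.0e-3  # J/(mol K^4), hypothetical fitted constant
for T in (2.0, 5.0, 10.0):
    print(f"T = {T:4.1f} K: S = {debye_entropy(b, T):.4f} J/(mol K)")
```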
• Abel Mersha
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/The_Four_Laws_of_Thermodynamics/Second_Law_of_Thermodynamics.txt
Learning Objectives
• Calculate amount of electric energy for heat capacity measurement.
• Perform experiments to measure heats of reactions.
• Calculate the heats of reactions from experimental results.
• Calculate internal energies of reactions from bomb calorimeter experiments.
• Calculate enthalpies of reactions from bomb calorimetry experiments.
Chemical Energy
$\mathrm{H_2O_{\large{(l)}} \rightarrow H_{2\large{(g)}} + \dfrac{1}{2} O_2}, \hspace{20px} \Delta H = \mathrm{+285.8\: kJ/mol}$
A Chemical Energy Level Diagram
------------ H2(g) + 1/2 O2(g)
   ^       |
   |       |
+286 kJ    |  -286 kJ
   |       |
   |       v
------------ H2O(l)
We can also use an energy level diagram to show relative energy content. The energy content of $\mathrm{H_{2\large{(g)}} + 0.5\, O_2}$ is 285.8 kJ higher than that of a mole of water, $\ce{H2O}$.
Oil, gas, and food are often called energy by the news media, but more precisely they are sources of (chemical) energy -- energy stored in chemicals with a potential to be released in a chemical reaction. The released energy performs work or causes physical and chemical changes.
It is obvious that the amount of energy released in a chemical reaction is related to the amount of reactants. For example, when the amount is doubled, so is the amount of energy released.
$\mathrm{2 H_{2\large{(g)}} + O_2 \rightarrow 2 H_2O_{\large{(l)}}}, \hspace{20px} \Delta H = \mathrm{-571.6\: kJ/mol}$
Example 1 shows the calculation when the amount of reactants is only a fraction of a mole.
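As a quick illustration of this proportional scaling, here is a minimal Python sketch using the thermal equation above (the helper function and the chosen amounts are illustrative):

```python
dH_per_2mol_H2 = -571.6  # kJ for 2 H2(g) + O2(g) -> 2 H2O(l)

def heat_released(mol_H2):
    """Heat of reaction scales linearly with the amount of reactant."""
    return dH_per_2mol_H2 * (mol_H2 / 2.0)

print(heat_released(2.0))   # -571.6 kJ, the equation as written
print(heat_released(1.0))   # -285.8 kJ, half the amount, half the heat
print(heat_released(0.25))  # -71.45 kJ, a fraction of a mole
```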
Energy
Learning Objectives
• Describe heat, mechanical work (including potential and kinetic energy), light, electric, and chemical energy as forms of energy.
• Use proper units for energy.
• Explain power as rate of energy conversion from one form to another.
• Use the concept of internal energy to explain how energy of a system changes.
• Calculate the change of internal energy of a system.
Hess's Law: The Principle of Conservation of Energy
To illustrate Hess's law, the thermal equations and the corresponding energy level diagram are shown below. If
• $\mathrm{A + B = AB},\: \Delta H_1$
• $\mathrm{AB + B = AB_2},\: \Delta H_2$
then,
$\mathrm{A + 2 B = AB_2},\: \Delta H_{1\: 2} = \Delta H_1 + \Delta H_2$
=======  A + 2 B
  |    |
  |    |  ΔH1
  |    v
  |    =====  AB + B
ΔH1+2  |
  |    |  ΔH2
  v    v
=======  AB2
Chemical energy and Hess's law
The standard enthalpy of reaction and standard enthalpy of formation introduced in Chemical Energy are very useful chemical properties. We have already mentioned some basic rules regarding the quantities $\Delta H$, $\Delta H^\circ$, and $\Delta H_f$ and the equations they accompany.
If both sides of an equation are multiplied by a factor to alter the number of moles, then $\Delta H$, $\Delta H^\circ$, or $\Delta H_f$ for the equation must be multiplied by the same factor, since they are quantities per equation as written. Thus, for the equation
$\mathrm{C_{\large{(graphite)}} + 0.5\, O_2 \rightarrow CO, \hspace{20px} \mathit{\Delta H}^\circ = -110\: kJ/mol}$.
We can write it in any of the following forms:
$\mathrm{2 C_{\large{(graphite)}} + O_2 \rightarrow 2 CO}, \hspace{20px} \Delta H^\circ = \mathrm{-220\: kJ/mol}$ (multiplied by 2)

$\mathrm{6 C_{\large{(graphite)}} + 3 O_2 \rightarrow 6 CO}, \hspace{20px} \Delta H^\circ = \mathrm{-660\: kJ/mol}$ (multiplied by 6)
For the reverse reaction, the signs of these quantities are changed (multiply by -1). The equation implies the following:
$\mathrm{CO \rightarrow C_{\large{(graphite)}} + 0.5\, O_2}, \hspace{20px} \Delta H^\circ = \mathrm{110\: kJ/mol}$

$\mathrm{2 CO \rightarrow 2 C_{\large{(graphite)}} + O_2}, \hspace{20px} \Delta H^\circ = \mathrm{220\: kJ/mol}$
Hess's law states that energy changes are state functions. The amount of energy depends only on the states of the reactants and the state of the products, not on the intermediate steps. Energy (enthalpy) changes in chemical reactions are the same, regardless of whether the reactions occur in one or several steps. The total energy change in a chemical reaction is the sum of the energy changes in its many steps leading to the overall reaction.
For example, in the diagram below, we look at the oxidation of carbon into $\ce{CO}$ and $\ce{CO2}$. The direct oxidation of carbon (graphite) into $\ce{CO2}$ yields an enthalpy of -393 kJ/mol. When carbon is oxidized into $\ce{CO}$ and then $\ce{CO}$ is oxidized to $\ce{CO2}$, the enthalpies are -110 and -283 kJ/mol respectively. The sum of enthalpy in the two steps is exactly -393 kJ/mol, same as the one-step reaction.
0 kJ ------------ C(graphite) + O2
| |
-110 kJ | |
V |
CO + 0.5 O2 ----- |
| | -393 kJ
| |
-283 kJ | |
| |
V V
------------ CO2
The two-step reactions are:
$\mathrm{C + \dfrac{1}{2} O_2 \rightarrow CO}, \hspace{20px} \Delta H^\circ = \mathrm{-110\: kJ/mol}$

$\mathrm{CO + \dfrac{1}{2} O_2 \rightarrow CO_2}, \hspace{20px} \Delta H^\circ = \mathrm{-283\: kJ/mol}$
Adding the two equations together and canceling out the intermediate, $\ce{CO}$, on both sides leads to
$\mathrm{C + O_2 \rightarrow CO_2, \hspace{20px} \mathit{\Delta H}^\circ = (-110)+(-283) = -393\: kJ/mol}$
The real merit is actually to evaluate the enthalpy of formation of $\ce{CO}$ as we shall see soon.
Application of Hess's Law
Hess's law can be applied to calculate enthalpies of reactions that are difficult to measure. In the above example, it is very difficult to control the oxidation of graphite to give pure $\ce{CO}$. However, enthalpy for the oxidation of graphite to $\ce{CO2}$ can easily be measured. So can the enthalpy of oxidation of $\ce{CO}$ to $\ce{CO2}$. The application of Hess's law enables us to estimate the enthalpy of formation of $\ce{CO}$. Since,
$\mathrm{C + O_2 \rightarrow CO_2}, \hspace{20px} \Delta H^\circ = \mathrm{-393\: kJ/mol}$

$\mathrm{CO + \dfrac{1}{2} O_2 \rightarrow CO_2}, \hspace{20px} \Delta H^\circ = \mathrm{-283\: kJ/mol}$
Subtracting the second equation from the first gives
$\mathrm{C + \dfrac{1}{2} O_2 \rightarrow CO, \hspace{20px} \mathit{\Delta H}^\circ = -393 -(-283) = -110\: kJ/mol}$
The equation shows the standard enthalpy of formation of $\ce{CO}$ to be -110 kJ/mol.
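The bookkeeping behind this combination of thermal equations can be sketched in Python (a simple illustration; reactions are stored as species-to-coefficient maps, negative for reactants):

```python
# Reactions as (species -> coefficient, dH in kJ/mol);
# coefficients are negative for reactants, positive for products.
burn_C  = ({"C": -1.0, "O2": -1.0, "CO2": 1.0}, -393.0)
burn_CO = ({"CO": -1.0, "O2": -0.5, "CO2": 1.0}, -283.0)

def combine(rxn_a, rxn_b, fa=1.0, fb=1.0):
    """Form fa*rxn_a + fb*rxn_b; by Hess's law, dH combines the same way."""
    species = {}
    for (side, _dH), f in ((rxn_a, fa), (rxn_b, fb)):
        for sp, coeff in side.items():
            species[sp] = species.get(sp, 0.0) + f * coeff
    # Drop species that cancel out (here, the CO2 intermediate).
    species = {sp: c for sp, c in species.items() if abs(c) > 1e-9}
    return species, fa * rxn_a[1] + fb * rxn_b[1]

# "Subtracting" the CO combustion means adding its reverse (fb = -1):
rxn, dH = combine(burn_C, burn_CO, fa=1.0, fb=-1.0)
print(rxn, dH)  # {'C': -1.0, 'O2': -0.5, 'CO': 1.0} -110.0
```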
Application of Hess's law enables us to calculate $\Delta H$, $\Delta H^\circ$, and $\Delta H_f$ for chemical reactions that are impossible to measure directly, provided that we have all the data for the related reactions.
Some more examples are given below to illustrate the applications of Hess's Law.
From these data, we can construct an energy level diagram for these chemical combinations as follows:
===C(graphite) + 2 H2(g) + 2 O2(g)===
- 74.7 kJ | |
== CH4 (g) + 2 O2(g)== |
| |
| |
| |
| |-965.1 kJ
-890.4 kJ | | [(-2*285.8-393.5) kJ]
| |
| |
| |
| |
V V
==========CO2(g) + 2 H2O(l)==========
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Thermochemistry/Calorimetry%3A_Measuring_Heats_of_Reactions.txt
A thermodynamic cycle consists of a linked sequence of thermodynamic processes that involve transfer of heat and work into and out of the system, while varying pressure, temperature, and other state variables within the system, and eventually return the system to its initial state.
• Brayton Cycle
The Brayton Cycle is a thermodynamic cycle that describes how gas turbines operate. The idea behind the Brayton Cycle is to extract energy from flowing air and fuel to generate usable work, which can power many vehicles by giving them thrust. The most basic steps in extracting energy are compression of flowing air, combustion, and then expansion of that air to create work while also powering the compression.
• Carnot Cycle
The Carnot cycle has the greatest efficiency possible of an engine (although other cycles have the same efficiency) based on the assumption of the absence of incidental wasteful processes such as friction, and the assumption of no conduction of heat between different parts of the engine at different temperatures.
• Hess's Law
Hess's Law of Constant Heat Summation (or just Hess's Law) states that regardless of the multiple stages or steps of a reaction, the total enthalpy change for the reaction is the sum of all changes. This law is a manifestation that enthalpy is a state function.
• Hess's Law and Simple Enthalpy Calculations
Thermodynamic Cycles
The Brayton Cycle is a thermodynamic cycle that describes how gas turbines operate. The idea behind the Brayton Cycle is to extract energy from flowing air and fuel to generate usable work, which can power many vehicles by giving them thrust. The most basic steps in extracting energy are compression of flowing air, combustion, and then expansion of that air to create work while also powering the compression. The usefulness of the Brayton Cycle is tremendous: it is the backbone of many vehicles such as jets, helicopters, and even submarines.
Introduction
The first gas turbine that implemented the Brayton Cycle (not knowingly, because it was created before the Brayton Cycle was even established) was John Barber's gas turbine, patented in 1791. The idea of the machine was to compress atmospheric air in one chamber and fuel in another, with both chambers connected to a combustion vessel. Once the air had mixed with the fuel and reacted, the energy from the combustion would be used to spin a turbine to do useful work. However, given the limited technology of the late 18th century, the turbine could not both pressurize the gases and do useful work at the same time, so the design was not used.
George Brayton was an engineer who designed the first continuous-ignition combustion engine, a two-stroke engine sold under the name "Brayton's Ready Motors." The design employed the thermodynamic processes now known as the Brayton Cycle, also called the Joule Cycle. The gas turbine was patented in 1872. The design was an engine connected to a reservoir of pressurized atmospheric air and gas, which would run when a valve was opened. This released the pressurized gas into a combustion vessel, which turned pistons to create mechanical work and re-compress the gas in the reservoir.
John Barber's Gas Turbine (1791)
The Brayton Cycle for John Barber's gas turbine is incomplete, because energy is not redirected into compressing the initial gases; yet, as one of the first prominent gas turbine engines ever created, it still holds much significance. Fuel and atmospheric gas are held in different chambers and heated to increase pressure. This follows from the ideal gas law, PV = nRT: since the volume of the vessel stays constant, an increase in temperature increases pressure. The gases combine in a square compartment where a spark or flame ignites the mixture, which rapidly increases the temperature (but not the pressure, because the gas quickly escapes to spin the turbine).
Although this was a very crude gas turbine engine, it nonetheless laid a great foundation for further scientific advancement and the development of the Brayton Cycle.
George Brayton's Gas Turbine (1872)
George Brayton's gas turbine was the first and most prominent fully operational gas turbine that implemented the Brayton Cycle. Gas is pressurized and held in reservoir A, where a valve releases it to move through tunnel B and be ignited in chamber C. This is an isobaric process, because any increase in pressure would simply push the gas out of the engine. When work is done on the D piston, mechanical work can be employed for a variety of things, such as generation of electricity or movement; there is also a piston in compartment E that sucks in atmospheric air through valve F. Valve G is a fitting to a fuel cell where the fuel-to-air ratio can be set for maximum efficiency. The gas is then mechanically compressed back into reservoir A by piston E, an adiabatic process.
Modern Day Jet Gas Turbine
This is one of the many modern gas turbine engines that utilize the Brayton Cycle to power vehicles or generate power. At the front of the engine is the inlet of the compression chamber, where air is sucked in by many constantly spinning turbine blades angled for optimum air compression. Once the air is compressed enough in the middle of the engine (the combustion vessel), fuel is added to the combustion chamber and ignited; the extremely exothermic reaction causes the gas to exit violently through the expansion chamber at the back of the engine. Turbines just in front of the expansion chamber are connected to the turbines in the compression chamber, so the whole engine runs as a continuous cycle as long as a steady stream of fuel is introduced into the combustion chamber.
Ideal Brayton Cycle
A quick qualitative look at how the Brayton Cycle works is given by reviewing how a jet engine works. The gas turbine on a jet sucks in atmospheric air at the front of its engine and compresses it into the mixing/combustion chamber. In the mixing/combustion chamber, fuel is mixed with the compressed atmospheric air, ignited, and left to exit through the expansion chamber. The energy that comes out of the back of the gas turbine is work used to power the compression step as well as give thrust to the jet.
The Brayton Cycle can then be described quantitatively in the gas turbine engine of a jet by two diagrams, the Temperature/Entropy Diagram and the Pressure/Volume Diagram.
Temperature/Entropy Diagram
In this diagram, there are 8 processes that describe the Brayton cycle in terms of temperature, entropy, and pressure.
(1) Ambient air in the atmosphere, currently undisturbed.
(1 -> 2) Ambient air comes into contact with the compressor of the gas turbine, and the pressure and temperature rise dramatically. The rise in pressure comes from work being done on the air by the compressor, which packs the air into the mixer/combustion chamber, and compressing the gas into a fixed volume raises its temperature (PV = nRT). Because this is an ideal process, entropy is assumed to stay the same; thus it is an isentropic process (in reality, entropy does increase due to the flow and movement of the gas molecules).
(3 -> 5) The atmospheric air has been compacted into the combustion chamber, where gaseous fuel is mixed with the air. Once this mixture is ignited, we see a steep rise in temperature and entropy (not pressure, because the curves represent specific values of pressure, so this is an isobaric process) due to the combustion reaction of the fuel and air. Chemical bonds in the fuel are broken upon ignition, and a highly exothermic reaction occurs that raises the entropy, because the hydrocarbon chains break down into water and carbon dioxide (more molecules), and raises the temperature, due to the energy released by the exothermic reaction.
(5 -> 8) At point 5, the pressurized fuel and air leave the combustion chamber for the expansion chamber, where we see a quick drop in pressure due to the larger volume and exposure to the surroundings. The energy from the combustion chamber is used for two purposes: spinning a turbine that is connected to the compressor (which keeps the Brayton Cycle running continuously) and providing thrust. These two purposes represent point 6 and are ideally isentropic. The quick drop in pressure shows how the energy of the gas leaving the combustion chamber is used mechanically to turn the turbine that drives the compressor; this works because the energy it takes to compress the atmospheric air is less than the energy produced by the ignition of the fuel. The energy left over after spinning the turbine is then used as thrust to do work (such as flying a jet). The expelled air becomes ambient air at a higher energy level than the air at point 1, but it eventually loses energy to the surroundings (an isobaric process) and returns to the initial ambient state.
Pressure/Volume Diagram
In this diagram, there are six processes that describe the pressure and volume of the gas. A common mistake is to think the volume refers to the vessel of the reaction, when in fact it is the volume of the gas. This graph corresponds to the T-S diagram, but the two do not progress through their points at the same time (this diagram has only 6 points).
(1 -> 3) The ambient air is sucked into the compressor, where the volume of the gas quickly falls as it is compressed into the combustion chamber. As compression continues, the pressure of the gas begins to rise quickly at point 2, after the volume of the combustion chamber is filled, and peaks at point 3. At point 3, ignition occurs.
(4 -> 6) As ignition occurs, the pressure of the gas remains constant because the gas is able to escape into the expansion chamber (notice that even though gas is leaving, the compression process is still working, so any pressure lost from the gas leaving is replaced by the gas entering the combustion chamber), which results in a rise in the volume of the energized gas. As the gas leaves into the atmosphere, the pressure drops and the volume of the gas expands to what it was at point 1. Work is done by the expansion of the gas, which pushes out of the expansion chamber with high force. This force is then used to spin turbines and give thrust.
Non-Ideal/Realistic Processes of the Brayton Cycle
The ideal processes shown in the above diagrams are used to study and understand the Brayton Cycle. However, some corrections are needed when the cycle is applied to real-world problems. The first problem is that the compression process is assumed to be isentropic. This is not quite right, because the high-speed flow of the ambient air increases the entropy, so the process is not isentropic. It is better described as adiabatic: no heat is exchanged with the gas, and only mechanical work is done in compression. The same applies to the expansion process, where the gas expands in the expansion chamber but has yet to leave into the atmosphere. Ideally it is isentropic, but the expansion of the gas does increase entropy; it is more accurately adiabatic because, again, there is no heat exchange (only mechanical work done by expansion).
The non-ideal processes of the Brayton Cycle point out a problem: the work that goes into raising entropy is a leak in the amount of work that could have been used as useful mechanical energy. A set of equations can then be used to calculate the efficiency of the Brayton Cycle at given pressures and temperatures.
Efficiency of the Brayton Cycle
To find the efficiency of the Brayton Cycle, we must find out how much work each process contributes to the total internal energy. We will be analyzing the PV diagram above to do this.
First, note that the change in internal energy over the full cycle,
$\Delta U = q_1 + q_2 - w = 0$
is equal to zero: the first law of thermodynamics states that energy is neither created nor destroyed, and in the Brayton cycle the final state of the gas is its initial state, so $\Delta U = 0$.
This means
$w = q_1 + q_2$
where $q_1$ is the heat added in the combustor (positive) and $q_2$ is the heat rejected to the atmosphere after expansion (negative).
If we treat the gas as a perfect gas with constant specific heats, the heat added in the combustor is
$q_1 = c_p(T_c - T_b)$
and the heat rejected to the atmosphere is
$q_2 = c_p(T_a - T_d)$
where each difference is the final temperature minus the initial temperature of that step, so $q_1 > 0$ and $q_2 < 0$. (On the PV curve, the combustion process has $q_1 = c_p(T_4 - T_3)$.) The temperature labels $T_a$ through $T_d$ are defined below.
Having expressed the heat gained and lost in terms of temperatures, we can write the thermal efficiency $\eta$ as
$\eta = \dfrac{\text{net work}}{\text{heat in}} = \dfrac{c_p[(T_c - T_b) - (T_d - T_a)]}{c_p(T_c - T_b)} = 1 - \dfrac{T_d - T_a}{T_c - T_b} = 1 - \dfrac{T_a(T_d/T_a - 1)}{T_b(T_c/T_b - 1)}$
where $T_c$ is the final temperature of the combustion process, $T_b$ is the temperature just before combustion, $T_a$ is the initial temperature of the undisturbed gas, and $T_d$ is the temperature of the gas after it has been expelled. The corresponding points on the PV graph are a = 2, b = 3, c = 4, d = 6.
The smaller the ratio $(T_d - T_a)/(T_c - T_b)$, the higher the efficiency of the Brayton cycle: more heat input into the system and less heat lost to the atmosphere reduce the ratio and raise the efficiency.
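A minimal Python sketch of this efficiency formula (the four temperatures below are illustrative, not taken from the text):

```python
def brayton_efficiency(T_a, T_b, T_c, T_d):
    """Ideal Brayton cycle efficiency with constant c_p:
    eta = 1 - (T_d - T_a) / (T_c - T_b).

    T_a: undisturbed ambient air; T_b: after compression (before combustion);
    T_c: after combustion; T_d: expelled exhaust. All in kelvin."""
    return 1.0 - (T_d - T_a) / (T_c - T_b)

eta = brayton_efficiency(T_a=300.0, T_b=600.0, T_c=1400.0, T_d=700.0)
print(f"efficiency = {eta:.1%}")  # efficiency = 50.0%
```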
Problems
1. There are four processes in the ideal and non-ideal Brayton Cycle. List them (both ideal and non-ideal) in the correct order and briefly say what is happening (with regard to a jet engine gas turbine).
2. During the combustion phase of the Brayton Cycle, mark what happens to the following conditions:
| Condition | Increase | Decrease | Constant |
|---|---|---|---|
| Entropy (ideal: mark x / non-ideal: mark o) | | | |
| Temperature | | | |
| Volume | | | |
| Pressure | | | |
3. During the combustion phase of the Brayton Cycle, the temperature before ignition was 300 K. After ignition, the temperature of the gaseous mixture was 4000 K. Assume the heat absorbed from the atmosphere to be 5000 K. Find the efficiency of that gas turbine.
4. From the data in question 3, find $q_1$ and $q_2$, assuming $c_p = 5$.
5. Why does entropy stay constant (what is the reasoning) in the ideal Brayton Cycle but change in the non-ideal Brayton Cycle?
Solutions
1. Ideal
1. Isentropic - Entropy stays constant as compression occurs.
2. Isobaric - Pressure constant as combustion occurs and heat is evolved.
3. Isentropic - Entropy stays constant as expansion occurs.
4. Isobaric - Pressure constant as heat from the reaction goes into the atmosphere.
Non-Ideal
1. Adiabatic - Compression (no heat exchange).
2. Isobaric - Same as ideal.
3. Adiabatic - Expansion (no heat exchange).
4. Isobaric - Same as ideal.
2.
| Condition | Increase | Decrease | Constant |
|---|---|---|---|
| Entropy (ideal: x / non-ideal: o) | x o (+ energy = + entropy) | | |
| Temperature | x (exothermic) | | |
| Volume | x (expansion) | | |
| Pressure | | | x (isobaric; gas escapes before pressure builds) |
3. $1 - (4000 - 300)/5000 = 0.26$, which means the turbine is running at 26% efficiency.
4. For the combustion phase, $q_1 = c_p(T_c - T_b) = 5 \times (4000 - 300) = 18{,}500$ (heat added, so positive).
$q_2$ cannot be found because the initial temperature of the ambient air was not given, so there is no solution for it.
5. Ideally, compression or expansion does not cause entropy (disorder in the system) to change, because nothing causes the molecules to increase in energy as they are pushed together or pulled apart, so entropy stays constant. In the real world, however, the mechanical energy pushing the molecules together (the compression turbine) causes them to gain molecular energy, which does increase the entropy of the system.
• Khoa Nguyen
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Thermodynamic_Cycles/Brayton_Cycle.txt
In the early 19th century, steam engines came to play an increasingly important role in industry and transportation. However, a systematic set of theories of the conversion of thermal energy to motive power by steam engines had not yet been developed. Nicolas Léonard Sadi Carnot (1796-1832), a French military engineer, published Reflections on the Motive Power of Fire in 1824. The book proposed a generalized theory of heat engines, as well as an idealized model of a thermodynamic system for a heat engine that is now known as the Carnot cycle. Carnot developed the foundation of the second law of thermodynamics, and is often described as the "Father of thermodynamics."
The Carnot Cycle
The Carnot cycle consists of the following four processes:
1. A reversible isothermal gas expansion process. In this process, the ideal gas in the system absorbs an amount of heat $q_{in}$ from a heat source at a high temperature $T_{high}$, expands, and does work on the surroundings.
2. A reversible adiabatic gas expansion process. In this process, the system is thermally insulated. The gas continues to expand and do work on the surroundings, which causes the system to cool to a lower temperature, $T_{low}$.
3. A reversible isothermal gas compression process. In this process, the surroundings do work on the gas at $T_{low}$, causing a loss of heat, $q_{out}$.
4. A reversible adiabatic gas compression process. In this process, the system is thermally insulated. The surroundings continue to do work on the gas, which causes the temperature to rise back to $T_{high}$.
P-V Diagram
The P-V diagram of the Carnot cycle is shown in Figure $2$. In isothermal processes I and III, ∆U=0 because ∆T=0. In adiabatic processes II and IV, q=0. Work, heat, ∆U, and ∆H of each process in the Carnot cycle are summarized in Table $1$.
Table $1$: Work, heat, ∆U, and ∆H in the P-V diagram of the Carnot Cycle.
| Process | w | q | ΔU | ΔH |
|---|---|---|---|---|
| I | $-nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)$ | $nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)$ | 0 | 0 |
| II | $n\bar{C_{v}}(T_{low}-T_{high})$ | 0 | $n\bar{C_{v}}(T_{low}-T_{high})$ | $n\bar{C_{p}}(T_{low}-T_{high})$ |
| III | $-nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ | $nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ | 0 | 0 |
| IV | $n\bar{C_{v}}(T_{high}-T_{low})$ | 0 | $n\bar{C_{v}}(T_{high}-T_{low})$ | $n\bar{C_{p}}(T_{high}-T_{low})$ |
| Full Cycle | $-nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)-nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ | $nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)+nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ | 0 | 0 |
T-S Diagram
The T-S diagram of the Carnot cycle is shown in Figure $3$. In isothermal processes I and III, ∆T=0. In adiabatic processes II and IV, ∆S=0 because dq=0. ∆T and ∆S of each process in the Carnot cycle are shown in Table $2$.
Table $2$: ∆T and ∆S in the T-S diagram of the Carnot Cycle.

| Process | ΔT | ΔS |
|---|---|---|
| I | 0 | $nR\ln\left(\dfrac{V_{2}}{V_{1}}\right)$ |
| II | $T_{low}-T_{high}$ | 0 |
| III | 0 | $nR\ln\left(\dfrac{V_{4}}{V_{3}}\right)$ |
| IV | $T_{high}-T_{low}$ | 0 |
| Full Cycle | 0 | 0 |

The ΔS entries follow from $\Delta S = q_{rev}/T$ using the heats in Table 1; the entropy gained in the isothermal expansion (I) is positive and is exactly canceled by the entropy lost in the isothermal compression (III) over the full cycle.
Efficiency
The Carnot cycle is the most efficient engine possible, based on the assumption of the absence of incidental wasteful processes such as friction, and the assumption of no conduction of heat between different parts of the engine at different temperatures. The efficiency of the Carnot engine is defined as the ratio of the net work done by the engine to the heat it absorbs.
\begin{align*} \text{efficiency} &=\dfrac{\text{net work done by heat engine}}{\text{heat absorbed by heat engine}} =\dfrac{-w_{sys}}{q_{high}} \\[4pt] &=\dfrac{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)+nRT_{low}\ln \left(\dfrac{V_{4}}{V_{3}}\right)}{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)} \end{align*}
Since processes II (2-3) and IV (4-1) are adiabatic,
$\left(\dfrac{T_{2}}{T_{3}}\right)^{C_{V}/R}=\dfrac{V_{3}}{V_{2}}$
and
$\left(\dfrac{T_{1}}{T_{4}}\right)^{C_{V}/R}=\dfrac{V_{4}}{V_{1}}$
Since $T_1 = T_2$ and $T_3 = T_4$, the left-hand sides of these two relations are equal; equating the right-hand sides gives
$\dfrac{V_{3}}{V_{4}}=\dfrac{V_{2}}{V_{1}}$
Therefore,
$\text{efficiency}=\dfrac{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)-nRT_{low}\ln\left(\dfrac{V_{2}}{V_{1}}\right)}{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)}$
$\boxed{\text{efficiency}=\dfrac{T_{high}-T_{low}}{T_{high}}}$
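A short Python check of the boxed result (the reservoir temperatures are illustrative):

```python
def carnot_efficiency(T_high, T_low):
    """Carnot efficiency: eta = (T_high - T_low) / T_high, in kelvin."""
    return (T_high - T_low) / T_high

# An engine running between a 500 K reservoir and a 300 K sink:
print(f"{carnot_efficiency(500.0, 300.0):.0%}")  # 40%
```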
Summary
The Carnot cycle has the greatest efficiency possible of an engine (although other cycles have the same efficiency) based on the assumption of the absence of incidental wasteful processes such as friction, and the assumption of no conduction of heat between different parts of the engine at different temperatures.
Problems
1. You are now operating a Carnot engine at 40% efficiency, which exhausts heat into a heat sink at 298 K. If you want to increase the efficiency of the engine to 65%, to what temperature would you have to raise the heat reservoir?
2. A Carnot engine absorbed 1.0 kJ of heat at 300 K, and exhausted 400 J of heat at the end of the cycle. What is the temperature at the end of the cycle?
3. An indoor heater operating on the Carnot cycle is warming the house up at a rate of 30 kJ/s to maintain the indoor temperature at 72 ºF. What is the power operating the heater if the outdoor temperature is 30 ºF?
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Thermodynamic_Cycles/Carnot_Cycle.txt
Hess's Law of Constant Heat Summation (or just Hess's Law) states that regardless of the multiple stages or steps of a reaction, the total enthalpy change for the reaction is the sum of all changes. This law is a manifestation that enthalpy is a state function.
Introduction
Hess's Law is named after the Russian chemist and physician Germain Hess. Hess helped formulate the early principles of thermochemistry; his most famous paper, published in 1840, included his law of thermochemistry. Hess's law follows from enthalpy being a state function, which allows us to calculate the overall change in enthalpy by simply summing the changes for each step along the way until the product is formed. All steps must proceed at the same temperature, and the equations for the individual steps must sum to the overall equation. The principle underlying Hess's law applies not just to enthalpy; it can be used to calculate changes in other state functions, such as Gibbs energy and entropy.
Definition: Hess's Law
The enthalpy change $\Delta H^\circ$ for a specific reaction is equal to the sum of the enthalpy changes for any set of reactions which, in sum, are equivalent to the overall reaction:
(Although we have not considered the restriction, applicability of this law requires that all reactions considered proceed under similar conditions: we will consider all reactions to occur at constant pressure.)
Application
Hydrogen gas, which is of potential interest nationally as a clean fuel, can be generated by the reaction of carbon (coal) and water:
$C_{(s)} + 2 H_2O_{(g)} \rightarrow CO_{2\, (g)} + 2 H_{2\, (g)} \tag{2}$
Calorimetry reveals that this reaction requires the input of 90.1 kJ of heat for every mole of $C_{(s)}$ consumed. By convention, when heat is absorbed during a reaction, we consider the quantity of heat to be a positive number: in chemical terms, $q > 0$ for an endothermic reaction. When heat is evolved, the reaction is exothermic and $q < 0$ by convention.
It is interesting to ask where this input energy goes when the reaction occurs. One way to answer this question is to consider the fact that the reaction converts one fuel, $C_{(s)}$, into another, $H_{2(g)}$. To compare the energy available in each fuel, we can measure the heat evolved in the combustion of each fuel with one mole of oxygen gas. We observe that
$C_{(s)}+O_{2(g)} \rightarrow CO_{2(g)} \tag{3}$
produces $393.5\, kJ$ for one mole of carbon burned; hence $q=-393.5\, kJ$. The reaction
$2 H_{2(g)} + O_{2(g)} \rightarrow 2 H_2O_{(g)} \tag{4}$
produces 483.6 kJ for two moles of hydrogen gas burned, so q=-483.6 kJ. It is evident that more energy is available from combustion of the hydrogen fuel than from combustion of the carbon fuel, so it is not surprising that conversion of the carbon fuel to hydrogen fuel requires the input of energy. Of considerable importance is the observation that the heat input in equation [2], 90.1 kJ, is exactly equal to the difference between the heat evolved, -393.5 kJ, in the combustion of carbon and the heat evolved, -483.6 kJ, in the combustion of hydrogen. This is not a coincidence: if we take the combustion of carbon and add to it the reverse of the combustion of hydrogen, we get
$C_{(s)}+O_{2(g)} \rightarrow CO_{2(g)}$
$2 H_2O_{(g)} \rightarrow 2 H_{2(g)} + O_{2(g)}$
$C_{(s)} + O_{2(g)} + 2 H_2O_{(g)} \rightarrow CO_{2(g)} + 2 H_{2(g)} + O_{2(g)} \tag{5}$
Canceling the $O_{2(g)}$ from both sides, since it is neither a net reactant nor a net product, equation [5] is equivalent to equation [2]. Thus, taking the combustion of carbon and "subtracting" the combustion of hydrogen (or more accurately, adding the reverse of the combustion of hydrogen) yields equation [2]. And the heat of the combustion of carbon minus the heat of the combustion of hydrogen equals the heat of equation [2]. By studying many chemical reactions in this way, we discover that this result, known as Hess's Law, is general.
Why it works
A pictorial view of Hess's Law as applied to the heat of equation [2] is illustrative. In figure 1, the reactants C(s) + 2 H2O(g) are placed together in a box, representing the state of the materials involved in the reaction prior to the reaction. The products CO2(g) + 2 H2(g) are placed together in a second box representing the state of the materials involved after the reaction. The reaction arrow connecting these boxes is labeled with the heat of this reaction. Now we take these same materials and place them in a third box containing C(s), O2(g), and 2 H2(g). This box is connected to the reactant and product boxes with reaction arrows, labeled by the heats of reaction in equation [3] and equation [4].
This picture of Hess's Law reveals that the heat of reaction along the "path" directly connecting the reactant state to the product state is exactly equal to the total heat of reaction along the alternative "path" connecting reactants to products via the intermediate state containing $C_{(s)}$, $O_{2(g)}$, and 2 $H_{2(g)}$. A consequence of our observation of Hess's Law is therefore that the net heat evolved or absorbed during a reaction is independent of the path connecting the reactant to product (this statement is again subject to our restriction that all reactions in the alternative path must occur under constant pressure conditions).
A slightly different view of figure 1 results from beginning at the reactant box and following a complete circuit through the other boxes leading back to the reactant box, summing the net heats of reaction as we go. We discover that the net heat transferred (again provided that all reactions occur under constant pressure) is exactly zero. This is a statement of the conservation of energy: the energy in the reactant state does not depend upon the processes which produced that state. Therefore, we cannot extract any energy from the reactants by a process which simply recreates the reactants. Were this not the case, we could endlessly produce unlimited quantities of energy by following the circuitous path which continually reproduces the initial reactants.
By this reasoning, we can define an energy function whose value for the reactants is independent of how the reactant state was prepared. Likewise, the value of this energy function in the product state is independent of how the products are prepared. We choose this function, H, so that the change in the function, $\Delta H = H_{products} - H_{reactants}$, is equal to the heat of reaction q under constant pressure conditions. H, which we call the enthalpy, is a state function, since its value depends only on the state of the materials under consideration, that is, the temperature, pressure, and composition of these materials.
The concept of a state function is somewhat analogous to the idea of elevation. Consider the difference in elevation between the first floor and the third floor of a building. This difference is independent of the path we choose to get from the first floor to the third floor. We can simply climb up two flights of stairs, or we can climb one flight of stairs, walk the length of the building, then walk up a second flight of stairs. Or we can ride the elevator. We could even walk outside and have a crane lift us to the roof of the building, from which we climb down to the third floor. Each path produces exactly the same elevation gain, even though the distance traveled is significantly different from one path to the next. This is simply because elevation is a "state function": our elevation, standing on the third floor, is independent of how we got to the third floor, and the same is true of the first floor. Since elevation is thus a state function, the elevation gain is independent of the path.

Now, the existence of an energy state function H is of considerable importance in calculating heats of reaction. Consider the prototypical reaction in subfigure 2.1, with reactants R being converted to products P. We wish to calculate the heat absorbed or released in this reaction, which is ΔH. Since H is a state function, we can follow any path from R to P and calculate ΔH along that path. In subfigure 2.2, we consider one such possible path, consisting of two reactions passing through an intermediate state containing all the atoms involved in the reaction, each in elemental form. This is a useful intermediate state, since it can be used for any possible chemical reaction. For example, in figure 1, the atoms involved in the reaction are C, H, and O, each of which is represented in the intermediate state in elemental form. We can see in subfigure 2.2 that the ΔH for the overall reaction is now the difference between the ΔH in the formation of the products P from the elements and the ΔH in the formation of the reactants R from the elements.
The ΔH values for formation of each material from the elements are thus of general utility in calculating ΔH for any reaction of interest. We therefore define the standard formation reaction for reactant R as

elements in standard state → R
and the heat involved in this reaction is the standard enthalpy of formation, designated by ΔHf°. The subscript f, standing for "formation," indicates that the ΔH is for the reaction creating the material from the elements in standard state. The superscript ° indicates that the reactions occur under constant standard pressure conditions of 1 atm. From subfigure 2.2, we see that the heat of any reaction can be calculated from
$\Delta{H^°} = \sum \Delta{H^°_{f,products}} - \sum \Delta{H^°_{f,reactants}} \tag{6}$
Extensive tables of ΔHf° values (Table T1) have been compiled that allow us to calculate with complete confidence the heat of reaction for any reaction of interest, even including hypothetical reactions which may be difficult to perform or impossibly slow to react.
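Because Equation (6) is simple bookkeeping over tabulated values, it is easy to automate. The following minimal Python sketch assumes a small, illustrative set of ΔHf° values (kJ/mol at 298 K); the dictionary contents, function name and example reaction are choices of this illustration, not excerpts from Table T1.

```python
# Minimal sketch of Equation (6): dH = sum(n * dHf, products) - sum(n * dHf, reactants).
# The dHf values below (kJ/mol, 298 K) are illustrative; consult Table T1 for real work.
DHF = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

def reaction_enthalpy(reactants, products):
    """Each argument maps a species label to its stoichiometric coefficient (mol)."""
    side_sum = lambda side: sum(n * DHF[s] for s, n in side.items())
    return side_sum(products) - side_sum(reactants)

# Combustion of methane: CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l)
dH = reaction_enthalpy({"CH4(g)": 1, "O2(g)": 2}, {"CO2(g)": 1, "H2O(l)": 2})
print(f"dH = {dH:.1f} kJ")  # about -890.3 kJ, the familiar heat of combustion of methane
```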
Example 1
The enthalpy of a reaction does not depend on the elementary steps, but on the final state of the products and initial state of the reactants. Enthalpy is an extensive property and hence changes when the size of the sample changes. This means that the enthalpy of the reaction scales proportionally to the moles used in the reaction. For instance, in the following reaction, one can see that doubling the molar amounts simply doubles the enthalpy of the reaction.
H2 (g) + 1/2 O2 (g) → H2O (l) ΔH° = -286 kJ
2 H2 (g) + O2 (g) → 2 H2O (l) ΔH° = -572 kJ
The sign of the reaction enthalpy changes when a process is reversed.
H2 (g) + 1/2 O2 (g) → H2O (l) ΔH° = -286 kJ
When switched:
H2O (l) → H2 (g) + 1/2 O2 (g) ΔH° = +286 kJ
Since enthalpy is a state function, it is path independent. Therefore, it does not matter what reactions one uses to obtain the final reaction.
Contributors and Attributions
• Shelly Cohen (UCD)
Hess's Law and Simple Enthalpy Calculations
Hess's Law is used to do some simple enthalpy change calculations involving enthalpy changes of reaction, formation and combustion. The enthalpy change accompanying a chemical change is independent of the route by which the chemical change occurs.
Hess's Law is the most important law in this part of chemistry and most calculations follow from it. Hess's Law is saying that if you convert reactants A into products B, the overall enthalpy change will be exactly the same whether you do it in one step or two steps or however many steps. If you look at the change on an enthalpy diagram, that is actually fairly obvious.
This shows the enthalpy changes for an exothermic reaction using two different ways of getting from reactants A to products B. In one case, you do a direct conversion; in the other, you use a two-step process involving some intermediates.
In either case, the overall enthalpy change must be the same, because it is governed by the relative positions of the reactants and products on the enthalpy diagram. If you go via the intermediates, you do have to put in some extra heat energy to start with, but you get it back again in the second stage of the reaction sequence. However many stages the reaction is done in, ultimately the overall enthalpy change will be the same, because the positions of the reactants and products on an enthalpy diagram will always be the same.
You can do calculations by setting them out as enthalpy diagrams as above, but there is a much simpler way of doing it which needs virtually no thought. You could set out the above diagram as:
Hess's Law says that the overall enthalpy change in these two routes will be the same. That means that if you already know two of the values of enthalpy change for the three separate reactions shown on this diagram (the three black arrows), you can easily calculate the third - as you will see below. The big advantage of doing it this way is that you don't have to worry about the relative positions of everything on an enthalpy diagram. It is completely irrelevant whether a particular enthalpy change is positive or negative.
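As a concrete illustration with invented numbers: suppose the direct route is A → B, the indirect route is A → C followed by C → B, and the two known legs are ΔH(A → C) = -120 kJ and ΔH(C → B) = +30 kJ. Hess's Law equates the two routes, so the unknown arrow follows by simple addition:

$\Delta H_{A \rightarrow B} = \Delta H_{A \rightarrow C} + \Delta H_{C \rightarrow B} = (-120\, \text{kJ}) + (+30\, \text{kJ}) = -90\, \text{kJ}$

Knowing any two of the three arrows fixes the third in exactly this way.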
Although most calculations you will come across will fit into a triangular diagram like the above, you may also come across other slightly more complex cases needing more steps. You need to take care in choosing your two routes. The pattern will not always look like the one above.
01: Structure of Solid Surfaces
In most technological applications, metals are used either in a finely divided form (e.g. supported metal catalysts) or in a massive, polycrystalline form (e.g. electrodes, mechanical fabrications). At the microscopic level, most materials, with the notable exception of a few truly amorphous specimens, can be considered as a collection or aggregate of single crystal crystallites. The surface chemistry of the material as a whole is therefore crucially dependent upon the nature and type of surfaces exposed on these crystallites. In principle, therefore, we can understand the surface properties of any material if we (1) know which surface planes are exposed on these crystallites, and (2) understand the structure and properties of each such plane. (This approach assumes that we can neglect the possible influence of crystal defects and solid state interfaces on the surface chemistry.) It is therefore vitally important that we can independently study different, well-defined surfaces. The most commonly employed approach is to prepare macroscopic (i.e. size ~ cm) single crystals of metals and then to deliberately cut them in a way which exposes a large area of the specific surface of interest.
Most metals only exist in one bulk structural form; the most common metallic crystal structures are:
• fcc (face-centered cubic)
• hcp (hexagonal close packed)
• bcc (body-centered cubic)
For each of these crystal systems, there are in principle an infinite number of possible surfaces which can be exposed. In practice, however, only a limited number of planes (predominantly the so-called "low-index" surfaces) are found to exist in any significant amount, and we can concentrate our attention on these surfaces. Furthermore, it is possible to predict the ideal atomic arrangement at a given surface of a particular metal by considering how the bulk structure is intersected by the surface. Firstly, however, we need to look in detail at the bulk crystal structures.
1.02: Miller Indices (hkl)
The orientation of a surface or a crystal plane may be defined by considering how the plane (or indeed any parallel plane) intersects the main crystallographic axes of the solid. The application of a set of rules leads to the assignment of the Miller Indices (hkl), which are a set of numbers which quantify the intercepts and thus may be used to uniquely identify the plane or surface.
The following treatment of the procedure used to assign the Miller Indices is a simplified one (it may be best if you simply regard it as a "recipe") and only a cubic crystal system (one having a cubic unit cell with dimensions a x a x a ) will be considered.
The procedure is most easily illustrated using an example so we will first consider the following surface/plane:
Step 1: Identify the intercepts on the x-, y- and z- axes.
In this case the intercept on the x-axis is at x = a (at the point (a,0,0)), but the surface is parallel to the y- and z-axes; strictly, therefore, there is no intercept on these two axes, but we shall consider the intercept to be at infinity (∞) for the special case where the plane is parallel to an axis. The intercepts on the x-, y- and z-axes are thus
Intercepts: a, ∞, ∞
Step 2: Specify the intercepts in fractional co-ordinates
Co-ordinates are converted to fractional co-ordinates by dividing by the respective cell-dimension - for example, a point (x,y,z) in a unit cell of dimensions a x b x c has fractional co-ordinates of ( x/a, y/b, z/c ). In the case of a cubic unit cell each co-ordinate will simply be divided by the cubic cell constant, a . This gives
Fractional Intercepts: a/a, ∞/a, ∞/a, i.e. 1, ∞, ∞
Step 3: Take the reciprocals of the fractional intercepts
This final manipulation generates the Miller Indices which (by convention) should then be specified without being separated by any commas or other symbols. The Miller Indices are also enclosed within standard brackets (….) when one is specifying a unique surface such as that being considered here.
The reciprocals of 1 and ∞ are 1 and 0 respectively, thus yielding
Miller Indices: (100)
So the surface/plane illustrated is the (100) plane of the cubic crystal.
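The three-step recipe above is mechanical enough to express in a few lines of code. The Python sketch below assumes a cubic cell and represents an axis the plane never crosses by math.inf; the function name and the use of Fraction to clear fractional reciprocals are illustrative choices of this sketch.

```python
from fractions import Fraction
from math import inf, lcm

def miller_indices(intercepts, a=1.0):
    """Steps 1-3 for a cubic cell of edge a: fractional intercepts -> reciprocals
    -> clear fractions. Use math.inf for an axis the plane never crosses."""
    recips = [Fraction(0) if x == inf else Fraction(a / x).limit_denominator()
              for x in intercepts]
    # An intercept of 2a would give reciprocal 1/2; the lcm clears such fractions.
    denom = lcm(*(r.denominator for r in recips))
    return tuple(int(r * denom) for r in recips)

print(miller_indices([1.0, inf, inf]))   # (1, 0, 0)
print(miller_indices([1.0, 1.0, 1.0]))   # (1, 1, 1)
print(miller_indices([0.5, 1.0, inf]))   # (2, 1, 0)
```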
Other Examples
1. The (110) surface
Assignment
Intercepts: a, a, ∞
Fractional intercepts: 1, 1, ∞
Miller Indices: (110)
2. The (111) surface
Assignment
Intercepts: a, a, a
Fractional intercepts: 1, 1, 1
Miller Indices: (111)
The (100), (110) and (111) surfaces considered above are the so-called low index surfaces of a cubic crystal system (the "low" refers to the Miller indices being small numbers - 0 or 1 in this case). These surfaces have a particular importance, but there are an infinite number of other planes that may be defined using Miller index notation. We shall just look at one more …
3. The (210) surface
Assignment
Intercepts: ½ a, a, ∞
Fractional intercepts: ½, 1, ∞
Miller Indices: (210)
Further notes:
1. in some instances the Miller indices are best multiplied or divided through by a common number in order to simplify them, for example by removing a common factor. This operation simply generates a parallel plane which is at a different distance from the origin of the particular unit cell being considered, e.g. (200) is transformed to (100) by dividing through by 2.
2. if any of the intercepts are at negative values on the axes then the negative sign will carry through into the Miller indices; in such cases the negative sign is denoted by a bar over the relevant number, e.g. (00-1) is instead denoted by (00$\bar{1}$).
3. in the hcp crystal system there are four principal axes; this leads to four Miller Indices e.g. you may see articles referring to an hcp (0001) surface. It is worth noting, however, that the intercepts on the first three axes are necessarily related and not completely independent; consequently the values of the first three Miller indices are also linked by a simple mathematical relationship.
What are symmetry-equivalent surfaces?
In the following diagram the three highlighted surfaces are related by the symmetry elements of the cubic crystal - they are entirely equivalent.
In fact there are a total of 6 faces related by the symmetry elements and equivalent to the (100) surface - any surface belonging to this set of symmetry related surfaces may be denoted by the more general notation {100} where the Miller indices of one of the surfaces is instead enclosed in curly-brackets.
Final important note: in the cubic system the (hkl) plane and the vector [hkl], defined in the normal fashion with respect to the origin, are normal to one another but this characteristic is unique to the cubic crystal system and does not apply to crystal systems of lower symmetry.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Surface_Science_(Nix)/01%3A_Structure_of_Solid_Surfaces/1.01%3A_Introduction.txt
|
Many of the technologically most important metals possess the fcc structure, e.g., the catalytically important precious metals (Pt, Rh, Pd) all exhibit an fcc structure. The low index faces of this system are the most commonly studied of surfaces: as we shall see, they exhibit a range of:
• Surface symmetry
• Surface atom coordination
• Surface reactivity
The fcc (100) Surface
The (100) surface is that obtained by cutting the fcc metal parallel to the front surface of the fcc cubic unit cell - this exposes a surface (the atoms in blue) with an atomic arrangement of 4-fold symmetry
fcc unit cell (100) face
The diagram below shows the conventional birds-eye view of the (100) surface - this is obtained by rotating the preceding diagram through 45° to give a view which emphasizes the 4-fold (rotational) symmetry of the surface layer atoms.
The tops of the second layer atoms are just visible through the holes in the first layer, but would not be accessible to molecules arriving from the gas phase.
Exercise \(1\)
What is the coordination number of the surface layer atoms on the fcc(100) surface ?
Answer
The coordination number of the surface layer atoms is 8
Rationale: Each surface atom has four nearest neighbors in the 1st layer, and another four in the layer immediately below ; a total of 8. This contrasts with the CN of metal atoms in the bulk of the solid which is 12 for a fcc metal.
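These coordination numbers can be double-checked numerically by building a small block of the fcc lattice and counting nearest neighbours. In this hypothetical sketch the lattice constant is set to 1 and the (100) surface is created simply by discarding all atoms above the z = 0 plane; the helper functions are illustrative, not from any particular library.

```python
import itertools
import numpy as np

def fcc_points(n, a=1.0):
    """fcc lattice points filling a (2n+1)^3 block of cubic cells of edge a."""
    basis = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]]) * a
    cells = np.array(list(itertools.product(range(-n, n + 1), repeat=3))) * a
    return (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3)

def coordination(points, site, a=1.0):
    """Count neighbours at the fcc nearest-neighbour distance a/sqrt(2)."""
    d = np.linalg.norm(points - site, axis=1)
    return int(np.sum(np.isclose(d, a / np.sqrt(2))))

pts = fcc_points(3)
origin = np.zeros(3)
print(coordination(pts, origin))              # bulk atom: CN = 12
surface = pts[pts[:, 2] <= 0.0]               # cleave on z = 0 -> fcc(100) surface
print(coordination(surface, origin))          # fcc(100) surface atom: CN = 8
```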
There are several other points worthy of note:
1. All the surface atoms are equivalent
2. The surface is relatively smooth at the atomic scale
3. The surface offers various adsorption sites for molecules which have different local symmetries and lead to different coordination geometries - specifically there are:
• On-top sites (above a single metal atom)
• Bridging sites, between two atoms
• Hollow sites, between four atoms
Depending upon the site occupied, an adsorbate species (with a single point of attachment to the surface) is therefore likely to be bonded to either one, two or four metal atoms.
The fcc(110) surface
The (110) surface is obtained by cutting the fcc unit cell in a manner that intersects the x and y axes but not the z-axis - this exposes a surface with an atomic arrangement of 2-fold symmetry.
fcc unit cell (110) face
The diagram below shows the conventional birds-eye view of the (110) surface - emphasizing the rectangular symmetry of the surface layer atoms. The diagram has been rotated such that the rows of atoms in the first atomic layer now run vertically, rather than horizontally as in the previous diagram.
It is clear from this view that the atoms of the topmost layer are much less closely packed than on the (100) surface: in one direction (along the rows) the atoms are in contact, i.e. the distance between atoms is equal to twice the metallic (atomic) radius, but in the orthogonal direction there is a substantial gap between the rows.
This means that the atoms in the underlying second layer are also, to some extent, exposed at the surface.
(110) surface plane, e.g. Cu(110)
The preceding diagram illustrates some of those second layer atoms, exposed at the bottom of the troughs.
In this case, the determination of atomic coordination numbers requires a little more careful thought: one way to double-check your answer is to remember that the CN of atoms in the bulk of the fcc structure is 12, and then to subtract those which have been removed from above in forming the surface plane.
Exercise \(2\)
What is the coordination number of the topmost layer atoms on the fcc(110) surface?
Answer
The coordination number of the topmost layer atoms is 7
Rationale: Each surface atom has two nearest neighbors in the 1st layer, and another four in the layer immediately below, and one directly below it in the third layer; this gives a total of 7.
To confirm this, consider those that have been removed from the layers above - clearly there would have been 4 nearest neighbors in the layer immediately above the surface layer (equivalent to the four in the layer immediately below). In addition, there would have been one nearest neighbor directly above each surface atom (equivalent to the one directly below in the third layer). Hence, 7 (present) + 5 (removed) = 12, which is correct!
If we compare this coordination number with that obtained for the (100) surface, it is worth noting that the surface atoms on a more open ("rougher") surface have a lower CN; this has important implications when it comes to the chemical reactivity of surfaces.
Question
Do the atoms in the second layer have the bulk coordination ?
No - the fact that they are clearly exposed (visible) at the surface implies that they have a lower CN than they would in the bulk.
Exercise \(3\)
What is the coordination number of these second layer atoms on the fcc(110) surface ?
Answer
The coordination number of the second layer atoms is 11
Rationale: The atoms in the second layer are only missing one atom from their complete coordination shell ( the atom that would have been directly above them) i.e. they have CN = (12-1) = 11
In summary, we can note that
1. All first layer surface atoms are equivalent, but second layer atoms are also exposed
2. The surface is atomically rough, and highly anisotropic
3. The surface offers a wide variety of possible adsorption sites, including:
• On-top sites
• Short bridging sites between two atoms in a single row
• Long bridging sites between two atoms in adjacent rows
• Higher coordination sites ( in the troughs)
The fcc (111) Surface
The (111) surface is obtained by cutting the fcc metal in such a way that the surface plane intersects the x-, y- and z- axes at the same value - this exposes a surface with an atomic arrangement of 3-fold ( apparently 6-fold, hexagonal) symmetry. This layer of surface atoms actually corresponds to one of the close-packed layers on which the fcc structure is based.
fcc unit cell (111) face
The diagram below shows the conventional birds-eye view of the (111) surface - emphasizing the hexagonal packing of the surface layer atoms. Since this is the most efficient way of packing atoms within a single layer, they are said to be "close-packed".
(111) surface plane, e.g. Pt(111)
Exercise \(4\)
What is the coordination number of the surface layer atoms on the fcc(111) surface?
Answer
The coordination number of the surface layer atoms is 9
Rationale: Each surface atom has six nearest neighbors in the 1st layer, and another three in the layer immediately below ; a total of 9.
The following features are worth noting ;
1. All surface atoms are equivalent and have a relatively high CN
2. The surface is almost smooth at the atomic scale
3. The surface offers the following adsorption sites:
• On-top sites
• Bridging sites, between two atoms
• Hollow sites, between three atoms
How do these surfaces intersect in irregular-shaped samples?
Flat surfaces of single crystal samples correspond to a single Miller Index plane and, as we have seen, each individual surface has a well-defined atomic structure. It is these flat surfaces that are used in most surface science investigations, but it is worth a brief aside to consider what type of surfaces exist for an irregular shaped sample (but one that is still based on a single crystal). Such samples can exhibit facets corresponding to a range of different Miller Index planes. This is best illustrated by looking at the diagrams below.
Summary
Depending upon how an fcc single crystal is cleaved or cut, flat surfaces of macroscopic dimensions which exhibit a wide range of structural characteristics may be produced. The single crystal surfaces discussed here ((100), (110) & (111)) represent only the most frequently studied surface planes of the fcc system - however, they are also the most commonly occurring surfaces on such metals, and the knowledge gained from studies on this limited selection of surfaces goes a long way toward developing our understanding of the surface chemistry of these metals. For further information on other fcc metal surfaces you should take a look at Section 1.8, which includes a brief description of high index fcc surfaces with illustrative examples.
This important class of metallic structures includes metals such as Co, Zn, Ti & Ru.
The Miller index notation used to describe the orientation of surface planes is slightly more complex in this case, since the crystal structure does not lend itself to description using a standard Cartesian set of axes. Instead, the notation is based upon three axes at 120° in the close-packed plane, and one axis (the c-axis) perpendicular to these planes. This leads to a four-digit index; however, since the third of these indices is redundant, it is sometimes left out.
I. The hcp (0001) surface
This is the most straightforward of the hcp surfaces since it corresponds to a surface plane which intersects only the c-axis, being coplanar with the other 3 axes i.e. it corresponds to the close packed planes of hexagonally arranged atoms that form the basis of the structure. It is also sometimes referred to as the (001) surface.
Figure (0001) surface plane, e.g. Ru(0001)
This conventional plan view of the (0001) surface shows the hexagonal packing of the surface layer atoms.
Summary
We can summarize the characteristics of this surface by noting that:
1. All the surface atoms are equivalent and have CN=9
2. The surface is almost smooth at the atomic scale
3. The surface offers the following adsorption sites:
• On-top sites
• Bridging sites, between two atoms
• Hollow sites, between three atoms
1.05: Surface Structures- bcc metals
A number of important metals ( e.g. Fe, W, Mo ) have the bcc structure. As a result of the low packing density of the bulk structure, the surfaces also tend to be of a rather open nature with surface atoms often exhibiting rather low coordination numbers.
I. The bcc (100) surface
The (100) surface is obtained by cutting the metal parallel to the front surface of the bcc cubic unit cell - this exposes a relatively open surface with an atomic arrangement of 4-fold symmetry.
bcc unit cell (100) face
The diagram below shows a plan view of this (100) surface - the atoms of the second layer (shown on left) are clearly visible, although probably inaccessible to any gas phase molecules.
bcc (100) surface plane, e.g. Fe(100)
II. The bcc (110) surface
The (110) surface is obtained by cutting the metal in a manner that intersects the x and y axes but creates a surface parallel to the z-axis - this exposes a surface which has a higher atom density than the (100) surface.
bcc unit cell (110) face
The following diagram shows a plan view of the (110) surface - the atoms in the surface layer strictly form an array of rectangular symmetry, but the surface layer coordination of an individual atom is quite close to hexagonal.
bcc(110) surface plane, e.g. Fe(110)
III. The bcc (111) surface
The (111) surface of bcc metals is similar to the (111) face of fcc metals only in that it exhibits a surface atomic arrangement exhibiting 3-fold symmetry - in other respects it is very different.
Top View: bcc(111) surface plane e.g. Fe(111)
In particular it is a very much more open surface with atoms in both the second and third layers clearly visible when the surface is viewed from above. This open structure is also clearly evident when the surface is viewed in cross-section as shown in the diagram below in which atoms of the various layers have been annotated.
Side View: bcc(111) surface plane, e.g. Fe(111)
1.06: Energetics of Surfaces
All surfaces are energetically unfavorable in that they have a positive free energy of formation. A simple rationalization for why this must be the case comes from considering the formation of new surfaces by cleavage of a solid and recognizing that bonds have to be broken between atoms on either side of the cleavage plane in order to split the solid and create the surfaces. Breaking bonds requires work to be done on the system, so the surface free energy (surface tension) contribution to the total free energy of a system must therefore be positive.
The unfavorable contribution to the total free energy may, however, be minimized in several ways:
1. By reducing the amount of surface area exposed
2. By predominantly exposing surface planes which have a low surface free energy
3. By altering the local surface atomic geometry in a way which reduces the surface free energy
The first and last points are considered elsewhere (1.7 Particulate Metals, & 1.6 Relaxation and Reconstruction, respectively ) - only the second point will be considered further here.
Of course, systems already possessing a high surface energy (as a result of the preparation method) will not always readily interconvert to a lower energy state at low temperatures due to the kinetic barriers associated with the restructuring - such systems (e.g. highly dispersed materials such as those in colloidal suspensions or supported metal catalysts) are thus "metastable".
The phenomena of relaxation and reconstruction involve rearrangements of surface (and near surface) atoms, this process being driven by the energetics of the system, i.e. the desire to reduce the surface free energy (see Energetics of Surfaces). As with all processes, there may be kinetic limitations which prevent or hinder these rearrangements at low temperatures. Both processes may occur with clean surfaces in ultrahigh vacuum, but it must be remembered that adsorption of species onto the surface may enhance, alter or even reverse the process!
I. Relaxation
Relaxation is a small and subtle rearrangement of the surface layers which may nevertheless be significant energetically, and seems to be commonplace for metal surfaces. It involves adjustments in the layer spacings perpendicular to the surface; there is no change either in the periodicity parallel to the surface or in the symmetry of the surface.
Figure (left) Unrelaxed Surface and (right) Relaxed Surface with \(d_{1-2} < d_{bulk}\).
The right picture shows the relaxed surface: the first layer of atoms is typically drawn in slightly towards the second layer (i.e. d1-2 < dbulk ). We can consider what might be the driving force for this process at the atomic level. If we use a localized model for the bonding in the solid then it is clear that an atom in the bulk is acted upon by a balanced, symmetrical set of forces.
On the other hand, an atom at the unrelaxed surface suffers from an imbalance of forces and the surface layer of atoms may therefore be pulled in towards the second layer.
(Whether this is a reasonable model for bonding in a metal is open to question!)
The magnitude of the contraction in the first layer spacing is generally small (< 10%); compensating adjustments to other layer spacings may extend several layers into the solid.
II. Reconstruction
The reconstruction of surfaces is a much more readily observable effect, involving larger (yet still atomic scale) displacements of the surface atoms. It occurs with many of the less stable metal surfaces (e.g. it is frequently observed on fcc(110) surfaces), but is much more prevalent on semiconductor surfaces.
Unlike relaxation, the phenomenon of reconstruction involves a change in the periodicity of the surface structure - the diagram below shows a surface, viewed from the side, which corresponds to an unreconstructed termination of the bulk structure.
This may be contrasted with the following picture which shows a schematic of a reconstructed surface - this particular example is similar to the "missing row model" proposed for the structure of a number of reconstructed (110) fcc metal surfaces.
Since reconstruction involves a change in the periodicity of the surface and in some cases also a change in surface symmetry, it is readily detected using surface diffraction techniques (e.g. LEED & RHEED ).
The overall driving force for reconstruction is once again the minimization of the surface free energy - at the atomic level, however, it is not always clear why the reconstruction should reduce the surface free energy. For some metallic surfaces, it may be that the change in periodicity of the surface induces a splitting in surface-localized bands of energy levels and that this can lead to a lowering of the total electronic energy when the band is initially only partly full.
In the case of many semiconductors, the simple reconstructions can often be explained in terms of a "surface healing" process in which the co-ordinative unsaturation of the surface atoms is reduced by bond formation between adjacent atoms. For example, the formation of a Si(100) surface requires that the bonds between the Si atoms that form the new surface layer and those that were in the layer immediately above in the solid are broken - this leaves two "dangling bonds" per surface Si atom.
A relatively small co-ordinated movement of the atoms in the topmost layer can reduce this unsatisfied co-ordination - pairs of Si atoms come together to form surface "Si dimers", leaving only one dangling bond per Si atom. This process leads to a change in the surface periodicity: the period of the surface structure is doubled in one direction giving rise to the so-called (2x1) reconstruction observed on all clean Si(100) surfaces [ Si(100)-(2x1) ].
More examples:
Si(111)-(7x7) (image from the web pages of Omicron NanoTechnology GmbH, showing data courtesy of Prof. Hongjun Gao's group, Institute of Physics, CAS, Beijing, China).
In this section, attention has been concentrated on the reconstruction of clean surfaces. It is, however, worth noting that reconstruction of the substrate surface is frequently induced by the adsorption of molecular or atomic species onto the surface - this phenomenon is known as adsorbate-induced reconstruction (see Section 2.5 for some examples).
Summary
The minimization of surface energy means that even single crystal surfaces will not exhibit the ideal geometry of atoms to be expected by truncating the bulk structure of the solid parallel to a particular plane. The differences between the real structure of the clean surface and the ideal structure may be imperceptibly small (e.g. a very slight surface relaxation ) or much more marked and involving a change in the surface periodicity in one or more of the main symmetry directions ( surface reconstruction ).
As mentioned in the Introduction, macroscopic single crystals of metals are not generally employed in technological applications.
Massive metallic structures (electrodes etc.) are polycrystalline in nature - the size of individual crystallites being determined by the mechanical treatment and thermal history of the metal concerned. Nevertheless, the nature and properties of the exposed polycrystalline metal surface are still principally determined by the characteristics of the individual crystal surfaces present. Furthermore, the proportions in which the different crystal surfaces occur are controlled by their relative thermodynamic stabilities. Thus, a macroscopic piece of an fcc metal will generally expose predominantly (111)-type surface planes.
A more interesting case for consideration is that of metals in a highly dispersed system - the classic example of which is a supported metal catalyst (such as those employed in the petrochemical industries and automotive catalytic converters). In such catalysts the average metal particle size is invariably sub-micron and may be as small as 1 nm . These metal particles are often tiny single crystals or simple twinned crystals.
The shape of these small crystals is principally determined by the surface free energy contribution to the total energy. There are two ways in which the surface energy can be reduced for a crystal of fixed mass / volume:
1. By minimizing the surface area of the crystallite
2. By ensuring that only surfaces of low surface free energy are exposed.
If matter is regarded as continuous then the optimum shape for minimizing the surface free energy is a sphere (since this has the lowest surface area/volume ratio of any 3D object) - this is why liquid droplets in free space are basically spherical.
Unfortunately, we cannot ignore the discrete, atomic nature of matter and the detailed atomic structure of surfaces when considering particles of the size found in catalysts. If, for example, we consider an fcc metal (e.g. Pt) and ensure that only the most stable (111)-type surfaces are exposed, then we end up with a crystal which is an octahedron.
Octahedron exposing 8 symmetry-related fcc(111)-type faces
Note
There are 8 different, but crystallographically-equivalent, surface planes which have the (111) surface structure - the {111} faces. They are related by the symmetry elements of the cubic fcc system.
A compromise between exposing only the lowest energy surface planes and minimizing the surface area is obtained by truncating the vertices of the octahedron - this generates a cubo-octahedral particle as shown below, with 8 (111)-type surfaces and 6 smaller, (100)-type surfaces and gives a lower (surface area / volume) ratio.
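The competition between minimizing area and exposing only low-energy planes can be made concrete with a little arithmetic. The Python sketch below compares surface areas at fixed volume for a sphere, a regular octahedron and a cube; the shapes and numbers are purely illustrative.

```python
from math import pi, sqrt

V = 1.0  # fixed volume (arbitrary units)

r = (3 * V / (4 * pi)) ** (1 / 3)       # sphere: V = 4/3 pi r^3, A = 4 pi r^2
A_sphere = 4 * pi * r**2

e = (3 * V / sqrt(2)) ** (1 / 3)        # regular octahedron: V = sqrt(2)/3 e^3, A = 2 sqrt(3) e^2
A_oct = 2 * sqrt(3) * e**2

A_cube = 6 * V ** (2 / 3)               # cube: V = e^3, A = 6 e^2

print(f"A at V=1: sphere {A_sphere:.3f} < octahedron {A_oct:.3f} < cube {A_cube:.3f}")
# Truncating the octahedron's vertices makes it more sphere-like and so lowers A/V -
# hence the cubo-octahedral compromise described in the text.
```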
Crystals of this general form are often used as conceptual models for the metal particles in supported catalysts.
The atoms in the middle of the {111} faces show the expected CN=9 characteristic of the (111) surface. Similarly, those atoms in the centre of the {100} surfaces have the characteristic CN=8 of the (100) surface. However, there are also many atoms at the corners and intersection of surface planes on the particle which show lower coordination numbers.
What is the lowest coordination number exhibited by a surface atom on this crystallite?
This model for the structure of catalytic metal crystallites is not always appropriate: it is only reasonable to use it when there is a relatively weak interaction between the metal and the support phase (e.g. many silica supported catalysts).
A stronger metal-support interaction is likely to lead to increased "wetting" of the support by the metal, giving rise to:
• a greater metal-support contact area
• a significantly different metal particle morphology
Example \(1\)
In the case of a strong metal-support interaction the metal/oxide interfacial free energy is low and it is inappropriate to consider the surface free energy of the metal crystallite in isolation from that of the support.
Our knowledge of the structure of very small particles comes largely from high resolution electron microscopy (HREM) studies - with the best modern microscopes it is possible to directly observe such particles and resolve atomic structure.
Although some of the more common metallic surface structures have been discussed in previous sections (1.2-1.4), there are many other types of single crystal surface which may be prepared and studied. These include
• high index surfaces of metals
• single crystal surfaces of compounds
These will not be covered in any depth, but a few illustrative examples are given below to give you a flavor of the additional complexity involved when considering such surfaces.
High Index Surfaces of Metals
High index surfaces are those for which one or more of the Miller Indices are relatively large numbers. The most commonly studied surfaces of this type are vicinal surfaces which are cut at a relatively small angle to one of the low index surfaces. The ideal surfaces can then be considered to consist of terraces which have an atomic arrangement identical with the corresponding low index surface, separated by monatomic steps (steps which are a single atom high).
Perspective view of the fcc(775) surface
As seen above, the ideal fcc(775) surface has a regular array of such steps and these steps are both straight and parallel to one another.
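Since, in the cubic system, the [hkl] vector is normal to the (hkl) plane (see the final note in Section 1.02), the miscut angle of a vicinal surface relative to its terrace plane follows from a simple dot product. The function below is an illustrative helper written for this sketch, not part of any particular library.

```python
import numpy as np

def interplanar_angle(hkl1, hkl2):
    """Angle (degrees) between cubic-system plane normals [hkl1] and [hkl2]."""
    v1, v2 = np.asarray(hkl1, float), np.asarray(hkl2, float)
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(cosang))

print(f"(775)    vs (111): {interplanar_angle((7, 7, 5), (1, 1, 1)):.1f} deg")   # ~8.5 deg
print(f"(10 8 7) vs (111): {interplanar_angle((10, 8, 7), (1, 1, 1)):.1f} deg")  # ~8.5 deg
```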
Exercise \(1\)
What is the coordination number of a step atom on this surface ?
Answer
The coordination number of the atoms at the steps is 7
Rationale: Each step atom has four nearest neighbors in the surface layer of terrace atoms which terminates at the step, and another three in the layer immediately below ; a total of 7. This contrasts with the CN of surface atoms on the terraces which is 9.
By contrast a surface for which all the Miller indices differ must not only exhibit steps but must also contain kinks in the steps. An example of such a surface is the fcc(10.8.7) surface - the ideal geometry of which is shown below.
Perspective view of the fcc(10.8.7) surface
Exercise \(2\)
What is the lowest coordination number exhibited by any of the atoms on this surface?
Answer
The lowest coordination number is 6 which is that exhibited by atoms at the kinks in the steps.
Rationale: The lowest coordination number is exhibited by atoms "on the outside" of the kinks in the steps. Such atoms have only three nearest neighbors in the surface layer of terrace atoms which terminates at the step, and another three in the layer immediately below ; a total of 6. This contrasts with the surface atoms on the terraces which have a coordination number of 9 and the normal step atoms which have a coordination number of 7.
Real vicinal surfaces do not, of course, exhibit the completely regular array of steps and kinks illustrated for the ideal surface structures, but they do exhibit this type of step and terrace morphology. The special adsorption sites available adjacent to the steps are widely recognized to have markedly different characteristics to those available on the terraces and may thus have an important role in certain types of catalytic reaction.
Single Crystal Surfaces of Compounds
The ideal surface structures of the low index planes of compound materials can be easily deduced from the bulk structures in exactly the same way as can be done for the basic metal structures. For example, the NaCl(100) surface that would be expected from the bulk structure is shown below:
Perspective view of the NaCl(100) surface
In addition to the relaxation and reconstruction exhibited by elemental surfaces, the surfaces of compounds may also show deviations from the bulk stoichiometry due to surface localised reactions (e.g. surface reduction) and/or surface segregation of one or more components of the material.
• 2.1: Introduction to Molecular Adsorption
The adsorption of molecules on to a surface is a necessary prerequisite to any surface mediated chemical process.
• 2.2: How do Molecules Bond to Surfaces?
There are two principal modes of adsorption of molecules on surfaces: Physical Adsorption (physisorption ) and Chemical Adsorption (chemisorption). The basis of distinction is the nature of the bonding between the molecule and the surface.
• 2.3: Kinetics of Adsorption
The rate of adsorption of a molecule onto a surface can be expressed in the same manner as any kinetic process. For example, when it is expressed in terms of the partial pressure of the molecule in the gas phase above the surface.
• 2.4: PE Curves and Energetics of Adsorption
In this section we will consider both the energetics of adsorption and factors which influence the kinetics of adsorption by looking at the "potential energy diagram/curve" for the adsorption process. The potential energy curve for the adsorption process is a representation of the variation of the energy (PE or E ) of the system as a function of the distance (d) of an adsorbate from a surface.
• 2.5: Adsorbate Geometries and Structures
We can address the question of what happens when a molecule becomes adsorbed onto a surface at two levels; specifically we can aim to identify (1) the nature of the adsorbed species and its local adsorption geometry (i.e., its chemical structure and co-ordination to adjacent substrate atoms) and (2) the overall structure of the extended adsorbate/substrate interface (i.e., the long range ordering of the surface) .
• 2.6: The Desorption Process
An adsorbed species present on a surface at low temperatures may remain almost indefinitely in that state. As the temperature of the substrate is increased, however, there will come a point at which the thermal energy of the adsorbed species is sufficient for one of several things to occur, including desorption of the species from the surface back into the gas phase. This is the desorption process.
02: Adsorption of Molecules on Surfaces
The adsorption of molecules on to a surface is a necessary prerequisite to any surface mediated chemical process. For example, in the case of a surface catalyzed reaction it is possible to break down the whole continuously-cycling process into the following five basic steps:
1. Diffusion of reactants to the active surface
2. Adsorption of one or more reactants onto the surface
3. Surface reaction
4. Desorption of products from the surface
5. Diffusion of products away from the surface
The above scheme not only emphasizes the importance of the adsorption process but also its reverse - namely desorption. It is these two processes which are considered in this chapter.
Notes on Terminology
• Substrate - frequently used to describe the solid surface onto which adsorption can occur; the substrate is also occasionally (although not here) referred to as the adsorbent.
• Adsorbate - the general term for the atomic or molecular species which are adsorbed (or are capable of being adsorbed) onto the substrate.
• Adsorption - the process in which a molecule becomes adsorbed onto a surface of another phase (note - to be distinguished from absorption which is used when describing uptake into the bulk of a solid or liquid phase)
• Coverage - a measure of the extent of adsorption of a species onto a surface (unfortunately this is defined in more than one way!). It is usually denoted by the lower case Greek "theta", θ
• Exposure - a measure of the amount of gas which a surface has seen; more specifically, it is the product of the pressure and time of exposure (normal unit is the Langmuir, where 1 L = 10⁻⁶ Torr s).
2.02: How do Molecules Bond to Surfaces?
There are two principal modes of adsorption of molecules on surfaces: Physical Adsorption (physisorption ) and Chemical Adsorption (chemisorption). The basis of distinction is the nature of the bonding between the molecule and the surface with:
• Physical Adsorption: the only bonding is by weak Van der Waals - type forces. There is no significant redistribution of electron density in either the molecule or at the substrate surface.
• Chemisorption: a chemical bond, involving substantial rearrangement of electron density, is formed between the adsorbate and substrate. The nature of this bond may lie anywhere between the extremes of virtually complete ionic or complete covalent character.
Typical Characteristics of Adsorption Processes

| | Chemisorption | Physisorption |
|---|---|---|
| Material specificity (variation between substrates of different chemical composition) | Substantial variation between materials | Slight dependence upon substrate composition |
| Crystallographic specificity (variation between different surface planes of the same crystal) | Marked variation between crystal planes | Virtually independent of surface atomic geometry |
| Temperature range (over which adsorption occurs) | Virtually unlimited (but a given molecule may effectively adsorb only over a small range) | Near or below the condensation point of the gas (e.g. Xe < 100 K, CO2 < 200 K) |
| Adsorption enthalpy | Wide range (related to the chemical bond strength) - typically 40-800 kJ mol⁻¹ | Related to factors like molecular mass and polarity - typically 5-40 kJ mol⁻¹ (similar to heat of liquefaction) |
| Nature of adsorption | Often dissociative; may be irreversible | Non-dissociative; reversible |
| Saturation uptake | Limited to one monolayer | Multilayer uptake possible |
| Kinetics of adsorption | Very variable - often an activated process | Fast, since it is a non-activated process |
The most definitive method for establishing the formation of a chemical bond between the adsorbing molecule and the substrate (i.e. chemisorption) is to use an appropriate spectroscopic technique, for example
• IR (Section 5.4 ) to observe the vibrational frequency of the substrate/adsorbate bond
• UPS (Section 5.3 ) to monitor intensity & energy shifts in the valence orbitals of the adsorbate and substrate
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Surface_Science_(Nix)/02%3A_Adsorption_of_Molecules_on_Surfaces/2.01%3A_Introduction_to_Molecular_Adsorption.txt
|
The rate of adsorption, $R_{ads}$, of a molecule onto a surface can be expressed in the same manner as any kinetic process. For example, when it is expressed in terms of the partial pressure of the molecule in the gas phase above the surface:
$R_{ads} = k' P^x$
where:
• x - kinetic order
• k' - rate constant
• P - partial pressure
If the rate constant is then expressed in an Arrhenius form, then we obtain a kinetic equation of the form:
$R_{ads} = A e^{-E_a / RT} P^ x$
where $E_a$ is the activation energy for adsorption and $A$ the pre-exponential (frequency) factor. It is much more informative, however, to consider the factors controlling this process at the molecular level. The rate of adsorption is governed by (1) the rate of arrival of molecules at the surface and (2) the proportion of incident molecules which undergo adsorption. Hence, we can express the rate of adsorption (per unit area of surface, i.e. molecules m⁻² s⁻¹) as a product of the incident molecular flux, $F$, and a sticking probability, $S$.
$R_{ads} = S\, F \label{flux-stick}$
The sticking probability varies from 0 (never sticking) to 1 (always sticking). The flux (in molecules m⁻² s⁻¹) of incident molecules is given by the Hertz-Knudsen equation
$F = \dfrac{P}{ \sqrt{2πmkT}}$
where
• $P$ - gas pressure [N m⁻²]
• $m$ - mass of one molecule [kg]
• $T$ - temperature [K]
The sticking probability is clearly a property of the adsorbate/substrate system under consideration, but must lie in the range 0 < S < 1; it may depend upon various factors, foremost amongst these being the existing coverage of adsorbed species ($θ$) and the presence of any activation barrier to adsorption. In general, therefore,
$S = f (θ) e^{-E_a / RT}$
where, once again, $E_a$ is the activation energy for adsorption and $f(θ)$ is some, as yet undetermined, function of the existing surface coverage of adsorbed species. Combining the equations for $S$ and $F$ yields the following expression for the rate of adsorption:
$R =\dfrac{f(θ) P}{\sqrt{2\pi m kT}} e^{-E_a/RT} \label{eq5}$
1. Equation \ref{eq5} indicates that the rate of adsorption is expected to be first order with regard to the partial pressure of the molecule in the gas phase above the surface.
2. It should be recognized that the activation energy for adsorption may itself be dependent upon the surface coverage, i.e. $E_a = E(θ)$.
3. If it is further assumed that the sticking probability is directly proportional to the concentration of vacant surface sites (which would be a reasonable first approximation for non-dissociative adsorption) then $f(θ)$ is proportional to $(1-θ)$, where, in this instance, $θ$ is the fraction of sites which are occupied (i.e. the Langmuir definition of surface coverage).
For a discussion of some of the factors which determine the magnitude of the activation energy of adsorption you should see Section 2.4 which looks at the typical PE curve associated with various types of adsorption process.
Estimating Surface Coverages arising as a result of Gas Exposure
If a surface is initially clean and it is then exposed to a gas pressure under conditions where the rate of desorption is very slow, then the coverage of adsorbed molecules may initially be estimated simply by consideration of the kinetics of adsorption. As noted above, the rate of adsorption is given by Equation $\ref{flux-stick}$, which can be written in term of a derivative
$\dfrac{dN_{ads}}{dt} = S \, F \label{diffEq}$
where $N_{ads}$ is the number of adsorbed species per unit area of surface.
Equation $\ref{diffEq}$ must be integrated to obtain an expression for $N_{ads}$, since the sticking probability is coverage (and hence also time) dependent. However, if it is assumed that the sticking probability is essentially constant (which may be a reasonable approximation for relatively low coverages), then this integration simply yields:
$N_{ads} = SFt$
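To put numbers on these expressions, the Python sketch below evaluates the Hertz-Knudsen flux and the constant-S coverage estimate $N_{ads} = SFt$. The pressure, temperature, sticking probability and exposure time are illustrative choices made for this sketch, not recommended values.

```python
from math import pi, sqrt

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hertz_knudsen_flux(P, m, T):
    """Incident flux F = P / sqrt(2 pi m k T), in molecules m^-2 s^-1."""
    return P / sqrt(2 * pi * m * K_B * T)

m_CO = 28 * 1.66054e-27          # mass of one CO molecule, kg
P = 1.33e-4                      # 1e-6 Torr expressed in N m^-2 (Pa)
F = hertz_knudsen_flux(P, m_CO, 300.0)

S, t = 0.5, 10.0                 # assumed constant sticking probability; exposure, s
N_ads = S * F * t                # low-coverage estimate, N_ads = S F t
print(f"F = {F:.2e} m^-2 s^-1, N_ads = {N_ads:.2e} m^-2")
# F is ~3.8e18 m^-2 s^-1: at 1e-6 Torr a surface (~1e19 sites m^-2) receives roughly
# a monolayer-equivalent dose every few seconds - why UHV is needed for clean surfaces.
```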
In this section, we will consider both the energetics of adsorption and factors which influence the kinetics of adsorption by looking at the "potential energy diagram/curve" for the adsorption process. The potential energy curve for the adsorption process is a representation of the variation of the energy (PE or E) of the system as a function of the distance (\(d\)) of an adsorbate from a surface.
Within this simple one-dimensional (1D) model, the only variable is the distance (\(d\)) of the adsorbing molecule from the substrate surface.
Thus, the energy of the system is a function only of this variable i.e.
\[E = E(d)\]
It should be remembered that this is a very simplistic model which neglects many other parameters which influence the energy of the system (a single molecule approaching a clean surface), including for example
• the angular orientation of the molecule
• changes in the internal bond angles and bond lengths of the molecule
• the position of the molecule parallel to the surface plane
The interaction of a molecule with a given surface will also clearly be dependent upon the presence of any existing adsorbed species, whether these be surface impurities or simply pre-adsorbed molecules of the same type (in the latter case we are starting to consider the effect of surface coverage on the adsorption characteristics). Nevertheless, it is useful to first consider the interaction of an isolated molecule with a clean surface using the simple 1D model. For the purposes of this Module, we will also not be overly concerned whether the "energy" being referred to should strictly be the internal energy, the enthalpy or free energy of the system.
CASE I - Physisorption
In the case of pure physisorption, e.g., Ar/metals, the only attraction between the adsorbing species and the surface arises from weak, van der Waals forces. As illustrated below, these forces give rise to a shallow minimum in the PE curve at a relatively large distance from the surface (typically \(d > 0.3\, nm\)) before the strong repulsive forces arising from electron density overlap cause a rapid increase in the total energy.
There is no barrier to prevent the atom or molecule which is approaching the surface from entering this physisorption well, i.e. the process is not activated and the kinetics of physisorption are invariably fast.
CASE II - Physisorption + Molecular Chemisorption
The weak physical adsorption forces and associated long-range attraction will be present to varying degrees in all adsorbate / substrate systems. However, in cases where chemical bond formation between the adsorbate and substrate can also occur, the PE curve is dominated by a much deeper chemisorption minimum at shorter values of d .
The graph above shows the PE curves due to physisorption and chemisorption separately - in practice, the PE curve for any real molecule capable of undergoing chemisorption is best described by a combination of the two curves, with a curve crossing at the point at which chemisorption forces begin to dominate over those arising from physisorption alone.
The minimum energy pathway obtained by combining the two PE curves is now highlighted in red. Any perturbation of the combined PE curve from the original, separate curves is most likely to be evident close to the highlighted crossing point.
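A hypothetical one-dimensional model makes this curve-crossing picture concrete. The Python sketch below represents each well by a Morse potential with made-up parameters, chosen only to give a shallow, longer-range physisorption well and a deep, short-range chemisorption well, and takes the overall PE curve (the red curve) as their lower envelope.

```python
import numpy as np

def morse(d, De, alpha, re):
    """Morse potential: zero as d -> infinity, minimum of -De at d = re."""
    return De * ((1 - np.exp(-alpha * (d - re)))**2 - 1)

d = np.linspace(0.15, 1.0, 400)                  # adsorbate-surface distance, nm
physi = morse(d, De=0.05, alpha=15.0, re=0.35)   # shallow, longer-range physisorption well
chemi = morse(d, De=1.00, alpha=20.0, re=0.20)   # deep, short-range chemisorption well
overall = np.minimum(physi, chemi)               # minimum-energy pathway

# Locate the curve crossing as the sign change of (physi - chemi)
i = np.where(np.diff(np.sign(physi - chemi)) != 0)[0][0]
print(f"curves cross near d = {d[i]:.2f} nm; "
      f"well depth = {-overall.min():.2f} (arbitrary energy units)")
```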
For clarity, we will now consider only the overall PE curve:
The depth of the chemisorption well is a measure of the strength of binding to the surface - in fact it is a direct representation of the energy of adsorption, whilst the location of the global minimum on the horizontal axis corresponds to the equilibrium bond distance ($r_e$) for the adsorbed molecule on this surface.
The energy of adsorption is negative, and since it corresponds to the energy change upon adsorption it is better represented as ΔE(ads) or ΔEads . However, you will also often find the depth of this well associated with the enthalpy of adsorption, ΔH(ads).
The "heat of adsorption", Q, is taken to be a positive quantity equal in magnitude to the enthalpy of adsorption ; i.e. Q = -ΔH(ads) )
In this particular case, there is clearly no barrier to be overcome in the adsorption process and there is no activation energy of adsorption (i.e. $E_a^{ads} = 0$; but do remember the previously mentioned limitations of this simple 1D model). There is, of course, a significant barrier to the reverse desorption process - the red arrow in the diagram below represents the activation energy for desorption.
Clearly in this particular case, the magnitudes of the energy of adsorption and the activation energy for desorption can also be equated i.e.
\[E_a^{des} = -ΔE_{ads}\]
or
\[ E_a^{des} \approx -ΔH (ads)\]
CASE III - Physisorption + Dissociative Chemisorption
In this case the main differences arise from the substantial changes in the PE curve for the chemisorption process. Again, we start off with the basic PE curve for the physisorption process which represents how the molecule can weakly interact with the surface:
If we now consider a specific molecule such as H2 and initially treat it as being completely isolated from the surface (i.e. when the distance, d, is very large), then a substantial amount of energy has to be put into the system in order to cause dissociation of the molecule
\[\ce{H_2 → H + H}\]
- this is the bond dissociation energy [D(H-H)], some 435 kJ mol⁻¹ or 4.5 eV.
The red dot in the diagram above thus represents two hydrogen atoms, equidistant (and a long distance) from the surface and also now well separated from each other. If these atoms are then allowed to approach the surface they may ultimately both form strong chemical bonds to the substrate .... this corresponds to the minimum in the red curve which represents the chemisorption PE curve for the two H atoms.
In reality, of course, such a mechanism for dissociative hydrogen chemisorption is not practical - the energy downpayment associated with breaking the H-H bond is far too severe.
Instead, a hydrogen molecule will initially approach the surface along the physisorption curve. If it has sufficient energy it may pass straight into the chemisorption well ( "direct chemisorption" ) ....
or, alternatively, it may first undergo transient physisorption - a state from which it can then either desorb back as a molecule into the gas phase or cross over the barrier into the dissociated, chemisorptive state (as illustrated schematically below).
In this latter case, the molecule can be said to have undergone "precursor-mediated" chemisorption.
The characteristics of this type of dissociative adsorption process are clearly going to be strongly influenced by the position of the crossing point of the two curves (molecular physisorption vs. dissociative chemisorption) - relatively small shifts in the position of either of the two curves can significantly alter the size of any barrier to chemisorption.
In the example immediately below there is no direct activation barrier to dissociative adsorption - the curve crossing is below the initial "zero energy" of the system.
whilst, in this next case ….
there is a substantial barrier to chemisorption. Such a barrier has a major influence on the kinetics of adsorption.
The depth of the physisorption well for the hydrogen molecule is actually very small (in some cases negligible), but this is not the case for other molecules and does not alter the basic conclusions regarding dissociative adsorption that result from this model; namely that the process may be either activated or non-activated depending on the exact location of the curve crossing.
At this point it is useful to return to consider the effect of such a barrier on the relationship between the activation energies for adsorption and desorption, and the energy (or enthalpy) of adsorption.
Clearly, from the diagram
\[E_a^{des} - E_a^{ads} = - ΔE_{ads}\]
but, since the activation energy for adsorption is nearly always very much smaller than that for desorption, and the difference between the energy and enthalpy of adsorption is also very small, it is still quite common to see the relationship
\[E_a^{des} \approx -ΔH_{ads}\]
For a slightly more detailed treatment of the adsorption process, you are referred to the following examples of More Complex PE Curves & Multi-Dimensional PE Surfaces.
We can address the question of what happens when a molecule becomes adsorbed onto a surface at two levels; specifically we can aim to identify
1. the nature of the adsorbed species and its local adsorption geometry (i.e., its chemical structure and co-ordination to adjacent substrate atoms)
2. the overall structure of the extended adsorbate/substrate interface (i.e., the long range ordering of the surface)
The latter topic is covered in detail in Section 6.1, while this section will consider only the local adsorption geometry and adsorbate structure.
Chemisorption, by definition, involves the formation of new chemical bonds between the adsorbed species and the surface atoms of the substrate - basically the same type of bonds that are present in any molecular complex. In considering what type of species may be formed on a metal surface, therefore, it is important not to abandon chemical common sense and, if in doubt, to look for inspiration at the structures of known metal-organic complexes.
Chemisorption of Hydrogen and Halogens
Hydrogen (H2)
In the H2 molecule, the valence electrons are all involved in the H-H σ-bond and there are no additional electrons which may interact with the substrate atoms. Consequently, chemisorption of hydrogen on metals is almost invariably a dissociative process in which the H-H bond is broken, thereby permitting the hydrogen atoms to independently interact with the substrate (see Section 2.4 for a description of the energetics of this process). The adsorbed species in this instance therefore are hydrogen atoms.
The exact nature of the adsorbed hydrogen atom complex is generally difficult to determine experimentally, and the very small size of the hydrogen atom does mean that migration of hydrogen from the interface into sub-surface layers of the substrate can occur with relative ease on some metals (e.g. Pd, rare earth metals).
The possibility of molecular H2 chemisorption at low temperatures cannot be entirely excluded, however, as demonstrated by the discovery of molecular hydrogen transition metal compounds, such as W(η2-H2)(CO)3(PiPr3)2, in which both atoms of the hydrogen molecule are coordinated to a single metal centre.
Halogens (F2, Cl2, Br2, etc.)
Halogens also chemisorb in a dissociative fashion to give adsorbed halogen atoms. The reasons for this are fairly clear: in principle a halogen molecule could act as a Lewis base and bind to the surface without breakage of the X-X bond, but in practice the lone pairs are so strongly held by the highly electronegative halogen atoms that any such interaction would be very weak, and the thermodynamics lie very heavily in favour of dissociative adsorption [i.e. D(X-X) + D(M-X2) << 2 D(M-X)]. Clearly, the kinetic barrier to dissociation must also be low or non-existent for dissociative adsorption to occur readily.
Another way of looking at the interaction of a halogen molecule with a metal surface is as follows: the significant difference in electronegativity between a typical metal and halogen is such that substantial electron transfer from the metal to halogen is favoured. If a halogen molecule is interacting with a metal surface then this transferred electron density will enter the σ* antibonding orbital of the molecule, thereby weakening the X-X bond. At the same time the build-up of negative charge on the halogen atoms enhances the strength of the metal-halogen interaction. The net result of these two effects when taken to their limit is that the halogen molecule dissociates and the halogen atoms interact with the metal with a strong ionic contribution to the bonding.
Halogen atoms tend to occupy high co-ordination sites on the surface - for example, the 3-fold hollow site on fcc (111) surfaces (A) and the 4-fold hollow site on fcc(100) surfaces (B).
[Diagrams: plan views of (A) the 3-fold hollow site on an fcc(111) surface and (B) the 4-fold hollow site on an fcc(100) surface]
This behavior is typical of atomic adsorbates which almost invariably endeavor to maximize their co-ordination and hence prefer to occupy the highest-available co-ordination site on the surface. As a result of the electron transfer from the metal substrate to the halogen atoms, each adsorbed atom is associated with a significant surface dipole.
[Diagram: cross-sectional view of adsorbed halogen atoms, showing the surface dipole associated with each adatom]
One consequence of this is that there are repulsive (dipole-dipole) interactions between the adsorbed atoms, which are especially evident at higher surface coverages and which can lead to a substantial reduction in the enthalpy of adsorption at specific coverages (if these coverages mark a watershed, above which the atoms are forced to occupy sites which are much closer together).
Another feature of the halogen adsorption chemistry of some metals is the transition from an adsorbed surface layer to surface compound formation at high gas exposures.
Chemisorption of Nitrogen and Oxygen
Oxygen
Oxygen is an example of a molecule that usually adsorbs dissociatively, but is also found to adsorb molecularly on some metals (e.g. Ag, Pt). In those cases where both types of adsorption are observed, it is the dissociative process that corresponds to the higher adsorption enthalpy.
As noted above, in the molecular adsorption state the interaction between the molecule and the surface is relatively weak. Molecules aligned such that the internuclear axis is parallel to the surface plane may bond to a single metal atom of the surface via both
1. σ-donor interaction, in which charge is transferred from the occupied π-bonding molecular orbital of the molecule into vacant orbitals of σ-symmetry on the metal (i.e. M ← O2), and
2. π-acceptor interaction, in which an occupied metal d-orbital of the correct symmetry overlaps with empty π* orbitals of the molecule and the charge transfer is from the surface to the molecule (i.e. M → O2 ).
Although the interaction of the molecule with the surface is generally weak, one might expect a substantial barrier to dissociation on account of the high strength (and high dissociation enthalpy) of the O=O bond. Nevertheless, on most metal surfaces the dissociation of oxygen is observed to be facile; this is related to the manner in which the interaction with the surface can mitigate the high intrinsic bond energy (see Section 2.4) and thereby facilitate dissociation.
Once formed, oxygen atoms are strongly bound to the surface and, as noted previously, will tend to occupy the highest available co-ordination site. The strength of the interaction between adsorbate and substrate is such that the adjacent metal atoms are often seen to undergo significant displacements from the equilibrium positions that they occupy on the clean metal surface. This displacement may simply lead to a distortion of the substrate surface in the immediate vicinity of the adsorbed atom (so that, for example, the adjacent metal atoms are drawn in towards the oxygen and the metal-oxygen bond distance is reduced) or to a more extended surface reconstruction (see Section 1.6 )
Dissociative oxygen adsorption is frequently irreversible - rather than simply leading to desorption, heating of an adsorbed oxygen overlayer often results in either the gradual removal of oxygen from the surface by diffusion into the bulk of the substrate (e.g. Si(111) or Cu(111)) or to the formation of a surface oxide compound. Even at ambient temperatures, extended oxygen exposure often leads to the nucleation of a surface oxide. Depending on the reactivity of the metal concerned, further exposure at low temperatures may result either in a progressive conversion of the bulk material to oxide or the oxidation process may effectively stop after the formation of a passivating surface oxide film of a specific thickness (e.g. Al).
Nitrogen
The interaction of nitrogen with metal surfaces shows many of the same characteristics as those described above for oxygen. However, in general N2 is less susceptible to dissociation as a result of the lower M-N bond strength and the substantial kinetic barrier associated with breaking the N≡N triple bond.
Chemisorption of Carbon Monoxide
Depending upon the metal surface, carbon monoxide may adsorb either in a molecular form or in a dissociative fashion - in some cases both states coexist on particular surface planes and over specific ranges of temperature.
1. On the reactive surfaces of metals from the left-hand side of the periodic table (e.g. Na, Ca, Ti, rare earth metals) the adsorption is almost invariably dissociative, leading to the formation of adsorbed carbon and oxygen atoms (and thereafter to the formation of surface oxide and oxy-carbide compounds).
2. By contrast, on surfaces of the metals from the right hand side of the d-block (e.g. Cu, Ag) the interaction is predominantly molecular; the strength of interaction between the CO molecule and the metal is also much weaker, so the M-CO bond may be readily broken and the CO desorbed from the surface by raising the surface temperature without inducing any dissociation of the molecule.
3. For the majority of the transition metals, however, the nature of the adsorption (dissociative vs. molecular) is very sensitive to the surface temperature and surface structure (e.g. the Miller index plane, and the presence of any lower co-ordination sites such as step sites and defects).
Molecularly chemisorbed CO has been found to bond in various ways to single crystal metal surfaces - analogous to its behaviour in isolated metal carbonyl complexes.
Terminal ("Linear")
(all surfaces)
Bridging ( 2f site )
(all surfaces)
Bridging / 3f hollow
( fcc(111) )
Bridging / 4f hollow
(rare- fcc(100) ?)
Whilst the above structural diagrams amply demonstrate the inadequacies of a simple valence bond description of the bonding of molecules to surfaces, they do, to an extent, also illustrate one of its strengths - namely that a given element, in this case carbon, tends to have a specific valence. Consequently, as the number of metal atoms to which the carbon is co-ordinated increases, there is a corresponding reduction in the C-O bond order.
However, it must be emphasised that a molecule such as CO does not necessarily prefer to bind at the highest available co-ordination site. So, for example, the fact that there are 3-fold hollow sites on an fcc(111) surface does not mean that CO will necessarily adopt this site - the preferred site may still be a terminal or 2-fold bridging site, and the occupied site(s) may change with either surface coverage or temperature. The energy differences between the various adsorption sites available for molecular CO chemisorption therefore appear to be very small. A description of the nature of the bonding in a terminal CO-metal complex, in terms of a simple molecular orbital model, is given in Section 5.4.
Chemisorption of Ammonia and other Group V/VI Hydrides
Ammonia has lone pairs available for bonding on the central nitrogen atom and may bond without dissociation to a single metal atom of a surface, acting as a Lewis base, to give a pseudo-tetrahedral co-ordination for the nitrogen atom.
Alternatively, progressive dehydrogenation may occur to give surface NHx (x = 2,1,0) species and adsorbed hydrogen atoms, i.e.
$NH_3 \rightarrow NH_{2 (ads)} + H_{(ads)} \rightarrow NH_{(ads)} + 2 H_{(ads)} \rightarrow N_{(ads)} + 3 H_{(ads)}$
As the number of hydrogens bonded to the nitrogen atom is reduced, the adsorbed species will tend to move into a higher co-ordination site on the surface (thereby tending to maintain the valence of nitrogen).
Decomposition fragments of ammonia on an fcc(111) surface. (Picture adapted from the BALSAC Picture Gallery by K. Hermann, Fritz-Haber-Institut, Berlin)
Other Group V and Group VI hydrides (e.g. PH3, H2O, H2S) exhibit similar adsorption characteristics to ammonia.
Chemisorption of Unsaturated Hydrocarbons
Unsaturated hydrocarbons (alkenes, alkynes, aromatic molecules etc.) all tend to interact fairly strongly with metal atom surfaces. At low temperatures (and on less reactive metal surfaces) the adsorption may be molecular, albeit perhaps with some distortion of bond angles around the carbon atom.
Ethene, for example, may bond to give both a π-complex (A) or a di-σ adsorption complex (B):
[Diagrams: ethene chemisorbed as (A) a π-complex and (B) a di-σ complex]
As the temperature is raised, or even at low temperatures on more reactive surfaces (in particular those that bind hydrogen strongly), a stepwise dehydrogenation may occur. One particularly stable surface intermediate found in the dehydrogenation of ethene is the ethylidyne complex, whose formation also involves H-atom transfer between the carbon atoms.
Ethylidyne: this adsorbate preferentially occupies a 3-fold hollow site to give pseudo-tetrahedral co-ordination for the carbon atom.
The ultimate product of complete dehydrogenation, and the loss of molecular hydrogen by desorption, is usually either carbidic or graphitic surface carbon.
An adsorbed species present on a surface at low temperatures may remain almost indefinitely in that state. As the temperature of the substrate is increased, however, there will come a point at which the thermal energy of the adsorbed species is such that one of several things may occur:
1. a molecular species may decompose to yield either gas phase products or other surface species.
2. an atomic adsorbate may react with the substrate to yield a specific surface compound, or diffuse into the bulk of the underlying solid.
3. the species may desorb from the surface and return into the gas phase.
The last of these options is the desorption process. In the absence of decomposition the desorbing species will generally be the same as that originally adsorbed but this is not necessarily always the case.
(An example where it is not is found in the adsorption of some alkali metals on metallic substrates exhibiting a high work function where, at low coverages, the desorbing species is the alkali metal ion as opposed to the neutral atom. Other examples would include certain isomerization reactions.)
Desorption Kinetics
The rate of desorption, $R_{des}$, of an adsorbate from a surface can be expressed in the general form:
$R_{des} = k N^x \label{1}$
with
• $x$ - kinetic order of desorption
• $k$ - rate constant for the desorption process
• $N$ - surface concentration of adsorbed species
The order of desorption can usually be predicted because we are concerned with an elementary step of a "reaction": specifically,
I. Atomic or Simple Molecular Desorption
$A_{(ads)} \rightarrow A_{(g)}$
$M_{(ads)} \rightarrow M_{(g)}$
- will usually be a first order process (i.e. $x = 1$ ). Examples include …
• The desorption of copper atoms from a tungsten surface $W / Cu_{(ads)} \rightarrow W_{(s)} + Cu_{(g)}$
• the desorption of CO molecules from a copper surface $Cu / CO_{(ads)} \rightarrow Cu_{(s)} + CO_{(g)}$
II. Recombinative Molecular Desorption
$2 A_{(ads)} \rightarrow A_{2 (g)}$
- will usually be a second order process (i.e. $x = 2$). Examples include:
• desorption of O atoms as O2 from a Pt surface $Pt / 2 O_{(ads)} \rightarrow Pt_{(s)} + O_{2 (g)}$
• desorption of H atoms as H2 from a Ni surface $Ni / 2 H_{(ads)} \rightarrow Ni_{(s)} + H_{2 (g)}$
The rate constant for the desorption process may be expressed in an Arrhenius form,
$k_{des} = A \;\exp( -E_a^{des} / RT ) \label{2}$
with
• $E_a^{des}$ is the activation energy for desorption, and
• $A$ is the pre-exponential factor; this can also be considered to be the "attempt frequency", $\nu$, at overcoming the barrier to desorption.
This then gives the following general expression for the rate of desorption
$R_{des} = -\dfrac{dN}{dt} = \nu N^x \exp ( -E_a^{des} / RT) \label{3}$
In the particular case of simple molecular adsorption, the pre-exponential/frequency factor ($\nu$) may also be equated with the frequency of vibration of the bond between the molecule and substrate; this is because each stretch of this bond during the course of a vibrational cycle can be considered an attempt to break the bond, and hence an attempt at desorption.
Surface Residence Times
One property of an adsorbed molecule that is intimately related to the desorption kinetics is the surface residence time - this is the average time that a molecule will spend on the surface under a given set of conditions (in particular, for a specified surface temperature) before it desorbs into the gas phase.
For a first order process such as the desorption step of a molecularly adsorbed species:
$M_{(ads)} \rightarrow M_{(g)}$
the average time ($\tau$) prior to the process occurring is given by:
$\tau = \dfrac{1}{k_1} \label{4}$
where $k_1$ is the first order rate constant (no proof of this will be given here).
From equation 3 with $x=1$, we know that
$k_1 =\nu \exp ( -E_a^{des} / RT) \label{5}$
and if we also substitute for $E_a^{des}$ using the approximate relation $E_a^{des} \approx -ΔH_{ads}$ discussed in Section 2.4, then we get the following expression for the surface residence time
$\tau = \tau_o \exp ( -ΔH_{ads} / RT ) \label{6}$
with
• $\tau_o$ corresponds to the period of vibration ($\approx 1/\nu$) of the bond between the adsorbed molecule and the substrate, and is frequently taken to be about $10^{-13}$ s.
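As a quick numerical illustration of equation 6, the short Python sketch below evaluates the residence time at 300 K for two assumed adsorption enthalpies - one typical of weak (physisorptive) binding and one of strong (chemisorptive) binding. The numerical values are purely illustrative.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def residence_time(dH_ads, T, tau0=1e-13):
    """Surface residence time, tau = tau0 * exp(-dH_ads / RT)  (equation 6).

    dH_ads : adsorption enthalpy in J mol^-1 (negative for exothermic adsorption)
    T      : surface temperature in K
    tau0   : period of the adsorbate-substrate vibration, ~1e-13 s
    """
    return tau0 * math.exp(-dH_ads / (R * T))

# Assumed enthalpies, for illustration only: a weakly and a strongly bound species at 300 K
for dH in (-25e3, -150e3):  # J mol^-1
    print(f"dH_ads = {dH/1e3:.0f} kJ/mol  ->  tau = {residence_time(dH, 300):.2g} s")
```

Note how strongly the residence time depends on the adsorption enthalpy: the weakly bound species desorbs within nanoseconds, whilst the strongly bound species is effectively permanent at this temperature.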
A continuous monolayer of adsorbate molecules surrounding a homogeneous solid surface is the conceptual basis for this adsorption model. The Langmuir isotherm is formally equivalent to the Hill equation in biochemistry.
• 3.1: Introduction
Whenever a gas is in contact with a solid there will be an equilibrium established between the molecules in the gas phase and the corresponding adsorbed species which are bound to the surface of the solid. The position of equilibrium will depend upon a number of factors: (1) The relative stabilities of the adsorbed and gas phase species involved, (2) The temperature of the system (both the gas and surface, although these are normally the same) and (3) The pressure of the gas above the surface
• 3.2: Langmuir Isotherm - derivation from equilibrium considerations
We may derive the Langmuir isotherm by treating the adsorption process as we would any other equilibrium process - except in this case the equilibrium is between the gas phase molecules, together with vacant surface sites, and the species adsorbed on the surface.
• 3.3: Langmuir Isotherm from a Kinetics Consideration
The equilibrium that may exist between gas adsorbed on a surface and molecules in the gas phase is a dynamic state, i.e. the equilibrium represents a state in which the rate of adsorption of molecules onto the surface is exactly counterbalanced by the rate of desorption of molecules back into the gas phase. It should therefore be possible to derive an isotherm for the adsorption process simply by considering and equating the rates for these two processes.
• 3.4: Variation of Surface Coverage with Temperature and Pressure
Application of the assumptions of the Langmuir Isotherm leads to readily derivable expressions for the pressure dependence of the surface coverage.
• 3.5: Applications - Kinetics of Catalytic Reactions
It is possible to predict how the kinetics of certain heterogeneously-catalysed reactions might vary with the partial pressures of the reactant gases above the catalyst surface by using the Langmuir isotherm expression for equilibrium surface coverages.
03: The Langmuir Isotherm
Whenever a gas is in contact with a solid there will be an equilibrium established between the molecules in the gas phase and the corresponding adsorbed species (molecules or atoms) which are bound to the surface of the solid.
As with all chemical equilibria, the position of equilibrium will depend upon a number of factors:
1. The relative stabilities of the adsorbed and gas phase species involved
2. The temperature of the system (both the gas and surface, although these are normally the same)
3. The pressure of the gas above the surface
In general, factors (2) and (3) exert opposite effects on the concentration of adsorbed species - that is to say that the surface coverage may be increased by raising the gas pressure but will be reduced if the surface temperature is raised.
The Langmuir isotherm was developed by Irving Langmuir in 1916 to describe the dependence of the surface coverage of an adsorbed gas on the pressure of the gas above the surface at a fixed temperature. There are many other types of isotherm (Temkin, Freundlich ...) which differ in one or more of the assumptions made in deriving the expression for the surface coverage; in particular, on how they treat the surface coverage dependence of the enthalpy of adsorption. Whilst the Langmuir isotherm is one of the simplest, it still provides a useful insight into the pressure dependence of the extent of surface adsorption.
Note: Surface Coverage & the Langmuir Isotherm
When considering adsorption isotherms it is conventional to adopt a definition of surface coverage (θ) which defines the maximum (saturation) surface coverage of a particular adsorbate on a given surface always to be unity, i.e. \(θ_{max} = 1\). This way of defining the surface coverage differs from that usually adopted in surface science where the more common practice is to equate \(θ\) with the ratio of adsorbate species to surface substrate atoms (which leads to saturation coverages which are almost invariably less than unity).
We may derive the Langmuir isotherm by treating the adsorption process as we would any other equilibrium process - except in this case the equilibrium is between the gas phase molecules ($M$), together with vacant surface sites, and the species adsorbed on the surface. Thus, for a non-dissociative (molecular) adsorption process, we consider the adsorption to be represented by the following chemical equation:
$S - * + M_{(g)} \rightleftharpoons S - M \label{Eq1}$
where:
• $S - *$ represents a vacant surface site, and
• $S - M$ represents the adsorption complex (a molecule adsorbed at a surface site)
Assumption 1
In writing Equation $\ref{Eq1}$ we are making an inherent assumption that there are a fixed number of localized surface sites present on the surface. This is the first major assumption of the Langmuir isotherm.
We may now define an equilibrium constant ($K$) in terms of the concentrations of "reactants" and "products"
$K = \dfrac{[S-M]}{[S-*][M]}$
We may also note that:
• [ S - M ] is proportional to the surface coverage of adsorbed molecules, i.e. proportional to θ
• [ S - * ] is proportional to the number of vacant sites, i.e. proportional to (1-θ)
• [ M ] is proportional to the pressure of gas, P
Hence, it is also possible to define another equilibrium constant, b, as given below:
$b =\dfrac{\theta}{(1- \theta)P}$
Rearrangement then gives the following expression for the surface coverage
$\theta =\dfrac{b P}{1 + bP}$
which is the usual form of expressing the Langmuir Isotherm. As with all chemical reactions, the equilibrium constant, $b$, is both temperature-dependent and related to the Gibbs free energy and hence to the enthalpy change for the process.
Assumption 2
$b$ is only a constant (independent of $\theta$) if the enthalpy of adsorption is independent of coverage. This is the second major assumption of the Langmuir Isotherm.
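As a simple illustration of the isotherm, the Python sketch below evaluates the coverage at a few pressures; the value of $b$ is assumed purely for illustration.

```python
def langmuir_coverage(P, b):
    """Equilibrium surface coverage from the Langmuir isotherm: theta = bP / (1 + bP)."""
    return b * P / (1.0 + b * P)

# Assumed adsorption coefficient b = 1e3 Torr^-1 (illustrative only)
for P in (1e-6, 1e-3, 1e-2, 1e-1):  # pressures in Torr
    print(f"P = {P:g} Torr  ->  theta = {langmuir_coverage(P, 1e3):.3f}")
```

With this choice of $b$ the coverage is 0.5 at $P = 10^{-3}$ Torr (where $bP = 1$), rises towards saturation at higher pressures, and is simply proportional to pressure in the low pressure limit.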
3.03: Langmuir Isotherm from a Kinetics Consideration
The equilibrium that may exist between gas adsorbed on a surface and molecules in the gas phase is a dynamic state, i.e. the equilibrium represents a state in which the rate of adsorption of molecules onto the surface is exactly counterbalanced by the rate of desorption of molecules back into the gas phase. It should therefore be possible to derive an isotherm for the adsorption process simply by considering and equating the rates for these two processes.
Expressions for the rate of adsorption and rate of desorption have been derived in Sections 2.3 & 2.6 respectively: specifically,
$R_{ads}=\dfrac{f(\theta)P}{\sqrt{2\pi mkT}} \exp (-E_a^{ads}/RT)$
$R_{des} = \nu\; f'(\theta) \exp (-E_a^{des}/RT)$
Equating these two rates yields an equation of the form:
$\dfrac{P \; f(\theta)}{f'(\theta)} = C(T) \label{33Eq1}$
where $\theta$ is the fraction of sites occupied at equilibrium and the terms $f(\theta)$ and $f '(\theta)$ contain the pre-exponential surface coverage dependence of the rates of adsorption and desorption respectively and all other factors have been taken over to the right hand side to give a temperature-dependent "constant" characteristic of this particular adsorption process, $C(T)$. We now need to make certain simplifying assumptions. The first is one of the key assumptions of the Langmuir isotherm.
Note: Assumption 1
Adsorption takes place only at specific localized sites on the surface and the saturation coverage corresponds to complete occupancy of these sites.
Let us initially further restrict our consideration to a simple case of reversible molecular adsorption, i.e.
$S_- + \ce{M_{(g)}} \rightleftharpoons \ce{S-M} \label{33Eq2}$
where
• $S_-$ represents a vacant surface site and
• $\ce{S-M}$ the adsorption complex.
Under these circumstances it is reasonable to assume coverage dependencies for rates of the two processes of the form:
• Adsorption (forward reaction in Equation $\ref{33Eq2}$): $f (θ) = c (1-θ) \label{Eqabsorb}$ i.e. proportional to the fraction of sites that are unoccupied.
• Desorption (reverse reaction in Equation $\ref{33Eq2}$): $f '(θ) = c'θ \label{Eqdesorb}$ i.e. proportional to the fraction of sites which are occupied by adsorbed molecules.
Note
These coverage dependencies in Equations $\ref{Eqabsorb}$ and $\ref{Eqdesorb}$ are exactly what would be predicted by noting that the forward and reverse processes are elementary reaction steps, in which case it follows from standard chemical kinetic theory that
• The forward adsorption process will exhibit kinetics having a first order dependence on the concentration of vacant surface sites and first order dependence on the concentration of gas particles (proportional to pressure).
• The reverse desorption process will exhibit kinetics having a first order dependence on the concentration of adsorbed molecules.
Substitution of Equations $\ref{Eqabsorb}$ and $\ref{Eqdesorb}$ into Equation $\ref{33Eq1}$ yields:
$\dfrac{P(1-\theta)}{\theta}=B(T)$
where
$B(T) = \left(\dfrac{c'}{c}\right) C(T).$
After rearrangement this gives the Langmuir Isotherm expression for the surface coverage
$\theta = \dfrac{bP}{1+bP}$
where $b = 1/B(T)$ is a function of temperature and contains an exponential term of the form
$b \propto \exp [ ( E_a^{des} - E_a^{ads} ) / R T ] = \exp [ - ΔH_{ads} / R T ]$
with
$ΔH_{ads} = E_a^{ads} - E_a^{des}.$
Note: Assumption 2
$b$ can be regarded as a constant with respect to coverage only if the enthalpy of adsorption is itself independent of coverage; this is the second major assumption of the Langmuir Isotherm.
Application of the assumptions of the Langmuir Isotherm leads to readily derivable expressions for the pressure dependence of the surface coverage (see Sections 3.2 and 3.3) - in the case of a simple, reversible molecular adsorption process the expression is
$\theta = \dfrac{bP}{1+bP} \label{Eq1}$
where $b = b(T)$. This is illustrated in the graph below which shows the characteristic Langmuir variation of coverage with pressure for molecular adsorption.
Note two extremes in Equation $\ref{Eq1}$:
• At low pressures, $θ \approx bP$ (the coverage increases linearly with pressure)
• At high pressures, $θ \rightarrow 1$ (the coverage saturates)
At a given pressure the extent of adsorption is determined by the value of $b$, which is dependent upon both the temperature (T) and the enthalpy (heat) of adsorption. Remember that the magnitude of the adsorption enthalpy (a negative quantity itself) reflects the strength of binding of the adsorbate to the substrate.
The value of $b$ is increased by
1. a reduction in the system temperature
2. an increase in the strength of adsorption
Therefore the set of curves shown below illustrates the effect of either (i) increasing the magnitude of the adsorption enthalpy at a fixed temperature, or (ii) decreasing the temperature for a given adsorption system.
A given equilibrium surface coverage may be attainable at various combinations of pressure and temperature as highlighted below … note that as the temperature is lowered the pressure required to achieve a particular equilibrium surface coverage decreases.
- this is often used as justification for one of the main working assumptions of surface chemistry; specifically, that it is possible to study technologically-important (high pressure / high temperature) surface processes within the low pressure environment of typical surface analysis systems by working at low temperatures. It must be recognized, however, that at such low temperatures kinetic restrictions that are not present at higher temperatures may become important.
If you wish to see how the various factors relating to the adsorption and desorption of molecules influence the surface coverage, then try out the Interactive Demonstration of the Langmuir Isotherm (note - this is based on the derivation discussed previously).
Determination of Enthalpies of Adsorption
It has been shown in previous sections how the value of b is dependent upon the enthalpy of adsorption. It has also just been demonstrated how the value of b influences the pressure/temperature (P-T) dependence of the surface coverage. The implication of this is that it must be possible to determine the enthalpy of adsorption for a particular adsorbate/substrate system by studying the P-T dependence of the surface coverage. Various methods based upon this idea have been developed for the determination of adsorption enthalpies - one method is outlined below:
Step 1: Involves determination of a number of adsorption isotherms, where a single isotherm is a coverage / pressure curve at a fixed temperature (Figure 3.4.1).
Figure 3.4.1
Step 2: It is then possible to read off a number of pairs of values of pressure and temperature which yield the same surface coverage (Figure 3.4.2)
Figure 3.4.2:
Step 3: The Clausius-Clapeyron equation
$\left( \dfrac{\partial \ln P}{\partial \frac{1}{T}} \right)_{\theta} = \dfrac{\Delta H_{ads}}{R}$
may then be applied to this set of (P-T) data: a plot of (ln P) vs. (1/T) should give a straight line, the slope of which yields the adsorption enthalpy (Figure 3.4.3).
Figure 3.4.3:
The value obtained for the adsorption enthalpy is that pertaining at the surface coverage for which the P-T data was obtained, but steps 2 & 3 may be repeated for different surface coverages enabling the adsorption enthalpy to be determined over the whole range of coverages. This method is applicable only when the adsorption process is thermodynamically reversible.
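The fitting in steps 2 and 3 amounts to a simple linear regression of ln P against 1/T at constant coverage. The Python sketch below shows the arithmetic with hypothetical (P, T) pairs invented purely for illustration.

```python
import numpy as np

# Hypothetical (P, T) pairs that each give the same surface coverage (step 2)
P = np.array([1.0e-8, 1.0e-7, 8.0e-7, 5.0e-6])  # Torr (illustrative values only)
T = np.array([300.0, 320.0, 340.0, 360.0])      # K

# Step 3: the slope of ln P vs. 1/T equals dH_ads / R (Clausius-Clapeyron)
slope, intercept = np.polyfit(1.0 / T, np.log(P), 1)
dH_ads = slope * 8.314  # J mol^-1; comes out negative, as expected for adsorption
print(f"Adsorption enthalpy ~ {dH_ads / 1e3:.0f} kJ/mol")
```

Because a higher pressure is needed to maintain the same coverage at a higher temperature, the slope is negative and the extracted enthalpy is, correctly, a negative quantity.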
It is possible to predict how the kinetics of certain heterogeneously-catalyzed reactions might vary with the partial pressures of the reactant gases above the catalyst surface by using the Langmuir isotherm expression for equilibrium surface coverages.
Unimolecular Decomposition
Consider the surface decomposition of a molecule A, i.e. the process
$A_{(g)} \rightleftharpoons A_{(ads)} → Products$
Let us assume that:
1. The decomposition reaction occurs uniformly across the surface sites at which molecule A may be adsorbed and is not restricted to a limited number of special sites.
2. The products are very weakly bound to the surface and, once formed, are rapidly desorbed.
3. The rate determining step (rds) is the surface decomposition step.
Under these circumstances, the molecules of A adsorbed on the surface are in equilibrium with those in the gas phase and we may predict the surface concentration of A from the Langmuir isotherm, i.e.
$θ = \dfrac{bP}{1 + bP}$
The rate of the surface decomposition (and hence of the reaction) is given by an expression of the form
$rate = k θ$
This assumes that the decomposition of A(ads) occurs in a simple unimolecular elementary reaction step and that the kinetics are first order with respect to the surface concentration of this adsorbed intermediate. Substituting for the coverage, θ, gives us the required expression for the rate in terms of the pressure of gas above the surface
$rate = \dfrac{k b P}{1 + b P} \label{rate}$
It is useful to consider two extremes:
Low Pressure/Binding Limit
This is the low pressure (or weak binding, i.e. small $b$) limit: under these conditions the steady state surface coverage, $θ$, of the reactant molecule is very small. When
$b P \ll 1$
then
$1 + bP \approx 1$
and Equation $\ref{rate}$ can be simplified to
$rate \approx kbP$
Under this limiting case, the kinetics follow a first order reaction (with respect to the partial pressure of $A$) with an apparent first order rate constant $k' = kb$.
High Pressure/Binding Limit
This is the high pressure (or strong binding, i.e., large $b$) limit: under these conditions the steady state surface coverage, $θ$, of the reactant molecule is almost unity and
$bP \gg 1$
then
$1 + bP \approx bP$
and Equation $\ref{rate}$ can be simplified to
$rate \approx k$
Under this limiting case, the kinetics follow a zero order reaction (with respect to the partial pressure of $A$). The rate shows the same pressure variation as does the surface coverage, but this is hardly surprising since it is directly proportional to θ.
These two limiting cases can be identified in the general kinetics from Equation $\ref{rate}$ in Figure 3.5.1.
Figure 3.5.1:
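A few lines of Python make these two regimes explicit; the values of $k$ and $b$ are assumed (arbitrary units) purely for illustration.

```python
def decomposition_rate(P, k=1.0, b=100.0):
    """Rate of a unimolecular surface decomposition: rate = k*b*P / (1 + b*P)."""
    return k * b * P / (1.0 + b * P)

for P in (1e-5, 1e-4, 1e-1, 1.0, 10.0):
    print(f"P = {P:g}: rate = {decomposition_rate(P):.4g}")
# At low P the rate is ~ k*b*P (first order in P); at high P it approaches k (zero order).
```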
Bimolecular Reaction (between molecular adsorbates)
Consider a Langmuir-Hinshelwood reaction of the following type:
$A_{(g)} \rightleftharpoons A_{(ads)} \label{Eq2.1}$
$B_{(g)} \rightleftharpoons B_{(ads)} \label{Eq2.2}$
$A_{(ads)} + B_{(ads)} \overset{slow}{\longrightarrow} AB_{(ads)} \overset{fast}{\longrightarrow} AB_{(g)} \label{Eq2.3}$
We will further assume, as noted in the above scheme, that the surface reaction between the two adsorbed species (left side of Equation $\ref{Eq2.3}$) is the rate determining step.
If the two adsorbed molecules are mobile on the surface and freely intermix then the rate of the reaction will be given by the following rate expression for the bimolecular surface combination step
$Rate = k θ_A θ_B$
For a single molecular adsorbate the surface coverage (as given by the standard Langmuir isotherm) is
$θ = \dfrac{bP}{1 + bP}$
Where two molecules ($A$ & $B$ ) are competing for the same adsorption sites then the relevant expressions are (see derivation):
$\theta_A = \dfrac{b_AP_A}{1+b_AP_A + b_BP_B}$
and
$\theta_B = \dfrac{b_BP_B}{1+b_AP_A + b_BP_B}$
Substituting these into the rate expression gives:
$Rate = k \theta_A \theta_B = \dfrac{k b_AP_A b_BP_B }{( 1+b_AP_A + b_BP_B )^2}$
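The sketch below evaluates this rate expression in Python as the pressure of one reactant is raised at fixed pressure of the other; the rate constant and adsorption coefficients are assumed (set to unity) purely for illustration.

```python
def lh_rate(P_A, P_B, k=1.0, b_A=1.0, b_B=1.0):
    """Langmuir-Hinshelwood rate for two molecular adsorbates competing for the same sites."""
    denom = 1.0 + b_A * P_A + b_B * P_B
    theta_A = b_A * P_A / denom
    theta_B = b_B * P_B / denom
    return k * theta_A * theta_B

# Raising P_A at fixed P_B: the rate passes through a maximum, because
# adsorbed A eventually displaces B from the surface.
for P_A in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"P_A = {P_A:6g}: rate = {lh_rate(P_A, P_B=1.0):.4f}")
```

This non-monotonic behaviour reflects the limits examined next: the rate is first order in a reactant while its coverage is low, but becomes negative order once that reactant dominates the surface.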
Once again, it is interesting to look at several extreme limits
Low Pressure/Binding Limit
$b_A P_A \ll 1$
and
$b_B P_B \ll 1$
In this limit $θ_A$ and $θ_B$ are both very low, and
$rate → k b_A P_A b_B P_B = k' P_A P_B$
i.e. first order in both reactants
Mixed Pressure/Binding Limit
$b_A P_A \ll 1 \ll b_B P_B$
In this limit $θ_A → 0$, $θ_B → 1$, and
$Rate → \dfrac{k b_A P_A }{b_B P_B } = \dfrac{k' P_A}{P_B}$
i.e. first order in $A$, but negative first order in $B$
Clearly, depending upon the partial pressure and binding strength of the reactants, a given model for the reaction scheme can give rise to a variety of apparent kinetics: this highlights the dangers inherent in the reverse process - namely trying to use kinetic data to obtain information about the reaction mechanism.
Example 3.5.1: CO Oxidation Reaction
On precious metal surfaces (e.g. Pt), the $CO$ oxidation reaction is generally believed to proceed by a Langmuir-Hinshelwood mechanism of the following type:
$CO_{(g)} \rightleftharpoons CO_{(ads)}$
$O_{2 (g)} \rightleftharpoons 2 O_{(ads)}$
$CO_{(ads)} + O_{(ads)} \overset{slow}{\longrightarrow} CO_{2 (ads)} \overset{fast}{\longrightarrow} CO_{2 (g)}$
As CO2 is comparatively weakly-bound to the surface, the desorption of this product molecule is relatively fast and in many circumstances it is the surface reaction between the two adsorbed species that is the rate determining step.
If the two adsorbed molecules are assumed to be mobile on the surface and freely intermix then the rate of the reaction will be given by the following rate expression for the bimolecular surface combination step
$Rate = k \,θ_{CO}\, θ_O$
Where two such species (one of which is molecularly adsorbed, and the other dissociatively adsorbed) are competing for the same adsorption sites then the relevant expressions are (see derivation):
$\theta_{CO} = \dfrac{b_{CO}P_{CO}}{1+ \sqrt{b_OP_{O_2}} + b_{CO}P_{CO}}$
and
$\theta_{O} = \dfrac{ \sqrt{b_OP_{O_2}} }{1+ \sqrt{b_OP_{O_2}} + b_{CO}P_{CO}}$
Substituting these into the rate expression gives:
$rate = k \theta_{CO} \theta_O = \dfrac{ k b_{CO}P_{CO} \sqrt{b_OP_{O_2}} }{(1+ \sqrt{b_OP_{O_2}} + b_{CO}P_{CO})^2} \label{Ex1.1}$
Once again, it is interesting to look at certain limits. If the $CO$ is much more strongly bound to the surface such that
$b_{CO}P_{CO} \gg 1 + \sqrt{b_OP_{O_2}}$
then
$1 + \sqrt{b_OP_{O_2}} + b_{CO}P_{CO} \approx b_{CO}P_{CO}$
and the Equation $\ref{Ex1.1}$ simplifies to give
$rate \approx \dfrac{k \sqrt{b_OP_{O_2}} } {b_{CO}P_{CO}} = k' \dfrac{P^{1/2}_{O_2}}{P_{CO}}$
In this limit the kinetics are half-order with respect to the gas phase pressure of molecular oxygen, but negative order with respect to the $CO$ partial pressure, i.e. $CO$ acts as a poison (despite being a reactant) and increasing its pressure slows down the reaction. This is because the CO is so strongly bound to the surface that it blocks the adsorption of oxygen, and without sufficient oxygen atoms on the surface the rate of reaction is reduced.
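The CO self-poisoning predicted by Equation \ref{Ex1.1} is easy to demonstrate numerically; in the Python sketch below the rate constant and adsorption coefficients are assumed (set to unity) purely for illustration.

```python
import math

def co_oxidation_rate(P_CO, P_O2, k=1.0, b_CO=1.0, b_O=1.0):
    """LH rate for CO oxidation: molecular CO and dissociative O2 adsorption (Eq. Ex1.1)."""
    sqrt_O = math.sqrt(b_O * P_O2)
    denom = 1.0 + sqrt_O + b_CO * P_CO
    return k * b_CO * P_CO * sqrt_O / denom**2

# At fixed oxygen pressure, raising P_CO beyond the rate maximum *reduces* the rate:
for P_CO in (0.1, 1.0, 10.0, 100.0):
    print(f"P_CO = {P_CO:6g}: rate = {co_oxidation_rate(P_CO, P_O2=1.0):.4g}")
```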
• 4.1: What is UltraHigh Vacuum?
Vacuum technology has advanced considerably over the last 25 years and very low pressures are now routinely obtainable. While the mbar is often used as a unit of pressure for describing the level of vacuum, the most commonly employed unit is still the Torr (the SI unit, the Pa, is almost never used). Classification of the degree of vacuum is hardly an exact science - it much depends upon who you are talking to.
• 4.2: Why is UHV required for surface studies ?
Ultra high vacuum is required for most surface science experiments for two principal reasons : (1) To enable atomically clean surfaces to be prepared for study, and such surfaces to be maintained in a contamination-free state for the duration of the experiment. (2) To permit the use of low energy electron and ion-based experimental techniques without undue interference from gas phase scattering.
• 4.E: UHV and Effects of Gas Pressure (Exercises)
04: UHV and Effects of Gas Pressure
Vacuum technology has advanced considerably over the last 25 years and very low pressures are now routinely obtainable. Firstly, we need to remind ourselves of the units of pressure:
• The SI unit of pressure is the Pascal (1 Pa = 1 N m⁻²)
• Normal atmospheric pressure (1 atm) is 101325 Pa or 1013 mbar (1 bar = 10⁵ Pa).
• An older unit of pressure is the Torr (1 Torr = 1 mmHg). One atmosphere is ca. 760 Torr (i.e. 1 Torr = 133.3 Pa).
While the mbar is often used as a unit of pressure for describing the level of vacuum, the most commonly employed unit is still the Torr (the SI unit, the Pa, is almost never used). Classification of the degree of vacuum is hardly an exact science - it much depends upon who you are talking to - but as a rough guideline:
Rough (low) vacuum: 1 - 10⁻³ Torr
Medium vacuum: 10⁻³ - 10⁻⁵ Torr
High vacuum (HV): 10⁻⁶ - 10⁻⁸ Torr
Ultrahigh vacuum (UHV): < 10⁻⁹ Torr
Virtually all surface studies are carried out under UHV conditions - the question is why? This is the question that we will address in Section 4.2.
4.02: Why is UHV required for surface studies ?
Ultra high vacuum is required for most surface science experiments for two principal reasons:
1. To enable atomically clean surfaces to be prepared for study, and such surfaces to be maintained in a contamination-free state for the duration of the experiment.
2. To permit the use of low energy electron and ion-based experimental techniques without undue interference from gas phase scattering.
To put these points in context we shall now look at the variation of various parameters with pressure
Gas Density
The gas density $\rho$ is easily estimated from the ideal gas law:
$\rho = \dfrac{N}{V} = \dfrac{P}{k T} \label{1}$
with
• $\rho$ is the gas density [molecules m⁻³]
• $P$ is the pressure [N m⁻²]
• $k$ is the Boltzmann constant ($R/N_A = 1.38 \times 10^{-23}\; J\; K^{-1}$)
• $T$ is the absolute temperature [K]
Mean Free Path of Particles in the Gas Phase
The average distance that a particle (atom, electron, molecule ..) travels in the gas phase between collisions can be determined from a simple hard-sphere collision model - this quantity is known as the mean free path of the particle, here denoted by $\lambda$, and for neutral molecules is given by the equation:
$\lambda = \dfrac{kT}{\sqrt{2} P \sigma} \label{2}$
with
• $\lambda$ is the mean free path [m]
• P - pressure [N m⁻²]
• k - Boltzmann constant (= 1.38 × 10⁻²³ J K⁻¹)
• T - temperature [K]
• σ - collision cross section [m²]
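A short Python sketch of Equation \ref{2}, with the unit conversion from Torr made explicit; the collision cross section used (0.42 nm², the value quoted for CO in the exercises of Section 4.E) and the chosen pressures are assumptions for illustration.

```python
import math

def mean_free_path(P_torr, T=300.0, sigma=0.42e-18):
    """Mean free path, lambda = kT / (sqrt(2) * P * sigma)  (equation 2).

    P_torr : pressure in Torr (converted to Pa below)
    sigma  : collision cross section in m^2 (0.42 nm^2 = 0.42e-18 m^2, as for CO)
    """
    k = 1.38e-23        # Boltzmann constant, J K^-1
    P = P_torr * 133.3  # Torr -> Pa (N m^-2)
    return k * T / (math.sqrt(2.0) * P * sigma)

for P in (1e-4, 1e-9):  # Torr
    print(f"P = {P:g} Torr  ->  lambda = {mean_free_path(P):.3g} m")
```

At 10⁻⁴ Torr the mean free path is already of order a metre (comparable with the dimensions of a vacuum chamber), while at 10⁻⁹ Torr it is tens of kilometres.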
Incident Molecular Flux on Surfaces
One of the crucial factors in determining how long a surface can be maintained clean (or, alternatively, how long it takes to build-up a certain surface concentration of adsorbed species) is the number of gas molecules impacting on the surface from the gas phase. The incident flux is the number of incident molecules per unit time per unit area of surface.
The flux takes no account of the angle of incidence; it is merely a summation of all the arriving molecules over all possible incident angles.
For a given set of conditions (P,T etc.) the flux is readily calculated using a combination of the ideas of statistical physics, the ideal gas equation and the Maxwell-Boltzmann gas velocity distribution.
Step 1:
It can be readily shown that the incident flux, $F$, is related to the gas density above the surface by
$F = \frac{1}{4} n \overline{c} \label{3}$
with
• $n$ is the molecular gas density [molecules m⁻³]
• $F$ is the flux [molecules m⁻² s⁻¹]
• $\overline{c}$ is the average molecular speed [m s⁻¹]
Step 2: the molecular gas density is given by the ideal gas equation, namely
$n = \dfrac{N}{V}=\dfrac{P}{kT} \;\; (molecules \; m^{-3})$
Step 3: the mean molecular speed is obtained from the Maxwell-Boltzmann distribution of gas velocities by integration, yielding
$\bar{c} = \sqrt{\dfrac{8kT}{m \pi}} \;\;\; ( m\,s^{-1})$
where
• m - molecular mass [kg]
• k - Boltzmann constant (= 1.38 × 10⁻²³ J K⁻¹)
• T - temperature [K]
• π = 3.1416
Step 4: combining the equations from the first three steps gives the Hertz-Knudsen formula for the incident flux
$F = \dfrac{P}{\sqrt{2 \pi m k T}} \;\;\; [ molecules \;m^{-2} s^{-1}]$
Note
1. all quantities in the above equation need to be expressed in SI units
2. the molecular flux is directly proportional to the pressure
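A minimal Python sketch of the Hertz-Knudsen formula, with the unit conversions made explicit; the choice of gas (CO) and conditions are assumptions for illustration.

```python
import math

def incident_flux(P_torr, T, m_u):
    """Hertz-Knudsen flux, F = P / sqrt(2*pi*m*k*T)  [molecules m^-2 s^-1].

    P_torr : pressure in Torr; T : temperature in K; m_u : molecular mass in u.
    """
    k = 1.38e-23        # Boltzmann constant, J K^-1
    P = P_torr * 133.3  # Torr -> Pa
    m = m_u * 1.66e-27  # u -> kg
    return P / math.sqrt(2.0 * math.pi * m * k * T)

# e.g. CO (28 u) at 1e-6 Torr and 300 K:
print(f"F = {incident_flux(1e-6, 300.0, 28.0):.2g} molecules m^-2 s^-1")
```

For CO at 10⁻⁶ Torr and 300 K this gives a flux of roughly 4 × 10¹⁸ molecules m⁻² s⁻¹ - enough, as shown below, to deposit of the order of a monolayer every few seconds if every incident molecule sticks.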
Gas Exposure: The "Langmuir"
The gas exposure is a measure of the amount of gas to which a surface has been subjected. It is numerically quantified by taking the product of the pressure of the gas above the surface and the time of exposure (if the pressure is constant, or more generally by calculating the integral of pressure over the period of time of concern).
Although the exposure may be given in the SI units of Pa s (Pascal seconds), the normal and far more convenient unit for exposure is the Langmuir, where 1 L = 10⁻⁶ Torr s, i.e.
$(Exposure/L) = 10^6 \times (Pressure/Torr) \times (Time/s)$
Sticking Coefficient & Surface Coverage
The sticking coefficient, S, is a measure of the fraction of incident molecules which adsorb upon the surface i.e. it is a probability and lies in the range 0 - 1, where the limits correspond to no adsorption and complete adsorption of all incident molecules respectively. In general, S depends upon many variables i.e.
$S = f(\text{surface coverage, temperature, crystal face, ...})$
The surface coverage of an adsorbed species may itself, however, be specified in a number of ways:
1. as the number of adsorbed species per unit area of surface (e.g. in molecules cm⁻²).
2. as a fraction of the maximum attainable surface coverage, i.e. $\theta = \dfrac{\text{actual surface coverage}}{\text{saturation surface coverage}}$ - in which case θ lies in the range 0 - 1.
3. relative to the atom density in the topmost atomic layer of the substrate, i.e. $\theta=\frac{\text { No. of adsorbed species per unit area of surface }}{\text { No. of surface substrate atoms per unit area }}$ - in which case $θ_{max}$ is usually less than one, but can for an adsorbate such as H occasionally exceed one.
Note:
1. whichever definition is used, the surface coverage is normally denoted by the Greek letter $θ$
2. the second means of specifying the surface coverage is usually employed only for adsorption isotherms (e.g. the Langmuir isotherm). The third method is the most generally accepted way of defining the surface coverage.
3. a monolayer (1 ML) of adsorbate is taken to correspond to the maximum attainable surface concentration of adsorbed species bound to the substrate.
Example $1$:
How long will it take for a clean surface to become covered with a complete monolayer of adsorbate?
Solution
This is dependent upon the flux of gas phase molecules incident upon the surface, the actual coverage corresponding to the monolayer, and the coverage-dependent sticking probability, among other factors. However, it is possible to get a minimum estimate of the time required by assuming a unit sticking probability (i.e. S = 1) and noting that monolayer coverages are generally of the order of 10¹⁵ per cm² or 10¹⁹ per m². Then
Time/ML ~ (10¹⁹ / F) [s]
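In code, the estimate is a one-line division; the flux value below is that computed above for CO at 10⁻⁶ Torr and 300 K (an assumption for illustration).

```python
# Minimum monolayer formation time, assuming S = 1 and ~1e19 adsorption sites per m^2
F = 3.8e18                         # incident flux [molecules m^-2 s^-1], e.g. CO at 1e-6 Torr, 300 K
t_ML = 1e19 / F                    # time to accumulate one monolayer [s]
print(f"t(1 ML) ~ {t_ML:.1f} s")   # of order seconds at 1e-6 Torr
```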
Variation of Parameters with Pressure
All values given below are approximate and are generally dependent on factors such as temperature and molecular mass.
| Degree of Vacuum | Pressure (Torr) | Gas Density (molecules m⁻³) | Mean Free Path (m) | Time/ML (s) |
|---|---|---|---|---|
| Atmospheric | 760 | 2 × 10²⁵ | 7 × 10⁻⁸ | 10⁻⁹ |
| Low | 1 | 3 × 10²² | 5 × 10⁻⁵ | 10⁻⁶ |
| Medium | 10⁻³ | 3 × 10¹⁹ | 5 × 10⁻² | 10⁻³ |
| High | 10⁻⁶ | 3 × 10¹⁶ | 50 | 1 |
| UltraHigh | 10⁻¹⁰ | 3 × 10¹² | 5 × 10⁵ | 10⁴ |
We can therefore conclude that the following requirements exist for:
Collision Free Conditions
P < 10⁻⁴ Torr
Maintenance of a Clean Surface
P < 10⁻⁹ Torr
Summary
For most surface science experiments there are a number of factors necessitating a high vacuum environment:
1. For surface spectroscopy, the mean free path of probe and detected particles (ions, atoms, electrons) in the vacuum environment must be significantly greater than the dimensions of the apparatus in order that these particles may travel to the surface and from the surface to the detector without undergoing any interaction with residual gas phase molecules. This requires pressures better than 10⁻⁴ Torr. There are, however, some techniques, such as IR spectroscopy, which are "photon-in/photon-out" techniques and do not suffer from this requirement.
(On a practical level, it is also the case that the lifetime of channeltron and multiplier detectors used to detect charged particles is substantially reduced by operation at pressures above 10⁻⁶ Torr).
2. Most spectroscopic techniques are also capable of detecting molecules in the gas phase; in these cases it is preferable that the number of species present on the surface substantially exceeds those present in the gas phase immediately above the surface - to achieve a surface/gas phase discrimination of better than 10:1 when analysing ca. 1% of a monolayer on a flat surface, this requires that the gas phase concentration is less than ca. 10¹² molecules cm⁻³ (= 10¹⁸ molecules m⁻³), i.e. that the (partial) pressure is of the order of 10⁻⁴ Torr or lower.
3. To begin experiments with a reproducibly clean surface, and to ensure that significant contamination by background gases does not occur during an experiment, the background pressure must be such that the time required for contaminant build-up is substantially greater than that required to conduct the experiment, i.e. of the order of hours. The implication with regard to the required pressure depends upon the nature of the surface, but for the more reactive surfaces this necessitates the use of UHV (i.e. < 1 × 10⁻⁹ Torr).
It is clear therefore that it is the last factor that usually determines the need for a very good vacuum in order to carry out reliable surface science experiments.
This section provides a limited number of examples of the application of the formulae given in the previous section to determine the:
• Density of Molecules in the Gas Phase
• Mean Free Path of Molecules in the Gas Phase
• Flux of Molecules incident upon a Surface
• Rate of Adsorption of Molecules and Surface Coverages
If you have not already been through Section 4.2 then I would suggest that you stop now and return to this page only after you have done so !
Within any one of the following sub-sections, it will be assumed that you have already done the previous questions and may make use of the answers from these questions - you are therefore advised to work through the questions in the order they are presented.
A. Molecular Gas Densities
Calculate the molecular gas density for an ideal gas at 300 K, under the following conditions (giving your answer in molecules m⁻³):
At a pressure of 10⁻⁶ Torr
At a pressure of 10⁻⁹ Torr
B. Mean Free Path of Molecules in the Gas Phase
Calculate the mean free path of CO molecules in a vessel at the indicated pressure and temperature, using a value for the collision cross section of CO of 0.42 nm².
P = 10⁻⁴ Torr, at 300 K
P = 10⁻⁹ Torr, at 300 K
C. Fluxes of Molecules Incident upon a Surface
Calculate the flux of molecules incident upon a solid surface under the following conditions:
[Note - 1 u = 1.66 × 10⁻²⁷ kg; atomic masses: m(O) = 16.0 u, m(H) = 1.0 u]
Oxygen gas (P = 1 Torr) at 300 K
Oxygen gas (P = 10⁻⁶ Torr) at 300 K
Hydrogen gas (P = 10⁻⁶ Torr) at 300 K
Hydrogen gas (P = 10⁻⁶ Torr) at 1000 K
D. The Kinetically Limited Uptake of Molecules onto a Surface
The rate of adsorption of molecules onto a surface can be determined from the flux of molecules incident on the surface and the sticking probability pertaining at that instant in time (note that in general the sticking probability itself will be dependent upon a number of factors including the existing coverage of adsorbed species).
In the following examples we will assume that the surface is initially clean (i.e. the initial coverage is zero), and that there is no desorption of the molecules once they have adsorbed. You should determine coverages as the ratio of the adsorbate concentration to the density of surface substrate atoms (which you may assume to be 10¹⁹ m⁻²). In the first two questions we will assume that the sticking probability is constant over the coverage range concerned.
Calculate the surface coverage obtained after exposure to a pressure of 10⁻⁸ Torr of CO for 20 s at 300 K - you may take the sticking probability of CO on this surface to have a constant value of 0.9 up to the coverage concerned.
Calculate the surface coverage of atomic nitrogen obtained by dissociative adsorption after exposure to a pressure of 10⁻⁸ Torr of nitrogen gas for 20 s at 300 K - you may take the dissociative sticking probability of molecular nitrogen on this surface to be constant and equal to 0.1
In general, the sticking probability varies with coverage - most obviously, the sticking probability must tend to zero as the coverage approaches its saturation value. These calculations are not quite so easy!
Calculate the surface coverage obtained after exposure to a pressure of 10⁻⁸ Torr of CO for 200 s at 300 K - the sticking probability of CO in this case should be taken to vary linearly with coverage between a value of unity at zero coverage and a value of zero at saturation coverage (which you should take to be 6.5 × 10¹⁸ molecules m⁻²).
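For this last question the uptake equation can be integrated analytically: with S(N) = 1 - N/N_sat, the rate dN/dt = S(N)·F gives N(t) = N_sat·(1 - exp(-F·t/N_sat)). A Python sketch of the arithmetic (the flux value used is that for CO at 10⁻⁸ Torr and 300 K from the Hertz-Knudsen formula):

```python
import math

# Uptake with a coverage-dependent sticking probability S(N) = 1 - N/N_sat:
# dN/dt = S(N) * F  integrates to  N(t) = N_sat * (1 - exp(-F*t/N_sat)).
F     = 3.8e16   # incident CO flux at 1e-8 Torr, 300 K [molecules m^-2 s^-1]
N_sat = 6.5e18   # saturation coverage [molecules m^-2], as given in the question
t     = 200.0    # exposure time [s]

N = N_sat * (1.0 - math.exp(-F * t / N_sat))
theta = N / 1e19  # coverage relative to ~1e19 substrate atoms per m^2
print(f"N = {N:.2g} molecules m^-2, theta = {theta:.2f}")
```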
• 5.1: Surface Sensitivity and Surface Specificity
In its simplest form, the question of sensitivity boils down to whether it is possible to detect the desired signal above the noise level. Assuming that a technique of sufficient sensitivity can be found, another major problem that needs to be addressed in surface spectroscopy is distinguishing between signals from the surface and the bulk of the sample.
• 5.2: Auger Electron Spectroscopy
Auger Electron Spectroscopy (Auger spectroscopy or AES) was developed in the late 1960's, deriving its name from the effect first observed by Pierre Auger, a French Physicist, in the mid-1920's. It is a surface specific technique utilising the emission of low energy electrons in the Auger process and is one of the most commonly employed surface analytical techniques for determining the composition of the surface layers of a sample.
• 5.3: Photoelectron Spectroscopy
Photoelectron spectroscopy utilizes photo-ionization and analysis of the kinetic energy distribution of the emitted photoelectrons to study the composition and electronic state of the surface region of a sample. Traditionally, when the technique has been used for surface studies it has been subdivided according to the source of exciting radiation into X-ray Photoelectron Spectroscopy (XPS) and Ultraviolet Photoelectron Spectroscopy (UPS).
• 5.4: Vibrational Spectroscopy
Vibrational spectroscopy provides the most definitive means of identifying the surface species generated upon molecular adsorption and the species generated by surface reactions. In principle, any technique that can be used to obtain vibrational data from solid state or gas phase samples (IR, Raman etc.) can be applied to the study of surfaces - in addition there are a number of techniques which have been specifically developed to study the vibrations of molecules at interfaces.
• 5.5: Secondary Ion Mass Spectrometry
The technique of Secondary Ion Mass Spectrometry (SIMS) is the most sensitive of all the commonly-employed surface analytical techniques - capable of detecting impurity elements present in a surface layer at < 1 ppm concentration, and bulk concentrations of impurities of around 1 ppb (part-per-billion) in favorable cases. This is because of the inherent high sensitivity associated with mass spectrometric-based techniques.
• 5.6: Temperature-Programmed Techniques
There are a range of techniques for studying surface reactions and molecular adsorption on surfaces which utilise temperature-programming to discriminate between processes with different activation parameters. Of these, the most useful for single crystal studies is: Temperature Programmed Desorption (TPD)
05: Surface Analytical Techniques
General Sensitivity Problems
The problems of sensitivity and detection limits are common to all forms of spectroscopy; some techniques are simply better than others in this respect! In its simplest form, the question of sensitivity boils down to whether it is possible to detect the desired signal above the noise level.
In virtually all surface studies (especially those on single crystal substrates) sensitivity is a major problem. Consider the case of a sample with a surface of size 1 cm² - this will have ca. 10¹⁵ atoms in the surface layer. In order to detect the presence of impurity atoms present at the 1% level, a technique must be sensitive to ca. 10¹³ atoms. Contrast this with a spectroscopic technique used to analyze a 1 cm³ bulk liquid sample, i.e. a sample of ca. 10²² molecules. The detection of 10¹³ molecules in this sample would require 1 ppb (one part-per-billion) sensitivity - very few techniques can provide anything like this level of sensitivity! This is one reason why common spectroscopic techniques such as NMR (detection limit ca. 10¹⁹ molecules) cannot be used for surface studies, except on samples with very high surface areas.
Surface Sensitivity & Specificity
Assuming that a technique of sufficient sensitivity can be found, another major problem that needs to be addressed in surface spectroscopy is distinguishing between signals from the surface and the bulk of the sample. In principle, there are two ways around this problem:
1. To ensure that the surface signal is distinguishable (shifted) from the comparable bulk signal, and that the detection system has sufficient dynamic range to detect very small signals in the presence of neighboring large signals.
2. To ensure that the bulk signal is small compared to the surface signal i.e. that the vast majority of detected signal comes from the surface region of the sample.
It is the latter approach which is used by the majority of surface spectroscopic techniques - such techniques can then be said to be surface sensitive. To illustrate this, we shall look at one way in which surface sensitivity can be achieved - this makes use of the special properties of Low Energy Electrons. It is an approach employed in common surface spectroscopic techniques such as Auger Electron Spectroscopy (AES) and X-ray Photoelectron Spectroscopy (XPS).
We shall illustrate this approach by considering the latter technique (XPS) but you should remember that the ideas that we develop are more widely applicable.
XPS - A surface sensitive technique?
In answering this important question, we will actually address the topics listed below - you are strongly advised to work through them in the order in which they are given.
1. What do we really mean by surface sensitivity / specificity ?
2. How can we demonstrate that XPS is surface sensitive ?
3. Why is the XPS technique surface sensitive ?
4. How can we use knowledge of the IMFP to calculate the thickness of surface films ?
5. How can we change the degree of surface sensitivity ?
What do we really mean by surface sensitivity / specificity?
Most analytical techniques used in chemistry are "bulk" techniques in the sense that they measure all the atoms within a typical sample (be it a solid, liquid, solution or gas phase sample) - so if, for example, we were to analyse a sample consisting of a 10 nm thick layer of one material (A) deposited on a 1 mm thick substrate of another material (B) ....
… and to ask the question ….
What would be the concentration of A in parts per million (ppm) which you would expect to be revealed by a bulk analysis of such a sample ?
.... we would get the same result (10 ppm) as if we were looking at a sample in which component A was uniformly distributed throughout the bulk of B such as to give the same overall concentration in the sample as a whole.
By contrast, a surface sensitive technique is more sensitive to those atoms which are located near the surface than it is to atoms in the bulk which are well away from the surface (i.e. the main part of the signal comes from the surface region) - in the case of the first sample therefore a surface sensitive technique would produce a spectrum with enhanced signals due to component A, i.e.
$\dfrac{I_A}{I_B} \gg 10^{-5}$
where $I_A$ is the signal due to component A etc.
A truly surface specific technique should only give signals due to atoms in the surface region - but that, of course, rather depends how the "surface region" is defined!
Electron spectroscopic techniques such as XPS are not completely surface specific (although you will occasionally find this expression being used) in that whilst most of the signal comes from within a few atomic layers of the surface, a small part of the signal comes from much deeper into the solid - they are best described as being surface sensitive techniques. However, we should also note that there are a few techniques (e.g. low energy ion scattering [ISS/LEIS]) which are indeed virtually specific to the topmost layer of atoms.
How can we demonstrate that XPS is surface sensitive ?
Consider again the first of the two samples considered in the previous section - such samples are often readily prepared by depositing material A by evaporation onto a substrate of material B.
It is experimentally observed that in such a situation the XPS signals due to the substrate (material B) are rapidly attenuated, whilst those due to the condensed evaporant (material A) simultaneously increase to a limiting value.
Note
It is possible to measure deposition rates using equipment such as quartz crystal oscillators and thus obtain independent verification of the deposited film thickness. A detailed analysis of the variation of the signals due to A and B is also capable of yielding information about the mechanism of film growth.
Why is the XPS technique surface sensitive ?
The soft X-rays employed in XPS penetrate a substantial distance into the sample (~ µm). Thus this method of excitation imparts no surface sensitivity at the required atomic scale. The surface sensitivity must therefore arise from the emission and detection of the photoemitted electrons. Consider, therefore, a hypothetical experiment in which electrons of a given energy, $E_0$, are emitted from atoms in a solid at various depths, $d$, below a flat surface.
We shall assume that only those electrons which reach the surface and, at the time of leaving the solid, still have their initial energy ($E_0$) are detected in this experiment. This is not literally true in an XPS experiment, but in XPS it is only those photoelectrons possessing characteristic emission energies, and hence contributing to well-defined peaks, that we consider in the subsequent analysis; we do not consider photoelectrons which have lost some energy and so contribute to the background of the spectrum rather than to a specific peak.
There are two things which would therefore prevent an emitted electron from being detected:
1. If it were "captured" before reaching the surface, or emitted in the wrong direction so that it never reached the surface!
2. If it lost energy before reaching the surface, i.e. if it escaped from the solid and reached the detector, but with $E < E_0$.
The process by which an electron can lose energy as it travels through the solid is known as "inelastic scattering". Each inelastic scattering event leads to: (i) a reduction in the electron energy (ii) a change in the direction of travel.
In the diagram below arrows are used to represent possible trajectories of emitted electrons from a particular source atom. The orange arrows represent electrons travelling with their initial energy, Eo, whilst the blue arrows represent electron trajectories subsequent to an inelastic scattering event. Only the orange traces which extend out of the solid therefore correspond to electrons which would be detected in our hypothetical experiment.
In this case, therefore, only a small fraction of the emitted electrons would have been detected. Clearly if the source atom had been closer to the surface, then a greater fraction of the emitted electrons would have been detected since:
1. The detector would subtend a greater (solid) angle at the emitting atom (this is actually not a significant factor in real experiments).
2. For an electron emitted towards the surface, there would be less chance of inelastic scattering before it escaped from the solid.
So how does the probability of detection depend upon the distance of the emitting atom below the surface ?
Let us first simplify the problem by only considering those electrons emitted directly towards (normal to) the surface - in our hypothetical experiment we can represent this by a smaller detector located directly above the emitting atoms.
The probability of escape from a given depth, $P(d)$, is determined by the likelihood of the electron not being inelastically scattered during its journey to the surface.
Clearly, $P(d_2 ) < P(d_1 )$ but, in order to quantify this we need to know more about the inelastic scattering process - this brings us to the subject of the ..
Inelastic Mean Free Path (IMFP) of electrons.
The IMFP is a measure of the average distance travelled by an electron through a solid before it is inelastically scattered; it is dependent upon
• The initial kinetic energy of the electron.
• The nature of the solid (but most elements show very similar IMFP vs. energy relationships).
The IMFP is actually defined by the following equation which gives the probability of the electron travelling a distance, d, through the solid without undergoing scattering
$P(d) = \exp \left( \dfrac{-d}{\lambda} \right)$
where $\lambda$ is the IMFP for electrons of energy $E$. (Note: $\lambda = f(E)$; this inelastic mean free path, which relates to the movement of electrons within the solid, is completely unrelated to their mean free path in the gas phase once they escape from the solid.)
The graph below illustrates this functional form of $P(d)$ …
… and it is clear that the probability of escape decays very rapidly and is essentially zero for a distance d > 5λ . It is therefore useful to re-plot the function for a range of (d/λ) from 0-5 .
What is the probability of escape when d = 1λ?
What is the probability of escape when d = 2λ?
What is the probability of escape when d = 3λ?
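The three questions above follow directly from this exponential form; a minimal sketch evaluating $P(d) = \exp(-d/\lambda)$ at the depths in question:

```python
import math

# Probability that an electron escapes unscattered from depth d,
# given an inelastic mean free path lam: P(d) = exp(-d / lam)
def escape_probability(d_over_lambda):
    return math.exp(-d_over_lambda)

for n in (1, 2, 3):
    print(f"d = {n} lambda : P = {escape_probability(n):.3f}")
# d = 1 lambda : P = 0.368
# d = 2 lambda : P = 0.135
# d = 3 lambda : P = 0.050
```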
It is also interesting to look at this same problem from a different point of view. Let us assume that there are many sources of electrons uniformly distributed at all distances from the surface of the solid (which is in effect the situation pertaining to all XPS experiments) and again arrange to detect only those unscattered electrons which emerge normal to the surface - then we can ask ...
What is the distribution of the depths of origin of those electrons that we do detect ?
This new function, let's call it P'(d), actually has the same exponential form as P(d) - i.e. the frequency of detection of electrons from different depths in the solid is directly proportional to the probability of electron escape from each depth.
Exercise $1$
So if we allow our detector to collect 100 electrons, we can ask how many electrons (on average) will have come from within a distance of one IMFP from the surface (i.e. how many originate from a distance, $d$, such that $0 < (d/λ) < 1$)?
This is a relatively simple problem in integration - try it !
Answer
The fraction of detected electrons originating from within one IMFP of the surface is given by
$\dfrac{\int_0^{\lambda} e^{-d/\lambda}\,dd}{\int_0^{\infty} e^{-d/\lambda}\,dd} = 1 - e^{-1} \approx 0.63$
so, on average, about 63 of the 100 detected electrons will have come from within one IMFP of the surface.
In summary, we can conclude from the answer to this question that the majority (ca. 63%) of the electrons detected come from within one IMFP of the surface, and virtually all (> 95%) come from within 3 IMFPs of the surface.
Caution
Beware of terms such as "sampling depth " and "electron escape depth ". It is not always clear whether these refer to 1, 3 or 5 IMFPs (or something else completely).
So how big is the IMFP of typical electrons within a solid such as a metal ?
The graph below shows the so-called "universal curve" for the variation of the IMFP with initial electron energy (in eV ).
It is reasonably accurate for most metals, although experimental measurements on different metals do show a substantial scatter about the general curve.
Other classes of solids also exhibit qualitatively similar universal curves, although the absolute values of the IMFP of electrons in materials such as inorganic solids and polymers do differ quite significantly from those in metals.
This IMFP data is more usually displayed on a log-log plot, since this more clearly shows how the IMFP varies at low kinetic energies - the graph below shows the general curve for metals re-plotted in this new format.
The IMFP exhibits a minimum for electrons with a kinetic energy of around 50 - 100 eV; at lower energies the probability of inelastic scattering decreases, since the electron has insufficient energy to cause plasmon excitation (the main scattering mechanism in metals), and consequently the distance between inelastic collisions, and hence the IMFP, increases.
Summary
The Inelastic Mean Free Path (IMFP) in metals is typically less than:
• 10 Angstroms ( 1 nm ) for electron energies in the range 15 < E/eV < 350
• 20 Angstroms ( 2 nm ) for electron energies in the range 10 < E/eV < 1400
i.e. the IMFP of low energy electrons corresponds to only a few atomic layers. This means that any experimental technique such as XPS which involves the generation and detection of electrons of such energies will be surface sensitive.
How can we use knowledge of the IMFP to calculate the thickness of surface films ?
We can now consider a situation where a substrate (or thick film) of one material, B, is covered by a thin film of a different material, A . The XPS signal from the underlying substrate will be attenuated (i.e. reduced in intensity) due to inelastic scattering of some of the photoelectrons as they traverse through the layer of material A .
For any single photoelectron, the probability of the electron passing through this overlayer without being subject to a scattering event is given by:
$P = \exp \left( \dfrac{-t}{\lambda} \right)$
where $t$ is the thickness of the layer of material A .
It follows that the overall intensity of the XPS signal arising from B is reduced by this same factor, i.e. if the intensity of this signal in the absence of any covering layer is $I_o$, then the intensity $I$ in the presence of the overlayer is given by:
$I = I_o \exp \left( \dfrac{-t}{\lambda} \right)$
It also follows that it is possible to estimate the thickness of a deposited layer (using the above equation), provided the reduction in the substrate signal is known (i.e. if spectra are acquired before and after deposition of the covering film).
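Rearranging this attenuation equation gives $t = \lambda \ln(I_o/I)$. The sketch below applies this to purely illustrative numbers (a substrate signal halved by the film, and an assumed IMFP of 2 nm):

```python
import math

def overlayer_thickness(I0, I, imfp_nm):
    """Estimate overlayer thickness t (nm) from substrate signal attenuation:
    I = I0 * exp(-t / lambda)  =>  t = lambda * ln(I0 / I)."""
    return imfp_nm * math.log(I0 / I)

# Illustrative values only: substrate signal halved by the deposited film,
# with an assumed IMFP of 2 nm for the photoelectrons in the overlayer.
print(f"t = {overlayer_thickness(I0=100.0, I=50.0, imfp_nm=2.0):.2f} nm")   # ~1.39 nm
```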
How can we change the degree of surface sensitivity?
For a given technique such as XPS which relies on inelastic electron scattering to provide the surface sensitivity, we can consider how it might be possible to increase the surface sensitivity when analysing a particular substrate.
One approach would be to try and ensure that the electrons being detected have kinetic energies corresponding to the minimum in the universal curve for the IMFP. The problem with this approach is that
1. it requires us to be able to vary the X-ray radiation so that we can adjust the photon energy, hν, to give us the desired kinetic energy for the photoemitted electrons ; this is not generally possible with conventional laboratory x-ray sources (although tunable x-ray radiation is available at large synchrotron sources)
2. whilst it may be possible to adjust the kinetic energy of some particular photoelectrons to correspond to the minimum in the universal curve, this effect may not hold for all the other photoelectron peaks associated with the sample.
A much more generally useful approach is to collect photoelectrons at a more grazing emission angle.
The key point to recognise here is that it is actually the distance that the electrons have to travel through the solid (rather than the distance of the emitting atom below the surface) that is crucial in determining their probability of escape without inelastic scattering - only in the special case considered previously of normal emission are these two quantities the same.
Consider, for example, emission at some angle θ to the surface normal.
Once again 63% of the electrons will originate from atoms located such that the electrons have to travel a distance, d, which is less than 1λ through the solid (see above). Under these measurement conditions, however, all these atoms are located within a smaller distance, x, from the surface where
$x = d \cos θ$
So as the emission angle (θ) is increased, cos θ decreases, the analysed region becomes more surface localised and the surface sensitivity is increased.
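A minimal sketch of this geometric effect, tabulating the depth $x = \lambda \cos θ$ from within which ca. 63% of the detected signal originates (the chosen angles are purely illustrative):

```python
import math

imfp = 1.0   # one IMFP, in arbitrary units
for theta_deg in (0, 30, 60, 80):
    # x = d * cos(theta): depth below the surface for a path length of one IMFP
    x = imfp * math.cos(math.radians(theta_deg))
    print(f"theta = {theta_deg:2d} deg : 63% of signal from within {x:.2f} IMFP of the surface")
# theta = 0 deg gives 1.00; theta = 80 deg gives 0.17 - markedly more surface sensitive
```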
One example of how this effect can be put to good use is in the study (and diagnosis) of surface segregation. The graph below shows simulated data for the variation of the XPS signal intensities with emission angle for
• a random alloy of 10% of element A in element B, with no segregation (lower blue curve)
• the same random alloy, but with surface segregation of A to give a surface monolayer of pure element A (upper red curve)
5.02: Auger Electron Spectroscopy
Auger Electron Spectroscopy (Auger spectroscopy or AES) was developed in the late 1960's, deriving its name from the effect first observed by Pierre Auger, a French Physicist, in the mid-1920's. It is a surface specific technique utilizing the emission of low energy electrons in the Auger process and is one of the most commonly employed surface analytical techniques for determining the composition of the surface layers of a sample.
Auger spectroscopy can be considered as involving three basic steps:
1. Atomic ionization (by removal of a core electron)
2. Electron emission (the Auger process)
3. Analysis of the emitted Auger electrons
The last stage is simply a technical problem of detecting charged particles with high sensitivity, with the additional requirement that the kinetic energies of the emitted electrons must be determined. We will therefore concern ourselves only with the first two processes - before starting, however, it is useful to briefly review the electronic structure of atoms and solids, and associated nomenclature.
Electronic Structure - Isolated Atoms
The diagram below schematically illustrates the energies of the various electron energy levels in an isolated, multi-electron atom, with the conventional chemical nomenclature for these orbitals given on the right hand side.
However, scientists working with x-rays tend to use the alternative nomenclature on the left and it is this nomenclature that is used in Auger spectroscopy. The designation of levels to the K,L,M,… shells is based on their having principal quantum numbers of 1,2,3,… respectively. It is convenient to expand the part of the energy scale close to the vacuum level in order to more clearly distinguish between the higher levels ....
The numerical component of the KLM.. style of nomenclature is usually written as a subscript immediately following the main shell designation. Levels with a non-zero value of the orbital angular momentum quantum number ( l > 0), i.e. p,d,f,.. levels, show spin-orbit splitting. The magnitude of this splitting, however, is too small to be evident on this diagram - hence, the double subscript for these levels (i.e. L2,3 represents both the L2 and L3 levels).
Electronic Structure - Solid State
In the solid state the core levels of atoms are little perturbed and essentially remain as discrete, localized (i.e. atomic-like) levels. The valence orbitals, however, overlap significantly with those of neighboring atoms, generating bands of spatially-delocalised energy levels. The energy level diagram for the solid therefore closely resembles that of the corresponding isolated atom, except for the levels closest to the vacuum level. The diagram below shows the electronic structure of Na metal:
The Auger Process & Auger Spectroscopy
Now let us return to the subject of Auger spectroscopy - in the following discussion, the Auger process is illustrated using the K, L1 & L2,3 levels. These could be the inner core levels of an atom in either a molecular or solid-state environment.
Step I: Ionization
The Auger process is initiated by creation of a core hole - this is typically carried out by exposing the sample to a beam of high energy electrons (typically having a primary energy in the range 2 - 10 keV). Such electrons have sufficient energy to ionise all levels of the lighter elements, and higher core levels of the heavier elements.
In the diagram above, ionisation is shown to occur by removal of a K-shell electron, but in practice such a crude method of ionisation will lead to ions with holes in a variety of inner shell levels.
In some studies, the initial ionisation process is instead carried out using soft x-rays ( hν = 1000 - 2000 eV ). In this case, the acronym XAES is sometimes used. As we shall see, however, this change in the method of ionisation has no significant effect on the final Auger spectrum.
Step II: Relaxation & Auger Emission
The ionized atom that remains after the removal of the core hole electron is, of course, in a highly excited state and will rapidly relax back to a lower energy state by one of two routes: X-ray fluorescence or Auger emission. We will only consider the latter mechanism, an example of which is illustrated schematically below ....
In this example, one electron falls from a higher level to fill an initial core hole in the K-shell, and the energy liberated in this process is simultaneously transferred to a second electron; a fraction of this energy is required to overcome the binding energy of this second electron, and the remainder is retained by this emitted Auger electron as kinetic energy. In the Auger process illustrated, the final state is a doubly-ionized atom with core holes in the L1 and L2,3 shells.
We can make a rough estimate of the KE of the Auger electron from the binding energies of the various levels involved. In this particular example,
$KE = ( E_K - E_{L_1} ) - E_{L_{2,3}} \label{eq1}$
[Why is this answer not likely to be very accurate?]
Note
The KE of the Auger electron is independent of the mechanism of initial core hole formation.
Equation \ref{eq1} can also be re-written in the form:
$KE = E_K - ( E_{L_1} + E_{L_{2,3}} )$
It should be clear from this expression that the latter two energy terms could be interchanged without any effect - i.e. it is actually impossible to say which electron fills the initial core hole and which is ejected as an Auger electron; they are indistinguishable.
An Auger transition is therefore characterized primarily by:
1. the location of the initial hole
2. the location of the final two holes
although the existence of different electronic states (terms) of the final doubly-ionized atom may lead to fine structure in high resolution spectra.
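As a rough numerical illustration of Equation \ref{eq1}, the sketch below estimates the KE of an oxygen $KL_1L_{2,3}$ Auger electron; the binding energies used are approximate literature values quoted for illustration only, and (as the question above hints) such one-electron estimates neglect the fact that the final state is doubly ionized.

```python
def auger_kinetic_energy(E_K, E_L1, E_L23):
    """Rough Auger KE estimate: KE = (E_K - E_L1) - E_L23,
    using one-electron binding energies of the neutral atom."""
    return (E_K - E_L1) - E_L23

# Approximate binding energies for oxygen, in eV (illustrative values only):
E_K, E_L1, E_L23 = 543.0, 41.6, 13.6
print(f"Estimated O KL1L2,3 Auger KE: {auger_kinetic_energy(E_K, E_L1, E_L23):.0f} eV")
# ~488 eV; measured Auger energies differ somewhat because the neutral-atom
# binding energies ignore relaxation of the doubly-ionized final state.
```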
Step III: Analysis and Interpretation
When describing the transition, the initial hole location is given first, followed by the locations of the final two holes in order of decreasing binding energy, i.e. the transition illustrated is a $KL_1L_{2,3}$ transition. If we just consider these three electronic levels there are clearly several possible Auger transitions: specifically,
$KL_1L_1$, $KL_1L_{2,3}$, and $KL_{2,3}L_{2,3}$
In general, since the initial ionization is non-selective and the initial hole may therefore be in various shells, there will be many possible Auger transitions for a given element - some weak, some strong in intensity. AUGER SPECTROSCOPY is based upon the measurement of the kinetic energies of the emitted electrons. Each element in a sample being studied will give rise to a characteristic spectrum of peaks at various kinetic energies.
This is an Auger spectrum of Pd metal - generated using a 2.5 keV electron beam to produce the initial core vacancies and hence to stimulate the Auger emission process. The main peaks for palladium occur between 220 & 340 eV. The peaks are situated on a high background which arises from the vast number of so-called secondary electrons generated by a multitude of inelastic scattering processes.
Auger spectra are also often shown in a differentiated form: the reasons for this are partly historical, partly because it is possible to actually measure spectra directly in this form and by doing so get a better sensitivity for detection. The plot below shows the same spectrum in such a differentiated form.
Summary
Auger Electron Spectroscopy (AES) is a surface-sensitive spectroscopic technique used for elemental analysis of surfaces; it offers
• high sensitivity (typically ca. 1% monolayer) for all elements except H and He.
• a means of monitoring surface cleanliness of samples
• quantitative compositional analysis of the surface region of specimens, by comparison with standard samples of known composition.
In addition, the basic technique has also been adapted for use in:
1. Auger Depth Profiling: providing quantitative compositional information as a function of depth below the surface
2. Scanning Auger Microscopy (SAM): providing spatially-resolved compositional information on heterogeneous samples
5.03: Photoelectron Spectroscopy
Photoelectron spectroscopy utilizes photo-ionization and analysis of the kinetic energy distribution of the emitted photoelectrons to study the composition and electronic state of the surface region of a sample. Traditionally, when the technique has been used for surface studies it has been subdivided according to the source of exciting radiation into:
• X-ray Photoelectron Spectroscopy (XPS): using soft x-rays (with a photon energy of 200-2000 eV) to examine core-levels.
• Ultraviolet Photoelectron Spectroscopy (UPS): using vacuum UV radiation (with a photon energy of 10-45 eV) to examine valence levels.
The development of synchrotron radiation sources has enabled high resolution studies to be carried out with radiation spanning a much wider and more complete energy range ( 5 - 5000+ eV ) but such work remains a small minority of all photoelectron studies due to the expense, complexity and limited availability of such sources.
Physical Principles
Photoelectron spectroscopy is based upon a single photon in/electron out process and from many viewpoints this underlying process is a much simpler phenomenon than the Auger process. The energy of a photon of all types of electromagnetic radiation is given by the Einstein relation:
$E = h \nu \label{5.3.1}$
where $h$ is the Planck constant ($6.626 \times 10^{-34}$ J s) and $\nu$ is the frequency (Hz) of the radiation.
Photoelectron spectroscopy uses monochromatic sources of radiation (i.e. photons of fixed energy). In XPS, the photon is absorbed by an atom in a molecule or solid, leading to ionization and the emission of a core (inner shell) electron. By contrast, in UPS the photon interacts with valence levels of the molecule or solid, leading to ionization by removal of one of these valence electrons. The kinetic energy distribution of the emitted photoelectrons (i.e. the number of emitted photoelectrons as a function of their kinetic energy) can be measured using any appropriate electron energy analyzer and a photoelectron spectrum can thus be recorded. The process of photoionization can be considered in several ways: one way is to look at the overall process as follows:
$A + hν \rightarrow A^+ + e^- \label{5.3.2}$
Conservation of energy then requires that:
$E(A) + h\nu = E(A^+ ) + E(e^-) \label{5.3.3}$
Since the electron's energy is present solely as kinetic energy (KE) this can be rearranged to give the following expression for the KE of the photoelectron:
$KE = h\nu - ( E(A^+ ) - E(A) )\label{5.3.4}$
The final term in brackets, representing the difference in energy between the ionized and neutral atoms, is generally called the binding energy (BE) of the electron - this then leads to the following commonly quoted equation:
$KE = h\nu - BE \label{5.3.5}$
An alternative approach is to consider a one-electron model along the lines of the following pictorial representation; this model of the process has the benefit of simplicity but it can be rather misleading.
The BE is now taken to be a direct measure of the energy required to just remove the electron concerned from its initial level to the vacuum level and the KE of the photoelectron is again given by:
$KE = hν - BE \label{5.3.6}$
The binding energies (BE) of energy levels in solids are conventionally measured with respect to the Fermi level of the solid, rather than the vacuum level. This involves a small correction to Equation \ref{5.3.6} to account for the work function ($φ$) of the solid. For the purposes of the discussion below this correction will be neglected.
Experimental Details
The basic requirements for a photoemission experiment (XPS or UPS) are:
1. a source of fixed-energy radiation (an x-ray source for XPS or, typically, a He discharge lamp for UPS)
2. an electron energy analyser (which can disperse the emitted electrons according to their kinetic energy, and thereby measure the flux of emitted electrons of a particular energy)
3. a high vacuum environment (to enable the emitted photoelectrons to be analyzed without interference from gas phase collisions)
Such a system is illustrated schematically below:
There are many different designs of electron energy analyzer but the preferred option for photoemission experiments is a concentric hemispherical analyser (CHA) which uses an electric field between two hemispherical surfaces to disperse the electrons according to their kinetic energy.
X-ray Photoelectron Spectroscopy (XPS)
For each and every element, there will be a characteristic binding energy associated with each core atomic orbital i.e. each element will give rise to a characteristic set of peaks in the photoelectron spectrum at kinetic energies determined by the photon energy and the respective binding energies. The presence of peaks at particular energies therefore indicates the presence of a specific element in the sample under study - furthermore, the intensity of the peaks is related to the concentration of the element within the sampled region. Thus, the technique provides a quantitative analysis of the surface composition and is sometimes known by the alternative acronym, ESCA (Electron Spectroscopy for Chemical Analysis). The most commonly employed x-ray sources are those giving rise to:
• Mg Kα radiation: hν = 1253.6 eV
• Al Kα radiation: hν = 1486.6 eV
The emitted photoelectrons will therefore have kinetic energies in the range of ca. 0 - 1250 eV or 0 - 1480 eV. Since such electrons have very short IMFPs in solids (Section 5.1), the technique is necessarily surface sensitive.
Example $1$: The XPS spectrum of Pd metal
The diagram below shows a real XPS spectrum obtained from a Pd metal sample using Mg Kα radiation
- the main peaks occur at kinetic energies of ca. 330, 690, 720, 910 and 920 eV.
Since the photon energy of the radiation is always known it is a trivial matter to transform the spectrum so that it is plotted against BE as opposed to KE.
The most intense peak is now seen to occur at a binding energy of ca. 335 eV
Working downwards from the highest energy levels ......
1. the valence band (4d, 5s) emission occurs at a binding energy of ca. 0 - 8 eV ( measured with respect to the Fermi level, or alternatively at ca. 4 - 12 eV if measured with respect to the vacuum level ).
2. the emission from the 4p and 4s levels gives rise to very weak peaks at 54 eV and 88 eV respectively
3. the most intense peak at ca. 335 eV is due to emission from the 3d levels of the Pd atoms, whilst the 3p and 3s levels give rise to the peaks at ca. 534/561 eV and 673 eV respectively.
4. the remaining peak is not an XPS peak at all ! - it is an Auger peak arising from x-ray induced Auger emission. It occurs at a kinetic energy of ca. 330 eV (in this case it is really meaningless to refer to an associated binding energy).
These assignments are summarized below ...
It may be further noted that
• there are significant differences in the natural widths of the various photoemission peaks
• the peak intensities are not simply related to the electron occupancy of the orbitals
Exercise $1$: The XPS Spectrum of NaCl
The diagram opposite shows an energy level diagram for sodium with approximate binding energies for the core levels.
If we are using Mg Kα ( hν = 1253.6 eV ) radiation ...
... at what kinetic energy will the Na 1s photoelectron peak be observed ?
(the 1s peak is that resulting from photoionisation of the 1s level)
... at what kinetic energy will the Na 2s and 2p photoelectron peaks be observed ?
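A sketch of the working for this exercise, using approximate literature binding energies (ca. 1072 eV for Na 1s, 63 eV for Na 2s and 31 eV for Na 2p) in place of the values read from the diagram:

```python
h_nu = 1253.6   # Mg K-alpha photon energy, eV

# Approximate Na core-level binding energies in eV - illustrative values;
# in practice use the values given in the energy level diagram.
binding_energies = {"Na 1s": 1072.0, "Na 2s": 63.0, "Na 2p": 31.0}

for level, BE in binding_energies.items():
    print(f"{level}: KE = h_nu - BE = {h_nu - BE:.1f} eV")
# Na 1s: ~182 eV,  Na 2s: ~1191 eV,  Na 2p: ~1223 eV
```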
Spin-Orbit Splitting
Closer inspection of the spectrum shows that emission from some levels (most obviously 3p and 3d ) does not give rise to a single photoemission peak, but a closely spaced doublet. We can see this more clearly if, for example, we expand the spectrum in the region of the 3d emission ...
The 3d photoemission is in fact split between two peaks, one at 334.9 eV BE and the other at 340.2 eV BE, with an intensity ratio of 3:2 . This arises from spin-orbit coupling effects in the final state. The inner core electronic configuration of the initial state of the Pd is:
$(1s)^2 (2s)^2 (2p)^6 (3s)^2 (3p)^6 (3d)^{10}$ ....
with all sub-shells completely full.
The removal of an electron from the 3d sub-shell by photo-ionization leads to a (3d)9 configuration for the final state - since the d-orbitals ( l = 2) have non-zero orbital angular momentum, there will be coupling between the unpaired spin and orbital angular momenta. Spin-orbit coupling is generally treated using one of two models which correspond to the two limiting ways in which the coupling can occur - these being the LS (or Russell-Saunders) coupling approximation and the j-j coupling approximation. If we consider the final ionised state of Pd within the Russell-Saunders coupling approximation, the (3d)9 configuration gives rise to two states (ignoring any coupling with valence levels) which differ slightly in energy and in their degeneracy ...
• $^2D_{5/2}$ : $g_J = 2 \times \tfrac{5}{2} + 1 = 6$
• $^2D_{3/2}$ : $g_J = 2 \times \tfrac{3}{2} + 1 = 4$
These two states arise from the coupling of the L = 2 and S = 1/2 vectors to give permitted J values of 3/2 and 5/2. The lowest energy final state is the one with maximum J (since the shell is more than half-full), i.e. J = 5/2, hence this gives rise to the "lower binding energy" peak. The relative intensities of the two peaks reflects the degeneracies of the final states (gJ = 2J + 1), which in turn determines the probability of transition to such a state during photoionization.
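The observed 3:2 intensity ratio follows directly from these degeneracies; a minimal check:

```python
from fractions import Fraction

def degeneracy(J):
    """g_J = 2J + 1 for a state of total angular momentum J."""
    return 2 * J + 1

g_52 = degeneracy(Fraction(5, 2))   # 2D_5/2 final state
g_32 = degeneracy(Fraction(3, 2))   # 2D_3/2 final state
print(g_52, g_32, g_52 / g_32)      # 6 4 3/2 -> the observed 3:2 ratio
```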
The Russell-Saunders coupling approximation is best applied only to light atoms and this splitting can alternatively be described using individual electron l-s coupling. In this case the resultant angular momenta arise from the single hole in the d-shell; a d-shell electron (or hole) has l = 2 and s = 1/2, which again gives permitted j-values of 3/2 and 5/2 with the latter being lower in energy.
The peaks themselves are conventionally annotated as indicated - note the use of lower case lettering
This spin-orbit splitting is of course not evident with s-levels (l = 0), but is seen with p,d & f core-levels which all show characteristic spin-orbit doublets.
Chemical Shifts
The exact binding energy of an electron depends not only upon the level from which photoemission is occurring, but also upon both the formal oxidation state of the atom and the local chemical and physical environment. Changes in either give rise to small shifts in the peak positions in the spectrum - so-called chemical shifts. Such shifts are readily observable and interpretable in XP spectra (unlike in Auger spectra) because the technique is of high intrinsic resolution (as core levels are discrete and generally of a well-defined energy) and is a one electron process (thus simplifying the interpretation).
Atoms of a higher positive oxidation state exhibit a higher binding energy due to the extra coulombic interaction between the photo-emitted electron and the ion core. This ability to discriminate between different oxidation states and chemical environments is one of the major strengths of the XPS technique. In practice, the ability to resolve between atoms exhibiting slightly different chemical shifts is limited by the peak widths which are governed by a combination of factors; especially
• the intrinsic width of the initial level and the lifetime of the final state
• the line-width of the incident radiation - which for traditional x-ray sources can only be improved by using x-ray monochromators
• the resolving power of the electron-energy analyser
In most cases, the second factor is the major contribution to the overall line width.
Example $2$: Oxidation States of Titanium
Titanium exhibits very large chemical shifts between different oxidation states of the metal; in the diagram below a Ti 2p spectrum from the pure metal (Ti$^0$) is compared with a spectrum of titanium dioxide (Ti$^{4+}$).
Oxidation states of titanium [Ti 2p spectra].
Note
1. the two spin-orbit components exhibit the same chemical shift (∼ 4.6 eV);
2. metals are often characterised by an asymmetric line shape, with the peak tailing to higher binding energy, whilst insulating oxides give rise to a more symmetric peak profile;
3. the weak peak at ca. 450.7 eV in the lower spectrum arises because typical x-ray sources also emit some x-rays of a slightly higher photon energy than the main Mg Kα line; this satellite peak is a "ghost" of the main 2p$_{3/2}$ peak arising from ionisation by these additional x-rays.
Angle Dependent Studies
As described previously, the degree of surface sensitivity of an electron-based technique such as XPS may be varied by collecting photoelectrons emitted at different emission angles to the surface plane. This approach may be used to perform non-destructive analysis of the variation of surface composition with depth (with chemical state specificity).
Angle-Dependent Analysis of a Silicon Sample with an Oxide Surface Layer
A series of Si 2p photoelectron spectra recorded for emission angles of 10-90º to the surface plane. Note how the Si 2p peak of the oxide (BE ~ 103 eV) increases markedly in intensity at grazing emission angles whilst the peak from the underlying elemental silicon (BE ~ 99 eV) dominates the spectrum at near-normal emission angles.
(courtesy of Physical Electronics, Inc. (PHI))
Note: in this instance the emission angle is measured with respect to the surface plane (i.e. 90º corresponds to photoelectrons departing with a trajectory normal to the surface, whilst 10º corresponds to emission at a very grazing angle).
Ultraviolet Photoelectron Spectroscopy (UPS)
In UPS the source of radiation is normally a noble gas discharge lamp; frequently a He-discharge lamp emitting He I radiation of energy 21.2 eV. Such radiation is only capable of ionizing electrons from the outermost levels of atoms - the valence levels. The advantage of using such UV radiation over x-rays is the very narrow line width of the radiation and the high flux of photons available from simple discharge sources. The main emphasis of work using UPS has been in studying:
1. the electronic structure of solids - detailed angle resolved studies permit the complete band structure to be mapped out in k-space.
2. the adsorption of relatively simple molecules on metals - by comparison of the molecular orbitals of the adsorbed species with those of both the isolated molecule and with calculations.
5.04: Vibrational Spectroscopy
Vibrational spectroscopy provides the most definitive means of identifying the surface species generated upon molecular adsorption and the species generated by surface reactions. In principle, any technique that can be used to obtain vibrational data from solid state or gas phase samples (IR, Raman etc.) can be applied to the study of surfaces - in addition there are a number of techniques which have been specifically developed to study the vibrations of molecules at interfaces (EELS, SFG etc.).
There are, however, only two techniques that are routinely used for vibrational studies of molecules on surfaces - these are:
1. IR Spectroscopy (of various forms, e.g. RAIRS, MIR)
2. Electron Energy Loss Spectroscopy ( EELS )
IR Spectroscopy
There are a number of ways in which the IR technique may be implemented for the study of adsorbates on surfaces. For solid samples possessing a high surface area:
• Transmission IR Spectroscopy: employing the same basic experimental geometry as that used for liquid samples and mulls. This is often used for studies on supported metal catalysts where the large metallic surface area permits a high concentration of adsorbed species to be sampled. The solid sample must, of course, be IR transparent over an appreciable wavelength range.
• Diffuse Reflectance IR Spectroscopy ( DRIFTS ): in which the diffusely scattered IR radiation from a sample is collected, refocused and analysed. This modification of the IR technique can be employed with high surface area catalytic samples that are not sufficiently transparent to be studied in transmission.
For studies on low surface area samples (e.g. single crystals):
• Reflection-Absorption IR Spectroscopy ( RAIRS ): where the IR beam is specularly reflected from the front face of a highly-reflective sample, such as a metal single crystal surface.
• Multiple Internal Reflection Spectroscopy ( MIR ): in which the IR beam is passed through a thin, IR transmitting sample in a manner such that it alternately undergoes total internal reflection from the front and rear faces of the sample. At each reflection, some of the IR radiation may be absorbed by species adsorbed on the solid surface - hence the alternative name of Attenuated Total Reflection (ATR).
RAIRS - the Study of Adsorbates on Metallic Surfaces by Reflection IR Spectroscopy
It can be shown theoretically that the best sensitivity for IR measurements on metallic surfaces is obtained using a grazing-incidence reflection of the IR light.
Furthermore, since it is an optical (photon in/photon out) technique it is not necessary for such studies to be carried out in vacuum. The technique is not inherently surface-specific, but
• there is no bulk signal to worry about
• the surface signal is readily distinguishable from gas-phase absorptions using polarization effects.
One major problem is that of sensitivity (i.e. the signal is usually very weak owing to the small number of adsorbed molecules). Typically, the sampled area is ca. 1 cm² with less than $10^{15}$ adsorbed molecules (i.e. about 1 nanomole). With modern FTIR spectrometers, however, such small signals (0.01% - 2% absorption) can still be recorded at relatively high resolution (ca. 1 cm⁻¹).
For a number of practical reasons, low frequency modes (< 600 cm⁻¹) are not generally observable - this means that it is not usually possible to see the vibration of the metal-adsorbate bond, and attention is instead concentrated on the intrinsic vibrations of the adsorbate species in the range 600 - 3600 cm⁻¹.
Selection Rules
The observation of vibrational modes of adsorbates on metallic substrates is subject to the surface dipole selection rule. This states that only those vibrational modes which give rise to an oscillating dipole perpendicular (normal) to the surface are IR active and give rise to an observable absorption band.
Further information on the selection rules for surface IR spectroscopy can be found in the review by Sheppard & Erkelens [Appl. Spec. 38, 471 (1984)]. It also needs to be remembered that even if a transition is allowed it may still be very weak if the transition moment is small.
Example \(1\): Nitric Oxide (NO) adsorption on a Pt Surface
The sequence of spectra shown below demonstrate how IR spectroscopy can clearly reveal changes in the adsorption geometry of chemisorbed molecules. In this particular instance, all the bands are due to the stretching mode of the N-O bond in NO molecules adsorbed on a Pt surface, but the vibrational frequency is sensitive to changes in the coordination and molecular orientation of the adsorbed NO molecules.
Fig. ν(N-O) spectra obtained from a Pt surface subjected to a fixed exposure of NO at various temperatures
Note - the surface coverage of adsorbed NO molecules decreases as the temperature is raised and little NO remains adsorbed at temperatures of 450 K and above.
Example \(2\): HCN adsorption on a Pt Surface
The RAIRS spectra shown below were observed during HCN adsorption on Pt at sub-ambient temperatures; the surface species which are generated give rise to much weaker absorptions than NO, and signal:noise considerations become much more important. These spectra also illustrate the effect of the surface normal selection rule for metallic surfaces.
(a) 0.15 L HCN at 100 K: The HCN molecules are weakly coordinated to the surface in a linear end-on fashion via the nitrogen; the ν(H-CN) mode is seen at 3302 cm⁻¹, but the ν(C-N) mode is too weak to be seen and the δ(HCN) mode expected at ca. 850 cm⁻¹ is forbidden by the surface selection rule. The overtone of the bending mode, 2δ(HCN), is however allowed and is evident at ca. 1580 cm⁻¹.
(b) 1.50 L HCN at 100 K: Higher exposures lead to the physisorption of HCN molecules into a second layer. These molecules are inclined to the surface normal and the HCN bending mode (∼ 820 cm⁻¹) of these second layer molecules is no longer symmetry forbidden. Hydrogen bonding between molecules in the first and second layers also leads to a noticeable broadening of the ν(H-CN) band to lower wavenumbers.
(c) 30 L HCN at 200 K: At the higher temperature of 200 K only a small amount of molecular HCN remains bound in an end-on fashion to the surface. The relatively strong band at 2084 cm⁻¹ suggests that some dissociation has also occurred to give adsorbed CN groups, which give rise to a markedly more intense ν(C-N) band than the HCN molecule itself.
Electron Energy Loss Spectroscopy (EELS)
This is a technique utilising the inelastic scattering of low energy electrons in order to measure vibrational spectra of surface species: superficially, it can be considered as the electron-analogue of Raman spectroscopy. To avoid confusion with other electron energy loss techniques it is sometimes referred to as HREELS - high resolution EELS or VELS - vibrational ELS. Since the technique employs low energy electrons, it is necessarily restricted to use in high vacuum (HV) and UHV environments - however, the use of such low energy electrons ensures that it is a surface specific technique and, arguably, it is the vibrational technique of choice for the study of most adsorbates on single crystal substrates.
The basic experimental geometry is fairly simple as illustrated schematically below - it involves using an electron monochromator to give a well-defined beam of electrons of a fixed incident energy, and then analysing the scattered electrons using an appropriate electron energy analyser.
A substantial number of electrons are elastically scattered (\( E = E_o\)) - this gives rise to a strong elastic peak in the spectrum.
On the low kinetic energy side of this main peak (\(E < E_o\)), additional weak peaks are superimposed on a mildly sloping background. These peaks correspond to electrons which have undergone discrete energy losses during the scattering from the surface.
The magnitude of the energy loss, \(ΔE = (E_o - E)\), is equal to the vibrational quantum (i.e. the energy) of the vibrational mode of the adsorbate excited in the inelastic scattering process. In practice, the incident energy ($E_o$) is usually in the range 5 - 10 eV (although occasionally up to 200 eV) and the data is normally plotted against the energy loss (frequently measured in meV).
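Since EELS loss energies are usually quoted in meV whilst IR data are quoted in wavenumbers, it is routinely necessary to convert between the two (1 meV ≈ 8.066 cm⁻¹). A minimal sketch:

```python
MEV_TO_CM1 = 8.0655   # 1 meV expressed in wavenumbers (cm^-1)

def mev_to_wavenumber(loss_mev):
    return loss_mev * MEV_TO_CM1

# e.g. a loss peak at 260 meV corresponds to ~2097 cm^-1, a typical
# terminal nu(C-O) stretch (cf. the discussion of adsorbed CO below)
print(f"{mev_to_wavenumber(260):.0f} cm^-1")
```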
Selection Rules
The selection rules that determine whether a vibrational band may be observed depend upon the nature of the substrate and also the experimental geometry: specifically the angles of the incident and (analysed) scattered beams with respect to the surface. For metallic substrates and a specular geometry, scattering is principally by a long-range dipole mechanism. In this case the loss features are relatively intense, but only those vibrations giving rise to a dipole change normal to the surface can be observed.
By contrast, in an off-specular geometry, electrons lose energy to surface species by a short-range impact scattering mechanism. In this case the loss features are relatively weak, but all vibrations are allowed and may be observed. If spectra can be recorded in both specular and off-specular modes, the selection rules for metallic substrates can be put to good use - helping the investigator to obtain a more definitive identification of the nature and geometry of the adsorbate species. The resolution of the technique (despite the HREELS acronym!) is generally rather poor; 40 - 80 cm⁻¹ is not untypical. A measure of the instrumental resolution is given by the FWHM (full-width at half-maximum) of the elastic peak.
This poor resolution can cause problems in distinguishing between closely similar surface species - however, recent improvements in instrumentation have opened up the possibility of much better spectral resolution (< 10 cm⁻¹) and will undoubtedly enhance the utility of the technique.
In summary, there are both advantages and disadvantages in utilising EELS, as opposed to IR techniques, for the study of surface species. It offers the advantages of ...
• high sensitivity
• variable selection rules
• spectral acquisition to below 400 cm⁻¹
but suffers from the limitations of ...
• use of low energy electrons (requiring a HV environment and hence the need for low temperatures to study weakly-bound species, and also the use of magnetic shielding to reduce the magnetic field in the region of the sample)
• requirement for flat, preferably conducting, substrates
• lower resolution
Applications of Vibrational Spectroscopy
One of the classic examples of an area in which vibrational spectroscopy has contributed significantly to the understanding of the surface chemistry of an adsorbate is that of:
Molecular Adsorption of CO on Metallic Surfaces
Adsorbed carbon monoxide usually gives rise to strong absorptions in both the IR and EELS spectra at the (C-O) stretching frequency. The metal-carbon stretching mode (ca. 400 cm⁻¹) is usually also accessible to EELS.
The interpretation of spectra of CO as an adsorbed surface species draws heavily upon IR spectra from related inorganic cluster and coordination complexes - the structure of such complexes usually being available from x-ray single crystal diffraction measurements. This comparison suggests that the CO stretching frequency can provide a good indication of the surface coordination of the molecule: as a rough guideline,
• CO (gas phase): ν(C-O) = 2143 cm⁻¹
• Terminal CO: ν(C-O) = 2100 - 1920 cm⁻¹
• Bridging CO (2-fold site): ν(C-O) = 1920 - 1800 cm⁻¹
• Bridging CO (3-fold / 4-fold site): ν(C-O) < 1800 cm⁻¹
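Applied mechanically, these guideline ranges amount to a simple lookup; the sketch below (a rough aid only, since real spectra can straddle the boundaries) assigns a likely coordination from a measured ν(C-O) value.

```python
def assign_co_site(wavenumber_cm1):
    """Rough CO coordination assignment from the nu(C-O) stretching
    frequency, using the guideline ranges quoted above."""
    if wavenumber_cm1 > 1920:
        return "terminal CO"
    elif wavenumber_cm1 > 1800:
        return "bridging CO (2-fold site)"
    else:
        return "bridging CO (3-fold / 4-fold site)"

for nu in (2090, 1870, 1750):
    print(f"{nu} cm^-1 -> {assign_co_site(nu)}")
```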
Example \(3\): CO chemisorbed on a clean Pt Surface
The RAIRS spectrum shown below was obtained for a saturation coverage of CO on a Pt surface at 300 K.
The data indicate that the majority of the CO molecules are bound in a terminal fashion to a single Pt surface atom (ν(CO) ∼ 2090 cm⁻¹), whilst a smaller number of molecules are co-ordinated in a bridging site between two Pt atoms (ν(CO) ∼ 1870 cm⁻¹).
The reduction in the stretching frequency of terminally-bound CO from the value observed for the gas phase molecule (2143 cm⁻¹) can be explained in terms of the Dewar-Chatt or Blyholder model for the bonding of CO to metals.
This simple model considers the metal-CO bonding to consist of two main components:
A: this is a σ-bonding interaction due to overlap of a filled σ "lone pair" orbital on the carbon atom with empty metal orbitals of the correct symmetry - this leads to electron density transfer from the CO molecule to the metal centre.
B: this is a π bonding interaction due to overlap of filled metal dπ (and pπ ?) orbitals with the π* antibonding molecular orbital of the CO molecule. Since this interaction leads to the introduction of electron density into the CO antibonding orbital there is a consequent reduction in the CO bond strength and its intrinsic vibrational frequency (relative to the isolated molecule).
For a more detailed discussions on the bonding of CO to metals, you are recommended to refer to one of the following:
• " Advanced Inorganic Chemistry " by F.A. Cotton & G. Wilkinson (5th Edn.) pp. 58 - 62 .
• " Solids & Surfaces: a chemist's view of bonding in extended structures " by R. Hoffman pp. 71-74.
5.05: Secondary Ion Mass Spectrometry
The technique of Secondary Ion Mass Spectrometry (SIMS) is the most sensitive of all the commonly-employed surface analytical techniques - capable of detecting impurity elements present in a surface layer at < 1 ppm concentration, and bulk concentrations of impurities of around 1 ppb (part-per-billion) in favorable cases. This is because of the inherent high sensitivity associated with mass spectrometric-based techniques.
There are a number of different variants of the technique:
• Static SIMS: used for sub-monolayer elemental analysis of the surface
• Dynamic SIMS: used for obtaining compositional information as a function of depth below the surface
• Imaging SIMS: used for spatially-resolved elemental analysis of the surface
All of these variations on the technique are based on the same basic physical process and it is this process which is discussed here, together with a brief introduction to the field of static SIMS. Further notes on dynamic and imaging SIMS can be obtained in Section 7.4 - SIMS Imaging and Depth Profiling.
In SIMS the surface of the sample is subjected to bombardment by high energy ions - this leads to the ejection (or sputtering) of both neutral and charged (+/-) species from the surface. The ejected species may include atoms, clusters of atoms and molecular fragments.
The mass analyzer may be a quadrupole mass analyzer (with unit mass resolution), but magnetic sector mass analyzers or time-of-flight (TOF) analyzers are also often used and these can provide substantially higher sensitivity and mass resolution, and a much greater mass range (albeit at a higher cost). In general, TOF analyzers are preferred for static SIMS, whilst quadrupole and magnetic sector analyzers are preferred for dynamic SIMS.
The most commonly employed incident ions (generically denoted by I+ in the above diagram) used for bombarding the sample are noble gas ions (e.g. Ar+ ) but other ions (e.g. Cs+, Ga+ or O2+ ) are preferred for some applications. With TOF-SIMS the primary ion beam is pulsed to enable the ions to be dispersed over time from the instant of impact, and very short pulse durations are required to obtain high mass resolution.
5.06: Temperature-Programmed Techniques
There are a range of techniques for studying surface reactions and molecular adsorption on surfaces which utilize temperature-programming to discriminate between processes with different activation parameters. Of these, the most useful for single crystal studies is: Temperature Programmed Desorption (TPD). When the technique is applied to a system in which the adsorption process is, at least in part, irreversible and T-programming leads to surface reactions, then this technique is often known as: Temperature Programmed Reaction Spectroscopy (TPRS)
However, there is no substantive difference between TPRS and TPD. The basic experiment is very simple, involving
1. Adsorption of one or more molecular species onto the sample surface at low temperature (frequently 300 K, but sometimes sub-ambient).
2. Heating of the sample in a controlled manner (preferably so as to give a linear temperature ramp) whilst monitoring the evolution of species from the surface back into the gas phase.
In modern implementations of the technique the detector of choice is a small, quadrupole mass spectrometer (QMS) and the whole process is carried out under computer control with quasi-simultaneous monitoring of a large number of possible products.
The data obtained from such an experiment consists of the intensity variation of each recorded mass fragment as a function of time / temperature. In the case of a simple reversible adsorption process it may only be necessary to record one signal - that attributable to the molecular ion of the adsorbate concerned.
The graph below shows data from a TPD experiment following adsorption of CO onto a Pd(111) crystal at 300 K.
Since mass spectrometric detection is used the sensitivity of the technique is good with attainable detection limits below 0.1% of a monolayer of adsorbate.
The following points are worth noting:
1. The area under a peak is proportional to the amount originally adsorbed, i.e. proportional to the surface coverage.
2. The kinetics of desorption (obtained from the peak profile and the coverage dependence of the desorption characteristics) give information on the state of aggregation of the adsorbed species, e.g. molecular vs. dissociative.
3. The position of the peak (the peak temperature) is related to the enthalpy of adsorption, i.e. to the strength of binding to the surface.
One implication of the last point, is that if there is more than one binding state for a molecule on a surface (and these have significantly different adsorption enthalpies) then this will give rise to multiple peaks in the TPD spectrum.
The graph below shows data from a TPD experiment following adsorption of oxygen on Pt(111) at 80 K.
Theory of TPD
As discussed in Section 2.6, the rate of desorption of a surface species will in general be given by an expression of the form:
$R_{des} = \nu N^x \exp \left( \dfrac{- E_a^{des} }{ R T} \right) \label{1}$
with
• $R_{des}$ - desorption rate ($= -dN/dt$)
• $\nu$ - pre-exponential (frequency) factor
• $N$ - surface concentration (coverage) of the adsorbed species
• $x$ - kinetic order of desorption (typically 0, 1 or 2)
• $E_a^{des}$ - activation energy for desorption
In a temperature programmed desorption experiment in which the temperature is increased linearly with time from some initial temperature $T_o$, then:
$T = T_o + βt \label{2a}$
and
$dT = β\,dt \label{2b}$
The intensity of the desorption signal, $I(T)$, is proportional to the rate at which the surface concentration of adsorbed species is decreasing. This is obtained by combining Equations \ref{1} and \ref{2b} to give

$I(T) \propto -\dfrac{dN}{dT} = \dfrac{\nu N^x}{\beta} e^{-E_a^{des}/RT} \label{3}$
This problem may also be considered in a rather simplistic graphical way - the key is to recognize that the expression for the desorption signal given in the above equation is basically a product of a coverage term ( $N^x$ - where $N$ depends on $T$ ) and an exponential term (involving both $E_a$ and $T$ ).
Initially, at low temperatures $E_a \gg RT$ and the exponential term is vanishingly small. However, as the temperature is increased this term begins to increase very rapidly when the value of $RT$ approaches that of the activation energy, $E_a$.
By contrast, the coverage term depends upon the coverage, $N(T)$, at the temperature concerned - this term will remain at its initial value until the desorption rate becomes significant as a result of the increasing exponential term. Thereafter, it will decrease ever more rapidly until the coverage is reduced to zero. The product of these two functions is an approximate representation of the desorption signal itself - whilst this picture may be overly simplistic, it does clearly show why the desorption process gives rise to a well-defined desorption peak.
If you wish to see exactly how the various factors such as the kinetic order and initial coverage influence the desorption profile then try out the Interactive Demonstration of Temperature Programmed Desorption (note - this is based on the formulae given in Section 2.6 ); we will however continue to look at one particular case in a little more detail.
CASE I (Molecular adsorption)
In this case the desorption kinetics will usually be first order (i.e. $x = 1$ ). The maximum desorption signal in the $I(T)$ trace will occur when ( $dI / dT$ ) = 0, i.e. when
$\dfrac{d}{dT}\left[ \dfrac{\nu N}{\beta} \; \exp \left( \dfrac{-E_a^{des}}{RT} \right) \right] = 0$
Hence, remembering that the surface coverage changes with temperature, i.e. $N = N(T)$,

$\dfrac{\nu}{\beta}\, \dfrac{dN}{dT}\, \exp \left( \dfrac{-E_a}{RT} \right) + \dfrac{\nu N}{\beta}\, \dfrac{E_a}{RT^2}\, \exp \left( \dfrac{-E_a}{RT} \right) = 0$

where we have substituted $E_a$ for $E_a^{des}$ purely for clarity of presentation. Substituting for $dN/dT$ from Equation \ref{3} (which for this first-order case gives $dN/dT = -(\nu N / \beta) \exp(-E_a/RT)$ ) then gives

$\dfrac{\nu N}{\beta}\, \exp \left( \dfrac{-E_a}{RT} \right) \left[ \dfrac{E_a}{RT^2} - \dfrac{\nu}{\beta} \exp \left( \dfrac{-E_a}{RT} \right) \right] = 0$

The solution is given by setting the expression in square brackets equal to zero, i.e.,
$\dfrac{E_a}{RT^2_p}=\dfrac{\nu}{\beta}\; \exp \left(\dfrac{-E_a}{RT_p} \right)$
where we have now defined the temperature at which the desorption maximum occurs to be T = Tp (the peak temperature ).
Unfortunately, this equation cannot be re-arranged to make $T_p$ the subject (i.e. to give a simple expression of the form Tp = .....), but we can note that:
1. as $E_a^{des}$ (the activation energy for desorption) increases, then so $T_p$ (the peak temperature) increases.
2. the peak temperature is not dependent upon, and consequently does not change with, the initial coverage, $N_{t=0}$.
3. the shape of the desorption peak will tend to be asymmetric, with the signal decreasing rapidly after the desorption maximum.
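These observations are easy to verify numerically. The Python sketch below integrates the first-order Polanyi-Wigner equation along a linear temperature ramp and locates the peak temperature for two different initial coverages, then cross-checks the result against the implicit peak condition derived above. All parameter values ($\nu$, $E_a$, $\beta$) are illustrative assumptions rather than data for any real adsorbate/substrate system.

```python
import numpy as np

# Illustrative parameters (assumed values, not for any specific system)
nu = 1e13      # pre-exponential factor / s^-1
Ea = 120e3     # desorption activation energy / J mol^-1
R = 8.314      # gas constant / J mol^-1 K^-1
beta = 2.0     # heating rate / K s^-1

def tpd_peak_temperature(N0, T0=300.0, Tmax=700.0, dt=1e-3):
    """Integrate -dN/dt = nu * N * exp(-Ea/RT) (first order, x = 1)
    along the ramp T = T0 + beta*t and return the temperature at which
    the desorption rate - i.e. the TPD signal - is a maximum."""
    T, N = T0, N0
    T_peak, rate_peak = T0, 0.0
    while T < Tmax and N > 1e-8:
        rate = nu * N * np.exp(-Ea / (R * T))
        if rate > rate_peak:
            T_peak, rate_peak = T, rate
        N -= rate * dt          # simple Euler step in time
        T += beta * dt          # linear temperature ramp
    return T_peak

# Point 2 above: for first-order kinetics the peak temperature is
# independent of the initial coverage N0 (in monolayers)
for N0 in (1.0, 0.25):
    print(f"N0 = {N0:.2f} ML : Tp = {tpd_peak_temperature(N0):.0f} K")

# Cross-check: fixed-point iteration of Ea/(R Tp^2) = (nu/beta) exp(-Ea/(R Tp))
Tp = 400.0
for _ in range(20):
    Tp = Ea / (R * np.log(nu * R * Tp**2 / (beta * Ea)))
print(f"Implicit peak condition : Tp = {Tp:.0f} K")
```

Both initial coverages give the same peak temperature, in agreement with the fixed-point solution of the peak condition - exactly the coverage-independence noted in point 2 above.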
Temperature Programmed Reaction Spectroscopy
In TPRS a number of desorption products will normally be detected in any one experiment - this is where mass spectrometric detection and multiple ion monitoring really becomes essential. A good example of TPRS results is provided by those obtained from dosing formic acid (HCOOH) onto a copper surface.
The spectra contain two regions of interest: the first at around room temperature, where hydrogen ($H_2$, $m/z = 2$) alone is evolved, and the second between 400 K and 500 K, where there are coincident desorption peaks attributable to hydrogen and carbon dioxide ($CO_2$, $m/z = 44$).
The lower temperature hydrogen peak is at about the same temperature at which hydrogen atoms recombine and desorb as molecular hydrogen from a hydrogen-dosed Cu(110) surface i.e. the kinetics of its evolution into the gas phase are those of the normal recombinative desorption process. By contrast, the higher temperature hydrogen peak is observed well above the normal hydrogen desorption temperature - its appearance must be governed by the kinetics of decomposition of another surface species. The carbon dioxide peak must also be decomposition limited since on clean Cu(110) carbon dioxide itself is only weakly physisorbed at very low temperatures.
Note
More generally, coincident desorption of two species at temperatures well above their normal desorption temperatures is characteristic of the decomposition of a more complex surface species.
• 6.1: Classification of Overlayer Structures
Adsorbed species on single crystal surfaces are frequently found to exhibit long-range ordering ; that is to say that the adsorbed species form a well-defined overlayer structure. Each particular structure may only exist over a limited coverage range of the adsorbate, and in some adsorbate/substrate systems a whole progression of adsorbate structures are formed as the surface coverage is gradually increased. This section deals with the classification of such ordered structures.
• 6.2: Low Energy Electron Diffraction (LEED)
LEED is the principal technique for the determination of surface structures. It may be used in one of two ways: Qualitatively : where the diffraction pattern is recorded and analysis of the spot positions yields information on the size, symmetry and rotational alignment of the adsorbate unit cell with respect to the substrate unit cell. Quantitatively : where the intensities of the various diffracted beams are recorded as a function of the incident electron beam energy to generate so-called I-V curves.
• 6.3: Reflection High Energy Electron Diffraction (RHEED)
To extract surface structural information from the diffraction of high energy electrons, the technique has to be adapted and the easiest way of doing this is to use a reflection geometry in which the electron beam is incident at a very grazing angle - it is then known as Reflection High Energy Electron Diffraction (RHEED).
• 6.4: Examples - Surface Structures
This section provides worked exercises in the classification of surface structures using both Wood's notation and matrix notation, and in the determination of surface coverages.
06: Overlayer Structures and Surface Diffraction
Adsorbed species on single crystal surfaces are frequently found to exhibit long-range ordering ; that is to say that the adsorbed species form a well-defined overlayer structure. Each particular structure may only exist over a limited coverage range of the adsorbate, and in some adsorbate/substrate systems a whole progression of adsorbate structures are formed as the surface coverage is gradually increased.
This section deals with the classification of such ordered structures - in most cases this involves describing the overlayer structure in terms of the underlying structure of the substrate.
There are two principal methods for specifying the structure:
1. Wood's notation
2. matrix notation.
Before we start to discuss overlayer structures, however, we need to make sure that we can adequately describe the structure of the substrate !
The Concept of the Surface Unit Cell
The primitive unit cell is the simplest periodically repeating unit which can be identified in an ordered array - the array in this instance being the ordered arrangement of surface atoms. By repeated translation of a unit cell, the whole array can be constructed. Let us consider the clean surface structures of the low index surface planes of fcc metals.
The fcc (100) surface
The fcc(100) surface has 4-fold rotational symmetry ("square symmetry") - perhaps it should not surprise us therefore to find that the primitive unit cell for this surface is square in shape !
fcc(100) surface
Two possible choices of unit cell are highlighted - it is clear that a unit cell of this size is indeed going to be the simplest possible repeating unit for this surface. The two alternatives drawn are in fact but two of an infinite number of possibilities; they have the same shape/symmetry, size and orientation, differing only in their translational position or "origin".
Whichever we choose then it is clear that we can indeed generate the whole surface structure by repeated translation of the unit cell; for example ....
In fact, I personally prefer the alternative choice of unit cell which has the corners of the unit cell coincident with the atomic centres.
We now need to think how to define the unit cell shape, size and symmetry - this is best done using two vectors which have a common origin and define two sides of the unit cell ...
For this fcc(100) surface the two vectors which define the unit cell, conventionally called a1 & a2, are:
• the same length i.e. |a1| = |a2|
• mutually perpendicular
By convention, one also selects the vectors such that one goes anticlockwise from a1 to get to a2.
The length of the vectors a1 & a2 is related to the bulk unit cell parameter, a, by |a1| = |a2| = a / √2
The fcc (110) surface
In the case of the fcc(110) surface, which has 2-fold rotational symmetry, the unit cell is rectangular
fcc(110) surface
By convention, |a2| > |a1| - if we also recall the convention that one goes anticlockwise to get from a1 to a2, then this leads to the choice of vectors shown.
The fcc (111) surface
With the fcc (111) surface we again have a situation where the length of the two vectors are the same i.e. | a1| = | a2| . We can either keep the angle between the vectors less than 90 degrees or let it be greater than 90 degrees. The normal convention is to choose the latter, i.e. the right hand cell of the two illustrated with an angle of 120 degrees between the two vectors.
fcc(111) surface
Overlayer Structures
If we have an ordered overlayer of adsorbed species (atoms or molecules), then we can use the same basic ideas as outlined in the previous section to define the structure. The adsorbate unit cell is usually defined by the two vectors b1 and b2 . To avoid ambiguities, it again helps if we stick to a set of conventions in choosing the unit cell vectors. In this case:
1. b2 is again selected to be anticlockwise from b1 .
2. if possible, b1 is chosen to be parallel to a1 and b2 parallel to a2 .
Once the unit cell vectors for substrate and adsorbate have been selected then it is a relatively simple matter to work out how to denote the structure.
Wood's Notation
Wood's notation is the simplest and most frequently used method for describing a surface structure - it only works, however, if the two unit cells are of the same symmetry or closely-related symmetries (more specifically, the angle between b1 & b2 must be the same as that between a1 & a2 ).
In essence, Wood's notation first involves specifying the lengths of the two overlayer vectors, b1 & b2, in terms of a1 & a2 respectively - this is then written in the format:
$( |b_1|/|a_1| \times |b_2|/|a_2| )$
i.e. a ( 2 x 2 ) structure has |b1| = 2|a1| and |b2| = 2|a2| .
The following diagram shows a ( 2 x 2 ) adsorbate overlayer on an fcc(100) surface in which the adsorbate is bonded terminally on-top of individual atoms of the substrate.
Substrate: fcc(100)
Substrate unit cell
Adsorbate unit cell
The unit cells of the (100) substrate and the ( 2 x 2 ) overlayer are both highlighted.
The next diagram shows another ( 2 x 2 ) structure, but in this case the adsorbate species is bonded in the four-fold hollows of the substrate surface. Of course, only a very limited section of the structure can be shown here - in practice, the unit cell shown would repeat to give a complete overlayer structure extending across the substrate surface.
Substrate: fcc(100)
Substrate unit cell
Adsorbate unit cell
The highlighted unit cells of the adsorbate and substrate are identical in size, shape and orientation to those of the previously illustrated ( 2 x 2 ) structure. Both this and the previous structure are examples of primitive ( 2 x 2 ), or p( 2 x 2 ), structures. That is to say that they are indeed the simplest unit cells that may be used to describe the overlayer structure, and contain only one "repeat unit". For the purposes of this tutorial I shall follow common practice and omit the preceding "p" - referring to such structures simply as ( 2 x 2 ) structures (in spoken language, "two by two" structures).
Such ( 2 x 2 ) structures are also found on other surfaces, but they may differ markedly in superficial appearance from the structure on the fcc(100) surface. The following diagram, for example, shows a ( 2 x 2 ) structure on a fcc(110) surface
Substrate: fcc(110)
Substrate unit cell
Adsorbate unit cell
The adsorbate unit cell is again twice as large as that of the substrate in both dimensions - it retains the same aspect ratio as the rectangular substrate unit cell (1: 1.414) and does not exhibit any rotation with respect to the substrate cell. The following diagram shows yet another ( 2 x 2 ) structure, in this case on the fcc(111) surface ...
Substrate: fcc(111)
Substrate unit cell
Adsorbate unit cell
Again, the adsorbate unit cell is of the same symmetry as the substrate cell but is scaled up by a factor of two in its linear dimensions (and corresponds to a surface area four times as large as that of the substrate unit cell).
The next example is a surface structure which is closely related to the ( 2 x 2 ) structure: it differs in that there is an additional atom in the middle/centre of the ( 2 x 2 ) adsorbate unit cell. Since the middle atom is "crystallographically equivalent" to those at the corners (i.e. it is not distinguishable by means of different coordination to the underlying substrate or any other structural feature), then this is no longer a primitive ( 2 x 2 ) structure.
Substrate: fcc(100)
c( 2 x 2 )
(√2 x √2)R45°
Instead it may be classified in one of two ways:
1. As a centered ( 2 x 2 ) structure i.e. c( 2 x 2 ) [ where we are using a non-primitive unit cell containing 2 repeat units ]
2. As a " (√2 x √2)R45° " structure, where we are specifying the true primitive unit cell.
In using the latter Wood's notation we are stating that the adsorbate unit cell is larger than the substrate unit cell by a factor of √2 in both directions, and is also rotated by 45 degrees with respect to the substrate unit cell.
If the "central" atom is not completely crystallographically equivalent, then the structure formally remains a p(2x2) unit cell but now has a basis of two adsorbate atoms per unit cell.
In some instances it is possible to use a centred unit cell description for a structure for which the primitive unit cell cannot be described using Wood's notation - for example, the c( 2 x 2 ) structure on the fcc(110) surface shown below.
Substrate: fcc(110)
c( 2 x 2 )
As a final example, the next diagram illustrates a commonly-observed structure on fcc(111) surfaces which can be readily described using Wood's notation.
Substrate: fcc(111) (√3 x √3)R30°
(You should confirm for yourself that the adsorbate unit cell is indeed scaled up from the substrate cell by the factor given and rotated by 30 degrees ! )
Matrix Notation
This is a much more general system of describing surface structures which can be applied to all ordered overlayers: quite simply it relates the vectors b1 & b2 to the substrate vectors a1 & a2 using a simple matrix i.e.
Matrix Notation: remember a1, a2, b1 and b2 are vectors. The relationship is written as

$\begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix}$

i.e. $b_1 = G_{11}a_1 + G_{12}a_2$ and $b_2 = G_{21}a_1 + G_{22}a_2$.
To illustrate the use of matrix notation we shall now consider two surface structures with which we are already familiar ...
Substrate: fcc (100)
(2 x 2) overlayer
For the (2 x 2) structure we have $b_1 = 2a_1$ and $b_2 = 2a_2$, i.e.

$G = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$
By contrast, for the c(2 x 2) structure:
Substrate: fcc (100) c(2 x 2) overlayer
we have $b_1 = a_1 + a_2$ and $b_2 = -a_1 + a_2$, i.e.

$G = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}$
Summary
Ordered surface structures may be described by defining the adsorbate unit cell in terms of that of the underlying substrate using:
1. Wood's Notation: in which the lengths of b1 and b2 are given as simple multiples of a1 and a2 respectively, and this is followed by the angle of rotation of b1 from a1 (if this is non-zero).
2. Matrix Notation: in which b1 and b2 are independently defined as linear combinations of a1 and a2 and these relationships are expressed in a matrix format.
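For overlayers on a square substrate mesh, the conversion from Wood's notation to matrix notation can be automated by scaling and rotating the substrate vectors. The Python sketch below is a minimal illustration (the function name, and the restriction to square cells with |a1| = |a2| and a1 perpendicular to a2, are our own assumptions):

```python
import numpy as np

def wood_to_matrix(n, m, theta_deg=0.0):
    """Matrix notation for an (n x m)R(theta) overlayer on a SQUARE
    substrate mesh (|a1| = |a2|, a1 perpendicular to a2).
    Returns G such that b1 = G11*a1 + G12*a2 and b2 = G21*a1 + G22*a2."""
    t = np.radians(theta_deg)
    G = np.array([[ n * np.cos(t), n * np.sin(t)],
                  [-m * np.sin(t), m * np.cos(t)]])
    return np.round(G, 10)

print(wood_to_matrix(2, 2))                        # p(2x2) -> [[2 0] [0 2]]
print(wood_to_matrix(np.sqrt(2), np.sqrt(2), 45))  # c(2x2) -> [[1 1] [-1 1]]
```

The second call reproduces the c(2 x 2) = (√2 x √2)R45° matrix quoted earlier.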
6.02: Low Energy Electron Diffraction (LEED)
LEED is the principal technique for the determination of surface structures. It may be used in one of two ways:
1. Qualitatively: where the diffraction pattern is recorded and analysis of the spot positions yields information on the size, symmetry and rotational alignment of the adsorbate unit cell with respect to the substrate unit cell.
2. Quantitatively: where the intensities of the various diffracted beams are recorded as a function of the incident electron beam energy to generate so-called I-V curves which, by comparison with theoretical curves, may provide accurate information on atomic positions.
In this section, we will only consider the qualitative application of this experimental technique.
Experimental Details
The LEED experiment uses a beam of electrons of a well-defined low energy (typically in the range 20 - 200 eV) incident normally on the sample. The sample itself must be a single crystal with a well-ordered surface structure in order to generate a back-scattered electron diffraction pattern. A typical experimental set-up is shown below.
Only the elastically-scattered electrons contribute to the diffraction pattern ; the lower energy (secondary) electrons are removed by energy-filtering grids placed in front of the fluorescent screen that is employed to display the pattern.
Basic Theory of LEED
By the principles of wave-particle duality, the beam of electrons may be equally regarded as a succession of electron waves incident normally on the sample. These waves will be scattered by regions of high localised electron density, i.e. the surface atoms, which can therefore be considered to act as point scatterers.
The wavelength of the electrons is given by the de Broglie relation:
$\lambda = \dfrac{h}{p}$
where $\lambda$ is the electron wavelength and $p$ is its momentum.
Now,
$p = mv = \sqrt{2mE_k} = \sqrt{2m e V}$
where
• m - mass of electron [ kg ]
• v - velocity [ m s^-1 ]
• Ek - kinetic energy
• e - electronic charge
• V - acceleration voltage ( = energy in eV )
⇒ Wavelength,
$\lambda = \dfrac{h}{\sqrt{2meV}}$
(Useful information: h = 6.63 x 10^-34 J s, e = 1.60 x 10^-19 C, me = 9.11 x 10^-31 kg).
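By way of example, these constants give the following wavelengths for a few representative beam energies (a minimal Python sketch; the chosen energies are merely illustrative):

```python
import math

h = 6.63e-34   # Planck constant / J s
e = 1.60e-19   # electronic charge / C
m = 9.11e-31   # electron mass / kg

for V in (20, 100, 200):                # beam energy in eV
    lam = h / math.sqrt(2 * m * e * V)  # de Broglie wavelength / m
    print(f"{V:3d} eV : lambda = {lam * 1e10:.2f} Angstrom")
# -> roughly 2.7 A at 20 eV, 1.2 A at 100 eV and 0.9 A at 200 eV
```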
From the above examples the range of wavelengths of electrons employed in LEED experiments is seen to be comparable with atomic spacings, which is the necessary condition for diffraction effects associated with atomic structure to be observed.
Consider, first, a one dimensional (1-D) chain of atoms (with atomic separation a ) with the electron beam incident at right angles to the chain. This is the simplest possible model for the scattering of electrons by the atoms in the topmost layer of a solid; in which case the diagram below would be representing the solid in cross-section with the electron beam incident normal to the surface from the vacuum above.
If you consider the backscattering of a wavefront from two adjacent atoms at a well-defined angle, $θ$, to the surface normal then it is clear that there is a "path difference" (d) in the distance the radiation has to travel from the scattering centres to a distant detector (which is effectively at infinity) - this path difference is best illustrated by considering two "ray paths" such as the right-hand pair of green traces in the above diagram.
The size of this path difference is a sin θ and this must be equal to an integral number of wavelengths for constructive interference to occur when the scattered beams eventually meet and interfere at the detector i.e.
$d = a sin θ = n λ$
where:
λ - wavelength
n - integer (..-1, 0, 1, 2,.. )
For two isolated scattering centres the diffracted intensity varies slowly between zero (complete destructive interference ; d = (n + ½) λ ) and its maximum value (complete constructive interference ; d = n λ ) - with a large periodic array of scatterers, however, the diffracted intensity is only significant when the "Bragg condition"
$a sin θ = n λ$
is satisfied exactly. The diagram below shows a typical intensity profile for this case.
There are a number of points worth noting from this simple 1-D model
1. the pattern is symmetric about θ = 0 (or sin θ = 0)
2. sin θ is proportional to 1 / V^{1/2} (since λ is proportional to 1 / V^{1/2} )
3. sin θ is inversely proportional to the lattice parameter, a
The aforementioned points are in fact much more general - all surface diffraction patterns show a symmetry reflecting that of the surface structure, are centrally symmetric, and of a scale showing an inverse relationship to both the square root of the electron energy and the size of the surface unit cell.
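The inverse dependence on electron energy (point 2 above) is easily illustrated: taking an assumed row spacing of a = 2.5 Å, the first-order (n = 1) diffraction angle contracts markedly as the beam energy is raised. A minimal sketch with illustrative numbers:

```python
import math

a = 2.5e-10                            # assumed atomic row spacing / m
h, e, m = 6.63e-34, 1.60e-19, 9.11e-31

for V in (50, 100, 200):               # beam energy / eV
    lam = h / math.sqrt(2 * m * e * V)
    theta = math.degrees(math.asin(lam / a))   # a*sin(theta) = 1*lambda
    print(f"{V:3d} eV : theta = {theta:4.1f} deg")
```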
As an example we can look at the LEED pattern from an fcc(110) surface. In the diagram below the surface atomic structure is shown on the left in plan view, as if you are viewing it from the position of the electron gun in the LEED experiment (albeit greatly magnified). The primary electron beam would then be incident normally on this surface as if fired from your current viewpoint and the diffracted beams would be scattered from the surface back towards yourself. The diffraction pattern on the right illustrates how these diffracted beams would impact upon the fluorescent screen.
The pattern shows the same rectangular symmetry as the substrate surface but is "stretched" in the opposite sense to the real space structure due to the reciprocal dependence upon the lattice parameter. The pattern is also centrosymmetric about the (00) beam - this is the central spot in the diffraction pattern corresponding to the beam that is diffracted back exactly normal to the surface (i.e. the n = 0 case in our 1-D model).
The above illustration of the diffraction pattern shows only the "first-order" beams i.e. it is representative of the diffraction pattern visible at low energies when only for n = 1 is the angle of diffraction, θ, sufficiently small for the diffracted beam to be incident on the display screen.
By contrast, the diagram below shows the diffraction pattern that might be expected if the energy of the incident electrons is doubled - some of the second order spots are now visible and the pattern as a whole has apparently contracted in towards the central (00) spot.
This is what the real diffraction patterns might look like …
In the case of such simple LEED patterns, it is possible to explain the diffraction pattern in terms of scattering from rows of atoms on the surface. For example, the rows of atoms running vertically on the screen would give rise to a set of diffracted beams in the horizontal plane, perpendicular to the rows, thus leading to the row of spots running in a line horizontally across the diffraction pattern through the (00) spot. The further the rows are apart, then the closer in are the diffracted beams to the central (00) beam. This is, however, a far from satisfactory method of explaining LEED patterns from surfaces.
A much better method of looking at LEED diffraction patterns involves using the concept of reciprocal space: more specifically, it can be readily shown that -
" The observed LEED pattern is a (scaled) representation of the reciprocal net of the pseudo-2D surface structure "
(No proof given !)
The reciprocal net is determined by (defined by) the reciprocal vectors:
a1* & a2* (for the substrate) and b1* & b2* (for the adsorbate)
Initially we will consider just the substrate. The reciprocal vectors are related to the real space unit cell vectors by the scalar product relations:
a1 . a2* = a1* . a2 = 0
and
a1 . a1* = a2 . a2* = 1
For those not too keen on vector algebra these mean that:
• a1 is perpendicular to a2*, and a2 is perpendicular to a1*
• there is an inverse relationship between the lengths of a1 and a1* (and a2 and a2* ) of the form:
|a1| = 1 / ( |a1*| cos A ) , where A is the angle between the vectors a1 and a1*.
Note: when A = 0 degrees (cos A = 1) this simplifies to a simple reciprocal relationship between the lengths a1 and a1*.
Exactly analogous relations hold for the real space and reciprocal vectors of the adsorbate overlayer structure: b1, b1*, b2 and b2* .
To a first approximation, the LEED pattern for a given surface structure may be obtained by superimposing the reciprocal net of the adsorbate overlayer (generated from b1* and b2* ) on the reciprocal net of the substrate (generated from a1* and a2* )
Example $1$:
Let us now look at an example - the diagram below shows an fcc(100) surface (again in plan view) and its corresponding diffraction pattern (i.e. the reciprocal net) .
We can demonstrate how these reciprocal vectors can be determined by working through the problem in a parallel fashion for the two vectors:
For a1*: it must be perpendicular to a2 and parallel to a1, so the angle A between a1 & a1* is zero and hence | a1*| = 1 / | a1 |. If we let | a1 | = 1 unit, then | a1*| = 1 unit.
For a2*: it must be perpendicular to a1 and parallel to a2, so the angle A between a2 & a2* is zero and hence | a2*| = 1 / | a2 |. Since | a2 | = | a1 | = 1 unit, | a2*| = 1 unit.
Let us now add in an adsorbate overlayer - a primitive ( 2 x 2 ) structure with the adsorbed species shown bonded in on-top sites - and apply the same logic as just used above to determine the reciprocal vectors, b1* and b2*, for this overlayer.
For b1*: it must be perpendicular to b2 and parallel to b1, so the angle B between b1 & b1* is zero and hence | b1*| = 1 / | b1 |. Since | b1 | = 2| a1 | = 2 units, | b1*| = ½ unit.
For b2*: it must be perpendicular to b1 and parallel to b2, so the angle B between b2 & b2* is zero and hence | b2*| = 1 / | b2 |. Since | b2 | = 2| a2 | = 2 units, | b2*| = ½ unit.
All we have to do now is to generate the reciprocal net for the adsorbate using b1* and b2* (shown in red).
That's all there is to it !
Example $2$:
For the second example, we will look at the c( 2 x 2 ) structure on the same fcc(100) surface. The diagram below shows both a real space c( 2 x 2 ) structure and the corresponding diffraction pattern:
In many respects the analysis is very similar to that for the p( 2 x 2 ) structure, except that:
1. | b1 | = | b2 | = √2 units ; consequently | b1*| = | b2*| = 1/√2 units.
2. the vectors for the adsorbate overlayer are rotated with respect to those of the substrate by 45°.
Note that the c( 2 x 2 ) diffraction pattern can also be obtained from the pattern for the primitive structure by "missing out every alternate adsorbate-derived diffraction spot". This is a common feature of diffraction patterns arising from centered structures.
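The reciprocal-vector algebra used in both examples can be checked in a few lines of numpy: writing the real-space vectors as the rows of a matrix, the rows of its inverse-transpose are precisely the vectors satisfying the scalar-product relations given earlier. A minimal sketch (the function name is our own), working in substrate units with |a1| = |a2| = 1:

```python
import numpy as np

def reciprocal_net(v1, v2):
    """Return the 2D reciprocal vectors satisfying v_i . v_j* = delta_ij."""
    A = np.array([v1, v2], dtype=float)  # rows are the real-space vectors
    return np.linalg.inv(A).T            # rows are v1*, v2*

print(reciprocal_net([1, 0], [0, 1]))    # substrate : a1*, a2* of length 1
print(reciprocal_net([2, 0], [0, 2]))    # p(2x2)    : |b1*| = |b2*| = 1/2
print(reciprocal_net([1, 1], [-1, 1]))   # c(2x2)    : length 1/sqrt(2), rotated 45 deg
```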
6.03: Reflection High Energy Electron Diffraction (RHEED)
Low Energy Electron Diffraction (LEED) utilizes the inherent surface sensitivity associated with low energy electrons in order to sample the surface structure. As the primary electron energy is increased not only does the surface specificity decrease but two other effects are particularly noticeable
1. forward scattering becomes much more important (as opposed to the backward scattering observed in LEED)
2. the scattering angle (measured from the incident beam direction) tends towards 180 degrees for back-scattering and 0 degrees for forward scattering.
In order to extract surface structural information from the diffraction of high energy electrons, therefore, the technique has to be adapted and the easiest way of doing this is to use a reflection geometry in which the electron beam is incident at a very grazing angle - it is then known as Reflection High Energy Electron Diffraction (RHEED).
The diagram above shows the basic set-up for a RHEED experiment, with the sample viewed edge-on. In practice the display screen is usually a phosphor coating on the inside of a vacuum window (viewport) and the diffraction pattern can be viewed and recorded from the atmospheric side of the window. The small scattering angles involved are compensated for by using relatively large sample/screen distances.
The sample can be rotated about its normal axis so that the electron beam is incident along specific crystallographic directions on the surface.
In order to understand the diffraction process we need to consider how the electron beam can interact with the regular array of surface atoms in this experimental geometry. It is worth noting, however, that the use of glancing incidence ensures that, despite the high energy of the electrons, the component of the electron momentum perpendicular to the surface is small. Under these conditions an electron may travel a substantial distance through the solid (in accord with the much longer mean free path of such high energy electrons) without penetrating far into the solid. The technique, consequently, remains surface sensitive.
Now consider the plan view of a surface illustrated below in which we concentrate attention on just one row of atoms (shown shaded in pale blue) running in a direction perpendicular to the incident electron beam (incident from the left)
In addition to the change in momentum of the electron perpendicular to the surface, which leads to the apparent reflection, the diffraction process may also lead to a change in momentum parallel to the surface, which leads to the deflection by an angle θ when looked at in plan view. Constructive interference occurs when the path difference between adjacent scattered "rays" ( a sin θ ) is an integral number of wavelengths (i.e. the same basic condition as for LEED). This gives rise to a set of diffracted beams at various angles on either side of the straight through (specularly reflected) beam.
What, if any, advantages does RHEED offer over LEED?
In terms of the quality of the diffraction pattern absolutely none ! - moreover, diffraction patterns have to be observed for at least two sample alignments with respect to the incident beam in order to determine the surface unit cell. However, ....
1. The geometry of the experiment allows much better access to the sample during observation of the diffraction pattern. This is particularly important if it is desired to make observations of the surface structure during growth of a surface film by evaporation from sources located normal to the sample surface or simultaneous with other measurements (e.g. AES, XPS).
2. Experiments have shown that it is possible to monitor the atomic layer-by-atomic layer growth of epitaxial films by monitoring oscillations in the intensity of the diffracted beams in the RHEED pattern.
By using RHEED it is therefore possible to measure, and hence also to control, atomic layer growth rates in Molecular Beam Epitaxy (MBE) growth of electronic device structures - this is by far and away the most important application of the technique.
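As a simple illustration of how such measurements are used in practice: one complete intensity oscillation corresponds to the completion of one atomic layer, so the deposition rate follows directly from the oscillation period. The numbers in the sketch below are purely illustrative assumptions:

```python
# One RHEED intensity oscillation = one completed atomic layer
period_s = 4.0              # assumed measured oscillation period / s
layer_height_nm = 0.28      # assumed monolayer height for the film

rate_ml = 1.0 / period_s    # growth rate in monolayers per second
print(f"growth rate = {rate_ml:.2f} ML/s = {rate_ml * layer_height_nm:.3f} nm/s")
```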
6.04: Examples - Surface Structures
This section provides worked exercises in the classification of surface structures using both Wood's notation and matrix notation, and in the determination of surface coverages.
Note - all surface coverages discussed in this section are defined in the conventional surface science manner, by reference to the number density of the underlying layer of surface atoms of the substrate.
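One convenient piece of bookkeeping for the structures worked through below: if the overlayer is described in matrix notation, the determinant of the matrix gives the ratio of the overlayer to substrate unit-cell areas, so the coverage in monolayers is simply the number of adsorbed species per overlayer unit cell divided by that determinant. A minimal Python sketch (the function name is our own):

```python
import numpy as np

def coverage(G, n_ads=1):
    """Coverage (in ML, relative to the substrate surface atom density)
    for an overlayer with matrix G and n_ads adsorbed species per
    overlayer unit cell; |det G| is the ratio of the unit-cell areas."""
    return n_ads / abs(np.linalg.det(np.asarray(G, dtype=float)))

print(coverage([[2, 0], [0, 2]]))   # p(2x2), 1 species per cell -> 0.25 ML
print(coverage([[1, 1], [-1, 1]]))  # c(2x2) primitive cell      -> 0.50 ML
```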
Structure #1
Structure #2
Note - in this instance the structure illustrated is but one of two equivalent domains - related by the symmetry of the substrate. The second domain is shown below.
It differs from the first only in that the closely-packed rows of the adsorbate now run at 90 degrees to their direction in the first domain. Patches of this domain structure would exist on the surface with statistically-equal probability to the other domain. It obviously corresponds to the same adsorbate coverage and must possess identical properties (electronic, thermodynamic & reactivity).
Structure #3
Structure #4
Structure #5
This shows an alloy surface layer on an fcc(111) substrate - such a layer might be formed by two metals which alloy if one is evaporated onto a (111) single crystal surface of the other.
What are the relative concentrations of the two elements in the surface layer ( specified as the ratio of A:B atoms ) ?
Structure #6
The diagram below shows a "coadsorption" structure in which adsorption of two different species has led to the formation of an ordered surface layer containing both species - such structures ( albeit more complicated than the one shown ) are known for a number of pairs of adsorbates on a variety of surfaces e.g. benzene / CO - Rh(111)
The driving force for the formation of such structures may be an attractive dipole-dipole force between the two different adsorbed species (this might occur if, for example, one acts as an electron donor whereas the other acts as an electron acceptor on the substrate concerned).
Structure #7
The diagram below shows a molecular adsorption structure in which a diatomic molecule is bonded terminally to substrate atoms but is inclined to the surface normal. This particular structure has not been observed but similar structures involving a periodic tilting of the molecular axis have been discovered, e.g. CO / Pd(110) .
Summary
The structures of ordered adsorbate overlayers may be defined by specifying the adsorbate unit cell in terms of the ideal substrate unit cell - in many cases, such as the examples given, this can be done using the simple Wood's notation and this is the common practice. In more difficult cases it may be necessary to use matrix notation.
In all the examples studied :
1. the overlayers were simple structures in which the adsorbate unit cell dimensions corresponded to one of the atom-atom spacings in the underlying substrate structure ; this is generally the case for adsorbed overlayers of gaseous molecules. More complex "coincidence" and "incommensurate" structures are commonplace when the overlayer consists of one or more atomic layers of a distinct chemical compound e.g. oxide overlayers on metals.
2. it was assumed that the underlying substrate structure was undisturbed by the adsorption of species on the surface. Whilst such distortions may generally be small, this is far from always being the case and a number of examples involving adsorbate-induced reconstruction of the topmost layer of substrate atoms have now been well documented ( e.g. Ni(100) c(2x2)-C ).
• 7.1: Basic concepts in surface imaging and localized spectroscopy
Most surface spectroscopic techniques involve probing the surface by exposing it to a flux of "particles" (hν , e- , A+ ....) and simultaneously monitoring the response to this stimulation by, for example, measuring the energy distribution of emitted electrons. In their most basic form, these techniques collect information from a relatively large area of surface (∼ mm2 ). In most cases, however, there are variations of these techniques which permit either.
• 7.2: Electron Microscopy - SEM and SAM
The two forms of electron microscopy which are commonly used to provide surface information are: Secondary Electron Microscopy ( SEM ) - which provides a direct image of the topographical nature of the surface from all the emitted secondary electrons and Scanning Auger Microscopy ( SAM ) - which provides compositional maps of a surface by forming an image from the Auger electrons emitted by a particular element.
• 7.3: Imaging XPS
The combination of the features of X-ray photoelectron spectroscopy (in particular, quantitative surface elemental analysis and chemical state information - see 5.3) together with spatial localization is a particularly desirable option in surface analysis. However, whilst much progress has been made in developing the technique of imaging XPS, there is still a considerable research effort being devoted to improving the available spatial resolution beyond that which is currently available.
• 7.4: SIMS - Imaging and Depth Profiling
Since the SIMS technique utilizes a beam of atomic ions (i.e. charged particles) as the probe, it is a relatively easy matter to focus the incident beam and then to scan it across the surface to give an imaging technique.
• 7.5: Auger Depth Profiling
Auger Spectroscopy is a surface sensitive spectroscopic technique yielding compositional information. In its basic form it provides compositional information on a relatively large area of surface using a broad-focused electron beam probe. In this manner, sufficient signal can be readily obtained whilst keeping the incident electron flux low, and thus avoiding potential electron-induced modifications of the surface. As a consequence the technique is non-destructive when used in this manner.
• 7.6: Scanning Probe Microscopy - STM and AFM
The Scanning Tunneling Microscopy (STM) prompted the development of a whole family of related techniques which, together with STM, may be classified in the general category of Scanning Probe Microscopy techniques. Of these later techniques, the most important is Atomic Force Microscopy. The development of these techniques has been the most important event in the surface science field in recent times, and opened up many new areas of science and engineering at the atomic and molecular level.
07: Surface Imaging and Depth Profiling
Most surface spectroscopic techniques involve probing the surface by exposing it to a flux of "particles" (hν, e-, A+ ....) and simultaneously monitoring the response to this stimulation by, for example, measuring the energy distribution of emitted electrons. In their most basic form, these techniques collect information from a relatively large area of surface (∼ mm2 ). In most cases, however, there are variations of these techniques which permit either,
1. the acquisition of spectra from a small, well-defined region of the surface (localized spectroscopy), or
2. the collection of spectral maps or images of the surface (spectroscopic imaging).
The requirement in both cases is for spatial localisation of the spectroscopic technique. This may be achieved in one of two ways,
1. by localising the incident probe (for example, by using a focused beam), or
2. by localising the detection, using imaging detectors and associated optics, or by collecting signal from only a limited area of the surface.
7.02: Electron Microscopy - SEM and SAM
The two forms of electron microscopy which are commonly used to provide surface information are
Secondary Electron Microscopy ( SEM )
- which provides a direct image of the topographical nature of the surface from all the emitted secondary electrons
Scanning Auger Microscopy ( SAM )
- which provides compositional maps of a surface by forming an image from the Auger electrons emitted by a particular element.
Both techniques employ focusing of the probe beam (a beam of high energy electrons, typically 10 - 50 keV in energy) to obtain spatial localisation.
A. Secondary Electron Microscopy ( SEM )
As the primary electron beam is scanned across the surface, electrons of a wide range of energies will be emitted from the surface in the region where the beam is incident. These electrons will include backscattered primary electrons and Auger electrons, but the vast majority will be secondary electrons formed in multiple inelastic scattering processes (these are the electrons that contribute to the background and are completely ignored in Auger spectroscopy). The secondary electron current reaching the detector is recorded and the microscope image consists of a "plot" of this current, I, against probe position on the surface. The contrast in the micrograph arises from several mechanisms, but first and foremost from variations in the surface topography. Consequently, the secondary electron micrograph is virtually a direct image of the real surface structure.
The attainable resolution of the technique is limited by the minimum spot size that can be obtained with the incident electron beam, and ultimately by the scattering of this beam as it interacts with the substrate. With modern instruments, a resolution of better than 5 nm is achievable. This is more than adequate for imaging semiconductor device structures, for example, but insufficient to enable many supported metal catalysts to be studied in any detail.
B. Scanning Auger Microscopy ( SAM )
The incident primary electrons cause ionization of atoms within the region illuminated by the focused beam. Subsequent relaxation of the ionized atoms leads to the emission of Auger electrons characteristic of the elements present in this part of the sample surface (see the description of Auger spectroscopy in Section 5.2 for more details).
As with SEM, the attainable resolution is again ultimately limited by the incident beam characteristics. More significantly, however, the resolution is also limited by the need to acquire sufficient Auger signal to form a respectable image within a reasonable time period, and for this reason the instrumental resolution achievable is rarely better than about 15-20 nm.
7.03: Imaging XPS
The combination of the features of X-ray photoelectron spectroscopy (in particular, quantitative surface elemental analysis and chemical state information - see 5.3) together with spatial localization is a particularly desirable option in surface analysis. However, whilst much progress has been made in developing the technique of imaging XPS, there is still a considerable research effort being devoted to improving the available spatial resolution beyond that which is currently available.
Different manufacturers of imaging XPS systems have adopted different strategies for obtaining spatial localization - including all of those mentioned in Section 7.1 . Specifically, these include
1. Localization by selected (limited) area analysis.
2. Localization of the probe, by focusing the incident x-rays.
3. Use of array detectors, with associated imaging optics.
1. Limited Area Analysis
The simplest approach to localising an XPS analysis is to restrict the area of the sample surface from which photoelectrons are collected using a combination of lenses and apertures in the design of the electron energy analyser. The main problem with using this approach on its own is that as the sampled area is reduced, so is the collected signal - consequently, there is a direct trade-off between spatial resolution and data collection time.
The practically achievable spatial resolution is rarely better than 100 µm. Imaging of the sample surface may then be achieved by either:
1. translating the sample position under the electron energy analyser, so that the analysed region is moved across the surface.
2. incorporating electrostatic deflection plates within the electron optics to move the region from which electrons are collected across the sample surface.
The advantage of this technique is its relative simplicity (and hence relatively low cost) - but this is reflected in the relatively poor performance!
2. Imaging XPS by X-ray Focussing
Until very recently this was not a popular option in commercial instruments, since x-rays are rather difficult to focus (compared to, for example, charged particles). Nevertheless, with recent improvements in technology, an x-ray spot size of better than 10 µm has been achieved using this approach in commercially-available instruments, and significantly better resolution than this has been achieved in specialised research instruments (see "State of the Art" below). Images may then be obtained either by scanning the sample under the focused x-ray beam, or by scanning the microfocused x-ray beam.
Examples of spectrometers using this technology:
• PHI Quantera XPS Microprobe
3. Array Detectors and Imaging Optics
There are a number of innovative designs for imaging spectrometers which combine array (i.e. multiple-segment) detectors with sophisticated imaging optics to obtain electron-energy resolved images at much faster acquisition rates. The spatial resolution achievable using this approach is also higher than that for either of the other techniques mentioned - state-of-the-art instruments give better than 5 µm resolution
Examples of spectrometers using this technology:
• Omicron Nanotechnology "NanoESCA"
State of the Art Imaging XPS Instrumentation
This is one area in which an ultra-high intensity x-ray source offers major advantages and the highest performance imaging XPS systems (scanning photoemission microscopes) are therefore those based at synchrotron sources. Using zone-plate technology to focus the x-rays (combined with multi-segment detectors to enhance data acquisition rates) it is possible on such systems to obtain XPS images with a resolution of better than 100 nm (see, for example, the ELETTRA ESCA microscopy web-pages).
Summary
The spatial resolution currently achievable with commercial imaging XPS instruments limit the range of potential applications - nevertheless there are many areas of materials science where the information obtainable is incredibly useful and the relatively poor spatial resolution (compared, for example, with electron microscopic techniques such as SAM) is more than offset by the benefit of concurrent chemical state definition.
7.04: SIMS - Imaging and Depth Profiling
The basic ideas behind the SIMS technique have already been discussed in the Section on Secondary Ion Mass Spectrometry. Since the technique utilizes a beam of atomic ions (i.e. charged particles) as the probe, it is a relatively easy matter to focus the incident beam and then to scan it across the surface to give an imaging technique.
Surface Imaging using SIMS
If the aim of the measurement is to obtain compositional images of the surface formed from the secondary ion spectrum with minimum possible damage to the surface, then the main problem is to ensure that sufficient signal is obtained at the desired spatial resolution whilst minimizing the ion flux incident on any part of the surface.
This is most easily achieved by switching from the traditional instrumental approach of using continuous-flux ion guns and quadrupole mass spectrometer detectors, to using pulsed ion sources and time-of-flight (TOF) mass spectrometers. The TOF mass spectrometers are a much more efficient way of acquiring spectral data, and also provide good resolution and sensitivity up to very high masses. Using such instruments, SIMS images with a spatial resolution of better than 50 nm are obtainable.
SIMS Depth Profiling
The aim of depth profiling is to obtain information on the variation of composition with depth below the initial surface - such information is obviously particularly useful for the analysis of layered structures such as those produced in the semiconductor industry.
Since the SIMS technique itself relies upon the removal of atoms from the surface, it is by its very nature a destructive technique - but this also makes it ideally suited for depth profiling applications. Thus a depth profile of a sample may be obtained simply by recording sequential SIMS spectra as the surface is gradually eroded away by the incident ion beam probe. A plot of the intensity of a given mass signal as a function of time is then a direct reflection of the variation of its abundance/concentration with depth below the surface.
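Converting the measured time axis into an (approximate) depth axis is then a matter of calibration; the simplest approach assumes a constant sputter rate determined independently, for example from the final crater depth. A minimal sketch with purely illustrative numbers:

```python
# Time-to-depth conversion assuming a constant, separately calibrated
# sputter rate (illustrative values only)
sputter_rate_nm_per_s = 0.05
for t in (0, 60, 120, 300):     # times at which spectra were taken / s
    print(f"t = {t:3d} s -> depth ~ {t * sputter_rate_nm_per_s:5.1f} nm")
```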
One of the main advantages that SIMS offers over other depth profiling techniques (e.g. Auger depth profiling) is its sensitivity to very low (sub-ppm, or ppb) concentrations of elements - again this is particularly important in the semiconductor industry where dopants are often present at very low concentrations.
The depth resolution achievable (e.g. the ability to discriminate between atoms in adjacent thin layers) is dependent upon a number of factors which include:
1. the uniformity of etching by the incident ion beam
2. the absolute depth below the original surface to which etching has already been carried out
3. the nature of the ion beam utilized (i.e. the mass & energy of the ions )
as well as effects related to the physics of the sputtering process itself (e.g. ion-impact induced burial).
With TOF-SIMS instruments the best depth resolution is obtained using two separate beams; one beam is used to progressively etch a crater in the surface of the sample under study, whilst short-pulses of a second beam are used to analyze the floor of the crater. This has the advantage that one can be confident that the analysis is exclusively from the floor of the etch crater and not affected by sputtering from the crater-walls.
7.05: Auger Depth Profiling
As described in the Section on Auger Spectroscopy, Auger Electron Spectroscopy (AES) is a surface sensitive spectroscopic technique yielding compositional information. In its basic form it provides compositional information on a relatively large area ( ∼ 1 mm2 ) of surface using a broad-focused electron beam probe. In this manner, sufficient signal can be readily obtained whilst keeping the incident electron flux low, and thus avoiding potential electron-induced modifications of the surface. As a consequence the technique is non-destructive when used in this manner.
To obtain information about the variation of composition with depth below the surface of a sample, it is necessary to gradually remove material from the surface region being analyzed, whilst continuing to monitor and record the Auger spectra. This controlled surface etching of the analyzed region can be accomplished by simultaneously exposing the surface to an ion flux which leads to sputtering (i.e. removal) of the surface atoms.
For example, suppose there is a buried layer of a different composition several nanometres below the sample surface. As the ion beam etches away material from the surface, the Auger signals corresponding to the elements present in this layer will rise and then decrease again.
The diagram above shows the variation of the Auger signal intensity one might expect from such a system for an element that is only present in the buried layer and not in the rest of the solid. In summary, by collecting Auger spectra as the sample is simultaneously subjected to etching by ion bombardment, it is possible to obtain information on the variation of composition with depth below the surface - this technique is known by the name of Auger Depth Profiling.
7.06: Scanning Probe Microscopy - STM and AFM
In the early 1980's two IBM scientists, Binnig & Rohrer, developed a new technique for studying surface structure - Scanning Tunneling Microscopy (STM). This invention was quickly followed by the development of a whole family of related techniques which, together with STM, may be classified in the general category of Scanning Probe Microscopy (SPM) techniques. Of these later techniques, the most important is Atomic Force Microscopy (AFM). The development of these techniques has without doubt been the most important event in the surface science field in recent times, and opened up many new areas of science and engineering at the atomic and molecular level.
Basic Principles of SPM Techniques
All of the techniques are based upon scanning a probe (typically called the tip in STM, since it literally is a sharp metallic tip) just above a surface whilst monitoring some interaction between the probe and the surface.
The interaction that is monitored in:
• STM - is the tunnelling current between a metallic tip and a conducting substrate which are in very close proximity but not actually in physical contact.
• AFM - is the van der Waals force between the tip and the surface; this may be either the short range repulsive force (in contact-mode) or the longer range attractive force (in non-contact mode).
For the techniques to provide information on the surface structure at the atomic level (which is what they are capable of doing):
1. the position of the tip with respect to the surface must be very accurately controlled (to within about 0.1 Å) by moving either the surface or the tip.
2. the tip must be very sharp - ideally terminating in just a single atom at its closest point of approach to the surface.
The attention paid to the first problem and the engineering solution to it is the difference between a good microscope and a not so good microscope - it need not worry us here, sufficient to say that it is possible to accurately control the relative positions of tip and surface by ensuring good vibrational isolation of the microscope and using sensitive piezoelectric positioning devices.
Tip preparation is a science in itself - having said that, it is largely serendipity which ensures that one atom on the tip is closer to the surface than all others.
Let us look at the region where the tip approaches the surface in greater detail ....
... the end of the tip will almost invariably show a certain amount of structure, with a variety of crystal facets exposed ...
… and if we now go down to the atomic scale ....
... there is a reasonable probability of ending up with a truly atomic tip.
If the tip is biased with respect to the surface by the application of a voltage between them then electrons can tunnel between the two, provided the separation of the tip and surface is sufficiently small - this gives rise to a tunnelling current.
The direction of current flow is determined by the polarity of the bias.
If the sample is biased -ve with respect to the tip, then electrons will flow from the surface to the tip as shown above, whilst if the sample is biased +ve with respect to the tip, then electrons will flow from the tip to the surface as shown below.
The name of the technique arises from the quantum mechanical tunnelling-type mechanism by which the electrons can move between the tip and substrate. Quantum mechanical tunnelling permits particles to tunnel through a potential barrier which they could not surmount according to the classical laws of physics - in this case electrons are able to traverse the classically-forbidden region between the two solids as illustrated schematically on the energy diagram below.
This is an over-simplistic model of the tunnelling that occurs in STM but it is a useful starting point for understanding how the technique works. In this model, the probability of tunnelling is exponentially-dependent upon the distance of separation between the tip and surface: the tunnelling current is therefore a very sensitive probe of this separation.
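The strength of this distance dependence can be estimated from the square-barrier model itself, for which $I \propto \exp(-2\kappa d)$ with $\kappa = \sqrt{2m\phi}/\hbar$. The sketch below (assuming a barrier height of 4 eV, comparable to a typical metal work function) shows that the current falls by roughly an order of magnitude for every ångström added to the gap:

```python
import math

hbar = 1.055e-34        # reduced Planck constant / J s
m_e = 9.11e-31          # electron mass / kg
phi = 4.0 * 1.60e-19    # assumed barrier height (~work function) / J

kappa = math.sqrt(2 * m_e * phi) / hbar       # inverse decay length / m^-1
print(f"kappa = {kappa * 1e-10:.2f} per Angstrom")

for d_A in (4.0, 5.0, 6.0):                   # tip-surface gap / Angstrom
    rel_I = math.exp(-2 * kappa * d_A * 1e-10)
    print(f"d = {d_A:.0f} A : relative I = {rel_I:.1e}")
```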
Imaging of the surface topology may then be carried out in one of two ways:
1. in constant height mode (in which the tunnelling current is monitored as the tip is scanned parallel to the surface)
2. in constant current mode (in which the tunnelling current is maintained constant as the tip is scanned across the surface)
If the tip is scanned at what is nominally a constant height above the surface, then there is actually a periodic variation in the separation distance between the tip and surface atoms. At one point the tip will be directly above a surface atom and the tunnelling current will be large whilst at other points the tip will be above hollow sites on the surface and the tunnelling current will be much smaller.
A plot of the tunnelling current v's tip position therefore shows a periodic variation which matches that of the surface structure - hence it provides a direct "image" of the surface (and by the time the data has been processed it may even look like a real picture of the surface !).
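A toy model makes the point: if the tip-surface gap varies sinusoidally with lateral position as the tip passes over atoms and hollows, the exponential distance dependence maps that corrugation directly onto the current. All values below are illustrative assumptions:

```python
import math

kappa = 1.0               # assumed inverse decay length / Angstrom^-1
d0, A, a = 5.0, 0.2, 2.5  # mean gap, corrugation amplitude, lattice spacing / Angstrom

for i in range(9):
    x = i * a / 8                                # lateral position across one cell
    d = d0 - A * math.cos(2 * math.pi * x / a)   # smallest gap directly above an atom (x = 0)
    I = math.exp(-2 * kappa * d)                 # relative tunnelling current
    print(f"x = {x:4.2f} A : I = {I:.2e}")
```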
In practice, however, the normal way of imaging the surface is to maintain the tunnelling current constant whilst the tip is scanned across the surface. This is achieved by adjusting the tip's height above the surface so that the tunnelling current does not vary with the lateral tip position. In this mode the tip will move slightly upwards as it passes over a surface atom, and conversely, slightly in towards the surface as it passes over a hollow.
The image is then formed by plotting the tip height (strictly, the voltage applied to the z-piezo) v's the lateral tip position.
Summary
In summary, the development of the various scanning probe microscopy techniques has revolutionized the study of surface structure - atomic resolution images have been obtained not only on single crystal substrates in UHV but also on samples at atmospheric pressure and even under solution. Many problems still remain, however, and the interpretation of SPM data is not always as straightforward as it might at first seem. There is still very much a place for the more traditional surface structural techniques such as LEED.
This introduction to STM has concentrated on the non-invasive imaging applications of the technique, yet there is increasing interest in using such techniques as a tool for the actual modification of surfaces. At the moment this is still at the "gimmicky" stage, but the longer term implications of being able to manipulate surface structure and molecules at the atomic level have yet to be fully appreciated: we can but await the future with interest!
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Surface_Science_(Nix)/07%3A_Surface_Imaging_and_Depth_Profiling/7.06%3A_Scanning_Probe_Microscopy_-_STM_and_AFM.txt
|
• 1.1: Introduction to Symmetry
You will already be familiar with the concept of symmetry in an everyday sense. If we say something is ‘symmetrical’, we usually mean it has mirror symmetry, or ‘left-right’ symmetry, and would look the same if viewed in a mirror. Symmetry is also very important in chemistry. Some molecules are clearly ‘more symmetrical’ than others, but what consequences does this have, if any?
• 1.2: Symmetry Operations and Symmetry Elements
A symmetry operation is an action that leaves an object looking the same after it has been carried out. Each symmetry operation has a corresponding symmetry element, which is the axis, plane, line or point with respect to which the symmetry operation is carried out. The symmetry element consists of all the points that stay in the same place when the symmetry operation is performed. For example, in a rotation, the line of points that stay in the same place constitutes a symmetry axis.
• 1.3: Symmetry Classification of Molecules- Point Groups
It is only possible for certain combinations of symmetry elements to be present in a molecule (or any other object). As a result, we may group together molecules that possess the same symmetry elements and classify molecules according to their symmetry. These groups of symmetry elements are called point groups (due to the fact that there is at least one point in space that remains unchanged no matter which symmetry operation from the group is applied).
• 1.4: Symmetry and Physical Properties
Carrying out a symmetry operation on a molecule must not change any of its physical properties. It turns out that this has some interesting consequences, allowing us to predict whether or not a molecule may be chiral or polar on the basis of its point group.
• 1.5: Combining Symmetry Operations - ‘Group Multiplication’
Now we will investigate what happens when we apply two symmetry operations in sequence. As we shall soon see, the order in which the operations are applied is important.
• 1.6: Constructing higher groups from simpler groups
A group that contains a large number of symmetry elements may often be constructed from simpler groups.
• 1.7: Mathematical Definition of a Group
A mathematical group is defined as a set of elements together with a rule for forming new combinations within that group. The number of elements is called the order of the group. For our purposes, the elements are the symmetry operations of a molecule and the rule for combining them is the sequential application of symmetry operations investigated in the previous section. The elements of the group and the rule for combining them must satisfy certain criteria.
• 1.8: Review of Matrices
An n×m matrix is a two-dimensional array of numbers with n rows and m columns. This module addresses the basic definitions and operations of matrices that are particularly relevant for symmetry applications.
• 1.9: Transformation matrices
Matrices can be used to map one set of coordinates or functions onto another set. Matrices used for this purpose are called transformation matrices. In group theory, we can use transformation matrices to carry out the various symmetry operations considered at the beginning of the course. As a simple example, we will investigate the matrices we would use to carry out some of these symmetry operations on a model vector.
• 1.10: Matrix Representations of Groups
The symmetry operations in a group may be represented by a set of transformation matrices, one for each symmetry element. Each individual matrix is called a representative of the corresponding symmetry operation, and the complete set of matrices is called a matrix representation of the group. The matrix representatives act on some chosen basis set of functions, and the actual matrices making up a given representation will depend on the basis that has been chosen.
• 1.11: Properties of Matrix Representations
Now that we’ve learnt how to create a matrix representation of a point group within a given basis, we will move on to look at some of the properties that make these representations so powerful in the treatment of molecular symmetry.
• 1.12: Reduction of Representations I
A block diagonal matrix can be written as the direct sum of the matrices that lie along the diagonal. Note that a direct sum is very different from ordinary matrix addition since it produces a matrix of higher dimensionality.
• 1.13: Irreducible representations and symmetry species
When two one-dimensional irreducible representations are seen to be identical, they have the ‘same symmetry’, transforming in the same way under all of the symmetry operations of the point group and forming bases for the same matrix representation. As such, they are said to belong to the same symmetry species. There are a limited number of ways in which an arbitrary function can transform under the symmetry operations of a group, giving rise to a limited number of symmetry species.
• 1.14: Character Tables
A character table summarizes the behavior of all of the possible irreducible representations of a group under each of the symmetry operations of the group. In many applications of group theory, we only need to know the characters of the representative matrices, rather than the matrices themselves. Luckily, when each basis function transforms as a 1D irreducible representation there is a simple shortcut to determining the characters without having to construct the entire matrix representation.
• 1.15: Reduction of representations II
The formation of bonds is dependent on symmetries of the constituent atomic orbitals. To make full use of group theory in the applications we will be considering, we need to develop a little more ‘machinery’. Specifically, given a basis set we need to find out: (1) How to determine the irreducible representations spanned by the basis functions and (2) How to construct linear combinations of the original basis functions that transform as a given irreducible representation/symmetry species.
• 1.16: Symmetry Adapted Linear Combinations (SALCs)
Once we know the irreducible representations spanned by an arbitrary basis set, we can work out the appropriate linear combinations of basis functions that transform the matrix representatives of our original representation into block diagonal form (i.e. the symmetry adapted linear combinations). Each of the SALCs transforms as one of the irreducible representations of the reduced representation.
• 1.17: Determining whether an Integral can be Non-zero
As we continue with this course, we will discover that there are many times when we would like to know whether a particular integral is necessarily zero, or whether there is a chance that it may be non-zero. We can often use group theory to differentiate these two cases.
• 1.18: Bonding in Diatomics
You will already be familiar with the idea of constructing molecular orbitals from linear combinations of atomic orbitals from previous courses covering bonding in diatomic molecules. It turns out that the rule that determines whether or not two atomic orbitals can bond is that they must belong to the same symmetry species within the point group of the molecule.
• 1.19: Bonding in Polyatomics- Constructing Molecular Orbitals from SALCs
In the previous section we showed how to use symmetry to determine whether two atomic orbitals can form a chemical bond. How do we carry out the same procedure for a polyatomic molecule, in which many atomic orbitals may combine to form a bond? Any SALCs of the same symmetry could potentially form a bond, so all we need to do to construct a molecular orbital is take a linear combination of all the SALCs of the same symmetry species.
• 1.20: Calculating Orbital Energies and Expansion Coefficients
Calculation of the orbital energies and expansion coefficients is based on the variation principle, which states that any approximate wavefunction must have a higher energy than the true wavefunction. This follows directly from the fairly common-sense idea that in general any system tries to minimize its energy. If an ‘approximate’ wavefunction had a lower energy than the ‘true’ wavefunction, we would expect the system to try and adopt this ‘approximate’ lower energy state.
• 1.21: Solving the Secular Equations
Any set of linear equations may be rewritten as a matrix equation Ax = b, which can be classified as simultaneous linear equations or homogeneous linear equations, depending on whether b is non-zero or zero. A set of homogeneous equations only has a non-trivial solution if the determinant of A is equal to zero. The secular equations we want to solve are homogeneous equations, and we will use this property of the determinant to determine the molecular orbital energies.
• 1.22: Summary of the Steps Involved in Constructing Molecular Orbitals
The eight steps used to construct arbitrary molecular orbitals of polyatomic systems.
• 1.23: A more complicated bonding example
Group theory can be used to construct the molecular orbitals of molecules using a basis set consisting of all the valence orbitals. This is demonstrated for water, and requires consideration of the proper representations, the characters of their matrices, and the extracted SALCs.
• 1.24: Molecular Vibrations
Vibrational motions of polyatomic molecules are much more complicated than those in a diatomic. Since changing one bond length in a polyatomic will often affect the length of nearby bonds, we cannot consider the vibrational motion of each bond in isolation; instead we talk of normal modes involving the concerted motion of groups of bonds. Group theory may be used to identify the symmetries of the translational, rotational and vibrational modes of motion of a molecule.
• 1.25: Summary of applying group theory to molecular motions
A summary of the steps involved in applying group theory to molecular motions is given.
• 1.26: Group theory and Molecular Electronic States
A molecular orbital is a ‘one-electron wavefunction’, i.e. a solution to the Schrödinger equation for the molecule. An electronic state is defined by the electron configuration of the system, and by the quantum numbers of each electron contributing to that configuration. The symmetry of an electronic state is determined by taking the direct product of the irreducible representations for all of the electrons involved in that state.
• 1.27: Spectroscopy - Interaction of Atoms and Molecules with Light
We have already used group theory to learn about the molecular orbitals in a molecule. In this section we will show that it may also be used to predict which electronic states may be accessed by absorption of a photon. We may also use group theory to investigate how light may be used to excite the various vibrational modes of a polyatomic molecule.
• 1.28: Summary
Hopefully this text has given you a reasonable introduction to the qualitative description of molecular symmetry, and also to the way in which it can be used quantitatively within the context of group theory to predict important molecular properties.
• 1.29: Appendix A
• 1.30: Appendix B- Point Groups
01: Chapters
You will already be familiar with the concept of symmetry in an everyday sense. If we say something is ‘symmetrical’, we usually mean it has mirror symmetry, or ‘left-right’ symmetry, and would look the same if viewed in a mirror. Symmetry is also very important in chemistry. Some molecules are clearly ‘more symmetrical’ than others, but what consequences does this have, if any?
The aim of this course is to provide a systematic treatment of symmetry in chemical systems within the mathematical framework known as group theory (the reason for the name will become apparent later on). Once we have classified the symmetry of a molecule, group theory provides a powerful set of tools that provide us with considerable insight into many of its chemical and physical properties.
Some applications of group theory that will be covered in this course include:
1. Predicting whether a given molecule will be chiral, or polar.
2. Examining chemical bonding and visualising molecular orbitals.
3. Predicting whether a molecule may absorb light of a given polarisation, and which spectroscopic transitions may be excited if it does.
4. Investigating the vibrational motions of the molecule.
You may well meet some of these topics again, possibly in more detail, in later courses. However, they will be introduced here to give you a fairly broad introduction to the capabilities and applications of group theory once we have worked through the basic principles and ‘machinery’ of the theory.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.01%3A_Introduction_to_Symmetry.txt
|
A symmetry operation is an action that leaves an object looking the same after it has been carried out. For example, if we take a molecule of water and rotate it by 180° about an axis passing through the central O atom (between the two H atoms) it will look the same as before. It will also look the same if we reflect it through either of two mirror planes, as shown in the figure below.
Each symmetry operation has a corresponding symmetry element, which is the axis, plane, line or point with respect to which the symmetry operation is carried out. The symmetry element consists of all the points that stay in the same place when the symmetry operation is performed. In a rotation, the line of points that stay in the same place constitute a symmetry axis; in a reflection the points that remain unchanged make up a plane of symmetry.
The symmetry elements that a molecule may possess are:
1. $E$ - the identity. The identity operation consists of doing nothing, and the corresponding symmetry element is the entire molecule. Every molecule has at least this element.
2. $C_n$ - an $n$-fold axis of rotation. Rotation by $360°/n$ leaves the molecule unchanged. The $H_2O$ molecule above has a $C_2$ axis. Some molecules have more than one $C_n$ axis, in which case the one with the highest value of $n$ is called the principal axis. Note that by convention rotations are counterclockwise about the axis.
3. $\sigma$ - a plane of symmetry. Reflection in the plane leaves the molecule looking the same. In a molecule that also has an axis of symmetry, a mirror plane that includes the axis is called a vertical mirror plane and is labeled $\sigma_v$, while one perpendicular to the axis is called a horizontal mirror plane and is labeled $\sigma_h$. A vertical mirror plane that bisects the angle between two $C_2$ axes is called a dihedral mirror plane, $\sigma_d$.
4. $i$ - a center of symmetry. Inversion through the center of symmetry leaves the molecule unchanged. Inversion consists of passing each point through the center of inversion and out to the same distance on the other side of the molecule. An example of a molecule with a center of inversion is shown below.
5. $S_n$ - an n-fold improper rotation axis (also called a rotary-reflection axis). The rotary reflection operation consists of rotating through an angle $360°/n$ about the axis, followed by reflecting in a plane perpendicular to the axis. Note that $S_1$ is the same as reflection and $S_2$ is the same as inversion. The molecule shown above has two $S_2$ axes.
The identity $E$ and rotations $C_n$ are symmetry operations that could actually be carried out on a molecule. For this reason they are called proper symmetry operations. Reflections, inversions and improper rotations can only be imagined (it is not actually possible to turn a molecule into its mirror image or to invert it without some fairly drastic rearrangement of chemical bonds) and as such, are termed improper symmetry operations.
Axis Definitions
Conventionally, when imposing a set of Cartesian axes on a molecule (as we will need to do later on in the course), the $z$ axis lies along the principal axis of the molecule, the $x$ axis lies in the plane of the molecule (or in a plane containing the largest number of atoms if the molecule is non-planar), and the $y$ axis makes up a right handed axis system.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.02%3A_Symmetry_Operations_and_Symmetry_Elements.txt
|
It is only possible for certain combinations of symmetry elements to be present in a molecule (or any other object). As a result, we may group together molecules that possess the same symmetry elements and classify molecules according to their symmetry. These groups of symmetry elements are called point groups (due to the fact that there is at least one point in space that remains unchanged no matter which symmetry operation from the group is applied). There are two systems of notation for labeling symmetry groups, called the Schoenflies and Hermann-Mauguin (or International) systems. The symmetry of individual molecules is usually described using the Schoenflies notation, and we shall be using this notation for the remainder of the course.$^1$
Some of the point groups share their names with symmetry operations, so be careful you do not mix up the two. It is usually clear from the context which one is being referred to.
Molecular Point Groups
1. $C_1$ - contains only the identity (a $C_1$ rotation is a rotation by 360° and is the same as the identity operation $E$) e.g. CHDFCl.
2. $C_i$ - contains the identity $E$ and a center of inversion $i$.
3. $C_S$ - contains the identity $E$ and a plane of reflection $\sigma$.
4. $C_n$ - contains the identity and an $n$-fold axis of rotation.
5. $C_{nv}$ - contains the identity, an $n$-fold axis of rotation, and $n$ vertical mirror planes $\sigma_v$.
6. $C_{nh}$ - contains the identity, an $n$-fold axis of rotation, and a horizontal reflection plane $\sigma_h$ (note that in $C_{2h}$ this combination of symmetry elements automatically implies a center of inversion).
7. $D_n$ - contains the identity, an $n$-fold axis of rotation, and $n$ 2-fold rotations about axes perpendicular to the principal axis.
8. $D_{nh}$ - contains the same symmetry elements as $D_n$ with the addition of a horizontal mirror plane.
9. $D_{nd}$ - contains the same symmetry elements as $D_n$ with the addition of $n$ dihedral mirror planes.
10. $S_n$ - contains the identity and one $S_n$ axis. Note that molecules only belong to $S_n$ if they have not already been classified in terms of one of the preceding point groups (e.g. $S_2$ is the same as $C_i$, and a molecule with this symmetry would already have been classified).
The following groups are the cubic groups, which contain more than one principal axis. They separate into the tetrahedral groups ($T_d$, $T_h$ and $T$) and the octahedral groups ($O$ and $O_h$). The icosahedral group also exists, but is not included below.
1. $T_d$ - contains all the symmetry elements of a regular tetrahedron, including the identity, 4 $C_3$ axes, 3 $C_2$ axes, 6 dihedral mirror planes, and 3 $S_4$ axes e.g. $CH_4$.
2. $T$ - as for $T_d$ but with no planes of reflection.
3. $T_h$ - as for $T$ but contains a center of inversion.
4. $O_h$ - the group of the regular octahedron e.g. $SF_6$.
5. $O$ - as for $O_h$, but with no planes of reflection.
The final group is the full rotation group $R_3$, which consists of an infinite number of $C_n$ axes with all possible values of $n$ and describes the symmetry of a sphere. Atoms (but no molecules) belong to $R_3$, and the group has important applications in atomic quantum mechanics. However, we won’t be treating it any further here.
Once you become more familiar with the symmetry elements and point groups described above, you will find it quite straightforward to classify a molecule in terms of its point group. In the meantime, the flowchart shown below provides a step-by-step approach to the problem.
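The decision logic of such a flowchart can also be written out as a short function. The sketch below is a deliberately simplified version, assuming the molecule is not linear and omitting the special (cubic and icosahedral) groups and the $S_n$ branch; all of the names are illustrative rather than any standard API.

```python
# Simplified point-group flowchart as a decision function (special groups
# and the S_n branch omitted; all argument names are illustrative).
def point_group(n, n_C2_perp, sigma_h, n_sigma_v, inversion):
    """n is the order of the principal axis (use n = 1 if there is none)."""
    if n == 1:
        if sigma_h or n_sigma_v:
            return "Cs"
        return "Ci" if inversion else "C1"
    if n_C2_perp:                              # n C2 axes perpendicular: D-type
        if sigma_h:
            return f"D{n}h"
        return f"D{n}d" if n_sigma_v else f"D{n}"
    if sigma_h:                                # C-type groups
        return f"C{n}h"
    return f"C{n}v" if n_sigma_v else f"C{n}"

print(point_group(2, False, False, True, False))   # water   -> C2v
print(point_group(3, False, False, True, False))   # ammonia -> C3v
print(point_group(6, True, True, True, True))      # benzene -> D6h
```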
$^1$Though the Hermann-Mauguin system can be used to label point groups, it is usually used in the discussion of crystal symmetry. In crystals, in addition to the symmetry elements described above, translational symmetry elements are very important. Translational symmetry operations leave no point unchanged, with the consequence that crystal symmetry is described in terms of space groups rather than point groups.
1.04: Symmetry and Physical Properties
Carrying out a symmetry operation on a molecule must not change any of its physical properties. It turns out that this has some interesting consequences, allowing us to predict whether or not a molecule may be chiral or polar on the basis of its point group.
Polarity
For a molecule to have a permanent dipole moment, it must have an asymmetric charge distribution. The point group of the molecule not only determines whether the molecule may have a dipole moment, but also in which direction(s) it may point.
If a molecule has a $C_n$ axis with $n > 1$, it cannot have a dipole moment perpendicular to the axis of rotation (for example, a $C_2$ rotation would interchange the ends of such a dipole moment and reverse the polarity, which is not allowed – rotations with higher values of $n$ would also change the direction in which the dipole points). Any dipole must lie parallel to a $C_n$ axis.
Also, if the point group of the molecule contains any symmetry operation that would interchange the two ends of the molecule, such as a $\sigma_h$ mirror plane or a $C_2$ rotation perpendicular to the principal axis, then there cannot be a dipole moment along the axis.
The only groups compatible with a dipole moment are $C_n$, $C_{nv}$ and $C_s$. In molecules belonging to $C_n$ or $C_{nv}$ the dipole must lie along the axis of rotation.
Chirality
One example of symmetry in chemistry that you will already have come across is found in the isomeric pairs of molecules called enantiomers. Enantiomers are non-superimposable mirror images of each other, and one consequence of this symmetrical relationship is that they rotate the plane of polarized light passing through them in opposite directions. Such molecules are said to be chiral,$^2$ meaning that they cannot be superimposed on their mirror image. Formally, the symmetry element that precludes a molecule from being chiral is a rotation-reflection axis $S_n$. Such an axis is often implied by other symmetry elements present in a group.
For example, a point group that has $C_n$ and $\sigma_h$ as elements will also have $S_n$. Similarly, a center of inversion is equivalent to $S_2$. As a rule of thumb, a molecule definitely cannot be chiral if it has a center of inversion or a mirror plane of any type ($\sigma_h$, $\sigma_v$ or $\sigma_d$), but if these symmetry elements are absent the molecule should be checked carefully for an $S_n$ axis before it is assumed to be chiral.
$^2$ The word chiral has its origins in the Greek word for hand ($\chi\epsilon\rho\iota$, pronounced ‘cheri’ with a soft ch as in ‘loch’). A pair of hands is also a pair of non-superimposable mirror images, and you will often hear chirality referred to as ‘handedness’ for this reason.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.03%3A_Symmetry_Classification_of_Molecules-_Point_Groups.txt
|
Now we will investigate what happens when we apply two symmetry operations in sequence. As an example, consider the $NH_3$ molecule, which belongs to the $C_{3v}$ point group. Consider what happens if we apply a $C_3$ rotation followed by a $\sigma_v$ reflection. We write this combined operation $\sigma_v C_3$ (when written, symmetry operations operate on the thing directly to their right, just as operators do in quantum mechanics – we therefore have to work backwards from right to left from the notation to get the correct order in which the operators are applied). As we shall soon see, the order in which the operations are applied is important.
The combined operation $\sigma_v$$C_3$ is equivalent to $\sigma_v''$, which is also a symmetry operation of the $C_{3v}$ point group. Now let’s see what happens if we apply the operators in the reverse order i.e. $C_3$$\sigma_v$ ($\sigma_v$ followed by $C_3$).
Again, the combined operation $C_3$$\sigma_v$ is equivalent to another operation of the point group, this time $\sigma_v'$.
There are two important points that are illustrated by this example:
1. The order in which two operations are applied is important. For two symmetry operations $A$ and $B$, $AB$ is not necessarily the same as $BA$, i.e. symmetry operations do not in general commute. In some groups the symmetry elements do commute; such groups are said to be Abelian.
2. If two operations from the same point group are applied in sequence, the result will be equivalent to another operation from the point group. Symmetry operations that are related to each other by other symmetry operations of the group are said to belong to the same class. In $NH_3$, the three mirror planes $\sigma_v$, $\sigma_v'$ and $\sigma_v''$ belong to the same class (related to each other through a $C_3$ rotation), as do the rotations $C_3^+$ and $C_3^-$ (anticlockwise and clockwise rotations about the principal axis, related to each other by a vertical mirror plane).
The effects of applying two symmetry operations in sequence within a given point group are summarized in group multiplication tables. As an example, the complete group multiplication table for $C_{3v}$ using the symmetry operations as defined in the figures above is shown below. The operations written along the first row of the table are carried out first, followed by those written in the first column (note that the table would change if we chose to name $\sigma_v$, $\sigma_v'$ and $\sigma_v''$ in some different order).
$\begin{array}{l|llllll} C_{3v} & E & C_3^+ & C_3^- & \sigma_v & \sigma_v' & \sigma_v'' \ \hline E & E & C_3^+ & C_3^- & \sigma_v & \sigma_v' & \sigma_v'' \ C_3^+ & C_3^+ & C_3^- & E & \sigma_v' & \sigma_v'' & \sigma_v \ C_3^- & C_3^- & E & C_3^+ & \sigma_v'' & \sigma_v & \sigma_v' \ \sigma_v & \sigma_v & \sigma_v'' & \sigma_v' & E & C_3^- & C_3^+ \ \sigma_v' & \sigma_v' & \sigma_v & \sigma_v'' & C_3^+ & E & C_3^- \ \sigma_v'' & \sigma_v'' & \sigma_v' & \sigma_v & C_3^- & C_3^+ & E \end{array} \label{5.1}$
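As a quick consistency check, the table above can be encoded directly and the group axioms verified by brute force. The sketch below is an illustrative Python transcription (plain-text names stand in for the symbols), with table[a][b] meaning ‘b applied first, then a’, matching the row/column convention of the table.

```python
from itertools import product

ops = ["E", "C3+", "C3-", "sv", "sv'", "sv''"]
rows = {  # row = operation applied second, column = operation applied first
    "E":    ["E", "C3+", "C3-", "sv", "sv'", "sv''"],
    "C3+":  ["C3+", "C3-", "E", "sv'", "sv''", "sv"],
    "C3-":  ["C3-", "E", "C3+", "sv''", "sv", "sv'"],
    "sv":   ["sv", "sv''", "sv'", "E", "C3-", "C3+"],
    "sv'":  ["sv'", "sv", "sv''", "C3+", "E", "C3-"],
    "sv''": ["sv''", "sv'", "sv", "C3-", "C3+", "E"],
}
table = {a: dict(zip(ops, row)) for a, row in rows.items()}

assert all(table[a]["E"] == a == table["E"][a] for a in ops)     # identity
assert all(any(table[a][b] == "E" for b in ops) for a in ops)    # inverses
assert all(table[table[a][b]][c] == table[a][table[b][c]]        # associativity
           for a, b, c in product(ops, repeat=3))
print("C3v table satisfies the group axioms; non-Abelian:",
      table["sv"]["C3+"], "!=", table["C3+"]["sv"])
```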
1.06: Constructing higher groups from simpler groups
A group that contains a large number of symmetry elements may often be constructed from simpler groups. This is probably best illustrated using an example. Consider the point groups $C_2$ and $C_S$. $C_2$ contains the elements $E$ and $C_2$, and has order 2, while $C_S$ contains $E$ and σ and also has order $2$. We can use these two groups to construct the group $C_{2v}$ by applying the symmetry operations of $C_2$ and $C_S$ in sequence.
$\begin{array}{lllll} C_2 \: \text{operation} & E & E & C_2 & C_2 \ C_S \: \text{operation} & E & \sigma(xz) & E & \sigma(xz) \ \text{Result} & E & \sigma_v(xz) & C_2 & \sigma_v'(yz) \end{array} \tag{6.1}$
Notice that $C_{2v}$ has order $4$, which is the product of the orders of the two lower-order groups. $C_{2v}$ may be described as a direct product group of $C_2$ and $C_S$. The origin of this name should become obvious when we review the properties of matrices.
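A minimal sketch of this construction in code: pairing every operation of $C_2$ with every operation of $C_S$ reproduces the four operations of $C_{2v}$ listed in the table above (the string names are just labels chosen for illustration).

```python
from itertools import product

C2 = ["E", "C2"]
Cs = ["E", "sigma(xz)"]
# Result of applying the Cs operation after the C2 operation, as in the table:
names = {("E", "E"): "E",
         ("E", "sigma(xz)"): "sigma_v(xz)",
         ("C2", "E"): "C2",
         ("C2", "sigma(xz)"): "sigma_v'(yz)"}

group = [names[pair] for pair in product(C2, Cs)]
print(group)         # ['E', 'sigma_v(xz)', 'C2', "sigma_v'(yz)"]
print(len(group))    # order 4 = 2 x 2, the product of the two orders
```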
1.07: Mathematical Definition of a Group
Now that we have explored some of the properties of symmetry operations and elements and their behavior within point groups, we are ready to introduce the formal mathematical definition of a group.
A mathematical group is defined as a set of elements ($g_1$, $g_2$, $g_3$...) together with a rule for forming combinations $g_i$$g_j$. The number of elements $h$ is called the order of the group. For our purposes, the elements are the symmetry operations of a molecule and the rule for combining them is the sequential application of symmetry operations investigated in the previous section. The elements of the group and the rule for combining them must satisfy the following criteria.
1. The group must include the identity $E$, for which $E g_i= g_i \tag{7.1}$ for all the elements of the group.
2. The elements must satisfy the group property that the combination of any pair of elements is also an element of the group.
3. Each element $g_i$ must have an inverse $g_i^{-1}$, which is also an element of the group, such that $g_i g_i^{-1} = g_i^{-1}g_i = E \tag{7.2}$ (e.g. in $C_{3v}$ the inverse of $C_3^+$ is $C_3^-$, and the inverse of $\sigma_v$ is $\sigma_v$ itself; the inverse $g_i^{-1}$ effectively 'undoes' the effect of the symmetry operation $g_i$).
4. The rule of combination must be associative i.e. $(g_i g_j )(g_k) = g_i(g_jg_k) \tag{7.3}$
The above definition does not require the elements to commute, which would require
$g_i g_k =g_k g_i \tag{7.4}$
As we discovered in the $C_{3v}$ example above, in many groups the outcome of the consecutive application of two symmetry operations depends on the order in which the operations are applied. Groups for which the elements do not commute are called non-Abelian groups; those for which the elements do commute are Abelian.
Group theory is an important area in mathematics, and luckily for chemists the mathematicians have already done most of the work for us. Along with the formal definition of a group comes a comprehensive mathematical framework that allows us to carry out a rigorous treatment of symmetry in molecular systems and learn about its consequences.
Many problems involving operators or operations (such as those found in quantum mechanics or group theory) may be reformulated in terms of matrices. Any of you who have come across transformation matrices before will know that symmetry operations such as rotations and reflections may be represented by matrices. It turns out that the set of matrices representing the symmetry operations in a group obey all the conditions laid out above in the mathematical definition of a group, and using matrix representations of symmetry operations simplifies carrying out calculations in group theory. Before we learn how to use matrices in group theory, it will probably be helpful to review some basic definitions and properties of matrices.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.05%3A_Combining_Symmetry_Operations_-_Group_Multiplication.txt
|
Definitions
An $n \times m$ matrix is a two dimensional array of numbers with $n$ rows and $m$ columns. The integers $n$ and $m$ are called the dimensions of the matrix. If $n = m$ then the matrix is square. The numbers in the matrix are known as matrix elements (or just elements) and are usually given subscripts to signify their position in the matrix e.g. an element $a_{ij}$ would occupy the $i^{th}$ row and $j^{th}$ column of the matrix. For example:
$M = \left(\begin{array}{ccc} 1 & 2 & 3 \ 4 & 5 & 6 \ 7 & 8 & 9 \end{array}\right) \label{8.1}$
is a $3 \times 3$ matrix with $a_{11}=1$, $a_{12}=2$, $a_{13}=3$, $a_{21}=4$ etc
In a square matrix, diagonal elements are those for which $i$=$j$ (the numbers $1$, $5$, and $9$ in the above example). Off-diagonal elements are those for which $i \neq j$ ($2$, $3$, $4$, $6$, $7$, and $8$ in the above example). If all the off-diagonal elements are equal to zero then we have a diagonal matrix. We will see later that diagonal matrices are of considerable importance in group theory.
A unit matrix or identity matrix (usually given the symbol $I$) is a diagonal matrix in which all the diagonal elements are equal to $1$. A unit matrix acting on another matrix has no effect – it is the same as the identity operation in group theory and is analogous to multiplying a number by $1$ in everyday arithmetic.
The transpose $A^T$ of a matrix $A$ is the matrix that results from interchanging all the rows and columns. A symmetric matrix is the same as its transpose ($A^T=A$ i.e. $a_{ij} = a_{ji}$ for all values of $i$ and $j$). The transpose of matrix $M$ above (which is not symmetric) is
$M^{T} = \left(\begin{array}{ccc} 1 & 4 & 7 \ 2 & 5 & 8 \ 3 & 6 & 9 \end{array}\right) \label{8.2}$
The sum of the diagonal elements in a square matrix is called the trace (or character) of the matrix (for the above matrix, the trace is $\chi = 1 + 5 + 9 = 15$). The traces of matrices representing symmetry operations will turn out to be of great importance in group theory.
A vector is just a special case of a matrix in which one of the dimensions is equal to $1$. An $n \times 1$ matrix is a column vector; a $1 \times m$ matrix is a row vector. The components of a vector are usually only labeled with one index. A unit vector has one element equal to $1$ and the others equal to zero (it is the same as one row or column of an identity matrix). We can extend the idea further to say that a single number is a matrix (or vector) of dimension $1 \times 1$.
Matrix Algebra
1. Two matrices with the same dimensions may be added or subtracted by adding or subtracting the elements occupying the same position in each matrix. e.g.
$A = \begin{pmatrix} 1 & 0 & 2 \ 2 & 2 &1 \ 3 & 2 & 0 \end{pmatrix} \label{8.3}$
$B = \begin{pmatrix} 2 & 0 & -2 \ 1 & 0 & 1 \ 1 & -1 & 0 \end{pmatrix} \label{8.4}$
$A + B = \begin{pmatrix} 3 & 0 & 0 \ 3 & 2 & 2 \ 4 & 1 & 0 \end{pmatrix} \label{8.5}$
$A - B = \begin{pmatrix} -1 & 0 & 4 \ 1 & 2 & 0 \ 2 & 3 & 0 \end{pmatrix} \label{8.6}$
2. A matrix may be multiplied by a constant by multiplying each element by the constant.
$4B = \left(\begin{array}{ccc} 8 & 0 & -8 \ 4 & 0 & 4 \ 4 & -4 & 0 \end{array}\right) \label{8.7}$
$3A = \left(\begin{array}{ccc} 3 & 0 & 6 \ 6 & 6 & 3 \ 9 & 6 & 0 \end{array}\right) \label{8.8}$
3. Two matrices may be multiplied together provided that the number of columns of the first matrix is the same as the number of rows of the second matrix i.e. an $n \times m$ matrix may be multiplied by an $m \times l$ matrix. The resulting matrix will have dimensions $n \times l$. To find the element $c_{ij}$ in the product matrix, we take the dot product of row $i$ of the first matrix and column $j$ of the second matrix (i.e. we multiply consecutive elements together from row $i$ of the first matrix and column $j$ of the second matrix and add them together, giving $c_{ij} = \Sigma_k a_{ik}b_{kj}$). For example, in the $3 \times 3$ matrices $A$ and $B$ used in the above examples, the first element in the product matrix $C = AB$ is $c_{11} = a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31}$
$AB = \begin{pmatrix} 1 & 0 & 2 \ 2 & 2 & 1 \ 3 & 2 & 0 \end{pmatrix} \begin{pmatrix} 2 & 0 & -2 \ 1 & 0 & 1 \ 1 & -1 & 0 \end{pmatrix} = \begin{pmatrix} 4 & -2 & -2 \ 7 & -1 & -2 \ 8 & 0 & -4 \end{pmatrix} \label{8.9}$
An example of a matrix multiplying a vector is
$A\textbf{v} = \begin{pmatrix} 1 & 0 & 2 \ 2 & 2 & 1 \ 3 & 2 & 0 \end{pmatrix} \begin{pmatrix} 1 \ 2 \ 3 \end{pmatrix} = \begin{pmatrix} 7 \ 9 \ 7 \end{pmatrix} \label{8.10}$
Matrix multiplication is not generally commutative, a property that mirrors the behavior found earlier for symmetry operations within a point group.
Direct Products
The direct product of two matrices (given the symbol $\otimes$) is a special type of matrix product that generates a matrix of higher dimensionality if both matrices have dimension greater than one. The easiest way to demonstrate how to construct a direct product of two matrices $A$ and $B$ is by an example:
\begin{align} A \otimes B &= \begin{pmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{pmatrix} \otimes \begin{pmatrix} b_{11} & b_{12} \ b_{21} & b_{22} \end{pmatrix} \[4pt] &= \begin{pmatrix} a_{11}B & a_{12}B \ a_{21}B & a_{22}B \end{pmatrix} \[4pt] &= \begin{pmatrix} a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12} \ a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22} \ a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12} \ a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22} \end{pmatrix} \label{8.11} \end{align}
Though this may seem like a somewhat strange operation to carry out, direct products crop up a great deal in group theory.
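This operation is exactly the Kronecker product, available in NumPy as np.kron. A quick check with small arbitrary matrices also illustrates a property that becomes useful later: the trace of a direct product is the product of the traces.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

AB = np.kron(A, B)                               # the 4x4 direct product A (x) B
print(AB)
print(np.trace(AB), np.trace(A) * np.trace(B))   # both give 35
```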
Inverse Matrices and Determinants
If two square matrices $A$ and $B$ multiply together to give the identity matrix I (i.e. $AB = I$) then $B$ is said to be the inverse of $A$ (written $A^{-1}$). If $B$ is the inverse of $A$ then $A$ is also the inverse of $B$. Recall that one of the conditions imposed upon the symmetry operations in a group is that each operation must have an inverse. It follows by analogy that any matrices we use to represent symmetry elements must also have inverses. It turns out that a square matrix only has an inverse if its determinant is non-zero. For this reason (and others which will become apparent later on when we need to solve equations involving matrices) we need to learn a little about matrix determinants and their properties.
For every square matrix, there is a unique function of all the elements that yields a single number called the determinant. Initially it probably won’t be particularly obvious why this number should be useful, but matrix determinants are of great importance both in pure mathematics and in a number of areas of science. Historically, determinants were actually around before matrices. They arose originally as a property of a system of linear equations that ‘determined’ whether the system had a unique solution. As we shall see later, when such a system of equations is recast as a matrix equation this property carries over into the matrix determinant.
There are two different definitions of a determinant, one geometric and one algebraic. In the geometric interpretation, we consider the numbers across each row of an $n \times n$ matrix as coordinates in $n$-dimensional space. In a one-dimensional matrix (i.e. a number), there is only one coordinate, and the determinant can be interpreted as the (signed) length of a vector from the origin to this point. For a $2 \times 2$ matrix we have two coordinates in a plane, and the determinant is the (signed) area of the parallelogram that includes these two points and the origin. For a $3 \times 3$ matrix the determinant is the (signed) volume of the parallelepiped that includes the three points (in three-dimensional space) defined by the matrix and the origin. This is illustrated below. The idea extends up to higher dimensions in a similar way. In some sense then, the determinant is therefore related to the size of a matrix.
The algebraic definition of the determinant of an $n \times n$ matrix is a sum over all the possible products (permutations) of $n$ elements taken from different rows and columns. The number of terms in the sum is $n!$, the number of possible permutations of $n$ values (i.e. $2$ for a $2 \times 2$ matrix, $6$ for a $3 \times 3$ matrix etc). Each term in the sum is given a positive or a negative sign depending on whether the number of permutation inversions in the product is even or odd. A permutation inversion is just a pair of elements that are out of order when described by their indices. For example, for a set of four elements $\begin{pmatrix} a_1, a_2, a_3, a_4 \end{pmatrix}$, the permutation $a_1 a_2 a_3 a_4$ has all the elements in their correct order (i.e. in order of increasing index). However, the permutation $a_2 a_4 a_1 a_3$ contains the permutation inversions $a_2 a_1$, $a_4 a_1$, $a_4 a_3$.
For example, for a two-dimensional matrix
$\begin{pmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{pmatrix} \label{8.12}$
where the subscripts label the row and column positions of the elements, there are $2$ possible products/permutations involving elements from different rows and column, $a_{11}$$a_{22}$ and $a_{12}$$a_{21}$. In the second term, there is a permutation inversion involving the column indices $2$ and $1$ (permutation inversions involving the row and column indices should be looked for separately) so this term takes a negative sign, and the determinant is $a_{11}$$a_{22}$ - $a_{12}$$a_{21}$.
For a $3 \times 3$ matrix
$\begin{pmatrix} a_{11} & a_{12} & a_{13} \ a_{21} & a_{22} & a_{23} \ a_{31} & a_{32} & a_{33} \end{pmatrix} \label{8.13}$
the possible combinations of elements from different rows and columns, together with the sign from the number of permutations required to put their indices in numerical order are:
$\begin{array}{rl}a_{11} a_{22} a_{33} & (0 \: \text{inversions}) \ -a_{11} a_{23} a_{32} & (1 \: \text{inversion -} \: 3>2 \: \text{in the column indices}) \ -a_{12} a_{21} a_{33} & (1 \: \text{inversion -} \: 2>1 \: \text{in the column indices}) \ a_{12} a_{23} a_{31} & (2 \: \text{inversions -} \: 2>1 \: \text{and} \: 3>1 \: \text{in the column indices}) \ a_{13} a_{21} a_{32} & (2 \: \text{inversions -} \: 3>1 \: \text{and} \: 3>2 \: \text{in the column indices}) \ -a_{13} a_{22} a_{31} & (3 \: \text{inversions -} \: 3>2, 3>1, \: \text{and} \: 2>1 \: \text{in the column indices}) \end{array} \label{8.14}$
and the determinant is simply the sum of these terms.
This may all seem a little complicated, but in practice there is a fairly systematic procedure for calculating determinants. The determinant of a matrix $A$ is usually written det($A$) or $|A|$.
For a $2 \times 2$ matrix
$A = \begin{pmatrix} a & b \ c & d \end{pmatrix}; det(A) = |A| = \begin{vmatrix} a & b \ c & d \end{vmatrix} = ad - bc \label{8.15}$
For a $3 \times 3$ matrix
$B = \begin{pmatrix} a & b & c \ d & e & f \ g & h & i \end{pmatrix}; det(B) = a\begin{vmatrix} e & f \ h & i \end{vmatrix} - b\begin{vmatrix} d & f \ g & i \end{vmatrix} + c\begin{vmatrix} d & e \ g & h \end{vmatrix} \label{8.16}$
For a $4 \times 4$ matrix
$C = \begin{pmatrix} a & b & c & d \ e & f & g & h \ i & j & k & l \ m & n & o & p \end{pmatrix}; det(C) = a\begin{vmatrix} f & g & h \ j & k & l \ n & o & p \end{vmatrix} - b\begin{vmatrix} e & g & h \ i & k & l \ m & o & p \end{vmatrix} + c\begin{vmatrix} e & f & h \ i & j & l \ m & n & p \end{vmatrix} - d\begin{vmatrix} e & f & g \ i & j & k \ m & n & o \end{vmatrix} \label{8.17}$
and so on in higher dimensions. Note that the submatrices in the $3 \times 3$ example above are just the matrices formed from the original matrix $B$ that don’t include any elements from the same row or column as the premultiplying factors from the first row.
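Writing the $3 \times 3$ cofactor expansion out explicitly and checking it against numpy.linalg.det makes the recipe concrete; here we reuse the matrix $B$ from the matrix algebra examples above.

```python
import numpy as np

# Cofactor expansion along the first row, checked against numpy.linalg.det.
# B is the matrix used in the matrix algebra examples above.
B = np.array([[2., 0., -2.],
              [1., 0.,  1.],
              [1., -1., 0.]])

minor = lambda i, j: np.delete(np.delete(B, i, axis=0), j, axis=1)
det2 = lambda M: M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]

a, b, c = B[0]
det_B = a * det2(minor(0, 0)) - b * det2(minor(0, 1)) + c * det2(minor(0, 2))
print(det_B, np.linalg.det(B))          # both give 4.0
```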
Matrix determinants have a number of important properties:
1. The determinant of the identity matrix is $1$.
$e.g. \begin{vmatrix} 1 & 0 \ 0 & 1 \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \ 0 & 1 & 0 \ 0 & 0 & 1 \end{vmatrix} = 1 \label{8.18}$
2. The determinant of a matrix is the same as the determinant of its transpose i.e. det($A$) = det($A^{T}$)
$e.g. \begin{vmatrix} a & b \ c & d \end{vmatrix} = \begin{vmatrix} a & c \ b & d \end{vmatrix} \label{8.19}$
3. The determinant changes sign when any two rows or any two columns are interchanged
$e.g. \begin{vmatrix} a & b \ c & d \end{vmatrix} = -\begin{vmatrix} b & a \ d & c \end{vmatrix} = -\begin{vmatrix} c & d \ a & b \end{vmatrix} = \begin{vmatrix} d & c \ b & a \end{vmatrix} \label{8.20}$
4. The determinant is zero if any row or column is entirely zero, or if any two rows or columns are equal or a multiple of one another.
$e.g. \begin{vmatrix} 1 & 2 \ 0 & 0 \end{vmatrix} = 0, \begin{vmatrix} 1 & 2 \ 2 & 4 \end{vmatrix} = 0 \label{8.21}$
5. The determinant is unchanged by adding any linear combination of rows (or columns) to another row (or column).
6. The determinant of the product of two matrices is the same as the product of the determinants of the two matrices i.e. det($AB$) = det($A$)det($B$).
The requirement that in order for a matrix to have an inverse it must have a non-zero determinant follows from property 6. As mentioned previously, the product of a matrix and its inverse yields the identity matrix $I$. We therefore have:
$\begin{array}{rcl} det(A^{-1} A) = det(A^{-1}) det(A) & = & det(I) \ det(A^{-1}) & = & det(I)/det(A) = 1/det(A) \end{array} \label{8.22}$
It follows that a matrix $A$ can only have an inverse if its determinant is non-zero, otherwise the determinant of its inverse would be undefined.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.08%3A_Review_of_Matrices.txt
|
Matrices can be used to map one set of coordinates or functions onto another set. Matrices used for this purpose are called transformation matrices. In group theory, we can use transformation matrices to carry out the various symmetry operations considered at the beginning of the course. As a simple example, we will investigate the matrices we would use to carry out some of these symmetry operations on a vector $\begin{pmatrix} x, y \end{pmatrix}$.
The identity Operation
The identity operation leaves the vector unchanged, and as you may already suspect, the appropriate matrix is the identity matrix.
$\begin{pmatrix} x, y \end{pmatrix} \begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix} = \begin{pmatrix} x, y \end{pmatrix} \label{9.1}$
Reflection in a plane
The simplest example of a reflection matrix corresponds to reflecting the vector $\begin{pmatrix} x, y \end{pmatrix}$ in either the $x$ or $y$ axes. Reflection in the $x$ axis maps $y$ to $-y$, while reflection in the $y$ axis maps $x$ to $-x$. The appropriate matrix is very like the identity matrix but with a change in sign for the appropriate element. Reflection in the $x$ axis transforms the vector $\begin{pmatrix} x, y \end{pmatrix}$ to $\begin{pmatrix} x, -y \end{pmatrix}$, and the appropriate matrix is
$\begin{pmatrix} x, y \end{pmatrix} \begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix} = \begin{pmatrix} x, -y \end{pmatrix} \label{9.2}$
Reflection in the y axis transforms the vector $\begin{pmatrix} x, y \end{pmatrix}$ to $\begin{pmatrix} -x, y \end{pmatrix}$, and the appropriate matrix is
$\begin{pmatrix} x, y \end{pmatrix} \begin{pmatrix} -1 & 0 \ 0 & 1 \end{pmatrix} = \begin{pmatrix} -x, y \end{pmatrix} \label{9.3}$
More generally, matrices can be used to represent reflections in any plane (or line in 2D). For example, reflection in the line $y = -x$ (an axis at 45° to the coordinate axes) maps $\begin{pmatrix} x, y \end{pmatrix}$ onto $\begin{pmatrix} -y, -x \end{pmatrix}$.
$\begin{pmatrix} x, y \end{pmatrix} \begin{pmatrix} 0 & -1 \ -1 & 0 \end{pmatrix} = \begin{pmatrix} -y, -x \end{pmatrix} \label{9.4}$
Rotation about an Axis
In two dimensions, the appropriate matrix to represent rotation by an angle $\theta$ about the origin is
$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \ \sin\theta & \cos\theta \end{pmatrix} \label{9.5}$
In three dimensions, rotations about the $x$, $y$ and $z$ axes acting on a vector $\begin{pmatrix} x, y, z \end{pmatrix}$ are represented by the following matrices.
$R_{x}(\theta) = \begin{pmatrix} 1 & 0 & 0 \ 0 & \cos\theta & -\sin\theta \ 0 & \sin\theta & \cos\theta \end{pmatrix} \label{9.6a}$
$R_{y}(\theta) = \begin{pmatrix} \cos\theta & 0 & -\sin\theta \ 0 & 1 & 0 \ \sin\theta & 0 & \cos\theta \end{pmatrix} \label{9.6b}$
$R_{z}(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \ \sin\theta & \cos\theta & 0 \ 0 & 0 & 1 \end{pmatrix} \label{9.6c}$
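A short numerical check of these rotation matrices: composing a rotation with its inverse (a rotation of the opposite sense) gives the identity, and a proper rotation has determinant $+1$, in contrast to the determinant of $-1$ for a reflection.

```python
import numpy as np

def Rz(theta):
    """Rotation by theta about the z axis, as defined above."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

theta = np.pi / 3
assert np.allclose(Rz(theta) @ Rz(-theta), np.eye(3))     # inverse rotation
assert np.isclose(np.linalg.det(Rz(theta)), 1.0)          # proper rotation
print("rotation checks passed")
```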
1.10: Matrix Representations of Groups
We are now ready to integrate what we have just learned about matrices with group theory. The symmetry operations in a group may be represented by a set of transformation matrices $\Gamma$$(g)$, one for each symmetry element $g$. Each individual matrix is called a representative of the corresponding symmetry operation, and the complete set of matrices is called a matrix representation of the group. The matrix representatives act on some chosen basis set of functions, and the actual matrices making up a given representation will depend on the basis that has been chosen. The representation is then said to span the chosen basis. In the examples above we were looking at the effect of some simple transformation matrices on an arbitrary vector $\begin{pmatrix} x, y \end{pmatrix}$. The basis was therefore a pair of unit vectors pointing in the $x$ and $y$ directions. In most of the examples we will be considering in this course, we will use sets of atomic orbitals as basis functions for matrix representations. Don’t worry too much if these ideas seem a little abstract at the moment – they should become clearer in the next section when we look at some examples.
Before proceeding any further, we must check that a matrix representation of a group obeys all of the rules set out in the formal mathematical definition of a group.
1. The first rule is that the group must include the identity operation $E$ (the ‘do nothing’ operation). We showed above that the matrix representative of the identity operation is simply the identity matrix. As a consequence, every matrix representation includes the appropriate identity matrix.
2. The second rule is that the combination of any pair of elements must also be an element of the group (the group property). If we multiply together any two matrix representatives, we should get a new matrix which is a representative of another symmetry operation of the group. In fact, matrix representatives multiply together to give new representatives in exactly the same way as symmetry operations combine according to the group multiplication table. For example, in the $C_{3v}$ point group, we showed that the combined symmetry operation $C_3$$\sigma_v$ is equivalent to $\sigma_v''$. In a matrix representation of the group, if the matrix representatives of $C_3$ and $\sigma_v$ are multiplied together, the result will be the representative of $\sigma_v''$.
3. The third rule states that every operation must have an inverse, which is also a member of the group. The combined effect of carrying out an operation and its inverse is the same as the identity operation. It is fairly easy to show that matrix representatives satisfy this criterion. For example, the inverse of a reflection is another reflection, identical to the first. In matrix terms we would therefore expect that a reflection matrix was its own inverse, and that two identical reflection matrices multiplied together would give the identity matrix. This turns out to be true, and can be verified using any of the reflection matrices in the examples above. The inverse of a rotation matrix is another rotation matrix corresponding to a rotation of the opposite sense to the first.
4. The final rule states that the rule of combination of symmetry elements in a group must be associative. This is automatically satisfied by the rules of matrix multiplication.
Example: a matrix representation of the $C_{3v}$ point group (the ammonia molecule)
The first thing we need to do before we can construct a matrix representation is to choose a basis. For $NH_3$, we will select a basis $\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}$ that consists of the valence s orbitals on the nitrogen and the three hydrogen atoms. We need to consider what happens to this basis when it is acted on by each of the symmetry operations in the $C_{3v}$ point group, and determine the matrices that would be required to produce the same effect. The basis set and the symmetry operations in the $C_{3v}$ point group are summarized in the figure below.
The effects of the symmetry operations on our chosen basis are as follows:
$\begin{array}{ll} E & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \rightarrow \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \ C_3^+ & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \rightarrow \begin{pmatrix} s_N, s_2, s_3, s_1 \end{pmatrix} \ C_3^- & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \rightarrow \begin{pmatrix} s_N, s_3, s_1, s_2 \end{pmatrix} \ \sigma_v & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \rightarrow \begin{pmatrix} s_N, s_1, s_3, s_2 \end{pmatrix} \ \sigma_v' & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \rightarrow \begin{pmatrix} s_N, s_2, s_1, s_3 \end{pmatrix} \ \sigma_v'' & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \rightarrow \begin{pmatrix} s_N, s_3, s_2, s_1 \end{pmatrix} \end{array} \label{10.1}$
By inspection, the matrices that carry out the same transformations are:
$\begin{array}{ll} \Gamma(E) & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \ \Gamma(C_3^+) & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 1 & 0 & 0 \ 0 & 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} s_N, s_2, s_3, s_1 \end{pmatrix} \ \Gamma(C_3^-) & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 1 \ 0 & 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} s_N, s_3, s_1, s_2 \end{pmatrix} \ \Gamma(\sigma_v) & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} s_N, s_1, s_3, s_2 \end{pmatrix} \ \Gamma(\sigma_v') & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} s_N, s_2, s_1, s_3 \end{pmatrix} \ \Gamma(\sigma_v'') & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 \ 0 & 0 & 1 & 0 \ 0 & 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} s_N, s_3, s_2, s_1 \end{pmatrix} \end{array} \label{10.2}$
These six matrices therefore form a representation for the $C_{3v}$ point group in the $\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}$ basis. They multiply together according to the group multiplication table and satisfy all the requirements for a mathematical group.
We have written the vectors representing our basis as row vectors. This is important. If we had written them as column vectors, the corresponding transformation matrices would be the transposes of the matrices above, and would not reproduce the group multiplication table (try it as an exercise if you need to convince yourself).
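It is straightforward to check these claims numerically. The sketch below transcribes the six matrices from above and confirms, for example, that the product reproducing $\sigma_v C_3^+ = \sigma_v''$ has the matrix of the operation applied second on the left (a consequence of the row-vector convention), and that every product of two representatives is again a representative.

```python
import numpy as np

# The six C3v representatives in the (sN, s1, s2, s3) basis, from above.
G = {
    "E":    np.eye(4, dtype=int),
    "C3+":  np.array([[1,0,0,0],[0,0,0,1],[0,1,0,0],[0,0,1,0]]),
    "C3-":  np.array([[1,0,0,0],[0,0,1,0],[0,0,0,1],[0,1,0,0]]),
    "sv":   np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]),
    "sv'":  np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]]),
    "sv''": np.array([[1,0,0,0],[0,0,0,1],[0,0,1,0],[0,1,0,0]]),
}

# sigma_v C3+ = sigma_v'': the second operation's matrix sits on the left.
assert np.array_equal(G["sv"] @ G["C3+"], G["sv''"])

# Closure: every product of two representatives is again a representative.
mats = list(G.values())
assert all(any(np.array_equal(A @ B, M) for M in mats)
           for A in mats for B in mats)
print("matrix representation reproduces the group structure")
```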
Example: a matrix representation of the $C_{2v}$ point group (the allyl radical)
In this example, we’ll take as our basis a $p$ orbital on each carbon atom $\begin{pmatrix} p_1, p_2, p_3 \end{pmatrix}$.
Note that the $p$ orbitals are perpendicular to the plane of the carbon atoms (this may seem obvious, but if you’re visualizing the basis incorrectly it will shortly cause you a not inconsiderable amount of confusion). The symmetry operations in the $C_{2v}$ point group, and their effect on the three $p$ orbitals, are as follows:
$\begin{array}{ll} E & \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \rightarrow \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \ C_2 & \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \rightarrow \begin{pmatrix} -p_3, -p_2, -p_1 \end{pmatrix} \ \sigma_v & \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \rightarrow \begin{pmatrix} -p_1, -p_2, -p_3 \end{pmatrix} \ \sigma_v' & \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \rightarrow \begin{pmatrix} p_3, p_2, p_1 \end{pmatrix} \end{array} \label{10.3}$
The matrices that carry out the transformation are
$\begin{array}{ll} \Gamma(E) & \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \ 0 & 1 & 0 \ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \ \Gamma(C_2) & \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \begin{pmatrix} 0 & 0 & -1 \ 0 & -1 & 0 \ -1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} -p_3, -p_2, -p_1 \end{pmatrix} \ \Gamma(\sigma_v) & \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \begin{pmatrix} -1 & 0 & 0 \ 0 & -1 & 0 \ 0 & 0 & -1 \end{pmatrix} = \begin{pmatrix} -p_1, -p_2, -p_3 \end{pmatrix} \ \Gamma(\sigma_v') & \begin{pmatrix} p_1, p_2, p_3 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \ 0 & 1 & 0 \ 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} p_3, p_2, p_1 \end{pmatrix} \end{array} \label{10.4}$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.09%3A_Transformation_matrices.txt
|
Now that we’ve learnt how to create a matrix representation of a point group within a given basis, we will move on to look at some of the properties that make these representations so powerful in the treatment of molecular symmetry.
Similarity Transforms
Suppose we have a basis set $\begin{pmatrix} x_1, x_2, x_3, ... x_n \end{pmatrix}$, and we have determined the matrix representatives for the basis in a given point group. There is nothing particularly special about the basis set we have chosen, and we could equally well have used any set of linear combinations of the original functions (provided the combinations were linearly independent). The matrix representatives for the two basis sets will certainly be different, but we would expect them to be related to each other in some way. As we shall show shortly, they are in fact related by a similarity transform. It will be far from obvious at this point why we would want to carry out such a transformation, but similarity transforms will become important later on when we use group theory to choose an optimal basis set with which to generate molecular orbitals.
Consider a basis set $\begin{pmatrix} x_1', x_2', x_3', ... x_n' \end{pmatrix}$, in which each basis function $x_i'$ is a linear combination of our original basis $\begin{pmatrix} x_1, x_2, x_3, ... x_n \end{pmatrix}$.
$x_j' = \Sigma_i x_ic_{ji} = x_1c_{j1} + x_2c_{j2} + ... \tag{11.1}$
The $c_{ji}$ appearing in the sum are coefficients; $c_{ji}$ is the coefficient multiplying the original basis function $x_i$ in the new linear combination basis function $x_j'$. We could also represent this transformation in terms of a matrix equation $\textbf{x'}$ = $\textbf{x}C$:
$\begin{pmatrix} x_1', x_2', ... x_n' \end{pmatrix} = \begin{pmatrix} x_1, x_2, ... x_n \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & ... & c_{1n} \ c_{21} & c_{22} & ... & c_{2n} \ ... & ... & ... & ... \ c_{n1} & c_{n2} & ... & c_{nn} \end{pmatrix} \tag{11.2}$
Now we look at what happens when we apply a symmetry operation g to our two basis sets. If $\Gamma(g)$ and $\Gamma'(g)$ are matrix representatives of the symmetry operation in the $\textbf{x}$ and $\textbf{x'}$ bases, then we have:
$\begin{array}{rcll} g\textbf{x'} & = & \textbf{x'}\Gamma'(g) \ g\textbf{x}C & = & \textbf{x}C\Gamma'(g) & \text{since} \: \textbf{x'} = \textbf{x}C \ g\textbf{x} & = & \textbf{x}C\Gamma'(g)C^{-1} & \text{multiplying on the right by} \: C^{-1} \text{and using} \: CC^{-1} = I \ & = & \textbf{x}\Gamma(g) \end{array} \tag{11.3}$
We can therefore identify the similarity transform relating $\Gamma(g)$, the matrix representative in our original basis, to $\Gamma'(g)$, the representative in the transformed basis. The transform depends only on the matrix of coefficients used to transform the basis functions.
$\Gamma(g) = C\Gamma'(g)C^{-1} \tag{11.4}$
Also,
$\Gamma'(g) = C^{-1}\Gamma(g)C \tag{11.5}$
Characters of Representations
The trace of a matrix representative $\Gamma(g)$ is usually referred to as the character of the representation under the symmetry operation $g$. We will soon come to see that the characters of a matrix representation are often more useful than the matrix representatives themselves. Characters have several important properties.
1. The character of a symmetry operation is invariant under a similarity transform
2. Symmetry operations belonging to the same class have the same character in a given representation. Note that the character for a given class may be different in different representations, and that more than one class may have the same character.
Proofs of the above two statements are given in the Appendix.
Let us now go back and look at the $C_{3v}$ representation we derived in Section $10$ in more detail. If we look at the matrices carefully we see that they all take the same block diagonal form (a square matrix is said to be block diagonal if all the elements are zero except for a set of submatrices lying along the diagonal).
$\begin{array}{cccccc} \Gamma(E) & \Gamma(C_3^-) & \Gamma(C_3^+) & \Gamma(\sigma_v) & \Gamma(\sigma_v') & \Gamma(\sigma_v'') \ \scriptsize{\begin{pmatrix} \textbf{1} & \text{0} & \text{0} & \text{0} \ \text{0} & \textbf{1} & \textbf{0} & \textbf{0} \ \text{0} & \textbf{0} & \textbf{1} & \textbf{0} \ \text{0} & \textbf{0} & \textbf{0} & \textbf{1} \end{pmatrix}} & \scriptsize{\begin{pmatrix} \textbf{1} & 0 & 0 & 0 \ 0 & \textbf{0} & \textbf{1} & \textbf{0} \ 0 & \textbf{0} & \textbf{0} & \textbf{1} \ 0 & \textbf{1} & \textbf{0} & \textbf{0} \end{pmatrix}} & \scriptsize{\begin{pmatrix} \textbf{1} & 0 & 0 & 0 \ 0 & \textbf{0} & \textbf{0} & \textbf{1} \ 0 & \textbf{1} & \textbf{0} & \textbf{0} \ 0 & \textbf{0} & \textbf{1} & \textbf{0} \end{pmatrix}} & \scriptsize{\begin{pmatrix} \textbf{1} & 0 & 0 & 0 \ 0 & \textbf{1} & \textbf{0} & \textbf{0} \ 0 & \textbf{0} & \textbf{0} & \textbf{1} \ 0 & \textbf{0} & \textbf{1} & \textbf{0} \end{pmatrix}} & \scriptsize{\begin{pmatrix} \textbf{1} & 0 & 0 & 0 \ 0 & \textbf{0} & \textbf{1} & \textbf{0} \ 0 & \textbf{1} & \textbf{0} & \textbf{0} \ 0 & \textbf{0} & \textbf{0} & \textbf{1} \end{pmatrix}} & \scriptsize{\begin{pmatrix} \textbf{1} & 0 & 0 & 0 \ 0 & \textbf{0} & \textbf{0} & \textbf{1} \ 0 & \textbf{0} & \textbf{1} & \textbf{0} \ 0 & \textbf{1} & \textbf{0} & \textbf{0} \end{pmatrix}} \ \chi(E) = 4 & \chi(C_3^-) = 1 & \chi(C_3^+) = 1 & \chi(\sigma_v) = 2 & \chi(\sigma_v') = 2 & \chi(\sigma_v'') = 2 \end{array} \nonumber$
A block diagonal matrix can be written as the direct sum of the matrices that lie along the diagonal. In the case of the $C_{3v}$ matrix representation, each of the matrix representatives may be written as the direct sum of a $1 \times1$ matrix and a $3 \times3$ matrix.
$\Gamma^{(4)}(g) = \Gamma^{(1)}(g) \oplus \Gamma^{(3)}(g) \label{12.1}$
in which the bracketed superscripts denote the dimensionality of the matrices. Note that a direct sum is very different from ordinary matrix addition since it produces a matrix of higher dimensionality. A direct sum of two matrices of orders $n$ and $m$ is performed by placing the matrices to be summed along the diagonal of a matrix of order $n + m$ and filling in the remaining elements with zeroes.
The reason why this result is useful in group theory is that the two sets of matrices $\Gamma^{(1)}(g)$ and $\Gamma^{(3)}(g)$ also satisfy all of the requirements for a matrix representation. Each set contains the identity and an inverse for each member, and the members multiply together associatively according to the group multiplication table$^3$. Recall that the original four-dimensional representation had the $s$ orbitals $\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}$ of ammonia as its basis. The first set of reduced matrices, $\Gamma^{(1)}(g)$, forms a one-dimensional representation with $\begin{pmatrix} s_N \end{pmatrix}$ as its basis. The second set, $\Gamma^{(3)}(g)$, forms a three-dimensional representation with the basis $\begin{pmatrix}s_1, s_2, s_3 \end{pmatrix}$. Separation of the original representation into representations of lower dimensionality is called reduction of the representation. The two reduced representations are shown below.
$\begin{array}{cccccccl} g & E & C_3^+ & C_3^- & \sigma_v & \sigma_v' & \sigma_v'' & \ \Gamma^{(1)}(g) & (1) & (1) & (1) & (1) & (1) & (1) & \begin{array}{l} \small \text{1D representation} \ \small \text{spanned by} \: (s_N) \end{array} \ \Gamma^{(3)}(g) & \scriptsize{\begin{pmatrix} 1 & 0 & 0 \ 0 & 1 & 0 \ 0 & 0 & 1 \end{pmatrix}} & \scriptsize{\begin{pmatrix} 0 & 1 & 0 \ 0 & 0 & 1 \ 1 & 0 & 0 \end{pmatrix}} & \scriptsize{\begin{pmatrix} 0 & 0 & 1 \ 1 & 0 & 0 \ 0 & 1 & 0 \end{pmatrix}} & \scriptsize{\begin{pmatrix} 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 \end{pmatrix}} & \scriptsize{\begin{pmatrix} 0 & 1 & 0 \ 1 & 0 & 0 \ 0 & 0 & 1 \end{pmatrix}} & \scriptsize{\begin{pmatrix} 0 & 0 & 1 \ 0 & 1 & 0 \ 1 & 0 & 0 \end{pmatrix}} & \begin{array}{l} \small \text{3D representation} \ \small \text{spanned by} \: \begin{pmatrix} s_1, s_2, s_3 \end{pmatrix} \end{array} \end{array} \nonumber$
The logical next step is to investigate whether or not the three-dimensional representation $\Gamma^{(3)}(g)$ can be reduced any further. As it stands, the matrices making up this representation are not all in the same block diagonal form (some of you may have noted that the matrices representing $E$ and $\sigma_v$ are block diagonal, but in order for a representation to be reducible all of the matrix representatives must be in the same block diagonal form), so in its present basis the representation cannot be reduced. However, we can carry out a similarity transformation (see Section $11$) to a new representation spanned by a new set of basis functions (made up of linear combinations of $\begin{pmatrix} s_1, s_2, s_3 \end{pmatrix}$), which is reducible. In this case, the appropriate (normalized) linear combinations to use as our new basis functions are
$\begin{array}{c} s_1' = \dfrac{1}{\sqrt{3}}(s_1 + s_2 + s_3) \ s_2' = \dfrac{1}{\sqrt{6}}(2s_1 - s_2 - s_3) \ s_3' = \dfrac{1}{\sqrt{2}}(s_2 - s_3) \end{array} \label{12.2}$
or in matrix form
$\begin{array}{cccc} \begin{pmatrix} s_1', s_2', s_3' \end{pmatrix} & = & \begin{pmatrix} s_1, s_2, s_3 \end{pmatrix} & \begin{pmatrix} \dfrac{1}{\sqrt{3}} & \dfrac{2}{\sqrt{6}} & 0 \ \dfrac{1}{\sqrt{3}} & -\dfrac{1}{\sqrt{6}} & \dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{3}} & -\dfrac{1}{\sqrt{6}} & -\dfrac{1}{\sqrt{2}} \end{pmatrix} \ \textbf{x'} & = & \textbf{x} & C \end{array} \label{12.3}$
The matrices in the new representation are found from $\Gamma'(g)$ = $C^{-1}\Gamma(g)C$ to be
$\begin{array}{lcccccc} & E & C_3^+ & C_3^- & \sigma_v & \sigma_v' & \sigma_v'' \ \Gamma^{(3)'}(g) & \scriptsize{\begin{pmatrix} 1 & 0 & 0 \ 0 & 1 & 0 \ 0 & 0 & 1 \end{pmatrix}} & \scriptsize{\begin{pmatrix} 1 & 0 & 0 \ 0 & -\dfrac{1}{2} & \dfrac{\sqrt{3}}{2} \ 0 & -\dfrac{\sqrt{3}}{2} & -\dfrac{1}{2} \end{pmatrix}} & \scriptsize{\begin{pmatrix} 1 & 0 & 0 \ 0 & -\dfrac{1}{2} & -\dfrac{\sqrt{3}}{2} \ 0 & \dfrac{\sqrt{3}}{2} & -\dfrac{1}{2} \end{pmatrix}} & \scriptsize{\begin{pmatrix} 1 & 0 & 0 \ 0 & 1 & 0 \ 0 & 0 & -1 \end{pmatrix}} & \scriptsize{\begin{pmatrix} 1 & 0 & 0 \ 0 & -\dfrac{1}{2} & \dfrac{\sqrt{3}}{2} \ 0 & \dfrac{\sqrt{3}}{2} & \dfrac{1}{2} \end{pmatrix}} & \scriptsize{\begin{pmatrix} 1 & 0 & 0 \ 0 & -\dfrac{1}{2} & -\dfrac{\sqrt{3}}{2} \ 0 & -\dfrac{\sqrt{3}}{2} & \dfrac{1}{2} \end{pmatrix}} \end{array}$
We see that each matrix is now in block diagonal form, and the representation may be reduced into the direct sum of a $1 \times 1$ representation spanned by $\begin{pmatrix} s_1' \end{pmatrix}$ and a $2 \times 2$ representation spanned by $\begin{pmatrix} s_2', s_3' \end{pmatrix}$. The complete set of reduced representations obtained from the original 4D representation is:
$\begin{array}{ccccccl} E & C_3^+ & C_3^- & \sigma_v & \sigma_v' & \sigma_v'' & \ (1) & (1) & (1) & (1) & (1) & (1) & \begin{array}{l} \text{1D representation spanned} \ \text{by} \: \begin{pmatrix} s_N \end{pmatrix} \end{array} \ (1) & (1) & (1) & (1) & (1) & (1) & \begin{array}{l} \text{1D representation spanned} \ \text{by} \: \begin{pmatrix} s_1' \end{pmatrix} \end{array} \ \scriptsize{\begin{pmatrix} 1 & 0 \ 0 & 1 \end{pmatrix}} & \scriptsize{\begin{pmatrix} -\dfrac{1}{2} & \dfrac{\sqrt{3}}{2} \ -\dfrac{\sqrt{3}}{2} & -\dfrac{1}{2} \end{pmatrix}} & \scriptsize{\begin{pmatrix} -\dfrac{1}{2} & -\dfrac{\sqrt{3}}{2} \ \dfrac{\sqrt{3}}{2} & -\dfrac{1}{2} \end{pmatrix}} & \scriptsize{\begin{pmatrix} 1 & 0 \ 0 & -1 \end{pmatrix}} & \scriptsize{\begin{pmatrix} -\dfrac{1}{2} & \dfrac{\sqrt{3}}{2} \ \dfrac{\sqrt{3}}{2} & \dfrac{1}{2} \end{pmatrix}} & \scriptsize{\begin{pmatrix} -\dfrac{1}{2} & -\dfrac{\sqrt{3}}{2} \ -\dfrac{\sqrt{3}}{2} & \dfrac{1}{2} \end{pmatrix}} & \begin{array}{l} \text{2D representation spanned} \ \text{by} \: \begin{pmatrix} s_2', s_3' \end{pmatrix} \end{array} \end{array}$
This is as far as we can go in reducing this representation. None of the three representations above can be reduced any further, and they are therefore called irreducible representations of the point group. Formally, a representation is an irreducible representation if there is no similarity transform that can simultaneously convert all of the representatives into block diagonal form. The linear combination of basis functions that converts a matrix representation into block diagonal form, allowing reduction of the representation, is called a symmetry adapted linear combination (SALC).
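Before moving on, the reduction we have just carried out is easy to check numerically. The sketch below (an illustration of my own, not part of the text) applies the similarity transform of Equation $\ref{12.3}$ to the $3 \times 3$ representatives and verifies that every transformed matrix is block diagonal and that every trace (character) is unchanged.

```python
# Verify that C^-1 Gamma(g) C block-diagonalizes the 3D representation
# spanned by (s1, s2, s3), leaving the trace of each matrix unchanged.
import numpy as np

C = np.array([[1/np.sqrt(3),  2/np.sqrt(6),  0.0],
              [1/np.sqrt(3), -1/np.sqrt(6),  1/np.sqrt(2)],
              [1/np.sqrt(3), -1/np.sqrt(6), -1/np.sqrt(2)]])

gamma3 = {  # the six 3x3 representatives from Section 12
    "E":    np.eye(3),
    "C3+":  np.array([[0., 1, 0], [0, 0, 1], [1, 0, 0]]),
    "C3-":  np.array([[0., 0, 1], [1, 0, 0], [0, 1, 0]]),
    "sv":   np.array([[1., 0, 0], [0, 0, 1], [0, 1, 0]]),
    "sv'":  np.array([[0., 1, 0], [1, 0, 0], [0, 0, 1]]),
    "sv''": np.array([[0., 0, 1], [0, 1, 0], [1, 0, 0]]),
}

Cinv = np.linalg.inv(C)  # C is orthogonal here, so Cinv is just C.T
for name, G in gamma3.items():
    Gp = Cinv @ G @ C
    assert np.allclose(Gp[0, 1:], 0) and np.allclose(Gp[1:, 0], 0)  # 1D + 2D blocks
    assert np.isclose(np.trace(Gp), np.trace(G))  # character is invariant
    print(name, "\n", np.round(Gp, 4))
```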
$^3$ The 1x1 representation in which all of the elements are equal to 1 is sometimes called the unfaithful representation, since it satisfies the group properties in a fairly trivial way without telling us much about the symmetry properties of the group.
The two one-dimensional irreducible representations spanned by \(s_N\) and \(s_1'\) are seen to be identical. This means that \(s_N\) and \(s_1'\) have the ‘same symmetry’, transforming in the same way under all of the symmetry operations of the point group and forming bases for the same matrix representation. As such, they are said to belong to the same symmetry species. There are a limited number of ways in which an arbitrary function can transform under the symmetry operations of a group, giving rise to a limited number of symmetry species. Any function that forms a basis for a matrix representation of a group must transform as one of the symmetry species of the group. The irreducible representations of a point group are labeled according to their symmetry species as follows:
1. 1D representations are labeled \(A\) or \(B\), depending on whether they are symmetric (character \(+1\)) or antisymmetric (character \(-1\)) under rotation about the principal axis.
2. 2D representations are labeled \(E\), 3D representations are labeled \(T\).
3. In groups containing a center of inversion, \(g\) and \(u\) labels (from the German gerade and ungerade, meaning even and odd) denote the character of the irreducible representation under inversion (\(+1\) for \(g\), \(-1\) for \(u\)).
4. In groups with a horizontal mirror plane but no center of inversion, the irreducible representations are given prime and double prime labels to denote whether they are symmetric (character \(+1\)) or antisymmetric (character \(-1\)) under reflection in the plane.
5. If further distinction between irreducible representations is required, subscripts \(1\) and \(2\) are used to denote the character with respect to a \(C_2\) rotation perpendicular to the principal axis, or with respect to a vertical reflection if there are no \(C_2\) rotations.
The 1D irreducible representation we derived above in the \(C_{3v}\) point group is symmetric (has character \(+1\)) under all the symmetry operations of the group. It therefore belongs to the irreducible representation \(A_1\). The 2D irreducible representation has character \(2\) under the identity operation, \(-1\) under rotation, and \(0\) under reflection, and belongs to the irreducible representation \(E\).
Sometimes there is confusion over the relationship between a function \(f\) and its irreducible representation, but it is quite important that you understand the connection. There are several different ways of stating the relationship. For example, the following statements all mean the same thing:
• "\(f\) has \(A_2\) symmetry"
• "\(f\) transforms as \(A_2\)"
• "\(f\) has the same symmetry
1.14: Character Tables
A character table summarizes the behavior of all of the possible irreducible representations of a group under each of the symmetry operations of the group. The character table for $C_{3v}$ is shown below.
$\begin{array}{lllll} \hline C_{3v},3m & E & 2C_3 & 3\sigma_v & h=6 \ \hline A_1 & 1 & 1 & 1 & z, z^2, x^2+y^2 \ A_2 & 1 & 1 & -1 & R_z \ E & 2 & -1 & 0 & \begin{pmatrix} x, y \end{pmatrix}, \begin{pmatrix} xy, x^2-y^2 \end{pmatrix}, \begin{pmatrix} xz, yz \end{pmatrix}, \begin{pmatrix} R_x, R_y \end{pmatrix} \ \hline \end{array} \label{14.1}$
The various sections of the table are as follows:
1. The first element in the table gives the name of the point group, usually in both Schoenflies ($C_{3v}$) and Hermann-Mauguin ($3m$) notation.
2. Along the first row are the symmetry operations of the group, $E$, $2C_3$ and $3\sigma_v$, followed by the order of the group. Because operations in the same class have the same character, symmetry operations are grouped into classes in the character table and not listed separately.
3. In the first column are the irreducible representations of the group. In $C_{3v}$ the irreducible representations are $A_1$, $A_2$ and $E$ (the representation we considered above spans $2A_1$ + $E$).
4. The characters of the irreducible representations under each symmetry operation are given in the bulk of the table.
5. The final column of the table lists a number of functions that transform as the various irreducible representations of the group. These are the Cartesian axes $\begin{pmatrix} x, y, z \end{pmatrix}$, the Cartesian products $\begin{pmatrix} z^2, x^2+y^2, x^2-y^2, xy, xz, yz \end{pmatrix}$, and the rotations $\begin{pmatrix} R_x, R_y, R_z \end{pmatrix}$.
The functions listed in the final column of the table are important in many chemical applications of group theory, particularly in spectroscopy. For example, by looking at the transformation properties of $x$, $y$ and $z$ (sometimes given in character tables as $T_x$, $T_y$, $T_z$) we can discover the symmetry of translations along the $x$, $y$, and $z$ axes. Similarly, $R_x$, $R_y$ and $R_z$ represent rotations about the three Cartesian axes. As we shall see later, the transformation properties of $x$, $y$, and $z$ can also be used to determine whether or not a molecule can absorb a photon of $x$-, $y$-, or $z$-polarized light and undergo a spectroscopic transition. The Cartesian products play a similar role in determining selection rules for Raman transitions, which involve two photons.
Character tables for common point groups are given in Appendix B.
A simple way to determine the characters of a representation
In many applications of group theory, we only need to know the characters of the representative matrices, rather than the matrices themselves. Luckily, when each basis function transforms as a 1D irreducible representation (which is true in many cases of interest) there is a simple shortcut to determining the characters without having to construct the entire matrix representation. All we have to do is to look at the way the individual basis functions transform under each symmetry operation. For a given operation, step through the basis functions as follows:
1. Add $1$ to the character if the basis function is unchanged by the symmetry operation (i.e. the basis function is mapped onto itself);
2. Add $-1$ to the character if the basis function changes sign under the symmetry operation (i.e the basis function is mapped onto minus itself);
3. Add $0$ to the character if the basis function moves when the symmetry operation is applied (i.e the basis function is mapped onto something different from itself).
Try this for the $s$ orbital basis we have been using for the $C_{3v}$ group. You should find you get the same characters as we obtained from the traces of the matrix representatives.
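This counting rule also lends itself to a few lines of code. Here is a small sketch (the encoding as signed permutations is an assumption of mine, not the text's notation): each operation is stored as the permutation it induces on the basis, together with a sign for each function to cover case 2.

```python
# Character of an operation that permutes basis functions, possibly with
# sign changes: sum the signs of the functions that are left in place.
def character(perm, signs):
    return sum(s for i, (p, s) in enumerate(zip(perm, signs)) if p == i)

# (sN, s1, s2, s3) under three of the C3v operations; s orbitals never
# change sign, so every sign is +1 in this example.
ops = {
    "E":   ([0, 1, 2, 3], [1, 1, 1, 1]),
    "C3+": ([0, 2, 3, 1], [1, 1, 1, 1]),
    "sv":  ([0, 1, 3, 2], [1, 1, 1, 1]),
}
for name, (perm, signs) in ops.items():
    print(name, character(perm, signs))  # expect 4, 1 and 2 respectively
```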
We can also work out the characters fairly easily when two basis functions transform together as a 2D irreducible representation. For example, in the $C_{3v}$ point group the $x$ and $y$ axes transform together as $E$. If we carry out a rotation about $z$ by an angle $\theta$, our $x$ and $y$ axes are transformed onto new axes $x'$ and $y'$. However, the new axes can each be written as a linear combination of our original $x$ and $y$ axes. Using the rotation matrices introduced in Section 9, we see that:
$\begin{array}{ccc}x' & = & \cos\theta \: x + \sin\theta \: y \ y' & = & -\sin\theta \: x + \cos\theta \: y \end{array} \label{14.2}$
For one-dimensional irreducible representations we asked if a basis function/axis was mapped onto itself, minus itself, or something different. For two-dimensional irreducible representations we need to ask how much of the ‘old’ axis is contained in the new one. From the above we see that the $x'$ axis contains a contribution $\cos\theta$ from the $x$ axis, and the $y'$ axis contains a contribution $\cos\theta$ from the $y$ axis. The characters of the $x$ and $y$ axes under a rotation through $\theta$ are therefore $\cos\theta$, and the overall character of the $E$ irreducible representation is therefore $\cos\theta + \cos\theta = 2\cos\theta$. For a $C_3$ rotation through 120 degrees, the character of the $E$ irreducible representation is therefore $2\cos 120° = -1$.
In general, when an axis is rotated by an angle $\theta$ by a symmetry operation, its contribution to the character for that operation is $\cos\theta$.
Irreducible representations with complex characters
In many cases (see Appendix B), the characters for rotations $C_n$ and improper rotations $S_n$ are complex numbers, usually expressed in terms of the quantity $\epsilon = \exp(2\pi i/n)$. It is fairly straightforward to reconcile this with the fact that in chemistry we are generally using group theory to investigate physical problems in which all quantities are real. It turns out that whenever our basis spans an irreducible representation whose characters are complex, it will also span a second irreducible representation whose characters are the complex conjugates of the first irreducible representation i.e. complex irreducible representations occur in pairs. According to the strict mathematics of group theory, each irreducible representation in the pair should be considered as a separate representation. However, when applying such irreducible representations in physical problems, we add the characters for the two irreducible representations together to get a single irreducible representation whose characters are real.
As an example, the ‘correct’ character table for the group $C_3$ takes the form:
$\begin{array}{l|l} C_3 & E \: \: \: \: \: \: \: \: C_3 \: \: \: \: \: \: \: \: C_3^2 \ \hline A & 1 \: \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: \: 1 \ \hline E & \begin{Bmatrix} 1 & \epsilon & \epsilon^* \ 1 & \epsilon^* & \epsilon \end{Bmatrix} \end{array} \label{14.3}$
where $\epsilon = \exp(2\pi i/3)$. However, as chemists we would usually combine the two parts of the $E$ irreducible representation to give:
$\begin{array}{l|lll} C_3 & E & C_3 & C_3^2 \ \hline A & 1 & 1 & 1 \ E & 2 & -1 & -1 \end{array} \label{14.4}$
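A quick numerical check (mine, not part of the text) confirms that summing each complex pair really does give the real characters in the combined table: $\epsilon + \epsilon^* = 2\cos(2\pi/3) = -1$.

```python
import numpy as np

eps = np.exp(2j * np.pi / 3)  # the quantity epsilon for n = 3
chi_E   = 1 + 1                          # pair of characters under E
chi_C3  = (eps + eps.conjugate()).real   # pair of characters under C3
chi_C32 = (eps.conjugate() + eps).real   # pair of characters under C3^2
print(chi_E, np.round(chi_C3, 6), np.round(chi_C32, 6))  # 2 -1.0 -1.0
```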
By making maximum use of molecular symmetry, we often greatly simplify problems involving molecular properties. For example, the formation of chemical bonds is strongly dependent on the atomic orbitals involved having the correct symmetries. To make full use of group theory in the applications we will be considering, we need to develop a little more ‘machinery’. Specifically, given a basis set (of atomic orbitals, for example) we need to find out:
1. How to determine the irreducible representations spanned by the basis functions
2. How to construct linear combinations of the original basis functions that transform as a given irreducible representation/symmetry species.
It turns out that both of these problems can be solved using something called the ‘Great Orthogonality Theorem’ (GOT for short). The GOT summarizes a number of orthogonality relationships implicit in matrix representations of symmetry groups, and may be derived in a somewhat qualitative fashion by considering these relationships in turn.
Some of you might find the next section a little hard going. In it, we will derive two important expressions that we can use to achieve the two goals we have set out above. It is not important that you understand every step in these derivations; they have mainly been included just so you can see where the equations come from. However, you will need to understand how to use the results. Hopefully you will not find this too difficult once we’ve worked through a few examples.
General concepts of Orthogonality
You are probably already familiar with the geometric concept of orthogonality. Two vectors are orthogonal if their dot product (i.e. the projection of one vector onto the other) is zero. An example of a pair of orthogonal vectors is provided by the $\textbf{x}$ and $\textbf{y}$ Cartesian unit vectors.
$\textbf{x} \cdot \textbf{y} = 0 \label{15.1}$
A consequence of the orthogonality of $\textbf{x}$ and $\textbf{y}$ is that any general vector in the $xy$ plane may be written as a linear combination of these two basis vectors.
$\textbf{r} = a\textbf{x} + b\textbf{y} \label{15.2}$
Mathematical functions may also be orthogonal. Two functions, $f_1(x)$ and $f_2(x)$, are defined to be orthogonal if the integral over their product is equal to zero i.e.
$\int f_1(x) f_2(x) dx = 0$
This simply means that there must be ‘no overlap’ between orthogonal functions, which is the same as the orthogonality requirement for vectors, above. In the same way as for vectors, any general function may be written as a linear combination of a suitably chosen set of orthogonal basis functions. For example, the Legendre polynomials $P_n(x)$ form an orthogonal basis set for functions of one variable $x$.
$f(x) = \sum_n c_n P_n(x) \label{15.3}$
Orthogonality relationships in Group Theory
The irreducible representations of a point group satisfy a number of orthogonality relationships:
1. If corresponding matrix elements in all of the matrix representatives of an irreducible representation are squared and added together, the result is equal to the order of the group divided by the dimensionality of the irreducible representation. i.e.
$\sum _g \Gamma_k(g)_{ij} \Gamma_k(g)_{ij} = \dfrac{h}{d_k} \label{15.4}$
where $k$ labels the irreducible representation, $i$ and $j$ label the row and column position within the irreducible representation, $h$ is the order of the group, and $d_k$ is the dimensionality of the irreducible representation. e.g. The order of the group $C_{3v}$ is 6. If we apply the above operation to the first element in the $2 \times 2$ ($E$) irreducible representation derived in Section 12, the result should be equal to $\dfrac{h}{d_k}$ = $\dfrac{6}{2}$ = 3. Carrying out this operation gives:
$(1)^2 + (-\dfrac{1}{2})^2 + (-\dfrac{1}{2})^2 + (1)^2 + (-\dfrac{1}{2})^2 +(-\dfrac{1}{2})^2 = 1 + \dfrac{1}{4} + \dfrac{1}{4} + 1 + \dfrac{1}{4} + \dfrac{1}{4} = 3 \label{15.5}$
2. If instead of summing the squares of matrix elements in an irreducible representation, we sum the product of two different elements from within each matrix, the result is equal to zero. i.e.
$\sum _g \Gamma_k(g)_{ij} \Gamma_k(g)_{i'j'} = 0 \label{15.6}$
where $i \neq i'$ and/or $j \neq j'$. E.g. if we perform this operation using the two elements in the first row of the 2D irreducible representation used in 1, we get:
$(1)(0) + (-\dfrac{1}{2})(\dfrac{\sqrt{3}}{2}) + (-\dfrac{1}{2})(-\dfrac{\sqrt{3}}{2}) + (1)(0) + (-\dfrac{1}{2})(\dfrac{\sqrt{3}}{2}) + (-\dfrac{1}{2})(-\dfrac{\sqrt{3}}{2}) = 0 - \dfrac{\sqrt{3}}{4} + \dfrac{\sqrt{3}}{4} + 0 - \dfrac{\sqrt{3}}{4} + \dfrac{\sqrt{3}}{4} = 0 \label{15.7}$
3. If we sum the product of two elements from the matrices of two different irreducible representations $k$ and $m$, the result is equal to zero. i.e.
$\sum_g \Gamma_k(g)_{ij} \Gamma_m(g)_{i'j'} = 0 \label{15.8}$
where there is now no restriction on the values of the indices $i$, $i'$, $j$, $j'$ (apart from the rather obvious restriction that they must be less than or equal to the dimensions of the irreducible representation). e.g. Performing this operation on the first elements of the $A_1$ and $E$ irreducible representations we derived for $C_{3v}$ gives:
$(1)(1) + (1)(-\dfrac{1}{2}) + (1)(-\dfrac{1}{2}) + (1)(1) + (1)(-\dfrac{1}{2}) + (1)(-\dfrac{1}{2}) = 1 - \dfrac{1}{2} - \dfrac{1}{2} + 1 - \dfrac{1}{2} - \dfrac{1}{2} = 0 \label{15.9}$
We can combine these three results into one general equation, the Great Orthogonality Theorem$^4$.
$\sum_g \Gamma_k(g)_{ij} \Gamma_m(g)_{i'j'} = \dfrac{h}{\sqrt{d_kd_m}} \delta_{km} \delta_{ii'} \delta_{jj'} \label{15.10}$
For most applications we do not actually need the full Great Orthogonality Theorem. A little mathematical trickery transforms Equation $\ref{15.10}$ into the ‘Little Orthogonality Theorem’ (or LOT), which is expressed in terms of the characters of the irreducible representations rather than the irreducible representations themselves.
$\sum_g \chi_k(g) \chi_m(g) = h\delta_{km} \label{15.11}$
Since the characters for two symmetry operations in the same class are the same, we can also rewrite the sum over symmetry operations as a sum over classes.
$\sum_C n_C \chi_k(C) \chi_m(C) = h \delta_{km} \label{15.12}$
where $n_C$ is the number of symmetry operations in class $C$.
In all of the examples we’ve considered so far, the characters have been real. However, this is not necessarily true for all point groups, so to make the above equations completely general we need to allow for the possibility of complex characters. In this case we have:
$\sum_C n_C \chi_k^*(C) \chi_m(C) = h \delta_{km} \label{15.13}$
where $\chi_k^*(C)$ is the complex conjugate of $\chi_k(C)$. Equation $\ref{15.13}$ is of course identical to Equation $\ref{15.12}$ when all the characters are real.
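The LOT is simple to spot-check numerically. The short sketch below (my own construction) runs Equation $\ref{15.12}$ over every pair of $C_{3v}$ irreducible representations using the character table from Section 14, confirming that the sum is $h$ on the diagonal and zero otherwise.

```python
# Spot-check the Little Orthogonality Theorem for the C3v character table.
import numpy as np

n_C = np.array([1, 2, 3])  # operations in each class: E, 2C3, 3sigma_v
chars = {
    "A1": np.array([1,  1,  1]),
    "A2": np.array([1,  1, -1]),
    "E":  np.array([2, -1,  0]),
}
h = n_C.sum()  # order of the group, 6

for k, chi_k in chars.items():
    for m, chi_m in chars.items():
        lot = np.sum(n_C * chi_k * chi_m)
        assert lot == (h if k == m else 0)
        print(k, m, lot)  # h on the diagonal, 0 off the diagonal
```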
Using the LOT to Determine the Irreducible Representations Spanned by a Basis
In Section $12$ we discovered that we can often carry out a similarity transform on a general matrix representation so that all the representatives end up in the same block diagonal form. When this is possible, each set of submatrices also forms a valid matrix representation of the group. If none of the submatrices can be reduced further by carrying out another similarity transform, they are said to form an irreducible representation of the point group. An important property of matrix representatives is that their character is invariant under a similarity transform. This means that the character of the original representatives must be equal to the sum of the characters of the irreducible representations into which the representation is reduced. e.g. if we consider the representative for the $C_3^-$ symmetry operation in our $NH_3$ example, we have:
$\begin{array}{ccccc} \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 1 \ 0 & 1 & 0 & 0 \end{pmatrix} & \begin{array}{c} \text{similarity transform} \ \longrightarrow \end{array} & \begin{pmatrix} 1 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 \ 0 & 0 & -\dfrac{1}{2} & -\dfrac{\sqrt{3}}{2} \ 0 & 0 & \dfrac{\sqrt{3}}{2} & -\dfrac{1}{2} \end{pmatrix} & = & (1) \oplus (1) \oplus \begin{pmatrix} -\dfrac{1}{2} & -\dfrac{\sqrt{3}}{2} \ \dfrac{\sqrt{3}}{2} & -\dfrac{1}{2} \end{pmatrix} \ \chi = 1 & & \chi = 1 & & \chi = 1 + 1 - 1 = 1 \end{array} \label{15.14}$
It follows that we can write the characters for a general representation $\Gamma(g)$ in terms of the characters of the irreducible representations $\Gamma_k(g)$ into which it can be reduced.
$\chi(g) = \sum_k a_k \chi_k(g) \label{15.15}$
where the coefficients $a_k$ in the sum are the number of times each irreducible representation appears in the representation. This means that in order to determine the irreducible representations spanned by a given basis, all we have to do is determine the coefficients $a_k$ in the above equation. This is where the Little Orthogonality Theorem comes in handy. If we take the LOT in the form of Equation $\ref{15.11}$, and multiply each side through by $a_k$, we get
$\Sigma_g a_k \chi_k(g) \chi_m(g) = h a_k \delta_{km} \label{15.16}$
Summing both sides of the above equation over $k$ gives
$\Sigma_g \Sigma_k a_k \chi_k(g) \chi_m(g) = h \Sigma_k a_k \delta_{km} \label{15.17}$
We can use Equation $\ref{15.15}$ to simplify the left hand side of this equation. Also, the sum on the right hand side reduces to $a_m$ because $\delta_{km}$ is only non-zero (and equal to $1$) when $k = m$:
$\Sigma_g \chi(g) \chi_m(g) = h a_m \label{15.18}$
Dividing both sides through by $h$ (the order of the group), gives us an expression for the coefficients $a_m$ in terms of the characters $\chi(g)$ of the original representation and the characters $\chi_m(g)$ of the $m^{th}$ irreducible representation.
$a_m = \dfrac{1}{h} \Sigma_g \chi(g) \chi_m(g) \label{15.19}$
We can of course write this as a sum over classes rather than a sum over symmetry operations.
$a_m = \dfrac{1}{h} \Sigma_C n_C \chi(C) \chi_m(C) \label{15.20}$
As an example, in Section $12$ we showed that the matrix representatives we derived for the $C_{3v}$ group could be reduced into two irreducible representations of $A_1$ symmetry and one of $E$ symmetry, i.e. $\Gamma$ = 2$A_1$ + $E$. We could have obtained the same result using Equation $\ref{15.20}$. The characters for our original representation and for the irreducible representations of the $C_{3v}$ point group ($A_1$, $A_2$ and $E$) are given in the table below.
$\begin{array}{llll} \hline C_{3v} & E & 2C_3 & 3\sigma_v \ \hline \chi & 4 & 1 & 2 \ \hline \chi(A_1) & 1 & 1 & 1 \ \chi(A_2) & 1 & 1 & -1 \ \chi(E) & 2 & -1 & 0 \ \hline \end{array} \label{15.21}$
From Equation $\ref{15.20}$, the number of times each irreducible representation occurs for our chosen basis $\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}$ is therefore
$\begin{array}{l} a(A_1) = \dfrac{1}{6}(1 \times 4 \times 1 + 2 \times 1 \times 1 + 3 \times 2 \times 1) = 2 \ a(A_2) = \dfrac{1}{6}(1 \times 4 \times 1 + 2 \times 1 \times 1 + 3 \times 2 \times (-1)) = 0 \ a(E) = \dfrac{1}{6}(1 \times 4 \times 2 + 2 \times 1 \times (-1) + 3 \times 2 \times 0) = 1 \end{array} \label{15.22}$
i.e. Our basis is spanned by $2A_1$ + $E$, as we found before.
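The same bookkeeping is trivially automated. The sketch below (an illustration of mine) is Equation $\ref{15.20}$ written out in code, reducing the characters $(4, 1, 2)$ of our $\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}$ representation.

```python
# Reduction formula: a_m = (1/h) * sum over classes of n_C * chi * chi_m
import numpy as np

n_C = np.array([1, 2, 3])   # class sizes: E, 2C3, 3sigma_v
chi = np.array([4, 1, 2])   # characters of the original representation
irreps = {"A1": [1, 1, 1], "A2": [1, 1, -1], "E": [2, -1, 0]}
h = n_C.sum()

for name, chi_m in irreps.items():
    a_m = np.sum(n_C * chi * np.array(chi_m)) / h
    print(name, a_m)        # A1: 2.0, A2: 0.0, E: 1.0
```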
$^4$The $\delta_{ij}$ appearing in Equation $\ref{15.10}$ are Kronecker deltas, equal to $1$ if $i = j$ and $0$ otherwise.
Once we know the irreducible representations spanned by an arbitrary basis set, we can work out the appropriate linear combinations of basis functions that transform the matrix representatives of our original representation into block diagonal form (i.e. the symmetry adapted linear combinations). Each of the SALCs transforms as one of the irreducible representations of the reduced representation. We have already seen this in our $NH_3$ example. The two linear combinations of $A_1$ symmetry were $s_N$ and $s_1 + s_2 + s_3$, both of which are symmetric under all the symmetry operations of the point group. We also chose another pair of functions, $2s_1 - s_2 - s_3$ and $s_2 - s_3$, which together transform as the symmetry species $E$.
To find the appropriate SALCs to reduce a matrix representation, we use projection operators. You will be familiar with the idea of operators from quantum mechanics. The operators we will be using here are not quantum mechanical operators, but the basic principle is the same. The projection operator to generate a SALC that transforms as an irreducible representation $k$ is $\Sigma_g \chi_k(g) \: g$. Each term in the sum means ‘apply the symmetry operation $g$ and then multiply by the character of $g$ in irreducible representation $k$’. Applying this operator to each of our original basis functions in turn will generate a complete set of SALCs, i.e. to transform a basis function $f_i$ into a SALC $f_i'$, we use
$f_i' = \sum_g \chi_k(g) g f_i \tag{16.1}$
The way in which this operation is carried out will become much more clear if we work through an example. We can break down the above equation into a fairly straightforward ‘recipe’ for generating SALCs:
1. Make a table with columns labeled by the basis functions and rows labeled by the symmetry operations of the molecular point group. In the columns, show the effect of the symmetry operations on the basis functions (this is the $g f_i$ part of Equation 16.1).
2. For each irreducible representation in turn, multiply each member of the table by the character of the appropriate symmetry operation (we now have $\chi_k(g) g f_i$ for each operation). Summing over the columns (symmetry operations) generates all the SALCs that transform as the chosen irreducible representation.
3. Normalize the SALCs.
Earlier (see Section $10$), we worked out the effect of all the symmetry operations in the $C_{3v}$ point group on the $\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}$ basis.
$\begin{array}{lccc} E & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_1, s_2 , s_3 \end{pmatrix} \ C_3^+ & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_2, s_3, s_1 \end{pmatrix} \ C_3^- & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_3, s_1, s_2 \end{pmatrix} \ \sigma_v & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_1, s_3, s_2 \end{pmatrix} \ \sigma_v' & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_2, s_1, s_3 \end{pmatrix} \ \sigma_v'' & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_3, s_2, s_1 \end{pmatrix} \end{array} \tag{16.2}$
This is all we need to construct the table described in 1. above.
$\begin{array}{l|llll} & s_N & s_1 & s_2 & s_3 \ \hline E & s_N & s_1 & s_2 & s_3 \ C_3^+ & s_N & s_2 & s_3 & s_1 \ C_3^- & s_N & s_3 & s_1 & s_2 \ \sigma_v & s_N & s_1 & s_3 & s_2 \ \sigma_v' & s_N & s_2 & s_1 & s_3 \ \sigma_v'' & s_N & s_3 & s_2 & s_1 \end{array} \tag{16.3}$
To determine the SALCs of $A_1$ symmetry, we multiply the table through by the characters of the $A_1$ irreducible representation (all of which take the value $1$). Summing the columns gives
$\begin{array}{rcl} s_N + s_N + s_N + s_N + s_N + s_N & = & 6s_N \ s_1 + s_2 + s_3 + s_1 + s_2 + s_3 & = & 2(s_1 + s_2 + s_3) \ s_2 + s_3 + s_1 + s_3 + s_1 + s_2 & = & 2(s_1 + s_2 + s_3) \ s_3 + s_1 + s_2 + s_2 + s_1 + s_3 & = & 2(s_1 + s_2 + s_3) \end{array} \tag{16.4}$
Apart from a constant factor (which doesn’t affect the functional form and therefore doesn’t affect the symmetry properties), these are the same as the combinations we determined earlier. Normalizing gives us two SALCs of $A_1$ symmetry.
$\begin{array}{rcl} \phi_1 & = & s_N \ \phi_2 & = & \frac{1}{\sqrt{3}}(s_1 + s_2 + s_3) \end{array} \tag{16.5}$
We now move on to determine the SALCs of $E$ symmetry. Multiplying the table above by the appropriate characters for the $E$ irreducible representation gives
$\begin{array}{l|llll} & s_N & s_1 & s_2 & s_3 \ \hline E & 2s_N & 2s_1 & 2s_2 & 2s_3 \ C_3^+ & -s_N & -s_2 & -s_3 & -s_1 \ C_3^- & -s_N & -s_3 & -s_1 & -s_2 \ \sigma_v & 0 & 0 & 0 & 0 \ \sigma_v' & 0 & 0 & 0 & 0 \ \sigma_v'' & 0 & 0 & 0 & 0 \end{array} \tag{16.6}$
Summing the columns yields
$\begin{array}{l} 2s_N - s_N - s_N = 0 \ 2s_1 - s_2 - s_3 \ 2s_2 - s_3 - s_1 \ 2s_3 - s_1 - s_2 \end{array} \tag{16.7}$
We therefore get three SALCs from this procedure. This is a problem, since the number of SALCs must match the dimensionality of the irreducible representation, in this case two. Put another way, we should end up with four SALCs in total to match our original number of basis functions. Added to our two SALCs of $A_1$ symmetry, three SALCs of $E$ symmetry would give us five in total.
The resolution to our problem lies in the fact that the three SALCs above are not linearly independent. Any one of them can be written as a linear combination of the other two, e.g. $(2s_1 - s_2 - s_3) = -(2s_2 - s_3 - s_1) - (2s_3 - s_1 - s_2)$. To solve the problem, we can either throw away one of the SALCs, or better, make two linear combinations of the three SALCs that are orthogonal to each other.$^5$ e.g. if we take $2s_1 - s_2 - s_3$ as one of our SALCs and find an orthogonal combination of the other two (which turns out to be their difference), we have (after normalization)
$\begin{array}{rcl} \phi_3 & = & \frac{1}{\sqrt{6}}(2s_1 - s_2 - s_3) \ \phi_4 & = & \frac{1}{\sqrt{2}}(s_2 - s_3) \end{array} \tag{16.8}$
These are the same linear combinations used in Section $12$.
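The projection operator recipe is also straightforward to implement. In the sketch below (the permutation encoding and function names are my own), each $C_{3v}$ operation is stored as the permutation it induces on $\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}$, and Equation 16.1 is applied directly.

```python
# Projection operators on the (sN, s1, s2, s3) basis of NH3.
import numpy as np

# perm[i] = index that basis function i is sent to (the table in 16.3)
perms = {"E":   [0, 1, 2, 3], "C3+": [0, 2, 3, 1], "C3-":  [0, 3, 1, 2],
         "sv":  [0, 1, 3, 2], "sv'": [0, 2, 1, 3], "sv''": [0, 3, 2, 1]}
chars = {"A1": dict.fromkeys(perms, 1),
         "E":  {"E": 2, "C3+": -1, "C3-": -1, "sv": 0, "sv'": 0, "sv''": 0}}

def project(irrep, i, n=4):
    """Unnormalized SALC generated from basis function i (Equation 16.1)."""
    salc = np.zeros(n)
    for g, perm in perms.items():
        salc[perm[i]] += chars[irrep][g]  # chi_k(g) times (g acting on f_i)
    return salc

print(project("A1", 1))  # [0. 2. 2. 2.]   -> 2(s1 + s2 + s3)
print(project("E", 1))   # [0. 2. -1. -1.] -> 2s1 - s2 - s3
```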
We now have all the machinery we need to apply group theory to a range of chemical problems. In our first application, we will learn how to use molecular symmetry and group theory to help us understand chemical bonding.
$^5$ If we write the coefficients of $s_1$, $s_2$ and $s_3$ for each SALC as a vector $\begin{pmatrix} a_1, a_2, a_3 \end{pmatrix}$, then when two SALCs are orthogonal, the dot product of their coefficient vectors $\begin{pmatrix} a_1, a_2, a_3 \end{pmatrix} \cdot \begin{pmatrix} b_1, b_2, b_3 \end{pmatrix} = \begin{pmatrix} a_1b_1 + a_2b_2 + a_3b_3 \end{pmatrix}$ is equal to zero.
As we continue with this course, we will discover that there are many times when we would like to know whether a particular integral is necessarily zero, or whether there is a chance that it may be non-zero. We can often use group theory to differentiate these two cases.
You will have already used symmetry properties of functions to determine whether or not a one-dimensional integral is zero. For example, sin(x) is an ‘odd’ function (antisymmetric with respect to reflection through the origin), and it follows from this that
$\int^{\infty}_{-\infty} \sin(x) dx = 0$
In general, an integral between these limits for any other odd function will also be zero.
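A quick numerical sanity check (illustrative only; the integrand here is chosen by me so that the improper integral converges absolutely) shows the same thing: an odd integrand over symmetric limits integrates to zero.

```python
# The integral of an odd, integrable function over symmetric limits is zero.
import numpy as np
from scipy.integrate import quad

value, abserr = quad(lambda x: x * np.exp(-x**2), -np.inf, np.inf)
print(value)  # ~0, to within the quadrature error abserr
```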
In the general case we may have an integral of more than one dimension. The key to determining whether a general integral is necessarily zero lies in the fact that because an integral is just a number, it must be invariant to any symmetry operation. For example, bonding in a diatomic (see next section) depends on the presence of a non-zero overlap between atomic orbitals on adjacent atoms, which may be quantified by an overlap integral. You would not expect the bonding in a molecule to change if you rotated the molecule through some angle $\theta$, so the integral must be invariant to rotation, and indeed to any other symmetry operation.
In group theoretical terms, for an integral to be non-zero, the integrand must transform as the totally symmetric irreducible representation in the appropriate point group. In practice, the integrand may not transform as a single irreducible representation, but it must include the totally symmetric irreducible representation. These ideas should become more clear in the next section.
Note
It should be noted that even when the irreducible representations spanned by the integrand do include the totally symmetric irreducible representation, it is still possible for the integral to be zero. All group theory allows us to do is identify integrals that are necessarily zero based on the symmetry (or lack thereof) of the integrand.
1.18: Bonding in Diatomics
You will already be familiar with the idea of constructing molecular orbitals from linear combinations of atomic orbitals from previous courses covering bonding in diatomic molecules. By considering the symmetries of $s$ and $p$ orbitals on two atoms, we can form bonding and antibonding combinations labeled as having either $\sigma$ or $\pi$ symmetry depending on whether they resemble $s$ or $p$ orbitals when viewed along the bond axis (see diagram below). In all of the cases shown, only atomic orbitals that have the same symmetry when viewed along the bond axis $z$ can form a chemical bond e.g. two $s$ orbitals, two $p_z$ orbitals, or an $s$ and a $p_z$ can form a bond, but a $p_z$ and a $p_x$ or an $s$ and a $p_x$ or a $p_y$ cannot. It turns out that the rule that determines whether or not two atomic orbitals can bond is that they must belong to the same symmetry species within the point group of the molecule.
We can prove this mathematically for two atomic orbitals $\phi_i$ and $\phi_j$ by looking at the overlap integral between the two orbitals.
$S_{ij} = \langle \phi_i|\phi_j \rangle = \int \phi_i^* \phi_j d\tau \tag{18.1}$
In order for bonding to be possible, this integral must be non-zero. The product of the two functions $\phi_1$ and $\phi_2$ transforms as the direct product of their symmetry species, i.e. $\Gamma_{12}$ = $\Gamma_1 \otimes \Gamma_2$. As explained above, for the overlap integral to be non-zero, $\Gamma_{12}$ must contain the totally symmetric irreducible representation ($A_{1g}$ for a homonuclear diatomic, which belongs to the point group $D_{\infty h}$). As it happens, this is only possible if $\phi_1$ and $\phi_2$ belong to the same irreducible representation. These ideas are summarized for a diatomic in the table below.
$\begin{array}{lllll} \hline \text{First Atomic Orbital} & \text{Second Atomic Orbital} & \Gamma_1 \otimes \Gamma_2 & \text{Overlap Integral} & \text{Bonding?} \ \hline s \: (A_{1g}) & s \: (A_{1g}) & A_{1g} & \text{Non-zero} & \text{Yes} \ s \: (A_{1g}) & p_x \: (E_{1u}) & E_{1u} & \text{Zero} & \text{No} \ s \: (A_{1g}) & p_z \: (A_{1u}) & A_{1u} & \text{Zero} & \text{No} \ p_x \: (E_{1u}) & p_x \: (E_{1u}) & A_{1g} + A_{2g} + E_{2g} & \text{Non-zero} & \text{Yes} \ p_x \: (E_{1u}) & p_z \: (A_{1u}) & E_{1g} & \text{Zero} & \text{No} \ p_z \: (A_{1u}) & p_z \: (A_{1u}) & A_{1g} & \text{Non-zero} & \text{Yes} \end{array} \tag{18.2}$
1.19: Bonding in Polyatomics- Constructing Molecular Orbitals from SALCs
In the previous section we showed how to use symmetry to determine whether two atomic orbitals can form a chemical bond. How do we carry out the same procedure for a polyatomic molecule, in which many atomic orbitals may combine to form a bond? Any SALCs of the same symmetry could potentially form a bond, so all we need to do to construct a molecular orbital is take a linear combination of all the SALCs of the same symmetry species. The general procedure is:
1. Use a basis set consisting of valence atomic orbitals on each atom in the system.
2. Determine which irreducible representations are spanned by the basis set and construct the SALCs that transform as each irreducible representation.
3. Take linear combinations of the SALCs of the same symmetry species to form the molecular orbitals. E.g. in our $NH_3$ example we could form a molecular orbital of $A_1$ symmetry from the two SALCs that transform as $A_1$,
$\begin{array}{rcl} \Psi(A_1) & = & c_1 \phi_1 + c_2 \phi_2 \ & = & c_1 s_N + c_2 \dfrac{1}{\sqrt{3}}(s_1 + s_2 + s_3) \end{array} \tag{19.1}$
Unfortunately, this is as far as group theory can take us. It can give us the functional form of the molecular orbitals but it cannot determine the coefficients $c_1$ and $c_2$. To go further and obtain the expansion coefficients and orbital energies, we must turn to quantum mechanics. The material we are about to cover will be repeated in greater detail in later courses on quantum mechanics and valence, but it is included here to provide you with a complete reference on how to construct molecular orbitals and determine their energies.
Calculation of the orbital energies and expansion coefficients is based on the variation principle, which states that any approximate wavefunction must have a higher energy than the true wavefunction. This follows directly from the fairly common-sense idea that in general any system tries to minimize its energy. If an ‘approximate’ wavefunction had a lower energy than the ‘true’ wavefunction, we would expect the system to try to adopt this ‘approximate’ lower energy state, rather than the ‘true’ state. That all approximations to the true wavefunction must have a higher energy than the true wavefunction is the only scenario that makes physical sense. A mathematical proof of the variation principle is given in the Appendix.
We apply the variation principle as follows:
Molecular energy levels, or orbital energies, are eigenvalues of the molecular Hamiltonian $\hat{H}$. Using a standard result from quantum mechanics, it follows that the energy $E$ of a molecular orbital $\Psi$ is
$\begin{array}{lll} & E = \dfrac{\langle \Psi|\hat{H}|\Psi\rangle}{\langle \Psi|\Psi\rangle} & \text{(unnormalized} \: \Psi) \ \text{or} & E = \langle \Psi|\hat{H}|\Psi\rangle & \text{(normalized} \: \Psi \text{, for which} \langle \Psi|\Psi\rangle = 1) \end{array} \label{20.1}$
If the true wavefunction has the lowest energy, then to find the closest approximation we can to the true wavefunction, all we have to do is find the coefficients in our expansion of SALCs that minimize the energy in the above expressions. In practice, we substitute our wavefunction and minimize the resulting expression with respect to the coefficients. To show how this is done, we’ll use our $NH_3$ wavefunction of $A_1$ symmetry from the previous section. Substituting into Equation $\ref{20.1}$ gives:
$\begin{array}{rcl} E & = & \dfrac{\langle c_1\phi_1 + c_2\phi_2|\hat{H}|c_1\phi_1 + c_2\phi_2\rangle}{\langle c_1\phi_1 + c_2\phi_2| c_1\phi_1 + c_2\phi_2\rangle} \ & = & \dfrac{\langle c_1\phi_1|\hat{H}|c_1\phi_1\rangle + \langle c_1\phi_1|\hat{H}|c_2\phi_2\rangle + \langle c_2\phi_2|\hat{H}|c_1\phi_1\rangle + \langle c_2\phi_2|\hat{H}|c_2\phi_2\rangle}{\langle c_1\phi_1|c_1\phi_1\rangle + \langle c_1\phi_1|c_2\phi_2\rangle + \langle c_2\phi_2|c_1\phi_1\rangle + \langle c_2\phi_2|c_2\phi_2\rangle} \ & = & \dfrac{c_1^2\langle \phi_1|\hat{H}|\phi_1\rangle + c_1c_2\langle \phi_1|\hat{H}|\phi_2\rangle + c_2c_1\langle \phi_2|\hat{H}|\phi_1\rangle + c_2^2\langle \phi_2|\hat{H}|\phi_2\rangle}{c_1^2\langle \phi_1|\phi_1\rangle + c_1c_2\langle \phi_1|\phi_2\rangle + c_2c_1\langle \phi_2|\phi_1\rangle + c_2^2\langle \phi_2|\phi_2\rangle} \end{array} \label{20.2}$
If we now define a Hamiltonian matrix element $H_{ij}$ = $\langle \phi_i|\hat{H}|\phi_j\rangle$ and an overlap integral $S_{ij}$ = $\langle \phi_i|\phi_j\rangle$ and note that $H_{ij}$ = $H_{ji}$ and $S_{ij}$ = $S_{ji}$, this simplifies to
$E = \dfrac{c_1^2 H_{11} + 2c_1c_2 H_{12} + c_2^2 H_{22}}{c_1^2 S_{11} + 2c_1c_2 S_{12} + c_2^2 S_{22}} \label{20.3}$
To get this into a simpler form for carrying out the energy minimization, we multiply both sides through by the denominator to give
$E(c_1^2 S_{11} + 2c_1c_2 S_{12} + c_2^2 S_{22}) = c_1^2 H_{11} + 2c_1c_2 H_{12} + c_2^2 H_{22} \label{20.4}$
Now we need to minimize the energy with respect to $c_1$ and $c_2$, i.e., we require
$\dfrac{\partial E}{\partial c_1} = 0 \label{20.5a}$
and
$\dfrac{\partial E}{\partial c_2} = 0 \label{20.5b}$
If we differentiate the above equation through separately by $c_1$ and $c_2$ and apply this condition, we will end up with two equations in the two unknowns $c_1$ and $c_2$, which we can solve to determine the coefficients and the energy.
Differentiating Equation $\ref{20.4}$ with respect to $c_1$ (via the product rule of differentiation) gives
$\dfrac{\partial E}{\partial c_1}(c_1^2 S_{11} + 2c_1c_2 S_{12} + c_2^2 S_{22}) + E(2c_1 S_{11} + 2c_2 S_{12}) = 2c_1 H_{11} + 2c_2 H_{12} \label{20.6}$
Differentiating Equation $\ref{20.4}$ with respect to $c_2$ gives
$\dfrac{\partial E}{\partial c_2}(c_1^2 S_{11} + 2c_1c_2 S_{12} + c_2^2 S_{22}) + E(2c_1 S_{12} + 2c_2 S_{22}) = 2c_1 H_{12} + 2c_2 H_{22} \label{20.7}$
Because
$\dfrac{\partial E}{\partial c_1} = \dfrac{\partial E}{\partial c_2} = 0 \label{20.8}$
the first term on the left hand side of both equations is zero, leaving us with
$\begin{array}{rcl} E(2c_1 S_{11} + 2c_2 S_{12}) & = & 2c_1 H_{11} + 2c_2 H_{12} \ E(2c_1 S_{12} + 2c_2 S_{22}) & = & 2c_1 H_{12} + 2c_2 H_{22} \end{array} \label{20.9}$
These are normally rewritten slightly, in the form
$\begin{array}{rcl} c_1(H_{11} - ES_{11}) + c_2(H_{12} -ES_{12}) & = & 0 \ c_1(H_{12} - ES_{12}) + c_2(H_{22} - ES_{22}) & = & 0 \end{array} \label{20.10}$
Equations $\ref{20.10}$ are known as the secular equations and are the set of equations we need to solve to determine $c_1$, $c_2$, and $E$. In the general case (derived in the Appendix), when our wavefunction is a linear combination of $N$ SALCs (i.e. $\Psi = \Sigma_{i=1}^N c_i\phi_i$) we get $N$ equations in $N$ unknowns, with the $k^{th}$ equation given by
$\sum_{i=1}^N c_i(H_{ki} - ES_{ki}) = 0 \label{20.11}$
Note that we can use any basis functions we like together with the linear variation method described here to construct approximate molecular orbitals and determine their energies, but choosing to use SALCs simplifies things considerably when the number of basis functions is large. An arbitrary set of $N$ basis functions leads to a set of $N$ equations in $N$ unknowns, which must be solved simultaneously. Converting the basis into a set of SALCs separates the equations into several smaller sets of secular equations, one for each irreducible representation, which can be solved independently. It is usually easier to solve several sets of secular equations of lower dimensionality than one set of higher dimensionality.
Matrix formulation of a set of linear equations
As we have seen already, any set of linear equations may be rewritten as a matrix equation $A\textbf{x}$ = $\textbf{b}$. Linear equations are classified as simultaneous linear equations or homogeneous linear equations, depending on whether the vector $\textbf{b}$ on the RHS of the equation is non-zero or zero.
For a set of simultaneous linear equations (non-zero $\textbf{b}$) it is fairly apparent that if a unique solution exists, it can be found by multiplying both sides by the inverse matrix $A^{-1}$ (since $A^{-1}A$ on the left hand side is equal to the identity matrix, which has no effect on the vector $\textbf{x}$)
$\begin{array}{rcl} A\textbf{x} & = & \textbf{b} \ A^{-1}A\textbf{x} & = & A^{-1}\textbf{b} \ \textbf{x} & = & A^{-1}\textbf{b} \end{array} \label{21.1}$
In practice, there are easier matrix methods for solving simultaneous equations than finding the inverse matrix, but these need not concern us here. In Section 8.4, we discovered that in order for a matrix to have an inverse, it must have a non-zero determinant. Since $A^{-1}$ must exist in order for a set of simultaneous linear equations to have a solution, this means that the determinant of the matrix $A$ must be non-zero for the equations to be solvable.
The reverse is true for homogeneous linear equations. In this case the set of equations only has a solution if the determinant of $A$ is equal to zero. The secular equations we want to solve are homogeneous equations, and we will use this property of the determinant to determine the molecular orbital energies. An important property of homogeneous equations is that if a vector $\textbf{x}$ is a solution, so is any multiple of $\textbf{x}$, meaning that the solutions (the molecular orbitals) can be normalized without causing any problems.
Solving for the orbital energies and expansion coefficients
Recall the secular equations for the $A_1$ orbitals of $NH_3$ derived in the previous section
$\begin{array}{rcl} c_1(H_{11} - ES_{11}) + c_2(H_{12} - ES_{12}) & = & 0 \ c_1(H_{12} - ES_{12}) + c_2(H_{22} - ES_{22}) & = & 0 \end{array} \label{21.2}$
where $c_1$ and $c_2$ are the coefficients in the linear combination of the SALCs $\phi_1$ = $s_N$ and $\phi_2$ = $\dfrac{1}{\sqrt{3}}(s_1 + s_2 + s_3)$ used to construct the molecular orbital. Writing this set of homogeneous linear equations in matrix form gives
$\begin{pmatrix} H_{11} - ES_{11} & H_{12} - ES_{12} \ H_{12} - ES_{12} & H_{22} - ES_{22} \end{pmatrix} \begin{pmatrix} c_1 \ c_2 \end{pmatrix} = \begin{pmatrix} 0 \ 0 \end{pmatrix} \label{21.3}$
For the equations to have a solution, the determinant of the matrix must be equal to zero. Writing out the determinant will give us a polynomial equation in $E$ that we can solve to obtain the orbital energies in terms of the Hamiltonian matrix elements $H_{ij}$ and overlap integrals $S_{ij}$. The number of energies obtained by ‘solving the secular determinant’ in this way is equal to the order of the matrix, in this case two.
The secular determinant for Equation $\ref{21.3}$ is (noting that $S_{11}$ = $S_{22} = 1$ since the SALCs are normalized)
$(H_{11} - E)(H_{22} - E) - (H_{12} - ES_{12})^2 = 0 \label{21.4}$
Expanding and collecting terms in $E$ gives
$E^2(1-S_{12}^2) + E(2H_{12}S_{12} - H_{11} - H_{22}) + (H_{11}H_{22} - H_{12}^2) = 0 \label{21.5}$
which can be solved using the quadratic formula to give the energies of the two molecular orbitals.
$E_\pm = \dfrac{-(2H_{12}S_{12} - H_{11} - H_{22}) \pm \sqrt{(2H_{12}S_{12} - H_{11} - H_{22})^2 - 4(1-S_{12}^2)(H_{11}H_{22} - H_{12}^2)}}{2(1-S_{12}^2)} \label{21.6}$
To obtain numerical values for the energies, we need to evaluate the integrals $H_{11}$, $H_{22}$, $H_{12}$, and $S_{12}$. This would be quite a challenge to do analytically, but luckily there are a number of computer programs that can be used to calculate the integrals. One such program gives the following values.
$\begin{array}{rcl} H_{11} & = & -26.0000 \: eV \ H_{22} & = & -22.2216 \: eV \ H_{12} & = & -29.7670 \: eV \ S_{12} & = & \: 0.8167 \end{array} \label{21.7}$ (note that the overlap integral $S_{12}$ is dimensionless, so it carries no units)
When we substitute these into our equation for the energy levels, we get:
$\begin{array}{rcl} E_+ & = & \: 29.8336 \: eV \ E_- & = & -31.0063 \: eV \end{array} \label{21.8}$
We now have the orbital energies and the next step is to find the orbital coefficients. The coefficients for an orbital of energy $E$ are found by substituting the energy into the secular equations and solving for the coefficients $c_i$. Since the two secular equations are not linearly independent (i.e. they are effectively only one equation), when we solve them to find the coefficients what we will end up with is the relative values of the coefficients. This is true in general: in a system with $N$ coefficients, solving the secular equations will allow all $N$ of the coefficients $c_i$ to be obtained in terms of, say, $c_1$. The absolute values of the coefficients are found by normalizing the wavefunction.
Since the secular equations for the orbitals of energy $E_+$ and $E_-$ are not linearly independent, we can choose to solve either one of them to find the orbital coefficients. We will choose the first.
$(H_{11} - E_{\pm})c_1 + (H_{12} - E_{\pm}S_{12})c_2 = 0 \label{21.9}$
For the orbital with energy $E_-$ = -31.0063 eV, substituting numerical values into this equation gives
$\begin{array}{rcl} 5.0063 c_1 - 4.4442 c_2 & = & 0 \ c_2 & = & 1.1265 c_1 \end{array} \label{21.10}$
The molecular orbital is therefore
$\Psi = c_1(\phi_1 + 1.1265\phi_2) \label{21.11}$
Normalizing to find the constant $c_1$ (by requiring $\langle \Psi|\Psi \rangle = 1$) gives
$\begin{array}{rcll} \Psi_1 & = & 0.4933\phi_1 + 0.5557\phi_2 & \ & = & 0.4933s_N + 0.3208(s_1 + s_2 + s_3) & (\text{substituting the SALCs for} \: \phi_1 \: \text{and} \: \phi_2) \end{array} \label{21.12}$
For the second orbital, with energy $E_+$ = 29.8336 eV, the secular equation is
$\begin{array}{rcl} -55.8336c_1 - 54.1321c_2 & = & 0 \ c_2 & = & -1.0314c_1 \end{array} \label{21.13}$
giving
$\begin{array}{rcll} \Psi_2 & = & c_1(\phi_1 - 1.0314\phi_2) & \ & = & 1.6242\phi_1 - 1.6752\phi_2 & \text{(after normalization)} \ & = & 1.6242s_N -0.9672(s_1 + s_2 + s_3) \end{array} \label{21.14}$
These two $A_1$ molecular orbitals $\Psi_1$ and $\Psi_2$, one bonding and one antibonding, are shown below.
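As an aside, the whole procedure above can be automated: the secular equations are simply a generalized eigenvalue problem, and a library routine returns the energies and normalized coefficients in one step. The minimal Python sketch below (assuming SciPy is available) uses the integrals quoted above; since those values are rounded, the energies and coefficients it returns may differ somewhat from the hand-worked numbers.

```python
import numpy as np
from scipy.linalg import eigh

# Hamiltonian and overlap matrices for the A1 block of NH3,
# built from the integrals quoted above (energies in eV).
H = np.array([[-26.0000, -29.7670],
              [-29.7670, -22.2216]])
S = np.array([[1.0,    0.8167],
              [0.8167, 1.0   ]])

# eigh solves the generalized eigenvalue problem H c = E S c,
# which is exactly the matrix form of the secular equations.
E, C = eigh(H, S)
print(E)        # the two A1 orbital energies, lowest first
print(C[:, 0])  # coefficients (c1, c2) of the lower-energy orbital

# eigh returns eigenvectors normalized so that C.T @ S @ C = I,
# i.e. the molecular orbitals come out normalized automatically.
```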
The remaining two SALCs arising from the $s$ orbitals of $NH_3$:
$\phi_3 = \dfrac{1}{\sqrt{6}}\begin{pmatrix} 2s_1 - s_2 - s_3 \end{pmatrix}$
and
$\phi_4 = \dfrac{1}{\sqrt{2}} \begin{pmatrix} s_2 - s_3 \end{pmatrix}$
form an orthogonal pair of molecular orbitals of $E$ symmetry. We can show this by solving the secular determinant to find the orbital energies. The secular equations in this case are:
$\begin{array}{rcl} c_1(H_{33} - ES_{33}) + c_2(H_{34} -ES_{34}) & = & 0 \ c_1(H_{34} -ES_{34}) + c_2(H_{44} - ES_{44}) & = & 0 \end{array} \label{21.15}$
Solving the secular determinant gives
$E_\pm = \dfrac{-(2H_{34}S_{34} - H_{33} - H_{44}) \pm \sqrt{(2H_{34}S_{34} - H_{33} - H_{44})^2 - 4(1-S_{34}^2)(H_{33}H_{44} - H_{34}^2)}}{2(1-S_{34}^2)} \label{21.16}$
The integrals required are
$\begin{array}{rcl} H_{33} & = & -9.2892 \: eV \ H_{44} & = & -9.2892 \: eV \ H_{34} & = & 0 \ S_{34} & = & 0 \end{array} \label{21.17}$
Using the fact that $H_{34}$ = $S_{34} = 0$, the expression for the energies reduces to
$E_\pm = \dfrac{(H_{33} + H_{44}) \pm (H_{33} - H_{44})}{2} \label{21.18}$
giving $E_+$ = $H_{33}$ = -9.2892 eV and $E_-$ = $H_{44}$ = -9.2892 eV. Each SALC therefore forms a molecular orbital by itself, and the two orbitals have the same energy; the two SALCs form an orthogonal pair of degenerate orbitals. These two molecular orbitals of $E$ symmetry are shown below.
1.22: Summary of the Steps Involved in Constructing Molecular Orbitals
1. Choose a basis set of functions $f_i$ consisting of the valence atomic orbitals on each atom in the system, or some chosen subset of these orbitals.
2. With the help of the appropriate character table, determine which irreducible representations are spanned by the basis set, using Equation (15.20) to determine the number of times $a_k$ that the $k^{th}$ irreducible representation appears in the representation. $a_k = \dfrac{1}{h}\sum_C n_C \chi(C) \chi_k(C) \label{22.1}$
3. Construct the SALCs $\phi_i$ that transform as each irreducible representation using Equation 16.1 $\phi_i = \sum_g \chi_k(g) g f_i \label{22.2}$
4. Write down expressions for the molecular orbitals by taking linear combinations of all the irreducible representations of the same symmetry species.
5. Write down the secular equations for the system.
6. Solve the secular determinant to obtain the energies of the molecular orbitals.
7. Substitute each energy in turn back into the secular equations and solve to obtain the coefficients appearing in your molecular orbital expressions in step 4.
8. Normalize the orbitals.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.21%3A_Solving_the_Secular_Equations.txt
|
As another example, we will use group theory to construct the molecular orbitals of $H_2O$ (point group $C_{2v}$) using a basis set consisting of all the valence orbitals. The valence orbitals are a $1s$ orbital on each hydrogen, which we will label $s_H$ and $s_H'$, and a $2s$ and three $2p$ orbitals on the oxygen, which we will label $s_O$, $p_x$, $p_y$, $p_z$ giving a complete basis $\begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix}$.
The first thing to do is to determine how each orbital transforms under the symmetry operations of the $C_{2v}$ point group ($E$, $C_2$, $\sigma_v$ and $\sigma_v'$), construct a matrix representation and determine the characters of each operation. The symmetry operations and axis system we will be using are shown below.
The orbitals transform in the following way
$\begin{array}{lrcl} E & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} & \rightarrow & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} \ C_2 & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} & \rightarrow & \begin{pmatrix} s_H', s_H, s_O, -p_x, -p_y, p_z \end{pmatrix} \ \sigma_v(xz) & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} & \rightarrow & \begin{pmatrix} s_H, s_H', s_O, p_x, -p_y, p_z \end{pmatrix} \ \sigma_v'(yz) & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} & \rightarrow & \begin{pmatrix} s_H', s_H, s_O, -p_x, p_y, p_z \end{pmatrix} \end{array} \label{23.1}$
A short aside on constructing matrix representatives
After a little practice, you will probably be able to write matrix representatives straight away just by looking at the effect of the symmetry operations on the basis. However, if you are struggling a little the following procedure might help.
Remember that the matrix representatives are just the matrices we would have to multiply the left hand side of the above equations by to give the right hand side. In most cases they are very easy to work out. Probably the most straightforward way to think about it is that each column of the matrix shows where one of the original basis functions ends up. For example, the first column transforms the basis function $s_H$ to its new position. The first column of the matrix can be found by taking the result on the right hand side of the above expressions, replacing every function that isn’t $s_H$ with a zero, putting the coefficient of $s_H$ ($1$ or $-1$ in this example) in the position at which it occurs, and taking the transpose to give a column vector.
Rotation
Consider the representative for the $C_2$ operation. The original basis $\begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix}$ transforms into $\begin{pmatrix} s_H', s_H, s_O, -p_x, -p_y, p_z \end{pmatrix}$. The first column of the matrix therefore transforms $s_H$ into $s_H'$. Taking the result and replacing all the other functions with zeroes gives $\begin{pmatrix} 0, s_H, 0, 0, 0, 0 \end{pmatrix}$. The coefficient of $s_H$ is $1$, so the first column of the $C_2$ matrix representative is
$\begin{pmatrix} 0 \ 1 \ 0 \ 0 \ 0 \ 0 \end{pmatrix} \label{23.2}$
Matrix representation, characters and SALCs
The matrix representatives and their characters are
$\begin{array}{cccc} E & C_2 & \sigma_v & \sigma_v' \ \scriptsize{\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}} & \scriptstyle{\begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \ 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & -1 & 0 & 0 \ 0 & 0 & 0 & 0 & -1 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}} & \scriptstyle{\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 &1 & 0 & 0 \ 0 & 0 & 0 & 0 & -1 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}} & \scriptstyle{\begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \ 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & -1 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}} \ \chi(E) = 6 & \chi(C_2) = 0 & \chi(\sigma_v) = 4 & \chi(\sigma_v') = 2 \end{array} \label{23.3}$
Now we are ready to work out which irreducible representations are spanned by the basis we have chosen. The character table for $C_{2v}$ is:
$\begin{array}{l|cccc|l} C_{2v} & E & C_2 & \sigma_v & \sigma_v' & h = 4 \ \hline A_1 & 1 & 1 & 1 & 1 & z, x^2, y^2, z^2 \ A_2 & 1 & 1 & -1 & -1 & xy, R_z \ B_1 & 1 & -1 & 1 & -1 & x, xz, R_y \ B_2 & 1 & -1 & -1 & 1 & y, yz, R_x \ \hline \end{array}$
As before, we use Equation (15.20) to find out the number of times each irreducible representation appears.
$a_k = \dfrac{1}{h}\sum_C n_C \chi(C) \chi_k(C) \label{23.4}$
We have
$\begin{array}{rcll} a(A_1) & = & \dfrac{1}{4}(1 \times 6 \times 1 + 1 \times 0 \times 1 + 1\times 4\times 1 + 1\times 2\times 1) & = 3 \ a(A_2) & = & \dfrac{1}{4}(1\times 6\times 1 + 1\times 0\times 1 + 1\times 4\times -1 + 1\times 2\times -1) & = 0 \ a(B_1) & = & \dfrac{1}{4}(1\times 6\times 1 + 1\times 0\times -1 + 1\times 4\times 1 + 1\times 2\times -1) & = 2 \ a(B_2) & = & \dfrac{1}{4}(1\times 6\times 1 + 1\times 0\times -1 + 1\times 4\times -1 + 1\times 2\times 1) & = 1 \end{array} \label{23.5}$
so the basis spans $3A_1 + 2B_1 + B_2$. Now we use the projection operators applied to each basis function $f_i$ in turn to determine the SALCs $\phi_i = \Sigma_g \chi_k(g) g f_i$
The SALCs of $A_1$ symmetry are:
$\begin{array}{rclll} \phi(s_H) & = & s_H + s_H' + s_H + s_H' & = & 2(s_H + s_H') \ \phi(s_H') & = & s_H' + s_H + s_H' + s_H & = & 2(s_H + s_H') \ \phi(s_O) & = & s_O + s_O + s_O + s_O & = & 4s_O \ \phi(p_x) & = & p_x - p_x + p_x - p_x & = & 0 \ \phi(p_y) & = & p_y - p_y + p_y - p_y & = & 0 \ \phi(p_z) & = & p_z + p_z + p_z + p_z & = & 4p_z \end{array} \label{23.6}$
The SALCs of $B_1$ symmetry are:
$\begin{array}{rclll} \phi(s_H) & = & s_H - s_H' + s_H - s_H' & = & 2(s_H - s_H') \ \phi(s_H') & = & s_H' - s_H + s_H' - s_H & = & 2(s_H' - s_H) \ \phi(s_O) & = & s_O - s_O + s_O - s_O & = & 0 \ \phi(p_x) & = & p_x + p_x + p_x + p_x & = & 4p_x \ \phi(p_y) & = & p_y + p_y - p_y - p_y & = & 0 \ \phi(p_z) & = & p_z - p_z + p_z - p_z & = & 0 \end{array} \label{23.7}$
The SALCs of $B_2$ symmetry are:
$\begin{array}{rclll} \phi(s_H) & = & s_H - s_H' - s_H + s_H' & = & 0 \ \phi(s_H') & = & s_H' - s_H - s_H' + s_H & = & 0 \ \phi(s_O) & = & s_O - s_O - s_O + s_O & = & 0 \ \phi(p_x) & = & p_x + p_x - p_x - p_x & = & 0 \ \phi(p_y) & = & p_y + p_y + p_y + p_y & = & 4p_y \ \phi(p_z) & = & p_z - p_z - p_z + p_z & = & 0 \end{array} \label{23.8}$
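If you would like to check these projections by machine, the following Python sketch (assuming NumPy; encoding each representative as a permutation plus signs is just a bookkeeping convenience, not part of the formal theory) builds the matrix representatives given earlier, confirms their characters, and applies the projection operator to selected basis functions.

```python
import numpy as np

# Matrix representatives for (E, C2, sigma_v, sigma_v') acting on the basis
# (s_H, s_H', s_O, p_x, p_y, p_z), encoded as a permutation plus signs.
def rep(perm, signs):
    D = np.zeros((6, 6))
    for col, (row, sign) in enumerate(zip(perm, signs)):
        D[row, col] = sign   # basis function 'col' maps to sign * function 'row'
    return D

D = {"E":   rep([0, 1, 2, 3, 4, 5], [1, 1, 1,  1,  1, 1]),
     "C2":  rep([1, 0, 2, 3, 4, 5], [1, 1, 1, -1, -1, 1]),
     "sv":  rep([0, 1, 2, 3, 4, 5], [1, 1, 1,  1, -1, 1]),
     "sv'": rep([1, 0, 2, 3, 4, 5], [1, 1, 1, -1,  1, 1])}

print([int(np.trace(D[g])) for g in D])   # characters: [6, 0, 4, 2]

# Characters of the irreducible representations we need
chi = {"A1": {"E": 1, "C2":  1, "sv":  1, "sv'":  1},
       "B1": {"E": 1, "C2": -1, "sv":  1, "sv'": -1},
       "B2": {"E": 1, "C2": -1, "sv": -1, "sv'":  1}}

def project(irrep, i):
    """SALC phi = sum_g chi_k(g) g f_i applied to basis function i."""
    f = np.eye(6)[i]
    return sum(chi[irrep][g] * (D[g] @ f) for g in D)

print(project("A1", 0))   # 2(s_H + s_H')  ->  [2. 2. 0. 0. 0. 0.]
print(project("B1", 0))   # 2(s_H - s_H')  ->  [2. -2. 0. 0. 0. 0.]
print(project("B2", 4))   # 4 p_y          ->  [0. 0. 0. 0. 4. 0.]
```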
After normalization, our SALCs are therefore:
A1 symmetry
$\begin{array}{rcl} \phi_1 & = & \dfrac{1}{\sqrt{2}}(s_H + s_H') \ \phi_2 & = & s_O \ \phi_3 & = & p_z \end{array} \label{23.9}$
B1 symmetry
$\begin{array}{rcl} \phi_4 & = & \dfrac{1}{\sqrt{2}}(s_H - s_H') \ \phi_5 & = & p_x \end{array} \label{23.10}$
B2 symmetry
$\begin{array}{rcl} \phi_6 & = & p_y \end{array} \label{23.11}$
Note that we keep only one of the first two SALCs generated by the $A_1$ and $B_1$ projection operators, since in each case the second is a simple multiple of the first (i.e. they are not linearly independent). We can therefore construct three molecular orbitals of $A_1$ symmetry, with the general form
$\begin{array}{rcll} \Psi(A_1) & = & c_1 \phi_1 + c_2 \phi_2 + c_3 \phi_3 & \ & = & c_1'(s_H + s_H') + c_2 s_O + c_3 p_z & \text{where} \: c_1' = \dfrac{c_1}{\sqrt{2}} \end{array} \label{23.12}$
two molecular orbitals of $B_1$ symmetry, of the form
$\begin{array}{rcl} \Psi(B_1) & = & c_4 \phi_4 + c_5 \phi_5 \ & = & c_4'(s_H - s_H') + c_5 p_x \end{array} \label{23.13}$
and one molecular orbital of $B_2$ symmetry
$\begin{array}{rcl} \Psi(B_2) & = & \phi_6 \ & = & p_y \end{array} \label{23.14}$
To work out the coefficients $c_1$ - $c_5$ and determine the orbital energies, we would have to solve the secular equations for each set of orbitals in turn. We are not dealing with a conjugated $\pi$ system, so in this case Hückel theory cannot be used, and the various $H_{ij}$ and $S_{ij}$ integrals would have to be calculated numerically and substituted into the secular equations. This involves a lot of tedious algebra, which we will leave out for the moment. The LCAO orbitals determined above are an approximation of the true molecular orbitals of water, which are shown on the right. As we have shown using group theory, the $A_1$ molecular orbitals involve the oxygen $2s$ and $2p_z$ atomic orbitals and the sum $s_H + s_H'$ of the hydrogen $1s$ orbitals. The $B_1$ molecular orbitals involve the oxygen $2p_x$ orbital and the difference $s_H - s_H'$ of the two hydrogen $1s$ orbitals, and the $B_2$ molecular orbital is essentially an oxygen $2p_y$ atomic orbital.
Contributors and Attributions
Claire Vallance (University of Oxford)
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.23%3A_A_more_complicated_bonding_example.txt
|
Vibrational motion in diatomic molecules is often discussed within the context of the simple harmonic oscillator in quantum mechanics. A diatomic molecule has only a single bond that can vibrate; we say it has a single vibrational mode. As you may expect, the vibrational motions of polyatomic molecules are much more complicated than those in a diatomic. Firstly, there are more bonds that can vibrate; and secondly, in addition to stretching vibrations, the only type of vibration possible in a diatomic, we can also have bending and torsional vibrational modes. Since changing one bond length in a polyatomic will often affect the length of nearby bonds, we cannot consider the vibrational motion of each bond in isolation; instead we talk of normal modes involving the concerted motion of groups of bonds. As a simple example, the normal modes of a linear triatomic molecule are shown below.
Once we know the symmetry of a molecule at its equilibrium structure, group theory allows us to predict the vibrational motions it will undergo using exactly the same tools we used above to investigate molecular orbitals. Each vibrational mode transforms as one of the irreducible representations of the molecule’s point group. Before moving on to an example, we will quickly review how to determine the number of vibrational modes in a molecule.
Molecular degrees of freedom – determining the number of normal vibrational modes
An atom can undergo only translational motion, and therefore has three degrees of freedom corresponding to motion along the $x$, $y$, and $z$ Cartesian axes. Translational motion in any arbitrary direction can always be expressed in terms of components along these three axes. When atoms combine to form molecules, each atom still has three degrees of freedom, so the molecule as a whole has $3N$ degrees of freedom, where $N$ is the number of atoms in the molecule. However, the fact that each atom in a molecule is bonded to one or more neighboring atoms severely hinders its translational motion, and also ties its motion to that of the atoms to which it is attached. For these reasons, while it is entirely possible to describe molecular motions in terms of the translational motions of individual atoms (we will come back to this in the next section), we are often more interested in the motions of the molecule as a whole. These may be divided into three types: translational; rotational and vibrational.
Just as for an individual atom, the molecule as a whole has three degrees of translational freedom, leaving $3N - 3$ degrees of freedom in rotation and vibration.
The number of rotational degrees of freedom depends on the structure of the molecule. In general, there are three possible rotational degrees of freedom, corresponding to rotation about the $x$, $y$, and $z$ Cartesian axes. A non-linear polyatomic molecule does indeed have three rotational degrees of freedom, leaving $3N - 6$ degrees of freedom in vibration (i.e. $3N - 6$ vibrational modes). In a linear molecule, the situation is a little different. It is generally accepted that to be classified as a true rotation, a motion must change the position of one or more of the atoms. If we define the $z$ axis as the molecular axis, we see that spinning the molecule about this axis does not move any of the atoms from their original position, so this motion is not truly a rotation. Consequently, a linear molecule has only two degrees of rotational freedom, corresponding to rotations about the $x$ and $y$ axes. This type of molecule has $3N - 5$ degrees of freedom left for vibration, or $3N - 5$ vibrational modes.
In summary:
• A linear molecule has $3N - 5$ vibrational modes
• A non-linear molecule has $3N - 6$ vibrational modes.
Determining the Symmetries of Molecular Motions
We mentioned above that the procedure for determining the normal vibrational modes of a polyatomic molecule is very similar to that used in previous sections to construct molecular orbitals. In fact, virtually the only difference between these two applications of group theory is the choice of basis set.
As we have already established, the motions of a molecule may be described in terms of the motions of each atom along the $x$, $y$ and $z$ axis. Consequently, it probably won’t come as too much of a surprise to discover that a very useful basis for describing molecular motions comprises a set of $\begin{pmatrix} x, y, z \end{pmatrix}$ axes centered on each atom. This basis is usually known as the $\textit{3N}$ Cartesian basis (since there are $3N$ Cartesian axes, $3$ axes for each of the $N$ atoms in the molecule). Note that each molecule will have a different $3N$ Cartesian basis, just as every molecule has a different atomic orbital basis.
Our first task in investigating motions of a particular molecule is to determine the characters of the matrix representatives for the $3N$ Cartesian basis under each of the symmetry operations in the molecular point group. We will use the $H_2O$ molecule, which has $C_{2v}$ symmetry, as an example.
$H_2O$ has three atoms, so the $3N$ Cartesian basis will have $9$ elements. The basis vectors are shown in the diagram below.
One way of determining the characters would be to construct all of the matrix representatives and take their traces. While you are more than welcome to try this approach if you want some practice at constructing matrix representatives, there is an easier way. Recall that we can also determine the character of a matrix representative under a particular symmetry operation by stepping through the basis functions and applying the following rules:
1. Add $1$ to the character if the basis function is unchanged by the symmetry operation;
2. Add $-1$ to the character if the basis function changes sign under the symmetry operation;
3. Add $0$ to the character if the basis function moves when the symmetry operation is applied.
For $H_2O$, this gives us the following characters for the $3N$ Cartesian basis (check that you can obtain this result using the rules above and the basis vectors as drawn in the figure):
$\begin{array}{lcccc} \text{Operation:} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz) \ \chi_{3N}: & 9 & -1 & 3 & 1 \end{array} \tag{24.1}$
There is an even quicker way to work out the characters of the $3N$ Cartesian basis if you have a character table in front of you. The character for the Cartesian basis is simply the sum of the characters for the $x$, $y$, and $z$ (or $T_x$, $T_y$, and $T_z$) functions listed in the character table. To get the character for the $\textit{3N}$ Cartesian basis, simply multiply this by the number of atoms in the molecule that are unshifted by the symmetry operation.
The $C_{2v}$ character table is shown below.
$\begin{array}{l|cccc|l} C_{2v} & E & C_2 & \sigma_v & \sigma_v' & h = 4 \ \hline A_1 & 1 & 1 & 1 & 1 & z, x^2, y^2, z^2 \ A_2 & 1 & 1 & -1 & -1 & xy, R_z \ B_1 & 1 & -1 & 1 & -1 & x, xz, R_y \ B_2 & 1 & -1 & -1 & 1 & y, yz, R_x \ \hline \end{array} \tag{24.2}$
$x$ transforms as $B_1$, $y$ as $B_2$, and $z$ as $A_1$, so the characters for the Cartesian basis are
$\begin{array}{lcccc} \text{Operation:} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz) \ \chi_{3N}: & 3 & -1 & 1 & 1 \end{array} \tag{24.3}$
We multiply each of these by the number of unshifted atoms ($3$ for the identity operation, $1$ for $C_2$, $3$ for $\sigma_v$ and $1$ for $\sigma_v'$) to obtain the characters for the $3N$ Cartesian basis.
$\begin{array}{lcccc} \chi_{3N}: & 9 & -1 & 3 & 1 \end{array} \tag{24.4}$
Reassuringly, we obtain the same characters as we did previously. Which of the three methods you use to get to this point is up to you.
We now have the characters for the molecular motions (described by the $3N$ Cartesian basis) under each symmetry operation. At this point, we want to separate these characters into contributions from translation, rotation, and vibration. This turns out to be a very straightforward task. We can read the characters for the translational and rotational modes directly from the character table, and we obtain the characters for the vibrations simply by subtracting these from the $3N$ Cartesian characters we’ve just determined. The characters for the translations are the same as those for $\chi_{Cart}$. We find the characters for the rotations by adding together the characters for $R_x$, $R_y$, and $R_z$ from the character table (or just $R_x$ and $R_y$ if the molecule is linear). For $H_2O$, we have:
$\begin{array}{lcccc} \text{Operation:} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz) \ \chi_{3N}: & 9 & -1 & 3 & 1 \ \chi_{\text{Trans}}: & 3 & -1 & 1 & 1 \ \chi_{\text{Rot}}: & 3 & -1 & -1 & -1 \ \chi_{\text{Vib}} = \chi_{3N} - \chi_{\text{Trans}} - \chi_{\text{Rot}}: & 3 & 1 & 3 & 1 \end{array} \tag{24.5}$
The characters in the final row are the sums of the characters for all of the molecular vibrations. We can find out the symmetries of the individual vibrations by using the reduction equation (Equation (15.20)) to determine the contribution from each irreducible representation.
In many cases you won’t even need to use the equation, and can work out which irreducible representations are contributing just by inspection of the character table. In the present case, the only combination of irreducible representations that can give the required values for $\chi_{\text{Vib}}$ is $2A_1 + B_1$. As an exercise, you should make sure you are also able to obtain this result using the reduction equation.
So far this may all seem a little abstract, and you probably want to know what the vibrations of $H_2O$ actually look like. For a molecule with only three atoms, it is fairly easy to identify the possible vibrational modes and to assign them to the appropriate irreducible representation.
For a larger molecule, the problem may become much more complex, and in that case we can generate the SALCs of the $3N$ Cartesian basis, which will tell us the atomic displacements associated with each vibrational mode. We will do this now for $H_2O$.
Atomic displacements using the 3N Cartesian basis
As before, we generate the SALCs of each symmetry by applying the appropriate projection operator to each of the basis functions (or in this case, basis vectors) $f_i$ in turn.
$\phi_i = \sum_g \chi_k(g) g f_i \tag{24.6}$
In this case we have $9$ basis vectors, which we will label $x_H$, $y_H$, $z_H$, $x_O$, $y_O$, $z_O$, $x_{H'}$, $y_{H'}$, $z_{H'}$, describing the displacements of the two $H$ atoms and the $O$ atom along Cartesian axes. For the SALCs of $A_1$ symmetry, applying the projection operator to each basis vector in turn gives (check that you can obtain this result):
$\begin{array}{rclll} \phi_1(x_H) & = & x_H - x_{H'} + x_H - x_{H'} & = & 2x_H - 2x_{H'} \ \phi_2(y_H) & = & y_H - y_{H'} - y_H + y_{H'} & = & 0 \ \phi_3(z_H) & = & z_H + z_{H'} + z_H + z_{H'} & = & 2z_H + 2z_{H'} \ \phi_4(x_O) & = & x_O - x_O + x_O - x_O & = & 0 \ \phi_5(y_O) & = & y_O - y_O - y_O + y_O & = & 0 \ \phi_6(z_O) & = & z_O + z_O + z_O + z_O & = & 4z_O \ \phi_7(x_{H'}) & = & x_{H'} - x_H + x_{H'} - x_H & = & 2x_{H'} - 2x_H \ \phi_8(y_{H'}) & = & y_{H'} - y_H - y_{H'} + y_H & = & 0 \ \phi_9(z_{H'}) & = & z_{H'} + z_H + z_{H'} + z_H & = & 2z_{H'} + 2z_H \end{array} \tag{24.7}$
We see that the motion characteristic of an $A_1$ vibration (which we have identified as the symmetric stretch and the bending vibration) may be summarized as follows:
1. $2(x_H - x_{H'})$ - the two hydrogen atoms move in opposite directions along the $x$ axis.
2. $2(z_H + z_{H'})$ - the two hydrogen atoms move in the same direction along the $z$ axis.
3. $4z_O$ - the oxygen atom moves along the $z$ axis.
4. There is no motion of any of the atoms in the $y$ direction.
The asymmetric stretch has $B_1$ symmetry, and applying the projection operator in this case gives:
$\begin{array}{rclll} \phi_1(x_H) & = & x_H + x_{H'} + x_H + x_{H'} & = & 2x_H + 2x_{H'} \ \phi_2(y_H) & = & y_H + y_{H'} - y_H - y_{H'} & = & 0 \ \phi_3(z_H) & = & z_H - z_{H'} + z_H - z_{H'} & = & 2z_H - 2z_{H'} \ \phi_4(x_O) & = & x_O + x_O + x_O + x_O & = & 4x_O \ \phi_5(y_O) & = & y_O + y_O - y_O - y_O & = & 0 \ \phi_6(z_O) & = & z_O - z_O + z_O - z_O & = & 0 \ \phi_7(x_{H'}) & = & x_{H'} + x_H + x_{H'} + x_H & = & 2x_{H'} + 2x_H \ \phi_8(y_{H'}) & = & y_{H'} + y_H - y_{H'} - y_H & = & 0 \ \phi_9(z_{H'}) & = & z_{H'} - z_H + z_{H'} - z_H & = & 2z_{H'} - 2z_H \end{array} \tag{24.8}$
In this vibrational mode, the two $H$ atoms move in the same direction along the $x$ axis and in opposite directions along the $z$ axis.
We have now shown how group theory may be used together with the $3N$ Cartesian basis to identify the symmetries of the translational, rotational and vibrational modes of motion of a molecule, and also to determine the atomic displacements associated with each vibrational mode.
Molecular vibrations using internal coordinates
While it was fairly straightforward to investigate the atomic displacements associated with each vibrational mode of $H_2O$ using the $3N$ Cartesian basis, this procedure becomes more complicated for larger molecules. Also, we are often more interested in how bond lengths and angles change in a vibration, rather than in the Cartesian displacements of the individual atoms. If we are only interested in looking at molecular vibrations, we can use a different procedure from that described above, and start from a basis of internal coordinates. Internal coordinates are simply a set of bond lengths and bond angles, which we can use as a basis for generating representations and, eventually, SALCs. Since bond lengths and angles do not change during translational or rotational motion, no information will be obtained on these types of motion.
For $H_2O$, the three internal coordinates of interest are the two $OH$ bond lengths, which we will label $r$ and $r'$, and the $HOH$ bond angle, which we will label $\theta$. If we wanted to, we could separate our basis into two different bases, one consisting only of bond lengths, to describe stretching vibrations, and one consisting of only bond angles, to describe bending vibrations. However, the current example is simple enough to treat all the basis functions together.
As usual, our first step is to work out the characters of the matrix representatives for this basis under each symmetry operation. The effects of the various transformations on our chosen basis, and the characters of the corresponding representatives, are:
$\begin{array}{lc} E(r, r', \theta) = (r, r', \theta) & \chi(E) = 3 \ C_2(r, r', \theta) = (r', r, \theta) & \chi(C_2) = 1 \ \sigma_v(xz)(r, r', \theta) = (r, r', \theta) & \chi(\sigma_v) = 3 \ \sigma_v'(yz)(r, r', \theta) = (r', r, \theta) & \chi(\sigma_v') = 1 \end{array} \tag{24.9}$
These are the same characters as we obtained above for the molecular vibrations using the $3N$ Cartesian basis, and as before, we can see by inspection of the character table that the representation reduces to the sum of irreducible representations $2A_1 + B_1$. We can now work out the symmetry adapted linear combinations of our new basis set to see how the bond lengths and angle change as $H_2O$ vibrates in each of the three vibrational modes.
Again, we will use the projection operator $\phi_i = \Sigma_g \chi_k(g) g f_i$ applied to each basis function in turn.
Firstly, the $A_1$ vibrations:
$\begin{array}{rclll} \phi_1(r) & = & r + r' + r + r' & = & 2(r + r') \ \phi_2(r') & = & r' + r + r' + r & = & 2(r' + r) \ \phi_3(\theta) & = & \theta + \theta + \theta + \theta & = & 4\theta \end{array} \tag{24.10}$
From these SALCs, we can identify $\phi_1$ (and $\phi_2$, which is identical) with the symmetric stretch, in which both bond lengths change in phase with each other, and $\phi_3$ with the bend.
Now for the $B_1$ vibration:
$\begin{array}{rclll} \phi_4(r) & = & r - r' + r - r' & = & 2(r - r') \ \phi_5(r') & = & r' - r + r' - r & = & 2(r' - r) \ \phi_6(\theta) & = & \theta - \theta + \theta - \theta & = & 0 \end{array} \tag{24.11}$
$\phi_4$ and $\phi_5$ are not linearly independent, and either one may be chosen to describe the asymmetric stretch, in which one bond lengthens as the other shortens.
Note: When using internal coordinates, it is important that all of the coordinates in the basis are linearly independent. If this is the case then the number of internal coordinates in the basis will be the same as the number of vibrational modes ($3N - 5$ or $3N - 6$, depending on whether the molecule is linear or non-linear). This requirement is satisfied in the $H_2O$ example above. For a less straightforward example, consider the methane molecule, $CH_4$. It might appear that we could choose a basis made up of the four $C$-$H$ bond lengths and the six $H$-$C$-$H$ bond angles. However, this would give us $10$ basis functions, and $CH_4$ has only $9$ vibrational modes. This is due to the fact that the bond angles are not all independent of each other. It can be tricky to come up with the appropriate internal coordinate basis to describe all of the molecular motions, but all is not lost. Even if you can’t work out the appropriate bond angles to choose, you can always take a basis of bond lengths to investigate the stretching vibrations of a molecule. If you want to know the symmetries of the bending vibrations, you can use the $3N$ Cartesian basis method to determine the symmetries of all of the vibrational modes and compare these with the stretching mode symmetries to identify the bending modes.
Contributors and Attributions
Claire Vallance (University of Oxford)
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.24%3A_Molecular_Vibrations.txt
|
1. Atomic or molecular translations transform in the same way as the $x$, $y$, $z$ (or $T_x$, $T_y$, $T_z$) functions listed in the character tables.
2. Molecular rotations transform in the same way as the $R_x$, $R_y$, $R_z$ functions listed in the character tables.
3. The irreducible representations spanned by the motions of a polyatomic molecule may be determined using the $3N$ Cartesian basis, made up of $x$, $y$, $z$ axes on each atom. The characters of the matrix representatives are best determined using a table as follows: $\begin{array}{ll} \text{Operation:} & \text{List the symmetry operations in the point group} \ \Gamma_{\text{Cart}} & \text{List the characters for} \: x + y + z \: \text{(from the character table) for each operation} \ N_{\text{unshifted}} & \text{List the number of atoms in the molecule that are unshifted by each symmetry operation} \ \Gamma_{3N} & \text{Take the product of the previous two rows to give the characters for} \: \Gamma_{3N} \end{array}$
4. The irreducible representations spanned by the molecular vibrations are determined by first subtracting the characters for rotations and translations from the characters for $\Gamma_{3N}$ to give the characters for $\Gamma_{\text{vib}}$ and then using the reduction formula or inspection of the character table to identify the irreducible representations contributing to $\Gamma_{\text{vib}}$.
5. The molecular displacements for the vibrations of each symmetry may be determined by using projection operators on the $3N$ Cartesian basis vectors to generate SALCs.
6. Alternatively, a basis of internal coordinates (bond lengths and angles) may be used to investigate stretching and bending vibrations. Determine the characters, identify the irreducible representations, and construct SALCs.
Contributors and Attributions
Claire Vallance (University of Oxford)
1.26: Group theory and Molecular Electronic States
Firstly, it is important that you understand the difference between a molecular orbital and an electronic state.
A strict definition of a molecular orbital is that it is a ‘one electron wavefunction’, i.e. a solution to the Schrödinger equation for the molecule. A complete one electron wavefunction (orbital) is a product of a spatial function, describing the orbital angular momentum and ‘shape’ of the orbital, and a spin function, describing the spin angular momentum.
$\Psi = \Psi_{\text{spatial}} \Psi_{\text{spin}} \tag{26.1}$
In common usage, the word ‘orbital’ is often used to refer only to the spatial part of the ‘true’ orbital. For example, in atoms we generally talk about ‘$s$ orbitals’ or ‘$p$ orbitals’ rather than ‘$s$ spatial wavefunctions’ and ‘$p$ spatial wavefunctions’. In this context, two electrons with opposite spins may occupy one spatial orbital. A more rigorous way of saying this would be to state that a given spatial wavefunction may be paired with two different spin wavefunctions (one corresponding to a ‘spin up’ electron and one to a ‘spin down’ electron).
An electronic state is defined by the electron configuration of the system, and by the quantum numbers of each electron contributing to that configuration. Each electronic state corresponds to one of the energy levels of the molecule. These energy levels will obviously depend on the molecular orbitals that are occupied, and their energies, but they also depend on the way in which the electrons within the various molecular orbitals interact with each other. Interactions between the electrons are essentially determined by the relative orientations of the magnetic moments associated with their orbital and spin angular momenta, which is where the dependence on quantum numbers comes in. A given electron configuration will often give rise to a number of different electronic states if the electrons may be arranged in different ways (with different quantum numbers) within the occupied orbitals.
Last year you were introduced to the idea of atomic states, and learnt how to label the states arising from a given electron configuration using term symbols of the form $^{2S+1}L_J$. Term symbols of this form define the spin, orbital and total angular momenta of the state, which in turn determine its energy. Molecular states, containing contributions from a number of molecular orbitals, are more complicated. For example, a given molecular orbital will generally contain contributions from several different atomic orbitals, and as a result, electrons cannot easily be assigned an $l$ quantum number. Instead of using term symbols, molecular states are usually labeled according to their symmetry (the exception to this is linear molecules, for which conventional term symbols may still be used, albeit with a few modifications from the atomic case).
We can determine the symmetry of an electronic state by taking the direct product of the irreducible representations for all of the electrons involved in that state (the irreducible representation for each electron is simply the irreducible representation for the molecular orbital that it occupies). Usually we need only consider unpaired electrons. Closed shell species, in which all electrons are paired, almost always belong to the totally symmetric irreducible representation in the point group of the molecule.
As an example, consider the ground state of butadiene, which belongs to the $C_{2h}$ point group. Since all electrons are paired, the overall symmetry of the state is $A_g$, and the label for the state once the spin multiplicity is included is $^1A_g$. We could have arrived at the same result by taking the direct product of the irreducible representations for each electron. There are two electrons in orbitals with $A_u$ symmetry, and two in orbitals with $B_g$ symmetry, so overall we have:
$A_u \otimes A_u \otimes B_g \otimes B_g = A_g \tag{26.2}$
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.25%3A_Summary_of_applying_group_theory_to_molecular_motions.txt
|
In our final application of group theory, we will investigate the way in which symmetry considerations influence the interaction of light with matter. We have already used group theory to learn about the molecular orbitals in a molecule. In this section we will show that it may also be used to predict which electronic states may be accessed by absorption of a photon. We may also use group theory to investigate how light may be used to excite the various vibrational modes of a polyatomic molecule.
Last year, you were introduced to spectroscopy in the context of electronic transitions in atoms. You learned that a photon of the appropriate energy is able to excite an electronic transition in an atom, subject to the following selection rules:
$\begin{array}{rcl} \Delta n & = & \text{integer} \ \Delta l & = & \pm 1 \ \Delta L & = & 0, \pm 1 \ \Delta S & = & 0 \ \Delta J & = & 0, \pm 1; J=0 \not \leftrightarrow J=0 \end{array} \tag{27.1}$
What you may not have learned is where these selection rules come from. In general, different types of spectroscopic transition obey different selection rules. The transitions you have come across so far involve changing the electronic state of an atom, and involve absorption of a photon in the UV or visible part of the electromagnetic spectrum. There are analogous electronic transitions in molecules, which we will consider in more detail shortly. Absorption of a photon in the infrared (IR) region of the spectrum leads to vibrational excitation in molecules, while photons in the microwave (MW) region produce rotational excitation. Each type of excitation obeys its own selection rules, but the general procedure for determining the selection rules is the same in all cases. It is simply to determine the conditions under which the probability of a transition is not identically zero.
The first step in understanding the origins of selection rules must therefore be to learn how transition probabilities are calculated. This requires some quantum mechanics.
Last year, you learned about operators, eigenvalues and eigenfunctions in quantum mechanics. You know that if a function is an eigenfunction of a particular operator, then operating on the eigenfunction with the operator will return the observable associated with that state, known as the eigenvalue (i.e. $\hat{A} \Psi = a \Psi$). What you may not know is that operating on a function that is NOT an eigenfunction of the operator leads to a change in state of the system. In the transitions we will be considering, the molecule interacts with the electric field of the light (as opposed to NMR spectroscopy, in which the nuclei interact with the magnetic field of the electromagnetic radiation). These transitions are called electric dipole transitions, and the operator we are interested in is the electric dipole operator, usually given the symbol $\hat{\boldsymbol{\mu}}$, which describes the electric field of the light.
If we start in some initial state $\Psi_i$, operating on this state with $\hat{\boldsymbol{\mu}}$ gives a new state, $\Psi' = \hat{\boldsymbol{\mu}} \Psi_i$. If we want to know the probability of ending up in some particular final state $\Psi_f$, the probability amplitude is simply given by the overlap integral between $\Psi'$ and $\Psi_f$. This probability amplitude is called the transition dipole moment, and is given the symbol $\boldsymbol{\mu}_{fi}$.
$\boldsymbol{\mu}_{fi} = \langle\Psi_f | \Psi'\rangle = \langle\Psi_f | \hat{\boldsymbol{\mu}} | \Psi_i\rangle \tag{27.2}$
Physically, the transition dipole moment may be thought of as describing the ‘kick’ the electron receives or imparts to the electric field of the light as it undergoes a transition. The transition probability is given by the square of the probability amplitude.
$P_{fi} = |\boldsymbol{\mu}_{fi}|^2 = |\langle\Psi_f | \hat{\boldsymbol{\mu}} | \Psi_i\rangle|^2 \tag{27.3}$
Hopefully it is clear that in order to determine the selection rules for an electric dipole transition between states $\Psi_i$ and $\Psi_f$, we need to find the conditions under which $\boldsymbol{\mu}_{fi}$ can be non-zero. One way of doing this would be to write out the equations for the two wavefunctions (which are functions of the quantum numbers that define the two states) and the electric dipole moment operator, and just churn through the integrals. By examining the result, it would then be possible to decide what restrictions must be imposed on the quantum numbers of the initial and final states in order for a transition to be allowed, leading to selection rules of the type listed above for atoms. However, many selection rules may be derived with a lot less work, based simply on symmetry considerations.
In section $17$, we showed how to use group theory to determine whether or not an integral may be non-zero. This forms the basis of our consideration of selection rules.
Electronic transitions in molecules
Assume that we have a molecule in some initial state $\Psi_i$. We want to determine which final states $\Psi_f$ can be accessed by absorption of a photon. Recall that for an integral to be non-zero, the representation for the integrand must contain the totally symmetric irreducible representation. The integral we want to evaluate is
$\hat{\boldsymbol{\mu}}_{fi} = \int \Psi_f^* \hat{\boldsymbol{\mu}} \Psi_i d\tau \tag{27.4}$
so we need to determine the symmetry of the function $\Psi_f^* \hat{\boldsymbol{\mu}} \Psi_i$. As we learned in Section $18$, the product of two functions transforms as the direct product of their symmetry species, so all we need to do to see if a transition between two chosen states is allowed is work out the symmetry species of $\Psi_f$, $\hat{\boldsymbol{\mu}}$ and $\Psi_i$ , take their direct product, and see if it contains the totally symmetric irreducible representation for the point group of interest. Equivalently (as explained in Section $18$), we can take the direct product of the irreducible representations for $\hat{\boldsymbol{\mu}}$ and $\Psi_i$ and see if it contains the irreducible representation for $\Psi_f$. This is best illustrated using a couple of examples.
Earlier in the course, we learned how to determine the symmetries of molecular orbitals. The symmetry of an electronic state is found by identifying any unpaired electrons and taking the direct product of the irreducible representations of the molecular orbitals in which they are located. The ground state of a closed-shell molecule, in which all electrons are paired, always belongs to the totally symmetric irreducible representation$^7$. As an example, the electronic ground state of $NH_3$, which belongs to the $C_{3v}$ point group, has $A_1$ symmetry. To find out which electronic states may be accessed by absorption of a photon, we need to determine the irreducible representations for the electric dipole operator $\hat{\boldsymbol{\mu}}$. Light that is linearly polarized along the $x$, $y$, and $z$ axes transforms in the same way as the functions $x$, $y$, and $z$ in the character table$^8$. From the $C_{3v}$ character table, we see that $x$- and $y$-polarized light transforms as $E$, while $z$-polarized light transforms as $A_1$. Therefore:
1. For $x$- or $y$-polarized light, $\Gamma_{\hat{\boldsymbol{\mu}}} \otimes \Gamma_{\Psi_i}$ transforms as $E \otimes A_1 = E$. This means that absorption of $x$- or $y$-polarized light by ground-state $NH_3$ (see figure below left) will excite the molecule to a state of $E$ symmetry.
2. For $z$-polarized light, $\Gamma_{\hat{\boldsymbol{\mu}}} \otimes \Gamma_{\Psi_i}$ transforms as $A_1 \otimes A_1 = A_1$. Absorption of $z$-polarized light by ground state $NH_3$ (see figure below right) will excite the molecule to a state of $A_1$ symmetry.
Of course, the photons must also have the appropriate energy, in addition to having the correct polarization to induce a transition.
We can carry out the same analysis for $H_2O$, which belongs to the $C_{2v}$ point group. We showed previously that $H_2O$ has three molecular orbitals of $A_1$ symmetry, two of $B_1$ symmetry, and one of $B_2$ symmetry, with the ground state having $A_1$ symmetry. In the $C_{2v}$ point group, $x$-polarized light has $B_1$ symmetry, and can therefore be used to excite electronic states of this symmetry; $y$-polarized light has $B_2$ symmetry, and may be used to access the $B_2$ excited state; and $z$-polarized light has $A_1$ symmetry, and may be used to access higher lying $A_1$ states. Consider our previous molecular orbital diagram for $H_2O$.
The electronic ground state has two electrons in a $B_2$ orbital, giving a state of $A_1$ symmetry ($B_2 \otimes B_2 = A_1$). The first excited electronic state has the configuration $(1B_2)^1(3A_1)^1$ and its symmetry is $B_2 \otimes A_1 = B_2$. It may be accessed from the ground state by a $y$-polarized photon. The second excited state is accessed from the ground state by exciting an electron to the $2B_1$ orbital. It has the configuration $(1B_2)^1(2B_1)^1$, its symmetry is $B_2 \otimes B_1 = A_2$. Since neither $x$-, $y$- or $z$-polarized light transforms as $A_2$, this state may not be excited from the ground state by absorption of a single photon.
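Because all the irreducible representations of $C_{2v}$ are one-dimensional, direct products reduce to character-by-character multiplication, and the whole selection-rule argument above can be checked with a few lines of Python (an illustrative sketch, not a general-purpose group theory tool):

```python
# Direct-product selection rule check in C2v. All irreps are
# one-dimensional, so a direct product is just character-by-character
# multiplication, and 'contains' reduces to equality.
irreps = {"A1": (1,  1,  1,  1),
          "A2": (1,  1, -1, -1),
          "B1": (1, -1,  1, -1),
          "B2": (1, -1, -1,  1)}
mu = {"x": "B1", "y": "B2", "z": "A1"}   # symmetry of the dipole components

def product(a, b):
    chars = tuple(x * y for x, y in zip(irreps[a], irreps[b]))
    return next(k for k, v in irreps.items() if v == chars)

def allowed(initial, final):
    """Polarizations (if any) that connect the two states."""
    return [pol for pol, g in mu.items() if product(g, initial) == final]

print(allowed("A1", "B2"))   # ['y']  - accessible with y-polarized light
print(allowed("A1", "A2"))   # []     - one-photon forbidden
```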
Vibrational transitions in molecules
Similar considerations apply for vibrational transitions. Light polarized along the $x$, $y$, and $z$ axes of the molecule may be used to excite vibrations with the same symmetry as the $x$, $y$ and $z$ functions listed in the character table.
For example, in the $C_{2v}$ point group, $x$-polarized light may be used to excite vibrations of $B_1$ symmetry, $y$-polarized light to excite vibrations of $B_2$ symmetry, and $z$-polarized light to excite vibrations of $A_1$ symmetry. In $H_2O$, we would use $z$-polarized light to excite the symmetric stretch and bending modes, and $x$-polarized light to excite the asymmetric stretch. Shining $y$-polarized light onto a molecule of $H_2O$ would not excite any vibrational motion.
Raman Scattering
If there are vibrational modes in the molecule that may not be accessed using a single photon, it may still be possible to excite them using a two-photon process known as Raman scattering$^9$. An energy level diagram for Raman scattering is shown below.
The first photon excites the molecule to some high-lying intermediate state, known as a virtual state. Virtual states are not true stationary states of the molecule (i.e. they are not eigenfunctions of the molecular Hamiltonian), but they can be thought of as stationary states of the ‘photon + molecule’ system. These types of states are extremely short lived, and will quickly emit a photon to return the system to a stable molecular state, which may be different from the original state. Since there are two photons (one absorbed and one emitted) involved in Raman scattering, which may have different polarizations, the transition dipole for a Raman transition transforms as one of the Cartesian products $x^2$, $y^2$, $z^2$, $xy$, $xz$, $yz$ listed in the character tables.
Vibrational modes that transform as one of the Cartesian products may be excited by a Raman transition, in much the same way as modes that transform as $x$, $y$, or $z$ may be excited by a one-photon vibrational transition.
In $H_2O$, all of the vibrational modes are accessible by ordinary one-photon vibrational transitions. However, they may also be accessed by Raman transitions. The Cartesian products transform as follows in the $C_{2v}$ point group.
$\begin{array}{clcl} A_1 & x^2, y^2, z^2 & B_1 & xz \ A_2 & xy & B_2 & yz \end{array} \tag{27.5}$
The symmetric stretch and the bending vibration of water, both of $A_1$ symmetry, may therefore be excited by any Raman scattering process involving two photons of the same polarization ($x$-, $y$- or $z$-polarized). The asymmetric stretch, which has $B_1$ symmetry, may be excited in a Raman process in which one photon is $x$-polarized and the other $z$-polarized.
$^7$It is important not to confuse molecular orbitals (the energy levels that individual electrons may occupy within the molecule) with electronic states (which arise from the different possible arrangements of all the molecular electrons amongst the molecular orbitals). For example, the electronic states of $NH_3$ are NOT the same thing as the molecular orbitals we derived earlier in the course. Those orbitals were an incomplete set, based only on the valence $s$ electrons in the molecule; inclusion of the $p$ electrons is required for a full treatment of the electronic states. The $H_2O$ example above should hopefully clarify this point.
$^8$‘$x$-polarized’ means that the electric vector of the light (an electromagnetic wave) oscillates along the direction of the $x$ axis.
$^9$You will cover Raman scattering (also known as Raman spectroscopy) in more detail in later courses. The aim here is really just to alert you to its existence and to show how it may be used to access otherwise inaccessible vibrational modes.
|
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/1.27%3A_Spectroscopy_-_Interaction_of_Atoms_and_Molecules_with_Light.txt
|
Hopefully this course has given you a reasonable introduction to the qualitative description of molecular symmetry, and also to the way in which it can be used quantitatively within the context of group theory to predict important molecular properties. The main things you should have learnt in this text are:
1. How to identify the symmetry elements possessed by a molecule and assign it to a point group.
2. The consequences of symmetry for chirality and polarity of molecules.
3. The effect of applying two or more symmetry operations consecutively (group multiplication).
4. How to construct a matrix representation of a group, starting from a suitable set of basis functions.
5. How to determine the irreducible representations spanned by a basis set, and construct symmetry adapted linear combinations (SALCs) of the original basis functions that transform as the irreducible representations of the group.
6. How to construct molecular orbitals by taking linear combinations of SALCs of the same symmetry species.
7. How to set up and solve the secular equations for the molecule in order to find the molecular energy levels and orbital coefficients – “Extra for experts”, though you will cover this in later courses
8. How to determine the symmetries of the various modes of motion (translational, rotational and vibrational) of a polyatomic molecule, and the symmetries of individual vibrational modes.
9. How to determine the atomic displacements in a given vibrational mode by constructing SALCs in the $3N$ Cartesian basis.
10. How to determine atomic displacements in stretching and bending vibrations using internal coordinates.
11. The consequences of symmetry for the selection rules governing excitation to different electronic and vibrational states.
Contributors and Attributions
Claire Vallance (University of Oxford)
1.29: Appendix A
Proof that the character of a matrix representative is invariant under a similarity transform
A property of traces of matrix products is that they are invariant under cyclic permutation of the matrices.
i.e. $tr \begin{bmatrix} ABC \end{bmatrix} = tr \begin{bmatrix} BCA \end{bmatrix} = tr \begin{bmatrix} CAB \end{bmatrix}$. For the character of a matrix representative of a symmetry operation $g$, we therefore have:
$\chi(g) = tr \begin{bmatrix} \Gamma(g) \end{bmatrix} = tr \begin{bmatrix} C \Gamma'(g) C^{-1} \end{bmatrix} = tr \begin{bmatrix} \Gamma'(g) C^{-1} C \end{bmatrix} = tr \begin{bmatrix} \Gamma'(g) \end{bmatrix} = \chi'(g) \tag{29.1}$
The trace of the similarity transformed representative is therefore the same as the trace of the original representative.
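A quick numerical sanity check of this result (illustrative Python, using random matrices rather than genuine representatives):

```python
import numpy as np

# Random stand-ins for a representative G and a change of basis C.
rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4))
C = rng.normal(size=(4, 4))          # almost surely invertible

similar = C @ G @ np.linalg.inv(C)   # similarity transform of G
assert np.isclose(np.trace(similar), np.trace(G))

# The underlying cyclic-permutation property tr[ABG] = tr[BGA]:
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
assert np.isclose(np.trace(A @ B @ G), np.trace(B @ G @ A))
```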
Proof that the characters of two symmetry operations in the same class are identical
The formal requirement for two symmetry operations $g$ and $g'$ to be in the same class is that there must be some symmetry operation $f$ of the group such that $g' = f^{-1} gf$ (the elements $g$ and $g'$ are then said to be conjugate). If we consider the characters of $g$ and $g'$ we find:
$\chi(g') = tr \begin{bmatrix} \Gamma(g') \end{bmatrix} = tr \begin{bmatrix} \Gamma^{-1}(f) \Gamma(g) \Gamma(f) \end{bmatrix} = tr \begin{bmatrix} \Gamma(g) \Gamma(f) \Gamma^{-1}(f) \end{bmatrix} = tr \begin{bmatrix} \Gamma(g) \end{bmatrix} = \chi(g) \tag{29.2}$
The characters of $g$ and $g'$ are identical.
Proof of the Variation Theorem
The variation theorem states that given a system with a Hamiltonian $H$, then if $\phi$ is any normalized, well-behaved function that satisfies the boundary conditions of the Hamiltonian, then
$\langle\phi | H | \phi\rangle \geq E_0 \tag{29.3}$
where $E_0$ is the true value of the lowest energy eigenvalue of $H$. This principle allows us to calculate an upper bound for the ground state energy by finding the trial wavefunction $\phi$ for which the integral is minimized (hence the name; trial wavefunctions are varied until the optimum solution is found). Let us first verify that the variational principle is indeed correct.
We first define an integral
$\begin{array}{rcll} I & = & \langle\phi | H - E_0 | \phi\rangle & \ & = & \langle\phi | H | \phi\rangle - \langle\phi | E_0 | \phi\rangle & \ & = & \langle\phi | H | \phi\rangle - E_0 \langle\phi | \phi\rangle & \ & = & \langle\phi | H | \phi\rangle - E_0 & \text{since} \: \phi \: \text{is normalized} \end{array} \tag{29.4}$
If we can prove that $I \geq 0$ then we have proved the variation theorem.
Let $\Psi_i$ and $E_i$ be the true eigenfunctions and eigenvalues of $H$, so $H \Psi_i = E_i \Psi_i$. Since the eigenfunctions $\Psi_i$ form a complete basis set for the space spanned by $H$, we can expand any wavefunction $\phi$ in terms of the $\Psi_i$ (so long as $\phi$ satisfies the same boundary conditions as $\Psi_i$).
$\phi = \sum_k a_k \Psi_k \tag{29.5}$
Substituting this function into our integral $I$ gives
$\begin{array}{rcl} I & = & \left \langle \sum_k a_k \Psi_k | H-E_0 | \sum_j a_j \Psi_j \right \rangle \ & = & \langle\sum_k a_k \Psi_k | \sum_j (H-E_0) a_j \Psi_j\rangle \end{array} \tag{29.6}$
If we now use $H \Psi_j = E_j \Psi_j$, we obtain
$\begin{array}{rcl} I & = & \langle\sum_k a_k \Psi_k | \sum_j a_j (E_j - E_0) \Psi_j\rangle \ & = & \sum_k \sum_j a_k^* a_j (E_j - E_0) \langle\Psi_k | \Psi_j\rangle \ & = & \sum_k \sum_j a_k^* a_j (E_j - E_0) \delta_{jk} \end{array} \tag{29.7}$
We now perform the sum over $j$, losing all terms except the $j = k$ term, to give
$\begin{array}{rcl} I & = & \sum_k a_k^* a_k (E_k - E_0) \ & = & \sum_k |a_k|^2 (E_k- E_0) \end{array} \tag{29.8}$
Since $E_0$ is the lowest eigenvalue, $E_k - E_0$ must be greater than or equal to zero, as must $|a_k|^2$. This means that all terms in the sum are non-negative and $I \geq 0$ as required.
For wavefunctions that are not normalized, the variational integral becomes:
$\frac{\langle\phi | H | \phi\rangle}{\langle\phi | \phi\rangle} \geq E_0 \tag{29.9}$
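As a concrete illustration, consider a particle in a one-dimensional box with the trial function $\phi(x) = x(L - x)$, which is well-behaved and vanishes at the walls as required. The sketch below (a symbolic calculation in units where $\hbar = m = L = 1$, a choice made purely for convenience) evaluates the ratio in Equation 29.9 and confirms that it lies above the exact ground-state energy $\pi^2/2$:

```python
import sympy as sp

x = sp.symbols('x')
phi = x * (1 - x)                 # trial function; phi(0) = phi(1) = 0

# <phi|phi> and <phi|H|phi>; with V = 0 inside the box, the kinetic energy
# can be written as (1/2) * integral of |phi'|^2 after integration by parts
norm = sp.integrate(phi**2, (x, 0, 1))
kinetic = sp.Rational(1, 2) * sp.integrate(sp.diff(phi, x)**2, (x, 0, 1))

E_trial = kinetic / norm          # eq. (29.9) for an unnormalized phi
E_exact = sp.pi**2 / 2            # true ground state: pi^2 hbar^2 / (2 m L^2)

print(E_trial, sp.N(E_exact))     # 5 versus ~4.9348
```

The trial energy (5 in these units) lies about 1.3% above the true value, showing how even a crude one-term guess already gives a useful upper bound.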
Derivation of the secular equations – the general case of the linear variation method
In the study of molecules, the variation principle is often used to determine the coefficients in a linear variation function, a linear combination of $n$ linearly independent functions $(f_1, f_2, \ldots, f_n)$ (often atomic orbitals) that satisfy the boundary conditions of the problem, i.e. $\phi = \sum_i c_i f_i$. The coefficients $c_i$ are parameters to be determined by minimizing the variational integral. In this case, we have:
$\begin{array}{rcll} \langle\phi | H | \phi\rangle & = & \langle\sum_i c_i f_i | H | \sum_j c_j f_j\rangle & \ & = & \sum_i \sum_j c_i^* c_j \langle f_i | H | f_j\rangle & \ & = & \sum_i \sum_j c_i^* c_j H_{ij} \end{array} \tag{29.10}$
where $H_{ij}$ is the Hamiltonian matrix element.
$\begin{array}{rcll} \langle\phi | \phi\rangle & = & \langle\sum_i c_i f_i | \sum_j c_j f_j\rangle & \ & = & \sum_i \sum_j c_i^* c_j \langle f_i | f_j\rangle & \ & = & \sum_i \sum_j c_i^* c_j S_{ij} \end{array} \tag{29.11}$
where $S_{ij}$ is the overlap matrix element.
The variational energy is therefore
$E = \dfrac{\sum_i \sum_j c_i^* c_j H_{ij}}{\sum_i \sum_j c_i^* c_j S_{ij}} \tag{29.12}$
which rearranges to give
$E \sum_i \sum_j c_i^* c_j S_{ij} = \sum_i \sum_j c_i^* c_j H_{ij} \tag{29.13}$
We want to minimize the energy with respect to the linear coefficients $c_i$, requiring that $\dfrac{\partial E}{\partial c_i} = 0$ for all $i$. Differentiating both sides of the above expression gives:
$\frac{\partial E}{\partial c_k}\Sigma_i \Sigma_j c_i^* c_j S_{ij} + E \Sigma_i \Sigma_j \begin{bmatrix} \frac{\partial c_i^*}{\partial c_k} c_j + \frac{\partial c_j}{\partial c_k} c_i^* \end{bmatrix} S_{ij} = \Sigma_i \Sigma_j \begin{bmatrix} \frac{\partial c_i^*}{\partial c_k}c_j + \frac{\partial c_j}{\partial c_k}c_i^* \end{bmatrix} H_{ij} \tag{29.14}$
Since $\frac{\partial c_i^*}{\partial c_k} = \delta_{ik}$ and $S_{ij} = S_{ji}$, $H_{ij} = H_{ji}$, we have
$\frac{\partial E}{\partial c_k} \Sigma_i \Sigma_j c_i^* c_j S_{ij} + 2E \Sigma_i c_i S_{ik} = 2 \Sigma_i c_i H_{ik} \tag{29.15}$
When $\frac{\partial E}{\partial c_k} = 0$, this gives
$\begin{array}{cll} \boxed{\Sigma_i c_i (H_{ik} - ES_{ik}) = 0} & \text{for all k} & \text{SECULAR EQUATIONS} \end{array} \tag{29.16}$
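In matrix notation the secular equations read $\textbf{Hc} = E \textbf{Sc}$, a generalized eigenvalue problem that is routinely solved numerically. A minimal sketch is given below; the matrix elements are illustrative numbers for a two-function basis (roughly a Hückel-type model with a small overlap), not values taken from any system discussed here:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative Hamiltonian and overlap matrices for a two-function basis
H = np.array([[-10.0,  -1.0],
              [ -1.0, -10.0]])
S = np.array([[1.0, 0.2],
              [0.2, 1.0]])

# eigh with two arguments solves the generalized problem H c = E S c
energies, coeffs = eigh(H, S)

print(energies)      # variational energies, lowest first
print(coeffs[:, 0])  # coefficients c_i of the lowest-energy combination
```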
Non axial groups
$\begin{array}{l|c} C_1 & E \ \hline A & 1 \end{array} \label{30.1}$
$\begin{array}{l|cc|l|l} C_s & E & \sigma_h & & \ \hline A' & 1 & 1 & x, y , R_z & x^2, y^2, z^2, xy \ A'' & 1 & -1 & z, R_x, R_y & yz, xz \end{array} \label{30.2}$
$\begin{array}{l|cc|l|l} C_i & E & i & & \ \hline A_g & 1 & 1 & R_x, R_y, R_z & x^2, y^2, z^2, xy, xz, yz \ A_u & 1 & -1 & x, y, z & \end{array} \label{30.3}$
$C_n$ groups
$\begin{array}{l|cc|l|l} C_2 & E & C_2 & & \ \hline A & 1 & 1 & z, R_z & x^2, y^2, z^2, xy \ B & 1 & -1 & x, y , R_x, R_y & yz, xz \end{array} \label{30.4}$
$\begin{array}{l|c|l|l} C_3 & E \: \: \: \: \: C_3 \: \: \: \: \: C_3^2 & & c=e^{2\pi i/3} \ \hline A & 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 & z, R_z & x^2 + y^2, z^2 \ E & \begin{Bmatrix} 1 & c & c^* \ 1 & c^* & c \end{Bmatrix} & x, y, R_x, R_y & x^2-y^2, xy, xz, yz \end{array} \label{30.5}$
$\begin{array}{l|c|l|l} C_4 & E \: \: \: \: \: C_4 \: \: \: \: \: C_2 \: \: \: \: \: C_4^3 & & \ \hline A & 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 & z, R_z & x^2 + y^2, z^2 \ B & 1 \: \: \: \: -1 \: \: \: \: \: \: \: \: 1 \: \: \: \: -1 & & x^2 - y^2, xy \ E & \begin{Bmatrix} 1 & i & -1 & -i \ 1 & -i & -1 & i \end{Bmatrix} & x, y, R_x, R_y & yz, xz \end{array} \label{30.6}$
$C_{nv}$ groups
$\begin{array}{l|cccc|l|l} C_{2v} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz) & & \ \hline A_1 & 1 & 1 & 1 & 1 & z & x^2, y^2, z^2 \ A_2 & 1 & 1 & -1 & -1 & R_z & xy \ B_1 & 1 & -1 & 1 & -1 & x, R_y & xz \ B_2 & 1 & -1 & -1 & 1 & y, R_x & yz \end{array} \label{30.7}$
$\begin{array}{l|ccc|l|l} C_{3v} & E & 2C_3 & 3\sigma_v & & \ \hline A_1 & 1 & 1 & 1 & z & x^2 + y^2, z^2 \ A_2 & 1 & 1 & -1 & R_z & \ E & 2 & -1 & 0 & x, y, R_x, R_y & x^2 - y^2, xy, xz, yz \end{array} \label{30.8}$
$C_{nh}$ groups
$\begin{array}{l|cccc|l|l} C_{2h} & E & C_2 & i & \sigma_h & & \ \hline A_g & 1 & 1 & 1 & 1 & R_z & x^2, y^2, z^2, xy \ B_g & 1 & -1 & 1 & -1 & R_x, R_y & xz, yz \ A_u & 1 & 1 & -1 & -1 & z & \ B_u & 1 & -1 & -1 & 1 & x, y & \end{array} \label{30.9}$
$\begin{array}{l|c|l|l}C_{3h} & E \: \: \: \: \: C_3 \: \: \: \: \: C_3^2 \: \: \: \: \: \sigma_h \: \: \: \: \: S_3 \: \: \: \: \: S_3^5 & & c = e^{2\pi i/3} \ \hline A' & 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 & R_z & x^2 + y^2, z^2 \ E' & \begin{Bmatrix} 1 & \: \: c & \: \: c^* & \: \: 1 & \: \: c & \: \: c^* \ 1 & \: \: c^* & \: \: c & \: \: 1 & \: \: c^* & \: \: c \end{Bmatrix} & x, y & x^2 - y^2, xy \ A'' & 1 \: \: \: \: \: \: 1 \: \: \: \: \: \: 1 \: \: \: \: -1 \: \: \: \: -1 \: \: \: \: \: -1 & z & \ E'' & \begin{Bmatrix} 1 & c & c^* & -1 & -c & -c^* \ 1 & c^* & c & -1 & -c^* & -c \end{Bmatrix} & R_x, R_y & xz, yz \end{array} \label{30.10}$
$D_n$ groups
$\begin{array}{l|cccc|l|l} D_2 & E & C_2(z) & C_2(y) & C_2(x) & & \ \hline A & 1 & 1 & 1 & 1 & & x^2, y^2, z^2 \ B_1 & 1 & 1 & -1 & -1 & z, R_z & xy \ B_2 & 1 & -1 & 1 & -1 & y, R_y & xz \ B_3 & 1 & -1 & -1 & 1 & x, R_x & yz \end{array} \label{30.11}$
$\begin{array}{l|ccc|l|l} D_3 & E & 2C_3 & 3C_2 & & \ \hline A_1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \ A_2 & 1 & 1 & -1 & z, R_z & \ E & 2 & -1 & 0 & x, y, R_x, R_y & x^2 - y^2, xy, xz, yz \end{array} \label{30.12}$
$D_{nh}$ groups
$\begin{array}{l|cccccccc|l|l} D_{2h} & E & C_2(z) & C_2(y) & C_2(x) & i & \sigma(xy) & \sigma(xz) & \sigma(yz) & & \ \hline A_g & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & x^2, y^2, z^2 \ B_{1g} & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & R_z & xy \ B_{2g} & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & R_y & xz \ B_{3g} & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & R_x & yz \ A_u & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & & \ B_{1u} & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 & z & \ B_{2u} & 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 & y & \ B_{3u} & 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 & x & \end{array} \label{30.13}$
$D_{nd}$ groups
$\begin{array}{l|ccccc|l|l} D_{2d} & E & 2S_4 & C_2 & 2C_2' & 2\sigma_d & & \ \hline A_1 & 1 & 1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \ A_2 & 1 & 1 & 1 & -1 & -1 & R_z & \ B_1 & 1 & -1 & 1 & 1 & -1 & & x^2 - y^2 \ B_2 & 1 & -1 & 1 & -1 & 1 & z & xy \ E & 2 & 0 & -2 & 0 & 0 & x, y, R_x, R_y & xz, yz \end{array} \label{30.14}$
$\begin{array}{l|cccccc|l|l} D_{3d} & E & 2C_3 & 3C_2 & i & 2S_6 & 3\sigma_d & & \ \hline A_{1g} & 1 & 1 & 1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \ A_{2g} & 1 & 1 & -1 & 1 & 1 & -1 & R_z & \ E_g & 2 & -1 & 0 & 2 & -1 & 0 & R_x, R_y & x^2 - y^2, xy, xz, yz \ A_{1u} & 1 & 1 & 1 & -1 & -1 & -1 & & \ A_{2u} & 1 & 1 & -1 & -1 & -1 & 1 & z & \ E_u & 2 & -1 & 0 & -2 & 1 & 0 & x, y & \end{array} \label{30.15}$
$C_{\infty v}$ and $D_{\infty h}$
$\begin{array}{l|cccccccc|l|l} D_{\infty h} & E & 2C_\infty^\Phi & \ldots & \infty \sigma_v & i & 2S_\infty^\Phi & \ldots & \infty C_2 & & \ \hline \Sigma_g^+ & 1 & 1 & \ldots & 1 & 1 & 1 & \ldots & 1 & & x^2 + y^2, z^2 \ \Sigma_g^- & 1 & 1 & \ldots & -1 & 1 & 1 & \ldots & -1 & R_z & \ \Pi_g & 2 & 2cos \Phi & \ldots & 0 & 2 & -2cos \Phi & \ldots & 0 & R_x, R_y & xz, yz \ \Delta_g & 2 & 2cos 2\Phi & \ldots & 0 & 2 & 2cos 2\Phi & \ldots & 0 & & x^2 - y^2, xy \ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & & \ \Sigma_u^+ & 1 & 1 & \ldots & 1 & -1 & -1 & \ldots & -1 & z & \ \Sigma_u^- & 1 & 1 & \ldots & -1 & -1 & -1 & \ldots & 1 & & \ \Pi_u & 2 & 2cos \Phi & \ldots & 0 & -2 & 2cos \Phi & \ldots & 0 & x, y & \ \Delta_u & 2 & 2cos 2\Phi & \ldots & 0 & -2 & -2cos 2\Phi & \ldots & 0 & & \ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & & \end{array} \label{30.16}$
$S_n$ groups
$\begin{array}{l|c|l|l} S_4 & E \: \: \: \: \: S_4 \: \: \: \: \: C_2 \: \: \: \: \: S_4^3 & & \ \hline A & 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 & R_z & x^2 + y^2, z^2 \ B & 1 \: \: \: \: -1 \: \: \: \: \: \: \: \: 1 \: \: \: \: -1 & z & x^2 - y^2, xy \ E & \begin{Bmatrix} 1 & i & -1 & -i \ 1 & -i & -1 & i \end{Bmatrix} & x, y, R_x, R_y & xz, yz \end{array} \label{30.17}$
$\begin{array}{l|c|l|l} S_6 & E \: \: \: \: \: C_3 \: \: \: \: \: C_3^2 \: \: \: \: \: i \: \: \: \: \: S_6^5 \: \: \: \: \: S_6 & & c=e^{2\pi i/3} \ \hline A_g & 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 & R_z & x^2 + y^2, z^2 \ E_g & \begin{Bmatrix} 1 & \: \: c & \: \: c^* & \: \: 1 & \: \: c & \: \: c^* \ 1 & \: \: c^* & \: \: c & \: \: 1 & \: \: c^* & \: \: c \end{Bmatrix} & R_x, R_y & x^2 - y^2, xy, xz, yz \ A_u & 1 \: \: \: \: \: \: 1 \: \: \: \: \: \: 1 \: \: \: \: -1 \: \: \: \: -1 \: \: \: \: \: -1 & z & \ E_u & \begin{Bmatrix} 1 & c & c^* & -1 & -c & -c^* \ 1 & c^* & c & -1 & -c^* & -c \end{Bmatrix} & x, y & \end{array} \label{30.18}$
Cubic groups
$\begin{array}{l|c|l|l} T & E \: \: \: 4C_3 \: \: \: 4C_3^2 \: \: \: 3C_2 & & c=e^{2\pi i/3} \ \hline A & 1 \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: 1 & & x^2 + y^2 + z^2 \ E & \begin{Bmatrix} 1 & c & c^* & 1 \ 1 & c^* & c & 1 \end{Bmatrix} & & 2z^2 - x^2 - y^2, x^2 - y^2 \ T & 3 \: \: \: \: \: 0 \: \: \: \: \: \: \: 0 \: \: \: -1 & R_x, R_y, R_z, x, y, z & xy, xz, yz \end{array} \label{30.19}$
$\begin{array}{l|ccccc|l|l} T_d & E & 8C_3 & 3C_2 & 6S_4 & 6\sigma_d & & \ \hline A_1 & 1 & 1 & 1 & 1 & 1 & & x^2 + y^2 + z^2 \ A_2 & 1 & 1 & 1 & -1 & -1 & & \ E & 2 & -1 & 2 & 0 & 0 & & 2z^2 - x^2 - y^2, x^2 - y^2 \ T_1 & 3 & 0 & -1 & 1 & -1 & R_x, R_y, R_z & \ T_2 & 3 & 0 & -1 & -1 & 1 & x, y, z & xy, xz, yz \end{array} \label{30.20}$
Direct product tables
For the point groups O and T$_d$ (and O$_h$)
$\begin{array}{llllll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{E} & \boldsymbol{T_1} & \boldsymbol{T_2} \ \boldsymbol{A_1} & A_1 & A_2 & E & T_1 & T_2 \ \boldsymbol{A_2} & & A_1 & E & T_2 & T_1 \ \boldsymbol{E} & & & A_1 + A_2 + E & T_1 + T_2 & T_1 + T_2 \ \boldsymbol{T_1} & & & & A_1 + E + T_1 + T_2 & A_2 + E + T_1 +T_2 \ \boldsymbol{T_2} & & & & & A_1 + E + T_1 + T_2 \end{array} \label{30.21}$
For the point groups D$_4$, C$_{4v}$, D$_{2d}$ (and $D_{4h} = D_4 \otimes C_i$)
$\begin{array}{llllll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{B_1} & \boldsymbol{B_2} & \boldsymbol{E} \ \boldsymbol{A_1} & A_1 & A_2 & B_1 & B_2 & E \ \boldsymbol{A_2} & & A_1 & B_2 & B_1 & E \ \boldsymbol{B_1} & & & A_1 & A_2 & E \ \boldsymbol{B_2} & & & & A_1 & E \ \boldsymbol{E} & & & & & A_1 + A_2 + B_1 + B_2 \end{array} \label{30.22}$
For the point groups D$_3$ and C$_{3v}$
$\begin{array}{llll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{E} \ \boldsymbol{A_1} & A_1 & A_2 & E \ \boldsymbol{A_2} & & A_1 & E \ \boldsymbol{E} & & & A_1 + A_2 + E \end{array} \label{30.23}$
For the point groups D$_6$, C$_{6v}$ and D$_{3h}^*$
$\begin{array}{lllllll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{B_1} & \boldsymbol{B_2} & \boldsymbol{E_1} & \boldsymbol{E_2} \ \boldsymbol{A_1} & A_1 & A_2 & B_1 & B_2 & E_1 & E_2 \ \boldsymbol{A_2} & & A_1 & B_2 & B_1 & E_1 & E_2 \ \boldsymbol{B_1} & & & A_1 & A_2 & E_2 & E_1 \ \boldsymbol{B_2} & & & & A_1 & E_2 & E_1 \ \boldsymbol{E_1} & & & & & A_1 + A_2 + E_2 & B_1 + B_2 + E_1 \ \boldsymbol{E_2} & & & & & & A_1 + A_2 + E_2 \end{array} \label{30.24}$
$^*$in D$_{3h}$ make the following changes in the above table
$\begin{array}{ll} \text{In table} & \text{In D}_{3h} \ A_1 & A_1' \ A_2 & A_2' \ B_1 & A_1'' \ B_2 & A_2'' \ E_1 & E'' \ E_2 & E' \end{array} \label{30.25}$
• 1.1: Thermodynamic Systems
A thermodynamic system—or just simply a system—is a portion of space with defined boundaries that separate it from its surroundings (see also the title picture of this book). The surroundings may include other thermodynamic systems or physical systems that are not thermodynamic systems.
• 1.2: Thermodynamic Variables
The system is defined and studied using parameters that are called variables. These variables are quantities that we can measure, such as pressure and temperature.
01: Systems and Variables
A thermodynamic system—or just simply a system—is a portion of space with defined boundaries that separate it from its surroundings (see also the title picture of this book). The surroundings may include other thermodynamic systems or physical systems that are not thermodynamic systems. A boundary may be a real physical barrier or a purely notional one. Typical examples of systems are reported in Figure $1$ below.$^1$
In the first case, a liquid is contained in a typical Erlenmeyer flask. The boundaries of the system are the glass walls of the flask. The second system is represented by the gas contained in a balloon. The boundary is a physical barrier also in this case, being the plastic of the balloon. The third case is that of a thunder cloud. The boundary is not a well-defined physical barrier, but rather some condition of pressure and chemical composition at the interface between the cloud and the atmosphere. Finally, the fourth case is the case of an open flame. In this case, the boundary is again non-physical, and possibly even harder to define than for a cloud. For example, we can choose to define the flame based on some temperature threshold, color criterion, or even some chemical one. Despite the lack of physical boundaries, the cloud and the flame—as portions of space containing matter—can each be defined as a thermodynamic system.
A system can exchange exclusively mass, exclusively energy, or both mass and energy with its surroundings. Depending on the boundaries’ ability to transfer these quantities, a system is defined as open, closed, or isolated. An open system exchanges both mass and energy. A closed system exchanges only energy, but not mass. Finally, an isolated system exchanges neither mass nor energy.
When a system exchanges mass or energy with its surroundings, some of its parameters (variables) change. For example, if a system loses mass to the surroundings, the number of molecules (or moles) in the system will decrease. Similarly, if a system absorbs some energy, one or more of its variables (such as its temperature) increase. Mass and energy can flow into the system or out of the system. Let’s consider mass exchange only. If some molecules of a substance leave the system, and then the same amount of molecules flow back into the system, the system will not be modified. We can count, for example, 100 molecules leaving a system and assign them the value of –100 in an outgoing process, and then observe the same 100 molecules going back into the system and assign them a value of +100. Regardless of the number of molecules present in the system in the first place, the overall balance will be –100 (from the outgoing process) +100 (from the ingoing process) = 0, which brings the system to its initial situation (mass has not changed). However, from a mathematical standpoint, we could have equally assigned the label +100 to the outgoing process and –100 to the ingoing one, and the overall total would have stayed the same: +100–100 = 0. Which of the two labels is best? For this case, it seems natural to define a mass going out of the system as negative (the system is losing it), and a mass going into the system as positive (the system is gaining it), but is it as straightforward for energy?
Table $1$
$\begin{array}{l|c|c} \text{Type of System} & \text{Mass} & \text{Energy (either heat or work)} \ \hline \text{Open} & \text{Y} & \text{Y} \ \text{Closed} & \text{N} & \text{Y} \ \text{Isolated} & \text{N} & \text{N} \end{array}$
Here is another example. Let’s consider a system that is composed of your body. When you exercise, you lose mass in the form of water (sweat) and CO2 (from respiration). This mass loss can be easily measured by stepping on a scale before and after exercise. The number you observe on the scale will go down. Hence you have lost weight. After exercise, you will reintegrate the lost mass by drinking and eating. If you have reinstated the same amount you lost, your weight will be the same as before the exercise (no weight loss). Nevertheless, which label do you attach to the amounts that you have lost and gained? Let’s say that you are running a 5 km race without drinking or eating, and you measure your weight dropping 2 kg after the race. After the race, you drink 1.5 kg of water and eat a 500 g energy bar. Overall you did not lose any weight, and it would seem reasonable to label the 2 kg that you’ve lost as negative (–2) and the 1.5 kg of water that you drank and the 500 g bar that you ate as positive (+1.5 +0.5 = +2). But is it the only way? After all, you didn’t gain or lose any weight, so why not call the 2 kg due to exercise +2 and the 2 kg that you’ve ingested –2? It might seem silly, but mathematically it would not make any difference: the total would still be zero. Now, let’s consider energy instead of mass. To run the 5 km race, you have spent 500 kcal, which you then reintegrate precisely by eating the energy bar. Which sign would you put in front of the kilocalories that you “burned” during the race? In principle, you’ve lost them, so if you want to be consistent, you should use a negative sign. But if you think about it, you’ve put quite an effort to “lose” those kilocalories, so it might not feel bad to assign them a positive sign instead. After all, it’s perfectly OK to say, “I’ve done a 500 kcal run today”, while it might sound quite awkward to say, “I’ve done a –500 kcal run today.” Our previous exercise with mass demonstrates that it doesn’t really matter which sign you put in front of the quantities. As long as you are consistent throughout the process, the signs will cancel out. If you’ve done a +500 kcal run, you’ve eaten a bar for –500 kcal, resulting in a total zero loss/gain. Alternatively, if you’ve done a –500 kcal run, you would have eaten a +500 kcal bar, for a total of again zero loss/gain.
These simple examples demonstrate that the sign that we assign to quantities that flow through a boundary is arbitrary (i.e., we can define it any way we want, as long as we are always consistent with ourselves). There is no best way to assign those signs. If you ask two different people, you might obtain two different answers. But we are scientists, and we must make sure to be rigorous. For this reason, chemists have established a convention for the signs that we will follow throughout this course. If we are consistent in following the convention, we are guaranteed to never make any mistake with the signs.
Definition: System-centric
The chemistry convention of the sign is system-centric:$^2$
• If something (energy or mass) goes into the system it has a positive sign (the system is gaining)
• If something (energy or mass) goes out of the system it has a negative sign (the system is losing)
If you want a trick to remember the convention, use the weight loss/gain during the exercise example above. You are the system, if you lose weight, the kilograms will be negative (–2 kg), while if you gain weight, they will be positive (+2 kg). Similarly, if you eat an energy bar, you are the system, and you will have increased your energy by +500 kcal (positive). In contrast, if you burned energy during exercise, you are the system, and you will have lost energy, hence –500 kcal (negative). If the system is a balloon filled with gas, and the balloon is losing mass, you are the balloon, and you are losing weight; hence the mass will be negative. If the balloon is absorbing heat (likely increasing its temperature and increasing its volume), you are the system, and you are gaining heat; hence heat will be positive.
1. The photos depicted in this figure are taken from Wikipedia: the Erlenmeyer flasks photo was taken by user Maytouch L., and distributed under CC-BY-SA license; the cloud photo was taken by user Mathew T Rader, and distributed under CC-BY-SA license; the flame picture was taken by user Oscar, and distributed under CC-BY-SA license; the balloon photo is in the public domain.
2. Notice that physicists use a different sign convention when it comes to thermodynamics. To eliminate confusion, I will not describe the physics convention here, but if you are reading about thermodynamics in a physics textbook, or if you are browsing the web and stumble on thermodynamics formulas (e.g., on Wikipedia), please be advised that some quantities, such as work, might have a different sign than the one used in this textbook. Obviously, the science will not change, but you need to be always consistent, so if you decide that you want to use the physics convention, make sure to always use the physics convention. In this course, on the other hand, we will always use the chemistry one, as introduced above.
The system is defined and studied using parameters that are called variables. These variables are quantities that we can measure, such as pressure and temperature. However, don’t be surprised if, on some occasions, you encounter some variable that is a little harder to measure directly, such as entropy. The variables depend only on the current state of the system, and therefore they define it. If I know the values of all the “relevant variables” of a system, I know the state of the system. The relationship between the variables is described by mathematical functions called state functions, while the “relevant variables” are called natural variables.
What are the “relevant variables” of a system? The answer to this question depends on the system, and it is not always straightforward. The simplest case is the case of an ideal gas, for which the natural variables are those that enter the ideal gas law and the corresponding equation:
$PV=nRT \label{1.2.1}$
Therefore, the natural variables for an ideal gas are the pressure P, the volume V, the number of moles n, and the temperature T, with R being the ideal gas constant. Recalling from general chemistry, R is a universal dimensional constant with the value R = 8.31 J/(mol K) in SI units.
We will use the ideal gas equation and its variables as an example to discuss variables and functions in this chapter. We will analyze more complicated cases in the next chapters. Variables can be classified according to numerous criteria, each with its advantages and disadvantages. A typical classification is:
• Physical variables ($P$, $V$, $T$ in the ideal gas law): independent of the chemical composition of the system.
• Chemical variables ($n$ in the ideal gas law): dependent on the chemical composition of the system.
Another useful classification is:
• Intensive variables ($P$, $T$ in the ideal gas law): independent of the physical size (extension) of the system.
• Extensive variables ($V$, $n$ in the ideal gas law): dependent on the physical size (extension) of the system.
When we deal with thermodynamic systems, it is more convenient to work with intensive variables. Luckily, it is relatively easy to convert extensive variables into intensive ones by just taking the ratio between the two of them. For an ideal gas, by taking the ratio between V and n, we obtained the intensive variable called molar volume:
$\overline{V}=\dfrac{V}{n}. \label{1.2.2}$
We can then recast Equation \ref{1.2.1} as:
$P\overline{V}=RT, \label{1.2.3}$
which is the preferred equation that we will use for the remainder of this course. The ideal gas equation connects the 3 variables pressure, molar volume, and temperature, reducing the number of independent variables to just 2. In other words, once 2 of the 3 variables are known, the other one can be easily obtained using these simple relations:
$P(T,\overline{V})=\dfrac{RT}{\overline{V}}, \label{1.2.4}$
$\overline{V}(T,P)=\dfrac{RT}{P}, \label{1.2.5}$
$T(P,\overline{V})=\dfrac{P\overline{V}}{R}. \label{1.2.6}$
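A minimal numerical sketch of Equations \ref{1.2.4} through \ref{1.2.6} is given below (the state values chosen are arbitrary; SI units throughout):

```python
R = 8.314  # ideal gas constant, J/(mol K)

def pressure(T, Vm):      # P(T, V-bar), eq. 1.2.4
    return R * T / Vm

def molar_volume(T, P):   # V-bar(T, P), eq. 1.2.5
    return R * T / P

def temperature(P, Vm):   # T(P, V-bar), eq. 1.2.6
    return P * Vm / R

Vm = molar_volume(298.15, 1.0e5)   # molar volume at 1 bar and 298.15 K
print(Vm)                          # ~0.0248 m^3/mol
print(pressure(298.15, Vm))        # recovers 1.0e5 Pa, as it must
```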
These equations define three state functions, each one expressed in terms of two independent natural variables. For example, Equation \ref{1.2.4} defines the state function called “pressure”, expressed as a function of temperature and molar volume. Similarly, Equation \ref{1.2.5} defines the “molar volume” as a function of temperature and pressure, and Equation \ref{1.2.6} defines the “temperature” as a function of pressure and molar volume. When we know the natural variables that define a state function, we can express the function using its total differential, for example for the pressure $P(T, \overline{V})$:
$dP=\left( \dfrac{\partial P}{\partial T} \right)dT + \left( \dfrac{\partial P}{\partial \overline{V}} \right)d\overline{V} \label{1.2.7}$
Recalling Schwarz’s theorem, the mixed partial second derivatives that can be obtained from Equation \ref{1.2.7} are the same:
$\dfrac{\partial^2 P}{\partial T \partial \overline{V}}=\dfrac{\partial}{\partial \overline{V}}\dfrac{\partial P}{\partial T}=\dfrac{\partial}{\partial T}\dfrac{\partial P}{\partial \overline{V}}=\dfrac{\partial^2 P}{\partial \overline{V} \partial T} \label{1.2.8}$
Which can be easily verified considering that:
$\dfrac{\partial}{\partial \overline{V}} \dfrac{\partial P}{\partial T} = \dfrac{\partial}{\partial \overline{V}} \left(\dfrac{R}{\overline{V}}\right) = -\dfrac{R}{\overline{V}^2} \label{1.2.9}$
and
$\dfrac{\partial}{\partial T} \dfrac{\partial P}{\partial \overline{V}} = \dfrac{\partial}{\partial T} \left(\dfrac{-RT}{\overline{V}^2}\right) = -\dfrac{R}{\overline{V}^2} \label{1.2.10}$
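The same check can be carried out symbolically; the short sketch below (using the sympy library, an illustrative tool choice) reproduces Equations \ref{1.2.9} and \ref{1.2.10}:

```python
import sympy as sp

T, Vm, R = sp.symbols('T V R', positive=True)
P = R * T / Vm   # ideal gas law for 1 mol, P(T, V-bar)

d1 = sp.diff(P, T, Vm)   # d/dV-bar of (dP/dT)
d2 = sp.diff(P, Vm, T)   # d/dT of (dP/dV-bar)

print(d1)                          # -R/V**2
print(sp.simplify(d1 - d2) == 0)   # True: Schwarz's theorem holds
```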
While for the ideal gas law all the variables are “well-behaved” and always satisfy Schwarz’s theorem, we will encounter some variables for which Schwarz’s theorem does not hold. Mathematically, if Schwarz’s theorem is violated (i.e., if the mixed second derivatives are not equal), then the corresponding differential cannot be integrated to give a state function: its integral depends on how the change is carried out, not just on the initial and final states. Thus, these functions are called path functions; that is, they depend on the path rather than the state. The most typical examples of path functions that we will encounter in the next chapters are heat ($Q$) and work ($W$). For these functions, we cannot define exact differentials $dQ$ and $dW$, and we must introduce a new notation to define their “inexact differentials” $đ Q$ and $đ W$.
We will return to exact and inexact differentials when we discuss heat and work, but for this chapter, it is crucial to notice the difference between a state function and a path function. A typical example to understand the difference between state and path functions is to consider the distance between two geographical locations. Let’s, for example, consider the distance between New York City and Los Angeles. If we fly straight from one city to the other, there are roughly 4,000 km between them. This “air distance” depends exclusively on the geographical location of the two cities. It stays constant regardless of the method of transportation I might use to travel between them. Since the cities’ positions depend uniquely on their latitudes and longitudes, the “air distance” is a state function, i.e., it is uniquely defined from a simple relationship between measurable variables. However, the “air distance” is not the distance that I will practically have to travel when I go from NYC to LA. Such a “travel distance” depends on the method of transportation that I decide to take (airplane vs. car vs. train vs. boat vs. …), and it will depend on many other factors, such as the choice of the road to be traveled (if going by car), the atmospheric conditions (if flying), and so on. A typical “travel distance” by car is, for example, about 4,500 km, which is about 12% more than the “air distance.” Indeed, we could even design a very inefficient road trip that avoids all highways and results in a “travel distance” of 8,000 km or even more (200% of the “air distance”). The “travel distance” is a clear example of a path function because it depends on the specific path that I decide to travel to go from NYC to LA. See Figure $1$.
• 2.1: What is Thermodynamics?
Thermodynamics is the branch of science that deals with heat and work, and their relation to energy. As the definition suggests, thermodynamics is concerned with two types of energy: heat and work.
• 2.2: The Zeroth Law of Thermodynamics
The mathematical definition that guarantees that thermal equilibrium is an equivalence relation is called the zeroth law of thermodynamics. The zeroth law of thermodynamics states that if two thermodynamic systems are each in thermal equilibrium with a third one, then they are in thermal equilibrium with each other.
• 2.3: Calculation of Heat
Heat (Q) is a property that gets transferred between substances. Similarly to work, the amount of heat that flows through a boundary can be measured, but its mathematical treatment is complicated because heat is a path function.
• 2.4: Calculation of Work
In the following sections, we will analyze how work is calculated in some prototypical situations commonly encountered in the thermodynamical treatment of systems.
02: Zeroth Law of Thermodynamics
Thermodynamics is the branch of science that deals with heat and work, and their relation to energy. As the definition suggests, thermodynamics is concerned with two types of energy: heat and work. A formal definition of these forms of energy is as follows:
• Work is exchanged if external parameters are changed during the process.
• Heat is exchanged if only internal parameters are changed during the process.
As we saw in chapter 1, heat and work are not “well-behaved” quantities because they are path functions. Although it might be simple to measure the amount of heat and/or work exchanged experimentally, these measured quantities cannot be used to define the state of a system. Since heat and work are path functions, their values depend directly on the methods used to transfer them (their paths). Understanding and quantifying these energy transfers is the reason why thermodynamics was developed in the first place. The origin of thermodynamics dates back to the seventeenth century when people began to use heat and work for technological applications. These early scientists needed a mathematical tool to understand how heat and work were related to each other, and how they were related to the other variables that they were able to measure, such as temperature and volume.
Before we even discuss the definition of energy and how it relates to heat and work, it is crucial to introduce the essential concept of temperature. Temperature is an intuitive concept that has a surprisingly complex definition at the microscopic level.$^1$ However, for all our purposes, it is not essential to have a microscopic definition of temperature, as long as we have the guarantee that this quantity can be measured unambiguously. In other words, we only need a mathematical definition of temperature that agrees with the physical existence of thermometers.
1. In fact, we will not even give a rigorous microscopic definition of temperature within this textbook.
2.02: The Zeroth Law of Thermodynamics
The mathematical definition that guarantees that thermal equilibrium is an equivalence relation is called the zeroth law of thermodynamics. The zeroth law of thermodynamics states that if two thermodynamic systems are each in thermal equilibrium with a third one, then they are in thermal equilibrium with each other. The law might appear trivial and possibly redundant, but it is a fundamental requirement for the mathematical formulation of thermodynamics, so it needs to be stated. The zeroth law can be summarized by the following simple mathematical relation:
Definition: Zeroth Law of Thermodynamics
If $T_A = T_B$, and $T_B = T_C$, then $T_A = T_C$.
Notice that when we state the zeroth law, it appears intuitive. However, this is not necessarily the case. Let’s, for example, consider a pot of boiling water at $P=1\;\mathrm{bar}$. Its temperature, $T_{H_2O}$, is about 373 K. Let’s now submerge in this water a coin made of wood and another coin made of metal. After some sufficient time, the wood coin will be in thermal equilibrium with the water, and its temperature $T_W = T_{H_2O}$. Similarly, the metal coin will also be in thermal equilibrium with the water, hence $T_M = T_{H_2O}$. According to the zeroth law, the temperature of the wood coin and that of the metal coin are precisely the same $T_W = T_M = 373\;\mathrm{K}$, even if they are not in direct contact with each other. Now here’s the catch: since wood and metal transmit heat in different manners, if I take the coins out of the water and put them immediately in your hands, one of them will simply feel very hot, but the other will burn you. If you had to guess the temperature of the two coins without a thermometer, and without knowing that they were immersed in boiling water, would you suppose that they have the same temperature? Probably not.
2.03: Calculation of Heat
Heat ($Q$) is a property that gets transferred between substances. Similarly to work, the amount of heat that flows through a boundary can be measured, but its mathematical treatment is complicated because heat is a path function. As you probably recall from general chemistry, the ability of a substance to absorb heat is given by a coefficient called the heat capacity, which is measured in SI in $\dfrac{\text{J}}{\text{mol K}}$. However, since heat is a path function, these coefficients are not unique, and we have different ones depending on how the heat transfer happens.
Processes at constant volume (isochoric)
The heat capacity at constant volume measures the ability of a substance to absorb heat at constant volume. Recasting from general chemistry:
The molar heat capacity at constant volume is the amount of heat required to increase the temperature of 1 mol of a substance by 1 K at constant volume.
This simple definition can be written in mathematical terms as:
$C_V = \dfrac{đ Q_V}{n dT} \Rightarrow đ Q_V = n C_V dT. \label{2.3.1}$
Given a known value of $C_V$, the amount of heat that gets transferred can be easily calculated by measuring the changes in temperature, after integration of Equation \ref{2.3.1}:
$đ Q_V = n C_V dT \rightarrow \int đ Q_V = n \int_{T_i}^{T_f}C_V dT \rightarrow Q_V = n C_V \int_{T_i}^{T_f}dT, \label{2.3.2}$
which, assuming $C_V$ independent of temperature, simply becomes:
$Q_V \cong n C_V \Delta T. \label{2.3.3}$
Processes at constant pressure (isobaric)
Similarly to the previous case, the heat capacity at constant pressure measures the ability of a substance to absorb heat at constant pressure. Recasting again from general chemistry:
The molar heat capacity at constant pressure is the amount of heat required to increase the temperature of 1 mol of a substance by 1 K at constant pressure.
And once again, this mathematical treatment follows:
$C_P = \dfrac{đ Q_P}{n dT} \Rightarrow đ Q_P = n C_P dT \rightarrow \int đ Q_P = n \int_{T_i}^{T_f}C_P dT, \label{2.3.4}$
which results in the simple formula:
$Q_P \cong n C_P \Delta T. \label{2.3.5}$
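As a worked illustration of Equations \ref{2.3.3} and \ref{2.3.5}, the sketch below computes the heat needed to warm 2.0 mol of a monatomic ideal gas by 50 K. The values $C_V = \frac{3}{2}R$ and $C_P = \frac{5}{2}R$ are the standard monatomic ideal-gas heat capacities, an assumption brought in for the example rather than a result of this section:

```python
R = 8.314          # J/(mol K)
n, dT = 2.0, 50.0  # amount of gas (mol) and temperature change (K)

C_V = 1.5 * R      # monatomic ideal gas, assumed constant over the interval
C_P = 2.5 * R

Q_V = n * C_V * dT  # heat absorbed at constant volume, eq. 2.3.3
Q_P = n * C_P * dT  # heat absorbed at constant pressure, eq. 2.3.5

print(Q_V, Q_P)     # ~1247 J and ~2079 J: more heat is needed at constant P
```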
In thermodynamics, work ($W$) is the ability of a system to transfer energy by exerting a force on its surroundings. Work can be measured simply by evaluating its effects, such as displacing a massive object by some amount of space. The mathematical treatment of work, however, is complicated because work is a path function. In the following sections, we will analyze how work is calculated in some prototypical situations commonly encountered in the thermodynamical treatment of systems.
Let’s consider the situation in Figure $1$, where a special beaker with a piston that is free to move is filled with an ideal gas. The beaker sits on a desk, so the piston is not subject to any external forces other than the external pressure, $P_{\text{ext}}$, and the internal pressure of the gas, $P$.$^1$ The piston is initially compressed to a position that is not in equilibrium $(i)$. After the process, the piston reaches a final equilibrium position $(f)$. How do we calculate the work ($W$) performed by the system?
From basic physics, we recall that the infinitesimal amount of work associated with an object moving in space is given by the force acting on the object ($F$) multiplied by the infinitesimal amount it gets displaced ($d h$):
$đ W = - Fdh, \label{2.4.1}$
where the negative sign comes from the chemistry sign convention, Definition: System-centric, since the work in Figure $1$ is performed by the system (expansion). What kind of force is moving the piston? It is the force due to the pressure of the gas. Relying upon another definition from physics, the pressure is the ratio between the force ($F$) and the area ($A$) that such force acts upon:
$P = F/A. \label{2.4.2}$
Obtaining $F$ from Equation \ref{2.4.2} and replacing it in Equation \ref{2.4.1}, we obtain:
$đ W = - P \underbrace{Adh}_{dV}, \label{2.4.3}$
and considering that $Adh$ (area times infinitesimal height) is the definition of an infinitesimal volume $dV$, we obtain:
$đ W = - PdV, \label{2.4.4}$
If we want to calculate the amount of work performed by a system, $W$, from Equation \ref{2.4.4}, we need to recall that $đ W$ is an inexact differential. As such, we cannot integrate it from initial to final as for the (exact) differential of a state function, because:
$\int_{i}^{f}đ W \neq W_f - W_i, \label{2.4.5}$
but rather:
$\int_{\text{path}} đ W = W, \label{2.4.6}$
where the integration is performed along the path. Using Equation \ref{2.4.6}, we can integrate Equation \ref{2.4.4} as:
$\int đ W = W = - \int_{i}^{f} PdV, \label{2.4.7}$
where the integral on the left-hand side is taken along the path,$^2$ while the integral on the right-hand side can be taken between the initial and final states, since $dV$ is a state function. How do we solve the integral in Equation \ref{2.4.7}? It turns out that there are many different ways to solve this integral (perhaps not surprisingly, since the left-hand side depends on the path), which we will explore in the next section.
$| W_{\text{max}} |$ and $| W_{\text{min}} |$ in processes at constant temperature (isothermal)
At constant temperature, the piston in Figure $1$ moves along the following PV diagram (this curve is obtained from an ideal gas at constant $T=298$ K):
An expansion of the gas will happen between $P_i$ and $P_f$. If the expansion happens in a one-step fast process, for example against external atmospheric pressure, then we can consider such final pressure constant (for example $P_f=P_{\text{ext}} =1\;\mathrm{bar}$), and solve the integral as:
$W_{\text{1-step}} = - \int_{i}^{f} P_{\text{ext}}dV = -P_{\text{ext}} \int_{i}^{f} dV = -P_{\text{ext}} (V_f-V_i), \label{2.4.8}$
Notice how the work is negative, since during an expansion the work is performed by the system (recall the chemistry sign convention). The absolute value of the work$^3$ represents the red area of the PV-diagram:
$\left| W_{\text{1-step}} \right| = P_{\text{ext}} (V_f-V_i) \label{2.4.9}$
If the process happens in two steps by pausing at an intermediate position (1) until equilibrium is reached, then we should calculate the work by dividing it into two separate processes, $A$ and $B$, and solve each one as we did in the previous case. The first process is an expansion between $P_i$ and $P_1$, with $P_1$ constant. The absolute value of the work, $W_A$, is represented by the blue area:
$\left| W_A \right| = P_1 (V_1-V_i) \label{2.4.10}$
The second process is an expansion between $P_1$ and $P_f$, with $P_f=P_{\text{ext}}$ constant. The absolute value of the work for this second process is represented by the green area:
$\left| W_B \right| = P_f (V_f-V_1) \label{2.4.11}$
The total absolute value of the work for the 2-step process is given by the sum of the two areas:
$\left| W_{\text{2-step}} \right| = \left| W_A \right| + \left| W_B \right| = P_1 (V_1-V_i)+P_f (V_f-V_1). \label{2.4.12}$
As can be easily verified by comparing the shaded areas in the plots, $\left| W_{\text{2-step}} \right| > \left| W_{\text{1-step}} \right|$.
We can easily extend this procedure to consider processes that happens in 3, 4, 5, …, $n$ steps. What is the limit of this procedure? In other words, what happens when $n \rightarrow \infty$? A simple answer is given by the plots in the next Figure, which clearly demonstrates that the maximum value of the area underneath the curve $\left| W_{\text{max}}\right|$ is achieved in an $\infty$-step process, for which the work is calculated as:
$\left| W_{\infty \text{-step}} \right| = \left| W_{\text{max}} \right| = \sum_{n}^{\infty} P_n(V_n-V_{n-1}) = \int_{i}^{f} PdV. \label{2.4.13}$
The integral on the right hand side of Equation \ref{2.4.13} can be solved for an ideal gas by calculating the pressure using the ideal gas law $P=\dfrac{nRT}{V}$, and solving the integral since $n$, $R$, and $T$ are constant:
$\left| W_{\text{max}} \right| = nRT \int_{i}^{f} \dfrac{dV}{V} = nRT \ln \dfrac{V_f}{V_i}. \label{2.4.14}$
This example demonstrates why work is a path function. If we perform a fast 1-step expansion, the system will perform an amount of work that is much smaller than the amount of work it can perform if the expansion between the same points happens slowly in an $\infty$-step process.
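This limiting behavior is easy to reproduce numerically. The sketch below (with arbitrary illustrative values for the amount of gas, temperature, and volumes) computes $\left| W \right|$ for expansions carried out in 1, 2, 10, and 100 steps, and compares them with the $\infty$-step limit of Equation \ref{2.4.14}:

```python
import numpy as np

R, n, T = 8.314, 1.0, 298.0   # SI units; illustrative values
V_i, V_f = 0.010, 0.050       # initial and final volumes, m^3

def W_abs(n_steps):
    """|W| for an isothermal expansion paused at n_steps equally spaced
    volumes; each step pushes against the pressure of its own endpoint."""
    V = np.linspace(V_i, V_f, n_steps + 1)
    P_end = n * R * T / V[1:]          # equilibrium pressure at each stop
    return np.sum(P_end * np.diff(V))

for steps in (1, 2, 10, 100):
    print(steps, round(W_abs(steps), 1))
print('limit', round(n * R * T * np.log(V_f / V_i), 1))  # |W_max|, eq. 2.4.14
```

Each added step increases $\left| W \right|$, and the values approach $nRT \ln (V_f/V_i)$ from below, exactly as the shaded areas in the plots suggest.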
The same considerations that we made up to this point for expansion processes hold specularly for compression ones. The only difference is that the work associated with compressions will have a positive sign since it must be performed onto the system. As such, the amount of work for a transformation that happens in a finite amount of steps will be an upper bound to the minimum amount of work required to compress the system.$^4$ $\left| W_{\text{min}} \right|$ for compressions is calculated as the area underneath the PV curve, exactly as $\left| W_{\text{max}} \right|$ for expansions in Equation \ref{2.4.13}.
1. For this simple thought experiment, we will ignore any external force that is not significant. In other words, we will not consider the friction of the piston on the beaker walls or any other foreign influence.
2. from here on we will replace the notation $\int_{\text{path}}$ with the more convenient $\int$ and we will keep in mind that the integral of an inexact differential must be taken along the path.
3. we use the absolute value to avoid confusion due to the fact that the expansion work is negative according to Definition: System-centric.
4. In contrast to a lower bound for expansion processes.
The internal energy ($U$) of a system is a thermodynamic state function defined as:
Definition: Internal Energy
The energy contained within the system, which can be either transferred or converted.
In the absence of chemical transformations, heat and work are the only two forms of energy that thermodynamics is concerned with. Keeping in mind Definition: System-Centric, which gives the convention for the signs of heat and work, the internal energy of a system can be written as:
$U = Q + W, \label{3.1.1}$
which we can write in differential form by considering that the internal energy is a state function, as: $dU = đ Q + đ W, \label{3.1.2}$
which, using eq. 2.4.4 becomes:
$dU = đ Q - PdV. \label{3.1.3}$
Internal energy in isothermal processes
To study the behavior of the internal energy in a process at constant temperature ($dT=0$), James Prescott Joule (1818–1889) created the apparatus depicted in Figure $1$.
The left side of the Joule apparatus’s inner chamber is filled with an ideal gas, while a vacuum is created in the right chamber. Both chambers are immersed in a water bath, to guarantee isolation from the environment. When the communication channel between the chambers is open, the gas expands and equilibrates. The work associated with the transformation is:
$đ W=-P_{\text{ext}}dV = 0, \label{3.1.4}$
since the chambers are not in communication with the environment, $P_{\text{ext}}=0$. Thus, changes in internal energy are associated with the heat transfer of the process, which can be measured by monitoring the temperature of the gas at the beginning, $T_i$, and at the end of the experiment $T_f$. Joule noticed experimentally that if he used an ideal gas for this experiment, the temperature would not change $T_i = T_f$. Since the temperature doesn’t change, there is no heat transfer, and therefore the internal energy stays constant:
$dU = đ Q = 0. \label{3.1.5}$
Note
Notice that Joule’s conclusion is valid only for an ideal gas. If we expand a real gas, we do notice a change in temperature associated with the expansion. A typical example of this behavior is when you use a pressurized spray bottle and release its content for an extended time in the air. The container will typically get colder. We will discuss this behavior in chapter 11 when we will study non-ideal gases.
From this simple experiment, we can conclude that the internal energy of an ideal gas depends only on its temperature.
Internal energy in adiabatic processes
An adiabatic process is defined as a process that happens without the exchange of heat. As such, $đ Q=0$, and the work associated with an adiabatic process becomes a state function:
$dU=đ W=-PdV, \label{3.1.6}$
which can then be calculated using the formulas that we derived in section 2.4. Notice that isothermal and adiabatic are two very different processes. While an adiabatic process happens without the exchange of heat across the system’s boundaries, this does not mean that the system’s temperature does not change. Isothermal processes are usually associated with a heat transfer across the boundaries to maintain the temperature of the system constant. For adiabatic processes, it is quite the opposite since they are usually associated with a change in temperature.
Internal energy in isochoric processes
An isochoric process is a process in which the volume does not change. Therefore, $đ W=0$, and $dU = đ Q_V$, which using Equation 2.3.1, becomes:
$dU = đ Q_V = n C_V dT. \label{3.1.7}$
Since no work is performed at these conditions, the heat becomes a state function. Equation \ref{3.1.7} also gives a mathematical justification of the concept of heat capacity at constant volume. $C_V$ can now be interpreted as the partial derivative (a coefficient) of a state function (the internal energy):
$C_V = \left( \dfrac{\partial U} {\partial T} \right)_{V,n=1}, \label{3.1.8}$
where we have replaced the total derivative $d$ with a partial one $\partial$, and we have specified that the derivation happens at constant volume and number of moles. Equation \ref{3.1.8} brings a rigorous definition of heat capacity at constant volume for 1 mol of substance:
Definition: Heat Capacity
The heat capacity of a substance, $C_V$, represents its ability to absorb energy at constant volume.
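To make Equation \ref{3.1.8} concrete, the sketch below differentiates the internal energy of 1 mol of a monatomic ideal gas, $U = \frac{3}{2}RT$ (a standard result quoted here as an assumption, since it is not derived in this section), and recovers $C_V = \frac{3}{2}R$:

```python
import sympy as sp

T, R = sp.symbols('T R', positive=True)
U = sp.Rational(3, 2) * R * T   # monatomic ideal gas, n = 1 mol

C_V = sp.diff(U, T)             # (dU/dT) at constant V and n, eq. 3.1.8
print(C_V)                      # 3*R/2, i.e. about 12.47 J/(mol K)
```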
Internal energy in isobaric processes
In an isobaric process, the pressure does not change, hence $dP=0$. Unfortunately, Equation \ref{3.1.2} for this case does not simplify further, as happened in the two previous cases. However, in section 2.3, we have introduced the useful concept of heat capacity at constant $P$. $C_P$ was used in an isobaric process in the same manner as $C_V$ was used in the isochoric case. That is, as a coefficient to measure the amount of heat absorbed at constant pressure. Equation \ref{3.1.8} gave a mathematical definition of $C_V$ as the partial derivative of a state function (the internal energy). But if heat capacities are coefficients, and coefficients are partial derivatives of state functions, how do we explain $C_P$? In order to do so, we can introduce a new state function, called the enthalpy ($H$), as:
$H = U + PV, \label{3.1.9}$
and its differential, calculated as:
$dH = dU + d(PV) = dU + PdV + \overbrace{VdP}^{0}, \label{3.1.10}$
which can be rearranged as:
$dU = dH -PdV, \label{3.1.11}$
Replacing Equation \ref{3.1.11} into Equation \ref{3.1.3}:
$dH -PdV = đ Q_P - PdV, \label{3.1.12}$
which simplifies to:
$dH = đ Q_P. \label{3.1.13}$
Equation \ref{3.1.13} establishes that the heat exchanged at constant pressure is equal to a new state function called the enthalpy, defined by Equation \ref{3.1.9}. It also establishes a mathematical justification of the concept of heat capacity at constant pressure. Similarly to $C_V$, $C_P$ can now be interpreted as the partial derivative (a coefficient) of the new state function (the enthalpy):
$C_P = \left( \dfrac{\partial H} {\partial T} \right)_{P,n=1}, \label{3.1.14}$
Equation \ref{3.1.14} brings also a rigorous definition of heat capacity at constant pressure for 1 mol of substance:
Definition: Heat Capacity
The heat capacity of a substance, $C_P$, represents its ability to absorb enthalpy at constant pressure.
We finally come to a working definition of the first law. If we take an isolated system—i.e., a system that exchanges neither heat nor mass with its surroundings—its internal energy is conserved. If the internal energy is conserved, $dU=0$. Therefore, for an isolated system:
$đ Q = -đ W, \nonumber$
and heat and work can be easily calculated using any of the appropriate formulas introduced in either section 2.4 or 2.3.
The first law is a conservation law. It is intuitive since it comes directly from Lavoisier’s principle of “nothing is lost, nothing is created, everything is transformed.” Considering that the only system that is truly isolated is the universe, we can condense the first law in one simple sentence:
Definition: First Law of Thermodynamics
First Law of Thermodynamics: The energy of the universe is conserved.
3.03: Reversible and Irreversible Processes
Let’s now consider the cycle in Figure $1$. The process in this case starts from state 1 (system at $P_1V_1$), expands to state 2 (system at $P_2V_2$), and compresses back to state 1 (system back to $P_1V_1$).
Since the process starts and finishes at the same state, the value of the internal energy at the end of the process will be the same as its value at the beginning, regardless of the path:$^1$
$\oint dU=0, \label{3.3.1}$
where the symbol $\oint$ indicates an integral around a cycle. Considering the work associated with the cycle, however, the situation is radically different because it depends on the path that the system is taking, and in general $\oint_{\text{path}} đW \neq 0. \label{3.3.2}$
For instance, if we perform the expansion in one step, the work associated with it will be (using eq. 2.4.8):$^2$
$W^{\text{expansion}}_{\text{1-step}}=-P_2(\underbrace{V_2-V_1}_{>0})<0, \label{3.3.3}$
and if we also perform the compression in 1-step: $^{3}$
$W^{\text{compression}}_{\text{1-step}}=-P_1(\underbrace{V_1-V_2}_{<0})>0.\label{3.3.4}$
With a little bit of math, it is easy to prove that the total work for the entire cycle is:
\begin{aligned} W^{\text{cycle}}_{\text{1-step}} {} & = W^{\text{expansion}}_{\text{1-step}}+W^{\text{compression}}_{\text{1-step}} \ & = -P_2(V_2-V_1)-P_1(V_1-V_2) \ & = -P_2(V_2-V_1)+P_1(V_2-V_1) \ & = (\underbrace{V_2-V_1}_{>0})(\underbrace{P_1-P_2}_{>0}) > 0, \end{aligned} \label{3.3.5}
or, in other words, net work is destroyed.
Note
In practice, if we want to manually perform this cycle by pushing on the piston by hand, we will notice that it requires more energy to push down than the amount it gives back when we release it, and it moves back up.
In contrast, if both the expansion and the compression happen in a slow $\infty$-step manner, the work associated with them will be $W_{\text{max}}$ and $W_{\text{min}}$, respectively, which are calculated using eq. 2.4.14. The total work associated with the cycle will be in this case:
\begin{aligned} W^{\text{cycle}}_{\infty\text{-step}} {} & = W^{\text{expansion}}_{\text{max}}+W^{\text{compression}}_{\text{min}} \ & = -nRT \ln \dfrac{V_f}{V_i}-nRT \ln \dfrac{V_i}{V_f} \ & = -nRT \underbrace{\left( \ln \dfrac{V_f}{V_i} - \ln \dfrac{V_f}{V_i} \right) }_{=0} = 0, \end{aligned} \label{3.3.6}
which means that, in this case, work is not destroyed nor created.
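The contrast between the two cycles can be checked numerically; in the sketch below the state values are illustrative choices for an ideal gas held at constant temperature:

```python
import numpy as np

R, n, T = 8.314, 1.0, 298.0
V1, V2 = 0.010, 0.050                  # m^3, with V2 > V1
P1, P2 = n * R * T / V1, n * R * T / V2

# 1-step strokes, eqs. 3.3.3 and 3.3.4
W_cycle_1step = -P2 * (V2 - V1) - P1 * (V1 - V2)

# reversible (infinite-step) strokes, eq. 3.3.6
W_cycle_rev = -n * R * T * np.log(V2 / V1) - n * R * T * np.log(V1 / V2)

print(W_cycle_1step)           # > 0: net work is destroyed
print(round(W_cycle_rev, 10))  # 0.0: work is neither destroyed nor created
```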
Note
In practice, if we were able to perform this cycle manually by pushing on the piston down by hand, we will notice that it requires the same amount of energy to push down than the amount it gives back when it moves up.
This process can happen both ways without losses, and is called reversible:
Definition: Reversible Process
Reversible Process: a process whose direction can be reversed, and the system thereby returned to its original state, by inducing infinitesimal changes to some property of the system via its surroundings.$^4$
Reversible processes are ideal processes that are hard to realize in practice since they require transformations that happen in an infinite amount of steps (infinitely slowly).
1. recall that the internal energy is a state function, so its value depends exclusively on the conditions at the beginning and at the end. In a cycle, we’re going back to the same point, so the conditions at the beginning and at the end are equal by definition.
2. notice that the work for the expansion is negative, as it should be.
3. notice that the work for the compression is positive, as it should be.
4. Definition from: Sears, F.W. and Salinger, G.L. (1986), Thermodynamics, Kinetic Theory, and Statistical Thermodynamics, 3rd edition (Addison-Wesley.)
In chapter 3, we have discussed thermodynamical changes in energy in the absence of chemical reactions. When a chemical reaction takes place, some bonds break and/or some new ones form. This process either absorbs or releases the energy contained in these bonds. For a proper thermodynamic treatment of the system, this extra energy must be included in the net balance.
In this chapter, we will consider the heat associated with chemical reactions. Since most chemical reactions happen at constant atmospheric pressure (isobaric conditions) in the lab, we can use eq. 3.1.13 to replace the inexact differential of the heat with the exact differential of the state function called enthalpy. The advantage of this transformation is that it allows us to study the heat associated with chemical reactions at constant pressure independently of their path. If we call the molecules at the beginning of the reaction “reactants” and the molecules at the end of the reaction “products,” the heat associated with the reaction (rxn) is defined as:
$\Delta_{\text{rxn}} H = H_{\text{products}}-H_{\text{reactants}} \; . \nonumber$
For example, if we take a simple reaction of the form:
$\mathrm{A} + \mathrm{B} \rightarrow \mathrm{C} + \mathrm{D}, \nonumber$
the heat at constant pressure is equal to the enthalpy of reaction, which is calculated as:
$Q_P = \Delta_{\text{rxn}} H = \underbrace{ \left (H_{\mathrm{C}}+H_{\mathrm{D}} \right) }_{\text{products}} - \underbrace{\left( H_{\mathrm{A}}+H_{\mathrm{B}}\right)}_{\text{reactants}}. \label{4.1.2}$
Using the chemistry sign convention, Definition: System-centric, reactions are classified in terms of the sign of their reaction enthalpies as follows:
Definition: Signs of Reaction Enthalpies
$\Delta_{\text{rxn}} H > 0 \Rightarrow$ Endothermic reaction (heat is gained by the system).
$\Delta_{\text{rxn}} H < 0 \Rightarrow$ Exothermic reaction (heat is lost by the system).
If we expand the sample reaction to account for its stoichiometry:
$a\mathrm{A} + b\mathrm{B} \rightarrow c\mathrm{C} + d\mathrm{D}\; , \nonumber$
where $a,b,c,d$ are the stoichiometric coefficients of species $\mathrm{A,B,C,D}$. Equation \ref{4.1.2} can be rewritten as:
$Q_P = \Delta_{\text{rxn}} H = \underbrace{\left( cH_{\mathrm{C}}+dH_{\mathrm{D}} \right) }_{\text{products}} - \underbrace{ \left( aH_{\mathrm{A}}+bH_{\mathrm{B}} \right)}_{\text{reactants}}, \label{4.1.3}$
while for the most general case we can write:
$\Delta_{\text{rxn}} H = \sum_i \nu_i H_i, \nonumber$
where $\nu_i$ is the stoichiometric coefficient of species $i$ with its own sign. The signs of the stoichiometric coefficients are defined according to Equation \ref{4.1.3} as:
Definition: Signs of the stoichiometric coefficients
$\nu_i$ is positive if $i$ is a product.
$\nu_i$ is negative if $i$ is a reactant.
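This sign convention lends itself to simple computational bookkeeping. The following minimal Python sketch implements $\Delta_{\text{rxn}} H = \sum_i \nu_i H_i$; the species names and enthalpy values are hypothetical placeholders (in practice, one would use the standard formation enthalpies introduced in the next section).

```python
def reaction_enthalpy(nu, H):
    """Return Delta_rxn H = sum_i nu_i * H_i (kJ/mol).

    nu: signed stoichiometric coefficients (+ for products, - for reactants).
    H:  molar enthalpies of the species (hypothetical values here).
    """
    return sum(nu[s] * H[s] for s in nu)

# aA + bB -> cC + dD with a = 1, b = 2, c = d = 1 (made-up numbers)
nu = {"A": -1, "B": -2, "C": +1, "D": +1}
H = {"A": -50.0, "B": -20.0, "C": -120.0, "D": -30.0}

print(reaction_enthalpy(nu, H))  # -60.0 -> exothermic
```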
4.02: Standard Enthalpies of Formation
In principle, we could use eq. 4.1.3 to calculate the reaction enthalpy associated with any reaction. However, to do so, the absolute enthalpies $H_i$ of reactants and products would be required. Unfortunately, absolute enthalpies are not known—and are theoretically unknowable, since this would require an absolute zero for the enthalpy scale, which does not exist.$^{1}$ To circumvent this problem, enthalpies relative to a defined reference state must be used. This reference state is defined as the constituent elements in their standard states, and the enthalpies of 1 mol of substance relative to this reference state are called standard enthalpies of formation.
Definition: Standard Enthalpy of Formation
The standard enthalpy of formation of compound $i$, $\Delta_{\mathrm{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_i$, is the change of enthalpy during the formation of 1 mol of $i$ from its constituent elements, with all substances in their standard states.
The standard pressure is defined as $P^{\; -\kern-7pt{\ominus}\kern-7pt-} = 100 \; \mathrm{kPa} = 1 \; \mathrm{bar}$.$^2$ There is no standard temperature, but standard enthalpies of formation are usually reported at room temperature, $T = 298.15 \; \mathrm{K}$. Standard states are indicated with the symbol $\; -\kern-7pt{\ominus}\kern-7pt-$, and they are defined for elements as the form in which the element is most stable at standard pressure (for example, for hydrogen, carbon, and oxygen the standard states are $\mathrm{H}_{2(g)}, \mathrm{C}_{(s,\text{graphite})}, \text{and }\mathrm{O}_{2(g)}$, respectively).$^3$
For example, the standard enthalpies of formation of some common compounds at $T = 298.15 \; \mathrm{K}$ are calculated from the following reactions:
\begin{aligned} \mathrm{C}_{(s,\text{graphite})}+\mathrm{O}_{2(g)} \rightarrow \mathrm{CO}_{2(g)} \qquad & \Delta_{\mathrm{f}} H_{\mathrm{CO}_{2(g)}}^{\; -\kern-7pt{\ominus}\kern-7pt-}= -394 \; \text{kJ/mol} \ \mathrm{C}_{(s,\text{graphite})}+2 \mathrm{H}_{2(g)} \rightarrow \mathrm{CH}_{4(g)} \qquad & \Delta_{\mathrm{f}} H_{\mathrm{CH}_{4(g)}}^{\; -\kern-7pt{\ominus}\kern-7pt-}= -75 \; \text{kJ/mol} \ \mathrm{H}_{2(g)}+\frac{1}{2} \mathrm{O}_{2(g)} \rightarrow \mathrm{H}_2 \mathrm{O}_{(l)} \qquad & \Delta_{\mathrm{f}} H_{\mathrm{H}_2 \mathrm{O}_{(l)}}^{\; -\kern-7pt{\ominus}\kern-7pt-}= -286 \; \text{kJ/mol} \end{aligned} \nonumber
A comprehensive list of standard enthalpies of formation of inorganic and organic compounds is also reported in appendix 16.
1. An example of a known absolute zero for a scale is the zero of the temperature scale, a temperature that can be approached only as a limit from above. No such thing exists for the enthalpy.︎
2. Prior to 1982, the value $P^{\; -\kern-7pt{\ominus}\kern-7pt-} = 1.0 \; \mathrm{atm}$ was used. The two values of $P^{\; -\kern-7pt{\ominus}\kern-7pt-}$ are within 1% of each other, since 1 atm = 101.325 kPa.
3. There are some exceptions, such as phosphorus, for which the most stable form at 1 bar is black phosphorus; white phosphorus, however, is chosen as the standard reference state with zero enthalpy of formation. For the purposes of this course, we can safely ignore them.
The calculation of a standard reaction enthalpy can be performed using the following cycle:
\begin{aligned} \text{reactants} & \quad \xrightarrow{\Delta_{\text{rxn}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}} \quad \text{products} \ \scriptstyle{-\Delta_{\text{f}} H_{\text{reactants}}^{\; -\kern-5pt{\ominus}\kern-5pt-}} \quad \bigg\downarrow \quad & \qquad \qquad \qquad \qquad \scriptstyle{\bigg\uparrow \; \Delta_{\text{f}} H_{\text{products}}^{\; -\kern-5pt{\ominus}\kern-5pt-}} \ \text{"elements in } & \text{their standard reference state"} \end{aligned} \label{4.3.1}
This process is summarized by the simple formula:
$\Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= \Delta_{\mathrm{f}} H_{\text{products}}^{\; -\kern-7pt{\ominus}\kern-7pt-}- \Delta_{\mathrm{f}} H_{\text{reactants}}^{\; -\kern-7pt{\ominus}\kern-7pt-}. \label{4.3.2}$
Notice how there is a negative sign in front of the enthalpy of formation of the reactants because they are normally defined for the reactions that go from the elements to the reactants and not vice-versa. To close the cycle in Equation \ref{4.3.1}, however, we should go from the reactants to the elements, and therefore we must invert the sign in front of the formation enthalpies of the reactants. Equation \ref{4.3.2} can be generalized using the same technique used to derive eq. 4.1.4, resulting in:
$\Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= \sum_i \nu_i \Delta_{\mathrm{f}} H_i^{\; -\kern-7pt{\ominus}\kern-7pt-}, \label{4.3.3}$
which is a mathematical expression of the law that is known as Hess’s Law. Hess’s law is valid at constant pressure because, at those conditions, the heat of reaction—a path function—is equal to the enthalpy of reaction—a state function. Therefore, the enthalpy of a reaction depends exclusively on the initial and final state, and it can be obtained via the pathway that passes through the elements in their standard state (the formation pathway).
Exercise $1$
Calculate the standard enthalpy of combustion at 298 K of 1 mol of methane, using the data in eq. 4.2.1.
Answer
The reaction that is under consideration is: $\mathrm{CH}_{4(g)} + 2 \mathrm{O}_{2(g)} \rightarrow \mathrm{CO}_{2(g)} + 2 \mathrm{H}_2 \mathrm{O}_{(l)} \qquad \Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= ? \nonumber$
Using Hess’s Law, Equation \ref{4.3.3}, the reaction enthalpy is:
$\Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{CO}_{2(g)}} + 2 \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_{2}O_{(l)}} - \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{CH}_{4(g)}} - 2 \underbrace{\Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{O}_{2(g)}}}_{=0} \nonumber$
whose values are reported in eq. 4.2.1. Notice that the formation enthalpy of $O_{2(g)}$ is zero, since it is an element in its standard state. The final result is:
$\Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= \overbrace{-394}^{\Delta_{\text{f}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}_{\mathrm{CO}_{2(g)}}} +2 \overbrace{(-286)}^{\Delta_{\text{f}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}_{\mathrm{H}_{2}O_{(l)}}} - \overbrace{(-75)}^{\Delta_{\text{f}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}_{\mathrm{CH}_{4(g)}}} = -891 \; \mathrm{kJ/mol}, \nonumber$
where the negative sign indicates that the reaction is exothermic (see Definition: Signs of Reaction Enthalpies), as we should expect. The cycle that we used to solve this exercise can be summarized with:
\begin{aligned} \mathrm{CH}_{4(g)} + & 2 \mathrm{O}_{2(g)} \quad \xrightarrow{\Delta_{\text{rxn}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}} \quad \mathrm{CO}_{2(g)} + 2 \mathrm{H}_2 \mathrm{O}_{(l)} \ \scriptstyle{-\Delta_{\text{f}} H_{\mathrm{CH}_{4(g)},\mathrm{O}_{2(g)}}^{\; -\kern-5pt{\ominus}\kern-5pt-}} & \searrow \qquad \qquad \qquad \qquad \qquad \nearrow \; \scriptstyle{\Delta_{\text{f}} H_{\text{CO}_{2(g)},\mathrm{H}_2\mathrm{O}_{(l)}}^{\; -\kern-5pt{\ominus}\kern-5pt-}}\ & \qquad \mathrm{H}_{2(g)}, \mathrm{C}_{(s,\text{graphite})}, \mathrm{O}_{2(g)} \end{aligned} \nonumber
Notice that at standard pressure and $T = 298 \; \mathrm{K}$ water is in liquid form. However, when we burn methane, the heat associated with the exothermic reaction immediately vaporizes the water. Substances in different states of matter have different formation enthalpies, and $\Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_{2}O_{(g)}} = -242 \ \mathrm{kJ/mol}$. The difference between the formation enthalpies of the same substance in different states represents the latent heat that separates them. For example, for water:
\begin{aligned} \Delta_{\text{vap}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_2O} & = \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_{2}O_{(g)}} - \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_{2}O_{(l)}} \ & = (-242) - (-286) = + 44 \; \text{kJ/mol} \end{aligned} \nonumber
which is the latent heat of vaporization for water, $\Delta_{\text{vap}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_2O}$. The latent heat is positive to indicate that the system absorbs energy in going from the liquid to the gaseous state (and it will release energy when going the opposite direction from gas to liquid).
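As a check on the exercise above, the whole Hess’s-law bookkeeping can be carried out in a few lines of Python. This is a minimal sketch using only the formation enthalpies quoted in this chapter; the dictionary keys are informal species labels.

```python
# Standard formation enthalpies from the text, kJ/mol at 298.15 K.
dfH = {"CH4(g)": -75.0, "O2(g)": 0.0, "CO2(g)": -394.0,
       "H2O(l)": -286.0, "H2O(g)": -242.0}

# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l), signed coefficients.
nu = {"CH4(g)": -1, "O2(g)": -2, "CO2(g)": +1, "H2O(l)": +2}

drxnH = sum(nu[s] * dfH[s] for s in nu)
print(drxnH)  # -891.0 kJ/mol, matching the exercise

# Latent heat of vaporization as a difference of formation enthalpies:
print(dfH["H2O(g)"] - dfH["H2O(l)"])  # +44.0 kJ/mol
```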
4.04: Calculations of Enthalpies of Reaction at T ≠ 298 K
Standard enthalpies of formation are usually reported at room temperature ($T$ = 298 K), but enthalpies of formation at any temperature $T'$ can be calculated from the values at 298 K using eqs. (2.4) and (3.13):
\begin{aligned} dH = C_P dT \rightarrow & \int_{H_{T=298}^{-\kern-6pt{\ominus}\kern-6pt-}}^{H_{T'}} dH = \int_{T=298}^{T'} C_P dT \ & H_{T'}^{-\kern-6pt{\ominus}\kern-6pt-}- H_{T=298}^{-\kern-6pt{\ominus}\kern-6pt-}= \int_{T=298}^{T'} C_P dT \ & H_{T'}^{-\kern-6pt{\ominus}\kern-6pt-}= H_{T=298}^{-\kern-6pt{\ominus}\kern-6pt-}+ \int_{T=298}^{T'} C_P dT, \end{aligned} \tag{4.9} \label{4.9}
which, in conjunction with Hess’s Law (Equation 4.3.3), results in:
$\Delta_{\text{rxn}} H_{T'}^{-\kern-6pt{\ominus}\kern-6pt-}= \Delta_{\text{rxn}} H_{T=298}^{-\kern-6pt{\ominus}\kern-6pt-}+ \int_{T=298}^{T'} \Delta C_P dT, \tag{4.10} \label{4.10}$
with $\Delta C_P = \sum_i \nu_i C_{P,i}$.
Exercise $2$
Calculate $\Delta_{\text{rxn}}H$ of the following reaction at 398 K, knowing that $\Delta_{\text{rxn}}H^{-\kern-6pt{\ominus}\kern-6pt-}$ at 298 K is -283.0 kJ/mol, and the following $C_P$ values: $\mathrm{CO}_{(g)}$ = 29 J/(mol K), $\mathrm{O}_{2(g)}$ = 30 J/(mol K), $\mathrm{CO}_{2(g)}$ = 38 J/(mol K):
$\mathrm{CO}_{(g)}+\frac{1}{2}\mathrm{O}_{2(g)} \rightarrow \mathrm{CO}_{2(g)}, \nonumber$
Answer
Using Equation \ref{4.10} we obtain:
$\Delta_{\text{rxn}} H^{398} = \overbrace{-283.0}^{\Delta_{\text{rxn}}H^{-\kern-6pt{\ominus}\kern-6pt-}} + \int_{298}^{398} ( \overbrace{38}^{C_P^{\mathrm{CO}_2}} -\overbrace{29}^{C_P^{\mathrm{CO}}} -\frac{1}{2}\overbrace{30}^{C_P^{\mathrm{O}_2}} ) \times 10^{-3} dT, \nonumber$
which, assuming that the heat capacities do not depend on the temperature, becomes:
\begin{aligned} \Delta_{\text{rxn}} H^{398} &= -283.0 + \left(38-29-\frac{1}{2}30 \right) \times 10^{-3} (398-298) \ &= -283.6 \; \text{kJ/mol}. \end{aligned} \nonumber
As we notice from this result, a difference in temperature of 100 K translates into a change in $\Delta_{\text{rxn}}H^{-\kern-6pt{\ominus}\kern-6pt-}$ of this reaction of only 0.6 kJ/mol. This is a trend that is often observed, and values of $\Delta_{\text{rxn}}H$ are very weakly dependent on changes in temperature for most chemical reactions. This numerical result can also be compared with the amount that is experimentally measured for $\Delta_{\text{rxn}}H^{398}$ of this reaction, which is –283.67 kJ/mol. This comparison strongly supports the assumption that we used to solve the integral in Equation \ref{4.10}, confirming that the heat capacities are mostly independent of temperature.
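The same correction is easy to script. Here is a minimal Python sketch of Equation \ref{4.10}, assuming (as in the exercise) heat capacities that do not depend on temperature, so that the integral collapses to $\Delta C_P (T'-T)$:

```python
drxnH_298 = -283.0                            # kJ/mol at 298 K
CP = {"CO": 29.0, "O2": 30.0, "CO2": 38.0}    # J/(mol K), from the exercise
nu = {"CO": -1, "O2": -0.5, "CO2": +1}        # signed coefficients

dCP = sum(nu[s] * CP[s] for s in nu)          # -6.0 J/(mol K)
drxnH_398 = drxnH_298 + dCP * 1e-3 * (398 - 298)  # J -> kJ conversion
print(round(drxnH_398, 1))                    # -283.6 kJ/mol
```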
The first law of thermodynamics places no restrictions on the conversion of energy from one form to another. For example, let’s consider once again the Joule experiment (Figure 3.1.1). If we design a cycle that goes from the gas on the left chamber only to the gas equilibrated in both chambers and backward, as in Figure $1$, there are no restrictions imposed on this hypothetical cycle by the first law.
As we saw in section 3.1.1, states 1 and 2 have exactly the same energy at constant temperature. Restricting the analysis to the information contained in the first law, the ideal gas could hypothetically go from state 1 (all gas in the left chamber) to state 2 (gas in both chambers), as well as spontaneously close the cycle back from state 2 to state 1, without external intervention. While the transformation from 1 $\rightarrow$ 2 is intuitively spontaneous (it’s the same transformation that we considered in section 3.1.1), the backward transformation from 2 $\rightarrow$ 1 is clearly not as intuitive. In this case, the gas would have to spontaneously compress back into the left side, leaving a vacuum in the right chamber, without intervention from the outside. This transformation is clearly never observed. A gas just does not spontaneously concentrate on one side of a room, leaving a vacuum on the other side. In fact, when we need to create a vacuum, a lot of energy must be spent. Using exclusively the information contained in the first law, there is nothing that suggests a preference of the system for the transformation 1 $\rightarrow$ 2, nor anything that restricts the transformation 2 $\rightarrow$ 1 from happening spontaneously. Both states have the same energy, and
$\oint dU=0, \nonumber$
James Joule himself was indeed convinced that this must be the case and that we don’t observe the backward transformation in practice only because we cannot build ideal machines.$^1$ Another scientist of that era was not convinced. William Thomson, 1st Baron Kelvin (1824–1907), was skeptical of this idea, and invested substantial resources to try to prove Joule wrong.$^2$
A few years later, the controversy between Joule and Kelvin was resolved in favor of the latter, thanks to the experiments of French military engineer Nicolas Léonard Sadi Carnot (1796–1832). The work of Carnot had begun in France several years before Joule and Kelvin’s time.$^3$ At that time, the importance of steam engines was growing for industrial applications, but a theoretical perspective was lacking. Carnot was convinced that a scientific understanding of heat engines was necessary to improve their efficiency.
1. Either because we don’t really have ideal gases, or because we are unable to construct mechanical devices without loss, or in general because of other experimental factors.
2. Interestingly enough, both Joule and Lord Kelvin are now recognized as key figures in the development of thermodynamics and science in general. So much so, that the energy unit and the temperature unit in the SI system are named after them.
3. Carnot’s lone book, the Réflexions sur la Puissance Motrice du Feu (“Reflections on the Motive Power of Fire”) was published in France in 1824, the same year Kelvin was born and just 6 years after Joule’s birth.
• 5.1: Carnot Cycle
The main contribution of Carnot to thermodynamics is his abstraction of the steam engine’s essential features into a more general and idealized heat engine. The definition of Carnot’s idealized cycle is as follows:
• 5.2: Energy, Heat, and Work in the Carnot Cycle
Summarizing the results of the previous sections, the total amount of energy for a Carnot cycle is:
• 5.3: Efficiency of a Carnot Cycle
The efficiency (ε) of a cycle is defined as the ratio between the absolute value of the work extracted from the cycle (|WTOT|) and the heat that gets into the system (|Qh|).
05: Thermodynamic Cycles
The main contribution of Carnot to thermodynamics is his abstraction of the steam engine’s essential features into a more general and idealized heat engine. The definition of Carnot’s idealized cycle is as follows:
Definition: Carnot Cycle
A Carnot cycle is an idealized process composed of two isothermal and two adiabatic transformations. Each transformation is either an expansion or a compression of an ideal gas. All transformations are assumed to be reversible, and no energy is lost to mechanical friction.
A Carnot cycle connects two “heat reservoirs” at temperatures $T_h$ (hot) and $T_l$ (low), respectively. The reservoirs have a large thermal capacity, so that their temperatures are unaffected by the cycle. The system is composed exclusively of the ideal gas, which is the only substance that changes temperature throughout the cycle. If we report the four transformations of a Carnot cycle on a $PV$ diagram, we obtain the following plot:
Stage 1: isothermal expansion $A \rightarrow B$
At this stage, heat flows out of the hot reservoir and is absorbed by the ideal gas particles within the system, while the temperature of the system remains constant at $T_h$. The absorbed heat causes the gas to expand, pushing the piston upward and doing work on the surroundings.
Starting the analysis of the cycle from point $A$ in Figure $1$,$^1$ the first transformation we encounter is an isothermal expansion at $T_h$. Since the transformation is isothermal:
$\Delta U_1 = \overbrace{W_1}^{<0} + \overbrace{Q_1}^{>0} = 0 \Rightarrow Q_1 = -W_1, \nonumber$
and heat and work can be calculated for this stage using Equation 2.4.14:
\begin{aligned} Q_1 & = \left| Q_h \right| = nRT_h \overbrace{\ln \dfrac{V_B}{V_A}}^{>0 \text{ since } V_B>V_A} > 0, \ W_1 & = -Q_1 = - nRT_h \ln \dfrac{V_B}{V_A} < 0, \end{aligned} \nonumber
where we denoted $\left| Q_h \right|$ the absolute value of the heat that gets into the system from the hot reservoir.
Stage 2: adiabatic expansion $B \rightarrow C$
At this stage the expansion continues; however, there is no heat exchange between the system and the surroundings. Thus, the system undergoes an adiabatic expansion. The expansion allows the ideal gas particles to cool, decreasing the temperature of the system.
The second transformation is an adiabatic expansion between $T_h$ and $T_l$. Since we are at adiabatic conditions:
$Q_2 = 0 \Rightarrow \Delta U_2 = W_2, \nonumber$
and the negative energy (expansion work) can be calculated using:
$\Delta U_2 = W_2 = n \underbrace{\int_{T_h}^{T_l} C_V dT}_{<0 \text{ since } T_\mathrm{l}<T_\mathrm{h}} < 0. \nonumber$
Stage 3: isothermal compression $C \rightarrow D$
At this stage the surroundings do work on the system, which causes heat ($Q_l$) to be released into the cold reservoir. The temperature within the system remains the same. Thus, isothermal compression occurs.
The third transformation is an isothermal compression at $T_l$. The formulas are the same as those used for stage 1, but they will result in heat and work with reversed signs (since this is a compression):
$\Delta U_3 = \overbrace{W_3}^{>0} + \overbrace{Q_3}^{<0} = 0 \Rightarrow Q_3 = -W_3, \nonumber$
and:
\begin{aligned} Q_3 & = \left| Q_l \right| = nRT_l \overbrace{\ln \dfrac{V_D}{V_C}}^{<0 \text{ since } V_D<V_C} < 0 , \ W_3 & = -Q_3 = - nRT_l \ln \dfrac{V_D}{V_C} > 0, \end{aligned} \nonumber
where $\left| Q_l \right|$ is the absolute value of the heat that flows out of the system into the cold reservoir.
Stage 4: adiabatic compression $D \rightarrow A$
No heat exchange occurs at this stage; however, the surroundings continue to do work on the system. Adiabatic compression occurs, which raises the temperature of the system and returns the piston to its original position (prior to stage 1).
The fourth and final transformation is an adiabatic compression that restores the system to point $A$, bringing it from $T_l$ back to $T_h$. Similarly to stage 2:
$Q_4 = 0 \Rightarrow \Delta U_4 = W_4, \nonumber$
since we are at adiabatic conditions. The energy associated with this process is now positive (compression work), and can be calculated using:
$\Delta U_4 = W_4 = n \underbrace{\int_{T_l}^{T_h} C_V dT}_{>0 \text{ since } T_\mathrm{h}>T_\mathrm{l}} > 0. \nonumber$
Notice how $\Delta U_4 = - \Delta U_2$ because $\int_x^y=-\int_y^x$.
1. The depictions of the stages of the Carnot cycle at the beginning of this and the following three subsections are taken from Wikipedia; they were generated and distributed by the author BlyumJ under a CC BY-SA license.
Summarizing the results of the previous sections, the total amount of energy for a Carnot cycle is:
\begin{aligned} \Delta U_{\text{TOT}} & = \Delta U_1+\Delta U_2+\Delta U_3+\Delta U_4 \ & = 0 + n \int_{T_h}^{T_l} C_V dT + 0 + n \int_{T_l}^{T_h} C_V dT \ & = n \int_{T_h}^{T_l} C_V dT - n \int_{T_h}^{T_l} C_V dT = 0 \ \end{aligned} \nonumber
which is obviously zero, since $\oint dU=0$. The amounts of work and heat, however, are not zero, since $Q$ and $W$ are path functions. Therefore:
\begin{aligned} W_{\text{TOT}} & = W_1+W_2+W_3+W_4 \ & = - nRT_h \ln \dfrac{V_B}{V_A} + n \int_{T_h}^{T_l} C_V dT - nRT_l \ln \dfrac{V_D}{V_C} + n \int_{T_l}^{T_h} C_V dT \ & = - nRT_h \ln \dfrac{V_B}{V_A} - nRT_l \ln \dfrac{V_D}{V_C}, \ \end{aligned} \nonumber
which, considering that $V_C/V_D=V_B/V_A$, reduces to:
$W_{\text{TOT}} = - nR \left( T_h-T_l \right) \ln \dfrac{V_B}{V_A} < 0, \nonumber$
which is negative, because $T_h>T_l$ and $V_B>V_A$. Negative work means that the work is done by the system. In other words, the system is performing $PV$-work by transferring heat from a hot reservoir to a cold one via a Carnot cycle. On the other hand, for the heat:
\begin{aligned} Q_{\text{TOT}} & = Q_1+Q_2+Q_3+Q_4 \ & = Q_h + 0 + Q_l + 0 \ & = nRT_h \ln \dfrac{V_B}{V_A} + nRT_l \ln \dfrac{V_D}{V_C} \ & = nR \left( T_h-T_l \right) \ln \dfrac{V_B}{V_A} = -W_{\text{TOT}}, \end{aligned} \nonumber
which simplifies to:
$W_{\text{TOT}}=-(Q_1+Q_3), \nonumber$
and, replacing $Q_1$ and $Q_3$ with the absolute values of the heats drawn from the hot and cold reservoirs, $\left| Q_h \right|$, and $\left| Q_l \right|$ respectively:
$\left| W_{\text{TOT}} \right| = \left| Q_h \right| - \left| Q_l \right|, \nonumber$
or, in other words, more heat is extracted from the hot reservoir than is deposited into the cold one. The difference between the absolute values of these amounts of heat gives the total work of the cycle. This process is depicted in Figure $1$.
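A quick numerical sanity check of these relations, for an assumed 1 mol of ideal gas running between hypothetical reservoirs at 500 K and 300 K with a twofold isothermal expansion (only the volume ratio $V_B/V_A$ matters):

```python
import math

R, n = 8.314, 1.0                  # J/(mol K), mol
Th, Tl, ratio = 500.0, 300.0, 2.0  # hypothetical Th, Tl, and V_B/V_A

Qh = n * R * Th * math.log(ratio)      # heat in from the hot reservoir (Q1)
Ql = -n * R * Tl * math.log(ratio)     # heat out to the cold reservoir (Q3)
W_tot = -n * R * (Th - Tl) * math.log(ratio)

print(round(W_tot, 1))                 # -1152.6 J, done by the system
print(round(abs(Qh) - abs(Ql), 1))     # +1152.6 J = |W_tot|, as derived
```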
Exercise $1$
Up to this point, we have discussed Carnot cycles working in the hot $\rightarrow$ cold direction ($A \rightarrow B \rightarrow C \rightarrow D \rightarrow A$), since this is the primary mode of operation of heat engines that produce work. However, a heat engine could also—in principle—work in the reversed cold $\rightarrow$ hot direction ($A \rightarrow D \rightarrow C \rightarrow B \rightarrow A$). Write the equations for heat, work, and energy of each stage of a Carnot cycle going in the opposite direction from the one discussed in sections 5.1 and 5.2.
Answer
When the heat engine works in reverse order, the formulas remain the same, but all the signs in front of $Q$, $W$, and $U$ are reversed. In this case, the total work would be done on the system, and heat would be transferred from the cold reservoir to the hot one. Figure $1$ would be modified as:
This reversed mode of operation is the basic principle behind refrigerators and air conditioning.
5.03: Efficiency of a Carnot Cycle
The efficiency ($\varepsilon$) of a cycle is defined as the ratio between the absolute value of the work extracted from the cycle ($\left| W_{\text{TOT}} \right|$) and the heat that gets into the system ($\left| Q_h \right|$):
$\varepsilon = \dfrac{\left| W_{\text{TOT}} \right|}{\left| Q_h \right|} =\dfrac{-W_{\text{TOT}}}{Q_1} \label{5.3.1}$
where the minus sign in front of the work is necessary because the efficiency is defined as a positive number. Replacing Equation 5.2.5 into eq. \ref{5.3.1}, we obtain:
$\varepsilon = \dfrac{Q_3+Q_1}{Q_1} = 1+\dfrac{Q_3}{Q_1}. \nonumber$
If we go back to Equation \ref{5.3.1} and we replace Equation 5.2.3 for $W_{\mathrm{TOT}}$ and Equation 5.1.3 for $Q_1$, we obtain:
$\varepsilon = \dfrac{nR \left( T_h - T_l \right) \ln V_B/V_A}{nRT_h \ln V_B/V_A} = \dfrac{T_h-T_l}{T_h}=1-\dfrac{T_l}{T_h }<1, \label{5.3.3}$
which proves that the efficiency of a Carnot cycle is strictly smaller than 1.$^{1}$ In other words, no cycle can convert 100% of the heat it extracts from a hot reservoir into work. This finding had remarkable consequences for the entire field of thermodynamics and set the foundation for the introduction of entropy. We will use eqs. \ref{5.3.1} and \ref{5.3.3} for this purpose in the next chapter, but we conclude the discussion on Carnot cycles by returning to Lord Kelvin. In 1851 he used this finding to formulate his famous statement: “It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature. It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.”$^{2}$ This statement conclusively disproved Joule’s original theories and demonstrated that some fundamental principle beyond the first law of thermodynamics governs the flow of heat.
1. Equation \ref{5.3.3} can be equal to 1 only if $T_l=0 \; \text{K}$ or $T_h=\infty$, two conditions that are both equally impossible.︎
2. Thomson W. Transactions of the Royal Society of Edinburgh. 1851, XX, 261–268, 289–298.
In chapter 5, we have discussed heat engines as a means of understanding how some processes are spontaneous while others are not. Carnot’s findings did not simply inspire Lord Kelvin on this subject; they also motivated Rudolf Clausius (1822–1888) to introduce the concept of entropy.
• 6.1: Entropy
Let’s return to the definition of efficiency of a Carnot cycle and bring together eqs. 5.3.2 and 5.3.3.
• 6.2: Irreversible Cycles
What happens when we face an irreversible cycle? The efficiency of a Carnot cycle in Equation 5.3.3 is the maximum efficiency that an idealized thermodynamic cycle can reach.
• 6.3: The Second Law of Thermodynamics
For any spontaneous process, the entropy of the universe is increasing.
06: Second Law of Thermodynamics
Let’s return to the definition of efficiency of a Carnot cycle and bring together eqs. 5.3.2 and 5.3.3:
$\varepsilon = 1+\dfrac{Q_3}{Q_1} = 1-\dfrac{T_l}{T_h}. \nonumber$
Simplifying this equality, we obtain:
$\dfrac{Q_3}{T_l} = -\dfrac{Q_1}{T_h}, \label{6.1.2}$
or alternatively:
$\dfrac{Q_3}{T_l} + \dfrac{Q_1}{T_h} = 0. \label{6.1.3}$
The left hand side of Equation \ref{6.1.3} contains the sum of two quantities around the Carnot cycle, each calculated as $\dfrac{Q_{\mathrm{REV}}}{T}$, with $Q_{\mathrm{REV}}$ being the heat exchanged at reversible conditions (recall that according to Definition: Carnot Cycle each transformation in a Carnot cycle is reversible). Equation \ref{6.1.2} can be generalized to a sequence of connected Carnot cycles joining more than two isotherms by taking the summation across different temperatures:
$\sum_i \dfrac{Q_{\mathrm{REV}}}{T_i} = 0, \label{6.1.4}$
where the summation happens across a sequence of Carnot cycles that connects different temperatures. Eqs. \ref{6.1.3} and \ref{6.1.4} show that for a Carnot cycle—or a series of connected Carnot cycles—there exists a conserved quantity, obtained by dividing the heat associated with each reversible stage by the temperature at which such heat is exchanged. If a quantity is conserved around a cycle, it must be independent of the path, and therefore it is a state function. Looking at these equations, Clausius introduced in 1865 a new state function in thermodynamics, which he decided to call entropy and indicate with the letter $S$:
Definition: Entropy
$S = \dfrac{Q_{\mathrm{REV}}}{T}. \nonumber$
We can use the new state function to generalize Equation \ref{6.1.4} to any reversible cycle in a $PV$-diagram by using the rules of calculus. First, we will slice $S$ into an infinitesimal quantity:
$dS = \dfrac{đQ_{\mathrm{REV}}}{T}, \nonumber$
then we can extend the summation across temperatures of Equation \ref{6.1.4} to a sum over infinitesimal quantities—that is the integral—around the cycle:
$\oint dS = \oint \dfrac{đQ_{\mathrm{REV}}}{T} = 0. \nonumber$
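The conservation expressed by Equation \ref{6.1.3} is easy to verify numerically. Here is a minimal sketch using the same hypothetical Carnot cycle considered in chapter 5 (1 mol of ideal gas, $T_h = 500$ K, $T_l = 300$ K, $V_B/V_A = 2$):

```python
import math

R, n = 8.314, 1.0
Th, Tl, ratio = 500.0, 300.0, 2.0   # hypothetical cycle parameters

Q1 = n * R * Th * math.log(ratio)   # reversible isothermal heat at Th
Q3 = -n * R * Tl * math.log(ratio)  # reversible isothermal heat at Tl

print(Q1 / Th + Q3 / Tl)            # 0.0, up to floating-point rounding
```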
6.02: Irreversible Cycles
Up to this point, we have discussed reversible cycles only. Notice that the heat that enters the definition of entropy (Definition: Entropy) is the heat exchanged at reversible conditions, since it is only at those conditions that the right-hand side of Equation 6.1.5 becomes a state function. What happens when we face an irreversible cycle? The efficiency of a Carnot cycle in Equation 5.3.3 is the maximum efficiency that an idealized thermodynamic cycle can reach. As such, any irreversible cycle will necessarily have an efficiency smaller than this maximum. Therefore, Equation 6.1.1 will no longer hold for an irreversible cycle and must be rewritten as:
$\overbrace{1+\dfrac{Q_3}{Q_1}}^{\varepsilon_{\mathrm{IRR}}} < \overbrace{1-\dfrac{T_l}{T_h}}^{\varepsilon_{\mathrm{REV}}}, \label{6.2.1}$
and, following the same procedure used in section 6.1, we can rewrite Equation \ref{6.2.1} as:
$\dfrac{Q^{\text{IRR}}_3}{Q^{\text{IRR}}_1} < - \dfrac{T_l}{T_h} \longrightarrow \dfrac{Q^{\text{IRR}}_3}{T_l} + \dfrac{Q^{\text{IRR}}_1}{T_h} < 0 \longrightarrow \sum_i \dfrac{Q_{\text{IRR}}}{T_i} < 0, \nonumber$
which can be generalized using calculus to:
$\oint \dfrac{đQ_{\mathrm{IRR}}}{T} < 0. \label{6.2.3}$
Putting eqs. 6.1.6 and \ref{6.2.3} together, we obtain:
$\oint \dfrac{đQ}{T} \leq 0, \label{6.2.4}$
where the equal sign holds for reversible transformations exclusively, while the inequality sign holds for irreversible ones. Equation \ref{6.2.4} is known as Clausius inequality.
6.03: The Second Law of Thermodynamics
Now we can consider an isolated system undergoing a cycle composed of an irreversible forward transformation (1 $\rightarrow$ 2) and a reversible backward transformation (2 $\rightarrow$ 1), as in Figure $1$.
This cycle is similar to the one depicted in Figure $1$ for Joule’s expansion experiment. In this case, we have an intuitive understanding of the spontaneity of the irreversible expansion, as well as of the non-spontaneity of the backward compression. Since the cycle has at least one irreversible step, it is overall irreversible, and we can calculate:
$\oint \dfrac{đQ_{\mathrm{IRR}}}{T} = \int_1^2 \dfrac{đQ_{\mathrm{IRR}}}{T} + \int_2^1 \dfrac{đQ_{\mathrm{REV}}}{T}. \nonumber$
We can then use Clausius inequality (Equation 6.2.4) to write:
\begin{aligned} \int_1^2 \dfrac{đQ_{\mathrm{IRR}}}{T} + \int_2^1 \dfrac{đQ_{\mathrm{REV}}}{T} < 0, \end{aligned} \nonumber
which can be rearranged as:
$\underbrace{\int_1^2 \dfrac{đQ_{\mathrm{REV}}}{T}}_{\int_1^2 dS = \Delta S} > \underbrace{\int_1^2 \dfrac{đQ_{\mathrm{IRR}}}{T}}_{=0}, \label{6.3.3}$
where we have used the fact that, for an isolated system (the universe), $đQ_{\mathrm{IRR}}=0$. Equation \ref{6.3.3} can be rewritten as:
$\Delta S > 0, \label{6.3.4}$
which proves that for any irreversible process in an isolated system, the entropy is increasing. Using Equation \ref{6.3.4} and considering that the only system that is truly isolated is the universe, we can write a concise statement for a new fundamental law of thermodynamics:
Definition: Second Law of Thermodynamics
For any spontaneous process, the entropy of the universe is increasing.
The Second Law can be used to infer the spontaneity of a process, as long as the entropy of the universe is considered. To do so, we need to remind ourselves that the universe can be divided into a system and its surroundings (environment). When we calculate the entropy of the universe as an indicator of the spontaneity of a process, we need to always consider changes in entropy in both the system (sys) and its surroundings (surr):
$\Delta S^{\mathrm{universe}} = \Delta S^{\mathrm{sys}} + \Delta S^{\mathrm{surr}}, \nonumber$
or, in differential form:
$d S^{\mathrm{universe}} = d S^{\mathrm{sys}} + d S^{\mathrm{surr}}, \nonumber$
• 7.1: Calculation of ΔSsys
In general ΔSsys can be calculated using either its Definition: Entropy, or its differential formula, Equation 6.1.5. In practice, it is always convenient to keep in mind that entropy is a state function, and as such it does not depend on the path.
• 7.2: Calculation of ΔSsurr
While the entropy of the system can be broken down into simple cases and calculated using the formulas introduced above, the entropy of the surroundings does not require such a complicated treatment, and it can always be calculated as:
• 7.3: Clausius Theorem
By replacing Equation 7.2.2 into 7.2 we can write the differential change in the entropy of the system as:
• 7.4: The Third Law of Thermodynamics
The Third Law of Thermodynamics sets an unambiguous zero of the entropy scale, similar to what happens with absolute zero in the temperature scale. The absolute value of the entropy of every substance can then be calculated in reference to this unambiguous zero. As such, absolute entropies are always positive. This is in stark contrast to what happened for the enthalpy.
07: Calculation of Entropy and the Third Law of Thermodynamics
In general $\Delta S^{\mathrm{sys}}$ can be calculated using either its Definition: Entropy, or its differential formula, Equation 6.1.5. In practice, it is always convenient to keep in mind that entropy is a state function, and as such it does not depend on the path. For this reason, we can break every transformation into elementary steps, and calculate the entropy on any path that goes from the initial state to the final state, such as, for example:
\begin{aligned} P_i, T_i & \quad \xrightarrow{ \Delta_{\text{TOT}} S_{\text{sys}} } \quad P_f, T_f \ \scriptstyle{\Delta_1 S^{\text{sys}}} & \searrow \qquad \qquad \nearrow \; \scriptstyle{\Delta_2 S^{\text{sys}}} \ & \qquad P_i, T_f \ \ \Delta_{\text{TOT}} S^{\text{sys}} & = \Delta_1 S^{\text{sys}} + \Delta_2 S^{\text{sys}}, \end{aligned} \nonumber
with $\Delta_1 S^{\text{sys}}$ calculated at constant $P$, and $\Delta_2 S^{\text{sys}}$ at constant $T$. The most important elementary steps from which we can calculate the entropy resemble the prototypical processes for which we calculated the energy in section 3.1.
Entropy in isothermal processes
• For an ideal gas at constant temperature $\Delta U =0$, and $Q_{\mathrm{REV}} = -W_{\mathrm{REV}}$. Using the formula for the reversible isothermal work (Equation 2.4.14), we obtain: $\Delta S^{\mathrm{sys}} = \int_i^f \dfrac{đQ_{\mathrm{REV}}}{T} = \dfrac{-W_{\mathrm{REV}}}{T} = \dfrac{nRT \ln \dfrac{V_f}{V_i}}{T} = nR \ln \dfrac{V_f}{V_i}, \nonumber$ or, similarly: $\Delta S^{\mathrm{sys}} = nR \ln \dfrac{P_i}{P_f}. \nonumber$
• A phase change is a particular case of an isothermal process that does not follow the formulas introduced above since an ideal gas never liquefies. The entropy associated with a phase change at constant pressure can be calculated from its definition, remembering that $Q_{\mathrm{rev}}= \Delta H$. For example for vaporizations:
$\Delta_{\mathrm{vap}} S = \dfrac{\Delta_{\mathrm{vap}}H}{T_B}, \nonumber$
with $\Delta_{\mathrm{vap}}H$ being the enthalpy of vaporization of a substance, and $T_B$ its boiling temperature.
It is experimentally observed that the entropies of vaporization of many liquids have almost the same value of:
$\Delta_{\mathrm{vap}} S \approx 10.5 R, \label{7.1.7}$
which corresponds in SI units to a range of about 85–88 J/(mol K). This simple rule is named Trouton’s rule, after Frederick Thomas Trouton (1863–1922), the Irish scientist who discovered it.
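Both isothermal results are easy to evaluate numerically. The sketch below computes the entropy of a hypothetical tenfold reversible isothermal expansion of 1 mol of ideal gas, and then uses Trouton’s rule to estimate the vaporization enthalpy of benzene ($T_B \approx 353$ K), which comes out close to the measured value of roughly 30.7 kJ/mol:

```python
import math

R = 8.314                            # J/(mol K)
n, Vi, Vf = 1.0, 1.0, 10.0           # tenfold expansion (illustrative)
print(round(n * R * math.log(Vf / Vi), 1))  # +19.1 J/K

Tb = 353.0                           # boiling temperature of benzene, K
print(round(10.5 * R * Tb / 1e3, 1))        # ~30.8 kJ/mol via Trouton's rule
```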
Exercise $1$
Calculate the standard entropy of vaporization of water knowing $\Delta_{\mathrm{vap}} H_{\mathrm{H}_2\mathrm{O}}^{-\kern-6pt{\ominus}\kern-6pt-}= 44 \ \text{kJ/mol}$, as calculated in Exercise 4.3.1.
Answer
Using the definition of the entropy of vaporization given above—and knowing that at the standard pressure of $P^{-\kern-6pt{\ominus}\kern-6pt-}= 1 \ \text{bar}$ the boiling temperature of water is 373 K—we calculate:
$\Delta_{\mathrm{vap}} S_{\mathrm{H}_2\mathrm{O}}^{-\kern-6pt{\ominus}\kern-6pt-}= \dfrac{44 \times 10^3 \text{J/mol}}{373 \ \text{K}} = 118 \ \text{J/(mol K)}. \nonumber$
The entropy of vaporization of water is far from Trouton’s rule range of 85–88 J/(mol K) because of the hydrogen bond interactions between its molecules. Other similar exceptions are ethanol, formic acid, and hydrogen fluoride.
Entropy in adiabatic processes
Since adiabatic processes happen without the exchange of heat, $đQ=0$, it would be tempting to think that $\Delta S^{\mathrm{sys}} = 0$ for every one of them. A reversible adiabatic process is indeed always a transformation at constant entropy (isentropic). However, the opposite is not true, and an irreversible adiabatic transformation is usually associated with a change in entropy. To explain this fact, we need to recall that the definition of entropy includes the heat exchanged at reversible conditions only. Therefore, for irreversible adiabatic processes $\Delta S^{\mathrm{sys}} \neq 0$. The calculation of the entropy change for an irreversible adiabatic transformation requires a substantial effort, and we will not cover it at this stage. The situation for adiabatic processes can be summarized as follows:
\begin{aligned} \text{reversible:} \qquad & \dfrac{đQ_{\mathrm{REV}}}{T} = 0 \longrightarrow \Delta S^{\mathrm{sys}} = 0 \quad \text{(isentropic),}\ \text{irreversible:} \qquad & \dfrac{đQ_{\mathrm{IRR}}}{T} = 0 \longrightarrow \Delta S^{\mathrm{sys}} \neq 0. \ \end{aligned} \nonumber
Entropy in isochoric processes
We can calculate the heat exchanged in a process that happens at constant volume, $Q_V$, using Equation 2.3.2. Since the heat exchanged at those conditions equals the change in internal energy (Equation 3.1.7), and the internal energy is a state function, we can use $Q_V$ regardless of the path (reversible or irreversible). The entropy associated with the process will then be:
$\Delta S^{\mathrm{sys}} = \int_i^f \dfrac{đQ_{\mathrm{REV}}}{T} = \int_i^f nC_V \dfrac{dT}{T}, \nonumber$
which, assuming $C_V$ independent of temperature and solving the integral on the right-hand side, becomes:
$\Delta S^{\mathrm{sys}} \approx n C_V \ln \dfrac{T_f}{T_i}. \nonumber$
Entropy in isobaric processes
Similarly to the constant volume case, we can calculate the heat exchanged in a process that happens at constant pressure, $Q_P$, using Equation 2.3.4. Again, similarly to the previous case, $Q_P$ equals a state function (the enthalpy), and we can use it regardless of the path to calculate the entropy as:
$\Delta S^{\mathrm{sys}} = \int_i^f \dfrac{đQ_{\mathrm{REV}}}{T} = \int_i^f nC_P \dfrac{dT}{T}, \nonumber$
which, assuming $C_P$ independent of temperature and solving the integral on the right-hand side, becomes:
$\Delta S^{\mathrm{sys}} \approx n C_P \ln \dfrac{T_f}{T_i}. \nonumber$
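Both heating formulas are straightforward to evaluate. A minimal sketch, assuming the illustrative monatomic-ideal-gas heat capacities $C_V = \frac{3}{2}R$ and $C_P = \frac{5}{2}R$ and a doubling of the temperature:

```python
import math

R, n = 8.314, 1.0
CV, CP = 1.5 * R, 2.5 * R          # monatomic ideal gas (illustrative)
Ti, Tf = 300.0, 600.0

print(round(n * CV * math.log(Tf / Ti), 2))  # +8.64 J/K at constant V
print(round(n * CP * math.log(Tf / Ti), 2))  # +14.41 J/K at constant P
```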
While the entropy of the system can be broken down into simple cases and calculated using the formulas introduced above, the entropy of the surroundings does not require such a complicated treatment, and it can always be calculated as:
$\Delta S^{\mathrm{surr}} = \dfrac{Q_{\text{surr}}}{T_{\text{surr}}}=\dfrac{-Q_{\text{sys}}}{T_{\text{surr}}}, \nonumber$
or, in differential form:
$d S^{\mathrm{surr}} = \dfrac{đQ_{\text{surr}}}{T_{\text{surr}}}=\dfrac{-đQ_{\text{sys}}}{T_{\text{surr}}}, \nonumber$
where the substitution $Q_{\text{surr}}=-Q_{\text{sys}}$ can be performed regardless of whether the transformation is reversible or not. In other words, the surroundings always absorb heat reversibly. To justify this statement, we need to restrict the analysis of the interaction between the system and the surroundings to just the vicinity of the system itself. Outside of this restricted region, the rest of the universe is so vast that it remains untouched by anything happening inside the system.$^1$ To facilitate our comprehension, we might consider a system composed of a beaker on a workbench. We can then consider the room that the beaker is in as the immediate surroundings. To all effects, the beaker + room combination behaves as a system isolated from the rest of the universe. The room is obviously much larger than the beaker itself, and therefore any energy produced in the system will have a minimal effect on the parameters of the room. For example, an exothermic chemical reaction occurring in the beaker will not substantially affect the overall temperature of the room. When we study our reaction, $T_{\text{surr}}$ will be constant, and the transfer of heat from the reaction to the surroundings will happen at reversible conditions.
Exercise $1$
Calculate the changes in entropy of the universe for the process of 1 mol of supercooled water freezing at –10°C, knowing the following data: $\Delta_{\mathrm{fus}}H = 6 \; \text{kJ/mol}$, $C_P^{\mathrm{H}_2 \mathrm{O}_{(l)}}=76 \; \text{J/(mol K)}$, $C_P^{\mathrm{H}_2 \mathrm{O}_{(s)}}=38 \; \text{J/(mol K)}$, and assuming both $C_P$ values to be independent of temperature.
Answer
$\Delta S^{\mathrm{sys}}$ for the process under consideration can be calculated using the following cycle:
\begin{aligned} \mathrm{H}_2 \mathrm{O}_{(l)} & \quad \xrightarrow{\quad \Delta S_{\text{sys}} \quad} \quad \mathrm{H}_2 \mathrm{O}_{(s)} \qquad \quad T=263\;K\ \scriptstyle{\Delta S_1} \; \bigg\downarrow \quad & \qquad \qquad \qquad \qquad \scriptstyle{\bigg\uparrow \; \Delta S_3} \ \mathrm{H}_2 \mathrm{O}_{(l)} & \quad \xrightarrow{\quad \Delta S_2 \qquad} \quad \mathrm{H}_2\mathrm{O}_{(s)} \qquad \; T=273\;K\ \ \Delta S^{\text{sys}} & = \Delta S_1 + \Delta S_2 + \Delta S_3 \end{aligned}\label{7.2.3}
$\Delta S_1$ and $\Delta S_3$ are the isobaric heating and cooling processes of liquid and solid water, respectively, and can be calculated filling the given data into Equation 7.1.12. $\Delta S_2$ is a phase change (isothermal process) and can be calculated translating Equation 7.1.6 to the freezing transformation. Overall:
\begin{aligned} \Delta S^{\text{sys}} & = \int_{263}^{273} \dfrac{C_P^{\mathrm{H}_2 \mathrm{O}_{(l)}}}{T}dT+\dfrac{-\Delta_{\mathrm{fus}}H}{273}+\int_{273}^{263} \dfrac{C_P^{\mathrm{H}_2 \mathrm{O}_{(s)}}}{T}dT \ & = 76 \ln \dfrac{273}{263} - \dfrac{6 \times 10^3}{273} + 38 \ln \dfrac{263}{273}= -20.6 \; \text{J/K}. \end{aligned} \label{7.2.4}
Don’t be confused by the fact that $\Delta S^{\text{sys}}$ is negative. This is not the entropy of the universe! Hence it tells nothing about spontaneity! We can now calculate $\Delta S^{\text{surr}}$ from $Q_{\text{sys}}$, noting that we can calculate the enthalpy around the same cycle in Equation \ref{7.2.3}. To do that, we already have $\Delta_{\mathrm{fus}}H$ from the given data, and we can calculate $\Delta H_1$ and $\Delta H_3$ using Equation 2.3.4.
\begin{aligned} Q^{\text{sys}} & = \Delta H = \int_{263}^{273} C_P^{\mathrm{H}_2 \mathrm{O}_{(l)}} dT + (-\Delta_{\mathrm{fus}}H) + \int_{273}^{263} C_P^{\mathrm{H}_2 \mathrm{O}_{(s)}}dT \ & = 76 \times 10^{-3} (273-263) - 6 + 38 \times 10^{-3} (263-273) \ &= -5.6 \; \text{kJ}. \ \ \Delta S^{\text{surr}} & = \dfrac{-Q_{\text{sys}}}{T}=\dfrac{5.6 \times 10^3}{263} = + 21.3 \; \text{J/K}. \ \end{aligned} \label{7.2.5}
Bringing \ref{7.2.3} and \ref{7.2.5} results together, we obtain:
$\Delta S^{\text{universe}}=\Delta S^{\text{sys}} + \Delta S^{\text{surr}} = -20.6+21.3=+0.7 \; \text{J/K}. \nonumber$
Since the entropy changes in the universe are positive, the process is spontaneous, as expected.
1. Even if we think of the most energetic event we could imagine happening here on Earth—such as the explosion of an atomic bomb or the impact of a meteorite from outer space—such an event would not modify the average temperature of the universe by the slightest degree.
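The entire exercise can be reproduced with a short script. This is a sketch that keeps full precision throughout (the small differences from the text’s $+21.3$ and $+0.7$ J/K come from the text rounding $Q^{\text{sys}}$ to $-5.6$ kJ before dividing):

```python
import math

CP_l, CP_s = 76.0, 38.0      # J/(mol K), liquid and solid water
dfusH = 6.0e3                # J/mol
T1, T2 = 263.0, 273.0        # K

dS_sys = (CP_l * math.log(T2 / T1)     # heat liquid 263 -> 273 K
          - dfusH / T2                 # freeze at 273 K
          + CP_s * math.log(T1 / T2))  # cool solid 273 -> 263 K

Q_sys = CP_l * (T2 - T1) - dfusH + CP_s * (T1 - T2)  # J
dS_surr = -Q_sys / T1

print(round(dS_sys, 1), round(dS_surr, 1), round(dS_sys + dS_surr, 1))
# -20.6 21.4 0.8  (J/K): positive total, so the process is spontaneous
```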
By replacing Equation 7.2.2 into the differential form of the entropy of the universe, we can write the differential change in the entropy of the system as:
$d S^{\mathrm{sys}} = d S^{\mathrm{universe}} - d S^{\mathrm{surr}} = d S^{\mathrm{universe}} + \dfrac{đQ_{\text{sys}}}{T}. \nonumber$
According to the second law, for any spontaneous process $d S^{\mathrm{universe}}\geq0$, and therefore, replacing it into Equation $1$:
$d S^{\mathrm{sys}} \geq \dfrac{đQ}{T}, \nonumber$
which is the mathematical expression of the so-called Clausius theorem. Eq. $2$ distinguishes between three conditions:
\begin{aligned} d S^{\mathrm{sys}} > \dfrac{đQ}{T} \qquad &\text{spontaneous, irreversible transformation} \ d S^{\mathrm{sys}} = \dfrac{đQ}{T} \qquad &\text{reversible transformation} \ d S^{\mathrm{sys}} < \dfrac{đQ}{T} \qquad &\text{non-spontaneous, irreversible transformation}, \end{aligned} \nonumber
Clausius theorem provides a useful criterion to infer the spontaneity of a process, especially in cases where it’s hard to calculate $\Delta S^{\mathrm{universe}}$. Eq. $2$ requires knowledge of quantities that depend exclusively on the system, such as the difference in entropy, the amount of heat that crosses the boundaries, and the temperature at which the process happens.$^1$ If a process produces more entropy than the amount of heat that crosses the boundaries divided by the absolute temperature, it will be spontaneous. Vice versa, if the entropy produced is smaller than the amount of heat crossing the boundaries divided by the absolute temperature, the process will be non-spontaneous. The equality holds for systems in equilibrium with their surroundings, or for reversible processes, since they happen through a series of equilibrium states. Measuring or calculating these quantities might not always be simple; a small numerical illustration of the criterion is given in the sketch after the footnote below. We will return to the Clausius theorem in the next chapter when we seek more convenient indicators of spontaneity.
1. In cases where the temperature of the system changes throughout the process, $T$ is just the (constant) temperature of its immediate surroundings, $T_{\text{surr}}$, as explained in section 7.2.
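As a minimal illustration of the criterion, the following sketch classifies a finite transformation by comparing $\Delta S^{\text{sys}}$ with $Q/T$; the numbers reuse the supercooled-water example of section 7.2.

```python
def classify(dS_sys, Q, T, tol=1e-9):
    """Clausius-theorem verdict from dS_sys (J/K), Q (J), and T (K)."""
    excess = dS_sys - Q / T
    if excess > tol:
        return "spontaneous, irreversible"
    if excess < -tol:
        return "non-spontaneous, irreversible"
    return "reversible (equilibrium)"

# Supercooled water freezing at 263 K: dS_sys = -20.6 J/K, Q = -5620 J.
print(classify(-20.6, -5620.0, 263.0))  # spontaneous, irreversible
```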
7.04: The Third Law of Thermodynamics
In chapter 4, we have discussed how to calculate reaction enthalpies for any reaction, given the formation enthalpies of reactants and products. In this section, we will try to do the same for reaction entropies. In this case, however, our task is simplified by a fundamental law of thermodynamics, introduced by Walther Hermann Nernst (1864–1941) in 1906.$^1$ The statement that was initially known as Nernst’s theorem is now officially recognized as the third fundamental law of thermodynamics, and it has the following definition:
Definition: Third Law of Thermodynamics
The entropy of a perfectly ordered, pure, crystalline substance is zero at $T=0 \; \text{K}$.
This law sets an unambiguous zero of the entropy scale, similar to what happens with absolute zero in the temperature scale. The absolute value of the entropy of every substance can then be calculated in reference to this unambiguous zero. As such, absolute entropies are always positive. This is in stark contrast to what happened for the enthalpy. An unambiguous zero of the enthalpy scale is lacking, and standard formation enthalpies (which might be negative) must be agreed upon to calculate relative differences.
In simpler terms, given a substance $i$, we are not able to measure absolute values of its enthalpy $H_i$ (and we must resort to known enthalpy differences, such as $\Delta_{\mathrm{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}$ at standard pressure). At the same time, for entropy, we can measure $S_i$ thanks to the third law, and we usually report them as $S_i^{-\kern-6pt{\ominus}\kern-6pt-}$. A comprehensive list of standard entropies of inorganic and organic compounds is reported in appendix 16. Reaction entropies can be calculated from the tabulated standard entropies as differences between products and reactants, using:
$\Delta_{\text{rxn}} S^{-\kern-6pt{\ominus}\kern-6pt-}= \sum_i \nu_i S_i^{-\kern-6pt{\ominus}\kern-6pt-}, \nonumber$
with $\nu_i$ being the usual stoichiometric coefficients with their signs given in Definition: Signs of the Stoichiometric Coefficients.
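For example, here is a minimal sketch of this formula applied to methane combustion, using rounded literature standard entropies (illustrative values, not taken from the appendix):

```python
# Rounded literature standard entropies, J/(mol K) at 298 K (illustrative).
S0 = {"CH4(g)": 186.0, "O2(g)": 205.0, "CO2(g)": 214.0, "H2O(l)": 70.0}
nu = {"CH4(g)": -1, "O2(g)": -2, "CO2(g)": +1, "H2O(l)": +2}

drxnS = sum(nu[s] * S0[s] for s in nu)
print(drxnS)  # -242.0 J/(mol K): entropy drops as the moles of gas decrease
```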
The careful wording in the definition of the third law (Definition: Third Law of Thermodynamics) allows for the fact that some crystals might form with defects (i.e., not as perfectly ordered crystals). In this case, a residual entropy will be present even at $T=0 \; \text{K}$. However, this residual entropy can be removed, at least in theory, by forcing the substance into a perfectly ordered crystal.$^2$
An interesting corollary of the third law states that it is impossible to find a procedure that reduces the temperature of a substance to $T=0 \; \text{K}$ in a finite number of steps.
1. Walther Nernst was awarded the 1920 Nobel Prize in Chemistry for his work in thermochemistry.︎
2. A procedure that—in practice—might be extremely difficult to achieve.
• 8.1: Fundamental Equation of Thermodynamics
Let’s summarize some of the results from the first and second law of thermodynamics that we have seen so far.
• 8.2: Thermodynamic Potentials
Starting from the fundamental equation, we can define new thermodynamic state functions that are more convenient to use under certain specific conditions. The new functions are determined by using a mathematical procedure called the Legendre transformation.
• 8.3: Free Energies
The Legendre transformation procedure translates all information contained in the original function to the new one. Therefore, H(S,P,{ni}), A(T,V,{ni}), and G(T,P,{ni}) all contain the same information that is in U(S,V,{ni}).
• 8.4: Maxwell Relations
Let’s consider the fundamental equations for the thermodynamic potentials that we have derived in section 8.1.
08: Thermodynamic Potentials
Let’s summarize some of the results from the first and second law of thermodynamics that we have seen so far. For reversible processes in closed systems:
\begin{aligned} \text{From 1}^{\text{st}} \text{ Law:} \qquad \quad & dU = đQ_{\mathrm{REV}}-PdV \ \text{From The Definition of Entropy:} \qquad \quad & dS = \dfrac{đQ_{\mathrm{REV}}}{T} \rightarrow đQ_{\mathrm{REV}} = TdS \ \ \Rightarrow \quad & dU = TdS - PdV. \end{aligned} \label{8.1.1}
Equation \ref{8.1.1} is called the fundamental equation of thermodynamics since it combines the first and the second laws. Even though we started the derivation above by restricting to reversible transformations only, if we look carefully at Equation \ref{8.1.1}, we notice that it exclusively involves state functions. As such, it applies to both reversible and irreversible processes. The fundamental equation, however, remains constrained to closed systems. This fact restricts its utility for chemistry, since when a chemical reaction happens, the mass in the system will change, and the system is no longer closed.
At the end of the 19th century, Josiah Willard Gibbs (1839–1903) proposed an important addition to the fundamental equation to account for chemical reactions. Gibbs was able to do so by introducing a new quantity that he called the chemical potential:
Definition: Chemical Potential
The chemical potential is the amount of energy absorbed or released due to a change of the particle number of a given chemical species.
The chemical potential of species $i$ is usually abbreviated as $\mu_i$, and it enters the fundamental equation of thermodynamics as:
$dU = TdS-PdV+\sum_i\mu_i dn_i, \nonumber$
where $dn_i$ is the differential change in the number of moles of substance $i$, and the summation extends over all chemical species in the system.
According to the fundamental equation, the internal energy of a system is a function of the three variables entropy, $S$, volume, $V$, and the numbers of moles $\{n_i\}$.$^1$ Because of their importance in determining the internal energy, these three variables are crucial in thermodynamics. Under several circumstances, however, they might not be the most convenient variables to use.$^2$ To emphasize the important connections given by the fundamental equation, we can use the notation $U(S,V,\{n_i\})$ and we can term $S$, $V$, and $\{n_i\}$ the natural variables of the energy.
1. In the case of the numbers of moles we include them in curly brackets to indicate that there might be more than one, depending on how many species undergo chemical reactions.︎
2. For example, we don’t always have a simple way to calculate or to measure the entropy.︎
Starting from the fundamental equation, we can define new thermodynamic state functions that are more convenient to use under certain specific conditions. The new functions are determined by using a mathematical procedure called the Legendre transformation. A Legendre transformation is a linear change in variables that brings an initial mathematical function to a new function, obtained by subtracting one or more products of conjugate variables.$^1$
Taking the internal energy as defined in Equation 8.1.1, we can perform such a procedure by subtracting products of the following pairs of conjugate variables: $T$ and $S$, or $-P$ and $V$. This procedure aims to define new state functions that depend on more convenient natural variables.$^2$ The new functions are called “thermodynamic potential energies,” or simply thermodynamic potentials.$^3$ An example of this procedure is given by the definition of enthalpy that we have already seen in section 3.1.4. If we take the internal energy and subtract the product of the two conjugate variables $-P$ and $V$, we obtain a new state function called enthalpy, as we did in Equation 3.1.9. Taking the differential of this definition, we obtain:
$dH = dU +VdP +PdV, \label{8.2.1}$
and using the fundamental equation, Equation 8.1.2, to replace $dU$, we obtain:
\begin{aligned} dH & = TdS -PdV +\sum_i\mu_i dn_i +VdP +PdV \ & = TdS +VdP +\sum_i\mu_i dn_i. \label{8.2.2}\end{aligned}
which is the fundamental equation for enthalpy. The natural variables of the enthalpy are $S$, $P$, and $\{n_i\}$. The Legendre transformation has allowed us to go from $U(S,V,\{n_i\})$ to $H(S,P,\{n_i\})$ by replacing the dependence on the extensive variable, $V$, with an intensive one, $P$.
Following the same procedure, we can perform another Legendre transformation to replace the entropy with a more convenient intensive variable such as the temperature. This can be done by defining a new function called the Helmholtz free energy, $A$, as:
$A = U -TS \label{8.2.3}$
which, taking the differential and using the fundamental equation (Equation 8.1.2), becomes:
\begin{aligned} dA &= dU -SdT -TdS = TdS - PdV +\sum_i \mu_i dn_i -SdT -TdS \ &= -SdT -PdV +\sum_i \mu_i dn_i. \end{aligned} \label{8.2.4}
The Helmholtz free energy is named after Hermann Ludwig Ferdinand von Helmholtz (1821—1894), and its natural variables are temperature, volume, and the number of moles.
Finally, suppose we perform a Legendre transformation on the internal energy to replace both the entropy and the volume with intensive variables. In that case, we can define a new function called the Gibbs free energy, $G$, as:
$G = U -TS +PV \label{8.2.5}$
which, taking again the differential and using Equation 8.1.2, becomes:
\begin{aligned} dG &= dU -SdT -TdS +VdP +PdV \ &= TdS - PdV +\sum_i\mu_i dn_i -SdT -TdS +VdP +PdV \ &= VdP -SdT +\sum_i\mu_i dn_i. \end{aligned} \label{8.2.6}
The Gibbs free energy is named after Willard Gibbs himself, and its natural variables are temperature, pressure, and number of moles.
A summary of the four thermodynamic potentials is given in the following table.
Table $1$

| Name | Symbol | Fundamental Equation | Natural Variables |
|---|---|---|---|
| Energy | $U$ | $dU=TdS-PdV+\sum_i\mu_i dn_i$ | $S,V,\{n_i\}$ |
| Enthalpy | $H$ | $dH=TdS+VdP+\sum_i\mu_i dn_i$ | $S,P,\{n_i\}$ |
| Helmholtz Free Energy | $A$ | $dA=-SdT-PdV+\sum_i\mu_i dn_i$ | $T,V,\{n_i\}$ |
| Gibbs Free Energy | $G$ | $dG=VdP-SdT+\sum_i\mu_i dn_i$ | $T,P,\{n_i\}$ |
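Because the Legendre transformations above are purely algebraic, the entries of Table $1$ can be checked mechanically. The following sketch (an illustration added here, using sympy and a single chemical species for brevity; it is not part of the original derivation) substitutes the fundamental equation for $dU$ into the differentials of $H$ and $G$:

```python
# A minimal sympy sketch: verify the fundamental equations for H and G
# by substituting dU = T dS - P dV + mu dn (single-species case for brevity).
import sympy as sp

T, S, P, V, mu = sp.symbols('T S P V mu')
dS, dV, dT, dP, dn = sp.symbols('dS dV dT dP dn')

dU = T*dS - P*dV + mu*dn            # fundamental equation for U

# Enthalpy: H = U + PV  =>  dH = dU + P dV + V dP
dH = sp.expand(dU + P*dV + V*dP)
print(dH)                            # -> T*dS + V*dP + mu*dn

# Gibbs free energy: G = U - TS + PV  =>  dG = dU - T dS - S dT + P dV + V dP
dG = sp.expand(dU - T*dS - S*dT + P*dV + V*dP)
print(dG)                            # -> -S*dT + V*dP + mu*dn
```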
The thermodynamic potentials are the analog of the potential energy in classical mechanics. Since the potential energy is interpreted as the capacity to do work, the thermodynamic potentials assume the following interpretations:
• Internal energy ($U$) is the capacity to do work plus the capacity to release heat.
• Enthalpy ($H$) is the capacity to do non-mechanical work plus the capacity to release heat.
• Gibbs free energy ($G$) is the capacity to do non-mechanical work.
• Helmholtz free energy ($A$) is the capacity to do mechanical plus non-mechanical work.4
Here, non-mechanical work is defined as any type of work that is not expansion or compression ($PV$–work). A typical example of non-mechanical work is electrical work.
1. The mathematical condition that is fulfilled when performing a Legendre transformation is that the first derivatives of the original function and its transformation are inverse functions of each other.
2. The rigorous mathematical definition of conjugate variables is unimportant at this stage. However, we can relate the variables in a pair with basic physics by noticing how the first variable in a pair is always intensive ($T$ and $-P$), while the second one is always extensive ($S$ and $V$). The intensive variables represent thermodynamic driving forces (as compared with mechanical forces in classical mechanics), while the extensive ones are the thermodynamic displacements (as compared with spatial displacements in classical mechanics). Similarly to classical mechanics, the product of two conjugate variables in a pair yields an energy. The minus sign in front of $P$ is explained by the fact that an increase in the force should always correspond to an increase in the displacement (while $P$ and $V$ are inversely related).
3. Even if we introduced both concepts in the same chapter, it is important to never confuse the thermodynamic potentials—which are potential energy functions—with the chemical potential—which was introduced by Gibbs to study heat in chemical reactions.
4. For the mathematically inclined, an entertaining method to summarize the same thermodynamic potentials is the thermodynamic square.
The Legendre transformation procedure translates all information contained in the original function to the new one. Therefore, $H(S,P,\{n_i\})$, $A(T,V,\{n_i\})$, and $G(T,P,\{n_i\})$ all contain the same information that is in $U(S,V,\{n_i\})$. However, the new functions depend on different natural variables, and they are useful at different conditions. For example, when we want to study chemical changes, we are interested in studying the term $\sum_i\mu_i dn_i$ that appears in each thermodynamic potential. To do so, we need to isolate the chemical term by keeping all other natural variables constant. For example, changes in the chemical term will correspond to changes in the internal energy at constant $S$ and constant $V$:
$dU(S,V,\{n_i\}) = \sum_i\mu_i dn_i \quad \text{if} \quad dS=dV=0. \label{8.3.1}$
Similarly:
\begin{aligned} dH(S,P,\{n_i\}) = \sum_i\mu_i dn_i \quad \text{if} \quad dS=dP=0, \ dA(T,V,\{n_i\}) = \sum_i\mu_i dn_i \quad \text{if} \quad dT=dV=0, \ dG(T,P,\{n_i\}) = \sum_i\mu_i dn_i \quad \text{if} \quad dT=dP=0. \end{aligned}\label{8.3.2}
The latter two cases are particularly interesting since most of chemistry happens at either constant volume,1 or constant pressure.2 Since $dS=0$ is not a requirement for both free energies to describe chemical changes, we can apply either of them to study non-isentropic processes. If a process is not isentropic, it either increases the entropy of the universe, or it decreases it. Therefore—according to the second law—it is either spontaneous or not. Using this concept in conjunction with Clausius theorem, we can devise new criteria for inferring the spontaneity of a process that depends exclusively on the free energies.
Recalling Clausius theorem:
$d S^{\mathrm{sys}} \geq \dfrac{đQ}{T_{\text{surr}}} \quad \longrightarrow \quad TdS \geq đQ, \label{8.3.3}$
we can consider the two cases: constant $V$ ($đQ_V=dU$, left), and constant $P$ ($đQ_P=dH$, right):
\begin{aligned} \text{constant} & \; V: & \qquad \qquad & \qquad \qquad & \text{constant} & \; P: \ \ TdS & \geq dU & & & TdS & \geq dH \ \ TdS -dU & \geq 0 & & & TdS -dH & \geq 0 \ \end{aligned} \label{8.3.4}
we can then simplify using the definitions of the free energies, eqs. 8.2.3 and 8.2.5, taken at constant $T$:
\begin{aligned} \text{constant} & \; T,V: & \qquad & \qquad & \text{constant} & \; T,P: \ \ (dA)_{T,V} &= dU -TdS & & & (dG)_{T,P} &= dH - TdS \ \ dU = (dA)_{T,V} &+TdS & & & dH = (dG)_{T,P} &+TdS \end{aligned} \label{8.3.5}
and by merging $dU$ and $dH$ from eqs. \ref{8.3.5} into Clausius theorem expressed using eqs. \ref{8.3.4}, we obtain:
\begin{aligned} TdS -(dA)_{T,V} &- TdS \geq 0 & \qquad & \qquad & TdS -(dG)_{T,P} &- TdS \geq 0 \ \ (dA)_{T,V} & \leq 0 & \qquad & \qquad & (dG)_{T,P} & \leq 0. \ \end{aligned} \label{8.3.6}
These equations represent the conditions on $dA$ and $dG$ for inferring the spontaneity of a process, and can be summarized as follows:
Definition: Spontaneous Process
During a spontaneous process at constant temperature and volume, the Helmholtz free energy will decrease $(dA<0)$, until it reaches a stationary point at which the system will be at equilibrium $(dA=0)$.
During a spontaneous process at constant temperature and pressure, the Gibbs free energy will decrease $(dG<0)$, until it reaches a stationary point at which the system will be at equilibrium $(dG=0)$.
1. for example, several industrial processes in chemical plants.
2. for example, most processes in a chemistry lab.
Let’s consider the fundamental equations for the thermodynamic potentials that we have derived in section 8.1:
\begin{aligned} dU(S,V,\{n_i\}) &= \enspace T dS -P dV + \sum_i \mu_i dn_i \ dH(S,P,\{n_i\}) &= \enspace T dS + V dP + \sum_i \mu_i dn_i \ dA(T,V,\{n_i\}) &= -S dT -P dV + \sum_i \mu_i dn_i \ dG(T,P,\{n_i\}) &= -S dT + V dP + \sum_i \mu_i dn_i\;. \end{aligned} \nonumber
From the knowledge of the natural variables of each potential, we could reconstruct these formulas by using the total differential formula:
\begin{aligned} dU &= \underbrace{\left(\dfrac{\partial U}{\partial S} \right)_{V,\{n_i\}}}_{T} dS + \underbrace{\left(\dfrac{\partial U}{\partial V} \right)_{S,\{n_i\}}}_{-P} dV + \sum_i \underbrace{\left(\dfrac{\partial U}{\partial n_i} \right)_{S,V,\{n_{j \neq i}\}}}_{\mu_i} dn_i \ dH &= \underbrace{\left(\dfrac{\partial H}{\partial S} \right)_{P,\{n_i\}}}_{T} dS + \underbrace{\left(\dfrac{\partial H}{\partial P} \right)_{S,\{n_i\}}}_{V} dP + \sum_i \underbrace{\left(\dfrac{\partial H}{\partial n_i} \right)_{S,P,\{n_{j \neq i}\}}}_{\mu_i} dn_i \ dA &= \underbrace{\left(\dfrac{\partial A}{\partial T} \right)_{V,\{n_i\}}}_{-S} dT + \underbrace{\left(\dfrac{\partial A}{\partial V} \right)_{T,\{n_i\}}}_{-P} dV + \sum_i \underbrace{\left(\dfrac{\partial A}{\partial n_i} \right)_{T,V,\{n_{j \neq i}\}}}_{\mu_i} dn_i \ dG &= \underbrace{\left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}}}_{-S} dT + \underbrace{\left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}}}_{V} dP + \sum_i \underbrace{\left(\dfrac{\partial G}{\partial n_i} \right)_{T,P,\{n_{j \neq i}\}}}_{\mu_i} dn_i\;, \end{aligned} \nonumber
we can derive the following new definitions:
\begin{aligned} T &= \left(\dfrac{\partial U}{\partial S} \right)_{V,\{n_i\}} = \left(\dfrac{\partial H}{\partial S} \right)_{P,\{n_i\}} \ -P &= \left(\dfrac{\partial U}{\partial V} \right)_{S,\{n_i\}} = \left(\dfrac{\partial A}{\partial V} \right)_{T,\{n_i\}} \ V &= \left(\dfrac{\partial H}{\partial P} \right)_{S,\{n_i\}} = \left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}} \ -S &= \left(\dfrac{\partial A}{\partial T} \right)_{V,\{n_i\}} = \left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}} \ \text{and:} \ \mu_i &= \left(\dfrac{\partial U}{\partial n_i} \right)_{S,V,\{n_{j \neq i}\}} = \left(\dfrac{\partial H}{\partial n_i} \right)_{S,P,\{n_{j \neq i}\}} \ &= \left(\dfrac{\partial A}{\partial n_i} \right)_{T,V,\{n_{j \neq i}\}} = \left(\dfrac{\partial G}{\partial n_i} \right)_{T,P,\{n_{j \neq i}\}}\;. \end{aligned} \nonumber
Since $T$, $P$, $V$, and $S$ are now defined as partial first derivatives of a thermodynamic potential, we can take a second partial derivative with respect to a different variable and rely on Schwarz’s theorem to derive the following relations:
\begin{aligned} \dfrac{\partial^2 U }{\partial S \partial V} &=& +\left(\dfrac{\partial T}{\partial V}\right)_{S,\{n_{j \neq i}\}} &=& -\left(\dfrac{\partial P}{\partial S}\right)_{V,\{n_{j \neq i}\}} \ \dfrac{\partial^2 H }{\partial S \partial P} &=& +\left(\dfrac{\partial T}{\partial P}\right)_{S,\{n_{j \neq i}\}} &=& +\left(\dfrac{\partial V}{\partial S}\right)_{P,\{n_{j \neq i}\}} \ -\dfrac{\partial^2 A }{\partial T \partial V} &=& +\left(\dfrac{\partial S}{\partial V}\right)_{T,\{n_{j \neq i}\}} &=& +\left(\dfrac{\partial P}{\partial T}\right)_{V,\{n_{j \neq i}\}} \ \dfrac{\partial^2 G }{\partial T \partial P} &=& -\left(\dfrac{\partial S}{\partial P}\right)_{T,\{n_{j \neq i}\}} &=& +\left(\dfrac{\partial V}{\partial T}\right)_{P,\{n_{j \neq i}\}} \end{aligned}\label{8.4.4}
The relations in \ref{8.4.4} are called Maxwell relations,1 and are useful in experimental settings to relate quantities that are hard to measure with others that are more intuitive.
Exercise $1$
Derive the last Maxwell relation in Equation \ref{8.4.4}.
Answer
We can start our derivation from the definition of $V$ and $S$ as a partial derivative of $G$:
$V = \left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}} \qquad \text{and:} \qquad -S = \left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}}, \nonumber$
and then take a second partial derivative of each quantity with respect to the second variable:
\begin{aligned} \left(\dfrac{\partial V}{\partial T} \right)_{P,\{n_i\}} &=\dfrac{\partial}{\partial T}\left[ \left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}} \right]_{P,\{n_i\}} \ \ -\left(\dfrac{\partial S}{\partial P} \right)_{T,\{n_i\}} &=\dfrac{\partial}{\partial P}\left[ \left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}} \right]_{T,\{n_i\}} \;. \end{aligned} \nonumber
These two derivatives are mixed partial second derivatives of $G$ with respect to $T$ and $P$, and therefore, according to Schwarz’s theorem, they are equal to each other:
\begin{aligned} \dfrac{\partial}{\partial T}\left[ \left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}} \right]_{P,\{n_i\}} &= \dfrac{\partial}{\partial P}\left[ \left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}} \right]_{T,\{n_i\}}, \ \ \text{hence:} \ \ \left(\dfrac{\partial V}{\partial T} \right)_{P,\{n_i\}} &= -\left(\dfrac{\partial S}{\partial P} \right)_{T,\{n_i\}}, \end{aligned} \nonumber
which is the last of Maxwell relations, as defined in Equation \ref{8.4.4}. This relation is particularly useful because it connects the quantity $\left(\dfrac{\partial S}{\partial P} \right)_{T,\{n_i\}}$—which is impossible to measure in a lab—with the quantity $\left(\dfrac{\partial V}{\partial T} \right)_{P,\{n_i\}}$—which is easier to measure from an experiment that determines isobaric volumetric thermal expansion coefficients.
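As a concrete check of this result, the following sketch (an added illustration, not part of the derivation) verifies the relation for the ideal-gas case, using $V = nRT/P$ and the standard isothermal pressure dependence of the ideal-gas entropy, $S(P) = S_0 - nR \ln (P/P_0)$:

```python
# A small sympy check of the last Maxwell relation for the ideal-gas case:
# (dV/dT)_P should equal -(dS/dP)_T.
import sympy as sp

n, R, T, P, S0, P0 = sp.symbols('n R T P S0 P0', positive=True)

V = n*R*T/P                    # ideal gas volume
S = S0 - n*R*sp.log(P/P0)      # isothermal pressure dependence of S

lhs = sp.diff(V, T)            # (dV/dT) at constant P
rhs = -sp.diff(S, P)           # -(dS/dP) at constant T

print(sp.simplify(lhs - rhs) == 0)   # True: both sides equal nR/P
```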
1. Maxwell relations should not be confused with the Maxwell equations of electromagnetism.
In this chapter, we will concentrate on chemical processes that happen at constant $T$ and constant $P$.1 As such, we will focus our attention on the Gibbs free energy.
1. The majority of chemical reactions in a lab happen at those conditions, and all biological functions happen at those conditions as well.
09: Gibbs Free Energy
Recalling from chapter 8, the definition of $G$ is:
$G = U -TS +PV = H-TS, \nonumber$
which, taking the differential at constant $T$ and $P$, becomes:
$dG = dH \; \overbrace{-SdT}^{=0} -TdS = dH -TdS. \nonumber$
Integrating this expression between the initial and final states of a process results in:
\begin{aligned} \int_i^f dG &= \int_i^f dH -T \int_i^f dS \ \ \Delta G &= \Delta H -T \Delta S \end{aligned} \label{9.1.2}
which is the famous Gibbs equation for $\Delta G$. Using Definition: Spontaneous Process, we can use $\Delta G$ to infer the spontaneity of a chemical process that happens at constant $T$ and $P$ using $\Delta G \leq 0$. If we set ourselves at standard conditions, we can calculate the standard Gibbs free energy of reaction, $\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}$, for any reaction as:
\begin{aligned} \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}&= \Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}-T \Delta_{\text{rxn}} S^{-\kern-6pt{\ominus}\kern-6pt-}\ \ &= \sum_i \nu_i \Delta_{\mathrm{f}} H_i^{-\kern-6pt{\ominus}\kern-6pt-}- T \sum_i \nu_i S_i^{-\kern-6pt{\ominus}\kern-6pt-}, \end{aligned} \nonumber
where $\Delta_{\mathrm{f}} H_i^{-\kern-6pt{\ominus}\kern-6pt-}$ are the standard enthalpies of formation, $S_i^{-\kern-6pt{\ominus}\kern-6pt-}$ are the standard entropies, and $\nu_i$ are the stoichiometric coefficients for every species $i$ involved in the reaction. All these quantities are commonly available, and we have already discussed their usage in chapters 4 and 7, respectively.$^1$
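As a quick numerical illustration of this bookkeeping, here is a minimal sketch; the species and all numerical values below are hypothetical placeholders for illustration only, not tabulated data:

```python
# A hedged sketch of the standard Gibbs free energy of reaction from tabulated
# data: Delta_rxn G = sum(nu_i * Delta_f H_i) - T * sum(nu_i * S_i).
T = 298.15  # K

# (stoichiometric coefficient, Delta_f H / J mol^-1, S / J mol^-1 K^-1);
# reactants enter with negative coefficients. Values are placeholders.
species = [
    (-1, -100.0e3, 200.0),   # hypothetical reactant A
    (-1,  -50.0e3, 150.0),   # hypothetical reactant B
    (+2,  -90.0e3, 180.0),   # hypothetical product C
]

dH = sum(nu * h for nu, h, s in species)   # Delta_rxn H in J/mol
dS = sum(nu * s for nu, h, s in species)   # Delta_rxn S in J/(mol K)
dG = dH - T * dS                           # Gibbs equation

print(f"dH = {dH/1e3:.1f} kJ/mol, dS = {dS:.1f} J/(mol K), dG = {dG/1e3:.1f} kJ/mol")
print("spontaneous at 298 K" if dG < 0 else "non-spontaneous at 298 K")
```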
The following four options are possible for $\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}$ of a chemical reaction:
| $\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}$ | $\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}$ | $\Delta S^{-\kern-6pt{\ominus}\kern-6pt-}$ | Spontaneous? |
|---|---|---|---|
| – | – | + | Always |
| + | + | – | Never |
| –/+ | – | – | Depends on $T$: spontaneous at low $T$ |
| +/– | + | + | Depends on $T$: spontaneous at high $T$ |
Or, in other words:
• Exothermic reactions that increase the entropy are always spontaneous.
• Endothermic reactions that reduce the entropy are always non-spontaneous.
• For the other two cases, the spontaneity of the reaction depends on the temperature:
• Exothermic reactions that reduce the entropy are spontaneous at low $T$.
• Endothermic reactions that increase the entropy are spontaneous at high $T$.
A simple criterion to evaluate the entropic contribution of a reaction is to look at the total number of moles of the reactants and the products (as the sum of the stoichiometric coefficients). If the reaction produces more molecules than it destroys $\left( \left| \sum_\text{products} \nu_i \right| > \left| \sum_\text{reactants} \nu_i \right| \right)$, it will increase the entropy. Vice versa, if the total number of moles in a reaction decreases $\left( \left| \sum_\text{products} \nu_i \right| < \left| \sum_\text{reactants} \nu_i \right| \right)$, the entropy will also decrease.
As we saw in section 8.2, the natural variables of the Gibbs free energy are the temperature, $T$, the pressure, $P$, and chemical composition, as the number of moles $\{n_i\}$. The Gibbs free energy can therefore be expressed using the total differential as (see also, last formula in Equation 8.4.2):
$dG(T,P,\{n_i\}) = \mkern-18mu \underbrace{\left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}}}_{\text{temperature dependence}} \mkern-36mu dT + \underbrace{\left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}}}_{\text{pressure dependence}} \mkern-36mu dP + \sum_i \underbrace{\left(\dfrac{\partial G}{\partial n_i} \right)_{T,P,\{n_{j \neq i}\}}}_{\text{composition dependence}} \mkern-36mu dn_i. \label{9.1.5}$
If we know the behavior of $G$ as we vary each of the three natural variables independently of the other two, we can reconstruct the total differential $dG$. Each of these terms represents a coefficient in Equation \ref{9.1.5}; the coefficients are given in Equation 8.4.3.
1. It is not uncommon to see values of $\Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}$ tabulated alongside $\Delta_{\mathrm{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}$ and $S_i^{-\kern-6pt{\ominus}\kern-6pt-}$, which simplifies the calculation even further. In fact, a comprehensive list of standard Gibbs free energies of formation of inorganic and organic compounds is reported in the appendix of this book 16. For cases where $\Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}$ are not reported, they can always be calculated from their constituents.
$\left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}}=-S \nonumber$
Let’s analyze the first coefficient that gives the dependence of the Gibbs energy on temperature. Since this coefficient is equal to $-S$ and the entropy is always positive, $G$ must decrease when $T$ increases at constant $P$ and $\{n_i\}$, and vice versa.
If we use this coefficient to replace $-S$ in the Gibbs equation, Equation 9.1.2, we obtain:
$\Delta G = \Delta H + T \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}}, \label{9.2.1}$
and since Equation \ref{9.2.1} includes both $\Delta G$ and its partial derivative with respect to temperature $\left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}}$, we need to rearrange it to include the temperature derivative only. To do so, we can start by evaluating the partial derivative of $\left( \dfrac{\Delta G}{T} \right)$ using the quotient rule:
$\left[ \dfrac{\partial\left( \dfrac{\Delta G}{T} \right)}{\partial T} \right]_{P,\{n_i\}} = \dfrac{1}{T} \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}} - \dfrac{1}{T^2}\Delta G, \label{9.2.2}$
which, replacing $\Delta G$ from Equation \ref{9.2.1} into Equation \ref{9.2.2}, becomes:
\begin{aligned} \left[ \dfrac{\partial\left( \dfrac{\Delta G}{T} \right)}{\partial T} \right]_{P,\{n_i\}} &= \dfrac{1}{T} \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}} - \dfrac{1}{T^2} \left[ \Delta H + T \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}} \right] \ &= \dfrac{1}{T} \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}}- \dfrac{\Delta H}{T^2}-\dfrac{1}{T} \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}}, \end{aligned} \label{9.2.3}
which simplifies to:
\begin{aligned} \left[ \dfrac{\partial\left( \dfrac{\Delta G}{T} \right)}{\partial T} \right]_{P,\{n_i\}} &= - \dfrac{\Delta H}{T^2}. \end{aligned} \label{9.2.4}
Equation \ref{9.2.4} is known as the Gibbs–Helmholtz equation, and is useful in its integrated form to calculate the Gibbs free energy for a chemical reaction at any temperature $T$ by knowing just the standard Gibbs free energy of formation and the standard enthalpy of formation for the individual species, which are usually reported at $T=298\;\text{K}$. The integration is performed as follows:
\begin{aligned} \int_{T_i=298 \;\text{K}}^{T_f=T} \dfrac{\partial\left( \dfrac{\Delta_{\text{rxn}} G}{T} \right)}{\partial T} dT &= - \int_{T_i=298 \;\text{K}}^{T_f=T} \dfrac{\Delta_{\text{rxn}} H}{T^2} dT \ \ \dfrac{\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}(T)}{T} &= \dfrac{\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}}{298 \;\text{K}} + \Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}\left( \dfrac{1}{T} -\dfrac{1}{298 \;\text{K}} \right), \end{aligned} \label{9.2.5}
giving the integrated Gibbs–Helmholtz equation:
$\dfrac{\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}(T)}{T} = \dfrac{\sum_i \nu_i \Delta_{\text{f}} G_i^{-\kern-6pt{\ominus}\kern-6pt-}}{298 \;\text{K}} + \sum_i \nu_i \Delta_{\text{f}} H_i^{-\kern-6pt{\ominus}\kern-6pt-}\left( \dfrac{1}{T} -\dfrac{1}{298 \;\text{K}} \right) \label{9.2.6}$
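A minimal sketch of this integrated equation follows; it assumes a temperature-independent $\Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}$, the function name is illustrative, and, purely as an example, it reuses the $\mathrm{Cl}_{2(g)}$ dissociation data that appear later in the text:

```python
# A minimal sketch of the integrated Gibbs-Helmholtz equation (assuming
# Delta_rxn H is constant over the temperature range):
#   Delta_G(T)/T = Delta_G(298 K)/298 K + Delta_H * (1/T - 1/298 K)
def gibbs_at_T(dG_298, dH_298, T, T_ref=298.0):
    """Extrapolate a standard reaction Gibbs energy (J/mol) from T_ref to T (K)."""
    return T * (dG_298 / T_ref + dH_298 * (1.0 / T - 1.0 / T_ref))

# Illustrative numbers only: the Cl2 dissociation data used later in the text,
# Delta_rxn G = 210.6 kJ/mol and Delta_rxn H = 242.6 kJ/mol at 298 K.
print(gibbs_at_T(210.6e3, 242.6e3, 2298.0) / 1e3, "kJ/mol")
# ~ -4.2 kJ/mol: the dissociation becomes spontaneous at high temperature
```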
9.03: Pressure Dependence of G
$\left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}}=V \nonumber$
We can now turn the attention to the second coefficient that gives how the Gibbs free energy changes when the pressure change. To do this, we put the system at constant $T$ and $\{n_i\}$, and then we consider infinitesimal variations of $G$. From Equation 8.2.6:
$dG = VdP -SdT +\sum_i\mu_i dn_i \quad \xrightarrow{\text{constant}\; T,\{n_i\}} \quad dG = VdP, \label{9.3.1}$
which is the differential equation that we were looking for. To study changes of $G$ for macroscopic changes in $P$, we can integrate Equation $\ref{9.3.1}$ between initial and final pressures, and considering an ideal gas, we obtain:
\begin{aligned} \int_i^f dG &= \int_i^f VdP \ \Delta G &= nRT \int_i^f \dfrac{dP}{P} = nRT \ln \dfrac{P_f}{P_i}. \end{aligned} \label{9.3.2}
If we take $P_i = P^{-\kern-6pt{\ominus}\kern-6pt-}= 1 \, \text{bar}$, we can rewrite Equation \ref{9.3.2} as:
$G = G^{-\kern-6pt{\ominus}\kern-6pt-}+ nRT \ln \dfrac{P_f}{P^{-\kern-6pt{\ominus}\kern-6pt-}}, \label{9.3.3}$
which is useful to convert standard Gibbs free energies of formation at pressures different than standard pressure, using:
$\Delta_{\text{f}} G = \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}+ nRT \ln \dfrac{P_f}{\underbrace{P^{-\kern-6pt{\ominus}\kern-6pt-}}_{=1 \; \text{bar}}} = \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}+ nRT \ln P_f \label{9.3.4}$
For liquids and solids, $V$ is essentially independent of $P$ (liquids and solids are incompressible), and Equation \ref{9.3.1} can be integrated as:
$\Delta G = \int_i^f VdP = V \int_i^f dP = V \Delta P.\label{9.3.5}$
The plots in Figure $1$ show the remarkable difference in the behaviors of $\Delta_{\text{f}} G$ for a gas and for a liquid, as obtained from eqs. \ref{9.3.2} and \ref{9.3.5}.
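The contrast can also be seen numerically. The following sketch (an added illustration with assumed values; the molar volume is roughly that of a liquid like water) evaluates both expressions at a few final pressures:

```python
# A sketch contrasting the pressure dependence of Delta G for an ideal gas
# (logarithmic, eq. 9.3.2) and an incompressible liquid (linear, eq. 9.3.5).
import math

R = 8.314          # J/(mol K)
T = 298.15         # K
n = 1.0            # mol
V_liq = 18.0e-6    # m^3/mol, roughly the molar volume of liquid water

for P_f in (1.0, 10.0, 100.0):                 # final pressure in bar
    dG_gas = n * R * T * math.log(P_f / 1.0)   # from P_i = 1 bar
    dG_liq = V_liq * (P_f - 1.0) * 1e5         # bar -> Pa
    print(f"P = {P_f:6.1f} bar: gas {dG_gas/1e3:8.2f} kJ/mol, "
          f"liquid {dG_liq/1e3:8.4f} kJ/mol")
```

At 100 bar the gas term is tens of kJ/mol while the liquid term is a fraction of a kJ/mol, which is the difference the figure illustrates.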
$\left(\dfrac{\partial G}{\partial n_i} \right)_{T,P}=\mu_i \nonumber$
The third and final coefficient gives the chemical potential as the dependence of $G$ on the chemical composition at constant $T$ and $P$. Similarly to the previous cases, we can take the definition of the coefficient and integrate it directly between the initial and final stages of a reaction. If we consider a reaction product, pure substance $i$, at the beginning of the reaction there will be no moles of it $n_i=0$, and consequently $G=0$.1 We can then integrate the left-hand side between zero and the number of moles of product at the end of the reaction, $n$, and the right-hand side between zero and the Gibbs free energy of the product, $G$. The integral will become:
$\int_0^G d G = \int_0^n \mu^* dn, \label{9.4.1}$
where $\mu^*$ indicates the chemical potential of a pure substance, which is independent of the number of moles by definition. As such, Equation \ref{9.4.1} becomes:
$\int_0^G d G = \mu^* \int_0^n dn \quad \rightarrow \quad G = \mu^* n \quad \rightarrow \quad \mu^* = \dfrac{G}{n}, \label{9.4.2}$
which gives a straightforward interpretation of the chemical potential of a pure substance as the molar Gibbs free energy.
We can start from Equation 9.3.3 and write for a pure substance that is brought from $P_i=P^{-\kern-6pt{\ominus}\kern-6pt-}$ to $P_f=P$ at constant $T$:
$G - G^{-\kern-6pt{\ominus}\kern-6pt-}= nRT \ln \dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}}, \label{9.4.3}$
dividing both sides by $n$, we obtain:
$\dfrac{G}{n} - \dfrac{G^{-\kern-6pt{\ominus}\kern-6pt-}}{n} = RT \ln \dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}}, \label{9.4.4}$
which, for a pure substance at $P^{-\kern-6pt{\ominus}\kern-6pt-}= 1 \;\text{bar}$, becomes:
$\mu^* - \mu^{-\kern-6pt{\ominus}\kern-6pt-}= RT \ln P. \label{9.4.5}$
Notice that, while we use the pressure of the gas inside the logarithm in Equation \ref{9.4.5}, the quantity is formally divided by the standard pressure $P^{-\kern-6pt{\ominus}\kern-6pt-}= 1 \;\text{bar}$, and therefore it is a dimensionless quantity, as it should be. For simplicity of notation, however, we will omit the division by $P^{-\kern-6pt{\ominus}\kern-6pt-}$ in the remainder of this textbook, especially wherever it does not create confusion. Let’s now consider a mixture of ideal gases, and let’s try to find out whether the chemical potential of a pure gas inside the mixture, $\mu_i^{\text{mixture}}$, is the same as its chemical potential outside the mixture, $\mu^*$. To do so, we can use Equation \ref{9.4.5} and replace the pressure $P$ with the partial pressure $P_i$:
$\mu_i^{\text{mixture}} = \mu_i^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln P_i, \label{9.4.6}$
where the partial pressure $P_i$ can be obtained from the simple relation that is known as Dalton’s Law:
$P_i = y_i P, \label{9.4.7}$
with $y_i$ being the concentration of gas $i$ measured as a mole fraction in the gas phase $y_i=\dfrac{n_i}{n_{\text{TOT}}} < 1$. Replacing Equation \ref{9.4.7} into Equation \ref{9.4.6}, we obtain:
\begin{aligned} \mu_i^{\text{mixture}} &= \mu_i^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln (y_i P) \ &= \underbrace{\mu_i^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln P}_{\mu_i^*} + RT \ln y_i, \end{aligned} \label{9.4.8}
which then reduces to the following equation:
$\mu_i^{\text{mixture}} = \mu_i^* + RT \ln y_i. \label{9.4.9}$
Analyzing Equation \ref{9.4.9}, we can immediately see that, since $y_i < 1$:
$\mu_i^{\text{mixture}} < \mu_i^*,\label{9.4.10}$
or, in other words, the chemical potential of a substance in the mixture is always lower than the chemical potential of the pure substance. If we consider a process where we start from two separate pure ideal gases and finish with a mixture of the two, we can calculate the change in Gibbs free energy due to the mixing process with:
$\Delta_{\text{mixing}} G = \sum n_i \left( \mu_i^{\text{mixture}} - \mu_i^* \right) < 0, \label{9.4.11}$
or, in other words, the process is spontaneous under all circumstances, and pure ideal gases will always mix.
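For a binary mixture this is easy to evaluate explicitly. The sketch below (an added illustration with arbitrary amounts) computes $\Delta_{\text{mixing}} G = \sum_i n_i RT \ln y_i$, which follows from Equation \ref{9.4.11} together with Equation \ref{9.4.9}:

```python
# A minimal sketch of Equation 9.4.11 for ideal gases:
# Delta_mix G = sum_i n_i * R * T * ln(y_i), negative for any composition.
import math

R = 8.314   # J/(mol K)

def delta_mix_G(n_list, T):
    n_tot = sum(n_list)
    return sum(n * R * T * math.log(n / n_tot) for n in n_list)

# Mixing 1 mol of each of two gases at 298.15 K (illustrative amounts):
print(delta_mix_G([1.0, 1.0], 298.15))   # ~ -3436 J: spontaneous mixing
```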
1. For reactants, the same situation usually applies but in reverse. More complicated cases where the reaction does not consume all reactants are possible, but insignificant for the following treatment.
Let’s consider a prototypical reaction at constant $T,P$:
$a\mathrm{A} + b\mathrm{B} \rightarrow c\mathrm{C} + d\mathrm{D} \label{10.1.1}$
The Gibbs free energy of the reaction is defined as:
$\Delta_{\text{rxn}} G = G_{\text{products}} - G_{\text{reactants}} = G^{\text{C}} + G^{\text{D}} - G^{\text{A}}-G^{\text{B}}, \label{10.1.2}$
and replacing the absolute Gibbs free energies with the chemical potentials $\mu_i$, we obtain:
$\Delta_{\text{rxn}} G = c \mu_{\text{C}} + d \mu_{\text{D}} - a \mu_{\text{A}}- b\mu_{\text{B}}. \label{10.1.3}$
Assuming the reaction is happening in the gas phase, we can then use Equation 9.4.6 to replace the chemical potentials with their value in the reaction mixture, as:
\begin{aligned} \mkern-60mu \Delta_{\text{rxn}} G =& \; c (\mu_{\text{C}}^{-\kern-6pt{\ominus}\kern-6pt-}+RT \ln P_{\text{C}}) + d (\mu_{\text{D}}^{-\kern-6pt{\ominus}\kern-6pt-}+RT \ln P_{\text{D}}) - a (\mu_{\text{A}}^{-\kern-6pt{\ominus}\kern-6pt-}+RT \ln P_{\text{A}}) - b (\mu_{\text{B}}^{-\kern-6pt{\ominus}\kern-6pt-}+RT \ln P_{\text{B}}) \[4pt] =& \; \underbrace{c \mu_{\text{C}}^{-\kern-6pt{\ominus}\kern-6pt-}+ d \mu_{\text{D}}^{-\kern-6pt{\ominus}\kern-6pt-}- a \mu_{\text{A}}^{-\kern-6pt{\ominus}\kern-6pt-}- b\mu_{\text{B}}^{-\kern-6pt{\ominus}\kern-6pt-}}_{\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}} +RT \ln \dfrac{P_{\text{C}}^c \cdot P_{\text{D}}^d}{P_{\text{A}}^a \cdot P_{\text{B}}^b}. \end{aligned} \label{10.1.4}
We can define a new quantity called the reaction quotient as a function of the partial pressures of each substance:$^1$
$Q_P = \dfrac{P_{\text{C}}^c \cdot P_{\text{D}}^d}{P_{\text{A}}^a \cdot P_{\text{B}}^b}, \label{10.1.5}$
and we can then simply rewrite Equation \ref{10.1.4} using Equation \ref{10.1.5} as:
$\Delta_{\text{rxn}} G = \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln Q_P. \label{10.1.6}$
This equation tells us that the sign of $\Delta_{\text{rxn}} G$ is influenced by the reaction quotient $Q_P$. For a spontaneous reaction at the beginning, the partial pressures of the reactants are much higher than the partial pressures of the products, therefore $Q_P \ll 1$ and $\Delta_{\text{rxn}} G < 0$, as we expect. As the reaction proceeds, the partial pressures of the products will increase, while the partial pressures of the reactants will decrease. Consequently, both $Q_P$ and $\Delta_{\text{rxn}} G$ will increase. The reaction will completely stop when $\Delta_{\text{rxn}} G = 0$, which is the chemical equilibrium point. At the reaction equilibrium:
$\Delta_{\text{rxn}} G = 0 = \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln K_P, \label{10.1.7}$
where we have defined a new quantity called equilibrium constant, as the value the reaction quotient assumes when the reaction reaches equilibrium, and we have denoted it with the symbol $K_P$.$^2$ From Equation \ref{10.1.7} we can derive the following fundamental equation on the standard Gibbs free energy of reaction:
$\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}= - RT \ln K_P. \label{10.1.8}$
To extend the concept of $K_P$ beyond the four species in the prototypical reaction, Equation \ref{10.1.1}, we can use the product symbol $\left( \prod_i \right)$ and write:
$K_P=\prod_i P_{i,\text{eq}}^{\nu_i}, \label{10.1.9}$
where $P_{i,\text{eq}}$ are the partial pressures of each species at equilibrium. Equation \ref{10.1.9} is in principle valid for ideal gases only. However, reactions involving ideal gases are pretty rare. As such, we can further extend the concept of equilibrium constant and write:
$K_{\text{eq}} =\prod_i a_{i,\text{eq}}^{\nu_i}, \label{10.1.10}$
where we have replaced the partial pressure at equilibrium, $P_{i,\text{eq}}$, with a new concept introduced initially by Gilbert Newton Lewis (1875–1946),$^3$ which he termed activity, and represented by the letter $a$. For ideal gases, it is clear that $a_i=P_i/P^{-\kern-6pt{\ominus}\kern-6pt-}$. For non-ideal gases, the activity is equal to the fugacity $a_i=f_i/P^{-\kern-6pt{\ominus}\kern-6pt-}$, a concept that we will investigate in the next chapter. For pure liquids and solids, the activity is simply $a_i=1$. For diluted solutions, the activity is equal to a measured concentration (such as, for example, the mole fraction $x_i$ in the liquid phase, and $y_i$ in the gas phase, or the molar concentration $[i]/[i]^{-\kern-6pt{\ominus}\kern-6pt-}$ with $[i]^{-\kern-6pt{\ominus}\kern-6pt-}= 1\;\text{mol/L}$). Finally, for concentrated solutions, the activity is related to the measured concentration via an activity coefficient. We will return to the concept of activity in chapter 14, when we will specifically deal with solutions. For now, it is interesting to use the activity to write the definitions of the following two constants:
$K_y =\prod_i \left( y_{i,\text{eq}} \right)^{\nu_i} \qquad \qquad \qquad \qquad K_C =\prod_i \left( [i]_{\text{eq}}/[i]^{-\kern-6pt{\ominus}\kern-6pt-}\right)^{\nu_i}, \label{10.1.11}$
which can then be related with $K_P$ for a mixture of ideal gases using:
$P_i = y_i P \qquad \qquad \qquad P_i=\dfrac{n_i}{V}RT=[i]RT, \label{10.1.12}$
which then results in:
$K_P = K_y\cdot \left(\dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}}\right)^{\Delta \nu} \qquad \qquad K_P = K_C \left( \dfrac{[i]^{-\kern-6pt{\ominus}\kern-6pt-}RT}{P^{-\kern-6pt{\ominus}\kern-6pt-}} \right)^{\Delta \nu}, \label{10.1.13}$
with $\Delta \nu =\sum_i \nu_i$.
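These conversions are straightforward to implement. The following sketch (an added illustration; the function names are chosen for this example, with pressures in bar and concentrations in mol/L so that both standard values equal 1) encodes Equation 10.1.13:

```python
# A sketch of the conversions in Equation 10.1.13, assuming ideal-gas behavior.
R = 0.083145   # L bar / (mol K)

def K_y_from_K_P(K_P, P, delta_nu, P_std=1.0):
    return K_P * (P / P_std) ** (-delta_nu)

def K_C_from_K_P(K_P, T, delta_nu, P_std=1.0, c_std=1.0):
    return K_P * (c_std * R * T / P_std) ** (-delta_nu)

# Example with the Cl2 dissociation of the exercise below (delta_nu = 1):
print(K_y_from_K_P(1.30, P=2.5, delta_nu=1))   # ~ 0.52
```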
Using the general equilibrium constant, $K_{\text{eq}}$, we can also rewrite the fundamental equation on $\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}$ that we derived in Equation \ref{10.1.8} to be applicable at most conditions, as:
$\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}= - RT \ln K_{\text{eq}}, \label{10.1.14}$
and since $\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}$ depends on $T,P$ and $\{n_i\}$, it is useful to explore how $K_{\text{eq}}$ depends on those variables as well.
1. Notice that since we used Equation 9.4.5 to derive the reaction quotient, the partial pressures inside it are always dimensionless since they are divided by $P^{-\kern-6pt{\ominus}\kern-6pt-}$.
2. The subscript $P$ refers to the fact that the equilibrium constant is measured in terms of partial pressures.
3. Gilbert Lewis is the same scientist that invented the concept of Lewis Structures.
10.02: Temperature Dependence of Keq
To study the temperature dependence of $K_{\text{eq}}$ we can use Equation 10.1.14 for the general equilibrium constant and write:
$\ln K_{\text{eq}} = -\dfrac{\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}}{RT}, \label{10.2.1}$
which we can then differentiate with respect to temperature at constant $P,\{n_i\}$ on both sides:
$\left( \dfrac{\partial \ln K_{\text{eq}}}{\partial T} \right)_{P,\{n_i\}} = -\dfrac{1}{R} \left[ \dfrac{\partial \left( \dfrac{\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}}{T} \right)}{\partial T} \right]_{P,\{n_i\}}, \label{10.2.2}$
and, using the Gibbs–Helmholtz equation (Equation 9.2.4) to simplify the right-hand side, becomes:
$\left( \dfrac{\partial \ln K_{\text{eq}}}{\partial T} \right)_{P,\{n_i\}} = -\dfrac{1}{R} \left( -\dfrac{\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}}{T^2} \right) = \dfrac{\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}}{RT^2}, \label{10.2.3}$
which gives the dependence of $\ln K_{\text{eq}}$ on $T$ that we were looking for. Equation \ref{10.2.3} is also called the van ’t Hoff equation,$^1$ and it is the mathematical expression of Le Chatelier’s principle. The simplest interpretation is as follows:
• For an exothermic reaction ($\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}< 0$): $K_{\text{eq}}$ will decrease as the temperature increases.
• For an endothermic reaction ($\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}> 0$): $K_{\text{eq}}$ will increase as the temperature increases.
If we integrate the van ’t Hoff equation between two arbitrary points at constant $P$, and assuming constant $\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}$, we obtain the following:
$\int_1^2 d \ln K_{\text{eq}} = \dfrac{\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}}{R} \int_1^2 \dfrac{dT}{T^2}, \label{10.2.4}$
which leads to the linear equation:
$\ln K_{\text{eq}}(2) = \ln K_{\text{eq}}(1) - \dfrac{\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}}{R} \left( \dfrac{1}{T_2}-\dfrac{1}{T_1} \right), \label{10.2.5}$
which is the equation that produces the so-called van ’t Hoff plots, from which $\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}$ can be experimentally determined.
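The following sketch illustrates the idea behind a van ’t Hoff plot; the $\ln K$ values are synthetic, generated from an assumed $\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}= 50\;\text{kJ/mol}$, and the linear fit recovers that value from the slope:

```python
# A sketch of a van 't Hoff plot analysis: fit ln(K) versus 1/T to a line;
# the slope is -Delta_H/R (Equation 10.2.5). The data below are synthetic.
import numpy as np

R = 8.314                       # J/(mol K)
dH_true = 50.0e3                # J/mol, assumed for the synthetic data
T = np.array([300.0, 350.0, 400.0, 450.0])
lnK = 10.0 - dH_true / (R * T)  # synthetic "measurements", arbitrary intercept

slope, intercept = np.polyfit(1.0 / T, lnK, 1)
print(f"Delta_H = {-slope * R / 1e3:.1f} kJ/mol")   # recovers ~ 50.0
```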
1. named after Jacobus Henricus “Henry” van ’t Hoff Jr. (1852–1911).
While $K_P$ is independent of both pressure and number of moles for an ideal gas, the same is not necessarily true for the other equilibrium constants:
$\left( \dfrac{\partial K_P}{\partial P} \right)_{T,\{n_i\}} = 0 \qquad \qquad \left( \dfrac{\partial K_P}{\partial n_i} \right)_{T,P} =0. \label{10.3.1}$
For example, it is easy to look at Equation 10.1.13 and determine that $K_y$ usually depends on $P$.1 Using Dalton’s Law, Equation 9.4.7, we can also notice that the equilibrium partial pressures of the reactants and products in a gas-phase reaction can be expressed in terms of their equilibrium mole fractions $y_i$ and the total pressure $P$. As such, we can use $K_y$ to demonstrate that the equilibrium mole fractions will change when $P$ changes,2 as it is demonstrated by the following exercise.
Exercise $1$
Calculate the mole fraction change for the dissociation of $\mathrm{Cl}_{2(g)}$ when the pressure is increased from $P^{-\kern-6pt{\ominus}\kern-6pt-}$ to $P_f=2.5 \;\text{bar}$ at constant $T=2\,298\;\mathrm{K}$, knowing that $\Delta_{\mathrm{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}_{\mathrm{Cl}_{(g)}} = 105.3 \;\text{kJ/mol}$ and $\Delta_{\mathrm{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}_{\mathrm{Cl}_{(g)}} = 121.3 \;\text{kJ/mol}$, and remembering that both of these values are tabulated at $T=298\;\text{K}$.
Answer
Let’s consider the reaction: $\mathrm{Cl}_{2(g)} \rightleftarrows 2 \mathrm{Cl}_{(g)} \nonumber$
We can divide the exercise into two parts. In the first one, we will deal with calculating the equilibrium constant at $T=2\,298\;\mathrm{K}$ from the data at $T=298\;\mathrm{K}$. In the second one, we will calculate the change in mole fraction when the pressure is increased from $P^{-\kern-6pt{\ominus}\kern-6pt-}=1\;\text{bar}$ to $P_f=2.5 \;\text{bar}$.
Let’s begin the first part by calculating $\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}$ and $\Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}$ from:
\begin{aligned} \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}&= 2 \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{(g)}} - \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{2(g)}} \ \Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}&= 2 \Delta_{\text{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{(g)}} - \Delta_{\text{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{2(g)}}, \end{aligned} \nonumber
and since $\text{Cl}_{2(g)}$ is an element in its most stable form at $T=298\;\mathrm{K}$, its standard enthalpy and Gibbs free energy of formation are $\Delta_{\text{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{2(g)}} = \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{2(g)}} = 0$. Therefore:$^3$
\begin{aligned} \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}&= 2 \cdot 105.3 - 0 = 210.6 \;\text{kJ/mol} \ \Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}&= 2 \cdot 121.3 - 0 = 242.6\;\text{kJ/mol}. \end{aligned} \nonumber
Using Equation 10.1.8 to calculate $K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},298\;\text{K})$, we obtain:$^4$
$\ln [ K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},298\;\text{K}) ] = \dfrac{ - \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}}{RT} = \dfrac{-210.6\times10^3}{8.31 \cdot 298} = - 85.0. \nonumber$
We can now use the integrated van ’t Hoff equation, Equation 10.2.5, to calculate $K_P$ at $T=2\,298\;\text{K}$:
$\ln [K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},2\,298\;\text{K})] = \ln [K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},298\;\text{K})] -\dfrac{\Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}}{R} \left(\dfrac{1}{2\,298}-\dfrac{1}{298} \right), \nonumber$
which becomes:
$\ln [K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},2\,298\;\text{K})] = - 85.0 -\dfrac{242.6\times 10^{3}}{8.31} \left(\dfrac{1}{2\,298}-\dfrac{1}{298} \right) = 0.262, \nonumber$
which corresponds to:
$K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},2\,298\;\text{K}) = \exp (0.262)=1.30. \nonumber$
Let’s now move to the second part of the exercise, where we increase the pressure from $1\;\text{bar}$ to $2.5\;\text{bar}$ at constant $T=2\,298\;\text{K}$. We start by writing the definitions of $K_P$ and $K_y$:
$K_P=\dfrac{P_\mathrm{Cl_{(g)}}^2}{P_{\mathrm{Cl}_{2(g)}}} \qquad \qquad K_y=\dfrac{y_\mathrm{Cl_{(g)}}^2}{y_{\mathrm{Cl}_{2(g)}}}, \nonumber$
and using Equation 10.1.13:
\begin{aligned} \Delta \nu &= 2 - 1 = 1 \ K_P &= K_y \cdot \dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}} \quad \rightarrow \quad K_y=K_P \left( \dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}} \right)^{-1}. \end{aligned} \nonumber
At the initial pressure, $P_i=P^{-\kern-6pt{\ominus}\kern-6pt-}$:
$K_y (P^{-\kern-6pt{\ominus}\kern-6pt-},2\,298\;\text{K}) = \dfrac{1.30}{1} = 1.30, \nonumber$
and we can calculate the initial mole fractions of $\mathrm{Cl}_{2(g)}$ and $\mathrm{Cl}_{(g)}$ at $P^{-\kern-6pt{\ominus}\kern-6pt-}$, recalling that $y_{\mathrm{Cl}_{2(g)}}=1-y_{\mathrm{Cl}_{(g)}}$:
$K_y (P_i,2\,298\;\text{K})=\dfrac{\left(y^i_{\mathrm{Cl}_{(g)}}\right)^2}{1-y^i_{\mathrm{Cl}_{(g)}}} = 1.30. \nonumber$
Solving the quadratic equation, we obtain one negative root, which is unphysical,$^5$ and:
$y_{\mathrm{Cl}_{(g)}}^i= 0.662 \quad \rightarrow \quad y_{\mathrm{Cl}_{2(g)}}^i=1-0.662 = 0.338. \nonumber$
At the end of the process, $P_f=2.5\;\text{bar}$, and we obtain:
$K_y (P_f,2\,298\;\text{K}) = K_P \dfrac{P^{-\kern-6pt{\ominus}\kern-6pt-}}{P_f} = \dfrac{1.30}{2.5} = 0.520, \nonumber$
and, solving the corresponding quadratic equation:
$K_y (P_f,2\,298\;\text{K})=\dfrac{\left(y^f_{\mathrm{Cl}_{(g)}}\right)^2}{1-y^f_{\mathrm{Cl}_{(g)}}} = 0.520, \nonumber$
gives:
$y_{\mathrm{Cl}_{(g)}}^f=0.507 \quad \rightarrow \quad y_{\mathrm{Cl}_{2(g)}}^f=1-0.507 = 0.493. \nonumber$
To summarize, when we increase the pressure from $1\;\text{bar}$ to $2.5\;\text{bar}$ at $T=2\,298\;\text{K}$, the equilibrium constant in terms of the mole fractions decreases from $K_y(P^{-\kern-6pt{\ominus}\kern-6pt-},2\,298\;\text{K})=1.30$ to $K_y(P_f=2.5\;\text{bar},2\,298\;\text{K})=0.520$. This reduction causes a shift of the equilibrium towards the reactants, with the mole fraction of $\text{Cl}_{2(g)}$ increasing from $y_{\text{Cl}_{2(g)}}^i = 0.338$ to $y_{\text{Cl}_{2(g)}}^f = 0.493$ and the mole fraction of $\text{Cl}_{(g)}$ decreasing from $y_{\text{Cl}_{(g)}}^i = 0.662$ to $y_{\text{Cl}_{(g)}}^f = 0.507$.
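The algebra above is easy to automate. The following sketch (an added illustration, with a helper name chosen for this example) solves $K_y = y^2/(1-y)$ for the physical root at both pressures, reproducing the mole fractions found in the answer:

```python
# A sketch reproducing the mole-fraction algebra of Exercise 1: solve
# K_y = y^2 / (1 - y) for the physical root, with the K_y values from the text.
import math

def y_Cl(K_y):
    # y^2 + K_y*y - K_y = 0  ->  keep the positive root (a mole fraction)
    return (-K_y + math.sqrt(K_y**2 + 4.0 * K_y)) / 2.0

for label, K_y in (("P = 1 bar", 1.30), ("P = 2.5 bar", 0.52)):
    y = y_Cl(K_y)
    print(f"{label}: y_Cl = {y:.3f}, y_Cl2 = {1.0 - y:.3f}")
# -> 0.662/0.338 at 1 bar and 0.507/0.493 at 2.5 bar, as in the answer
```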
The dependence of $K_{\text{eq}}$ on $P$ highlighted above is another mathematical expression of Le Chatelier’s principle, on this occasion for changes in pressure. The interpretation for a reaction happening in the gas phase is as follows:
• If the total pressure increases, the equilibrium will shift towards the side of the chemical equation that contains the smallest total amount of moles (the equilibrium in exercise $1$ shifts toward the reactants).
1. $K_y$ becomes independent of $P$ in the particular case where $\Delta \nu=0$, i.e., for reactions where the total number of moles of reactants is the same as the total number of moles of the products.
2. Keep in mind that $K_P$ will not change.
3. Notice how a positive $\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}$ indicates that the dissociation of $\mathrm{Cl}_{2(g)}$ is non-spontaneous at $T=298\;\text{K}$ and $P=1\;\text{bar}$. As such, we should expect a very small value for $K_P$.
4. The result corresponds to $K_P=1.2\times 10^{-37}$, an incredibly small number, as we should expect given the value of $\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}$.
5. Concentration cannot be negative.
11: Ideal and Non-Ideal Gases
The concept of an ideal gas is a theoretical construct that allows for straightforward treatment and interpretation of gases’ behavior. As such, the ideal gas is a simplified model that we use to understand nature, and it does not correspond to any real system. The following two assumptions define the ideal gas model:
Definition: Ideal Gas
• The particles that compose an ideal gas do not occupy any volume.
• The particles that compose an ideal gas do not interact with each other.
Because of its simplicity, the ideal gas model has been the historical foundation of thermodynamics and of science in general. The first studies of the ideal gas behavior date back to the seventeenth century, and the scientists that performed them are among the founders of modern science.
Boyle’s Law
In 1662 Robert Boyle (1627–1691) found that the pressure and the volume of an ideal gas are inversely related at constant temperature. Boyle’s Law has the following mathematical description:
$P\propto\dfrac{1}{V}\quad\text{at const.}\;T, \label{11.1.1}$
or, in other terms:
$PV=k_1\quad\text{at const.}\;T, \label{11.1.2}$
which results in the familiar $PV$ plots of Figure $1$. As we already discussed in chapter 2, each of the curves in Figure $1$ is obtained at constant temperature, and it is therefore called “isotherm.”
Charles’s and Gay-Lussac’s Laws
It took scientists more than a century to expand Boyle’s work and study the relationship between volume and temperature. In 1787 Jacques Alexandre César Charles (1746–1823) wrote the relationship known as Charles’s Law:
$V\propto T\quad\text{at const.}\;P, \label{11.1.3}$
or, in other terms:
$V=k_2 T\quad\text{at const.}\;P, \label{11.1.4}$
which results in the plots of Figure $2$. Each of the curves is obtained at constant pressure, and it is termed “isobar.”
The interesting thing about isobars is that they all seem to converge to a specific point along the temperature axis when we extrapolate them to $V\rightarrow 0$. This observation led to the introduction of the absolute temperature scale, suggesting that the temperature will never get smaller than $-273.15^\circ\mathrm{C}$.
It took an additional 21 years to write a formal relationship between pressure and temperature. The following relationships were proposed by Joseph Louis Gay-Lussac (1778–1850) in 1808:
$P\propto T\quad\text{at const.}\;V, \label{11.1.5}$
or, in other terms:
$P=k_3 T\quad\text{at const.}\;V, \label{11.1.6}$
which results in the plots of Figure $3$. Each of the curves is obtained at constant volume, and it is termed “isochor.”
Avogadro’s Law
Ten years later, Amedeo Avogadro (1776–1856) discovered a seemingly unrelated principle by studying the composition of matter. His Avogadro’s Law encodes the relationship between the number of moles in an ideal gas and its volume as:
$V\propto n\quad\text{at const.}\;P,T, \label{11.1.7}$
or in other terms:
$V=k_4 n\quad\text{at const.}\;P,T, \label{11.1.8}$
The ideal gas Law
Despite all of the ingredients being available for more than 20 years, it’s only in 1834 that Benoît Paul Émile Clapeyron (1799–1864) was finally able to combine them into what is now known as the ideal gas Law. Using the same formulas obtained above, we can write:
$PV=\underbrace{k_3 T}_{\text{from Gay-Lussac's}} \cdot \underbrace{k_4 n,}_{\text{from Avogadro's}} \label{11.1.9}$
which by renaming the product of the two constants $k_3$ and $k_4$ as $R$, becomes:
$PV=nRT \label{11.1.10}$
The value of the constant $R$ can be determined experimentally by measuring the volume that 1 mol of an ideal gas occupies at a constant temperature (e.g., at $T=0^\circ\mathrm{C}$) and a constant pressure (e.g., atmospheric pressure $P=1\;\mathrm{atm}$). At those conditions, the volume is measured at 22.4 L, resulting in the following value of $R$:
$R=\dfrac{VP}{nT}=\dfrac{22.4 \cdot 1}{1 \cdot 273}=0.082 \;\dfrac{\text{L atm}}{\text{mol K}}, \label{11.1.11}$
which a simple conversion to SI units transforms into:
$R=8.31\;\dfrac{\text{J}}{\text{mol K}}. \label{11.1.12}$
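This arithmetic is simple enough to verify directly; the following sketch (an added illustration) reproduces both values of $R$:

```python
# A quick numerical check of the gas constant from the molar volume at
# T = 273.15 K and P = 1 atm, as in Equation 11.1.11.
V, P, n, T = 22.4, 1.0, 1.0, 273.15       # L, atm, mol, K

R_atm = V * P / (n * T)                   # L atm / (mol K)
R_SI = R_atm * 101325.0 / 1000.0          # 1 atm = 101325 Pa, 1 L = 10^-3 m^3
print(f"R = {R_atm:.4f} L atm/(mol K) = {R_SI:.2f} J/(mol K)")
```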
Non-ideal gases (sometimes also referred to as “real gases”), do not behave as ideal gases because at least one of the assumptions in Definition: Ideal Gas is violated. What characterizes non-ideal gases is that there is no unique equation that we can use to describe their behavior. For this reason, we have a plethora of empirical models, none of which is superior to the others. The van der Waals (vdW) equation is the only model that we will analyze in detail because of its simple interpretation. However, it is far from universal, and for several non-ideal gases, it is severely inaccurate. Other popular non-ideal gas equations are the Clausius equation, the virial equation, the Redlich–Kwong equation and several others.$^1$
The van der Waals equation
One of the simplest empirical equations that describes non-ideal gases was obtained in 1873 by Johannes Diderik van der Waals (1837–1923). The vdW equation includes two empirical parameters ($a$ and $b$) with different values for different non-ideal gases. Each of the parameters corresponds to a correction for the breaking of one of the two conditions that define the ideal gas behavior (Definition: Ideal Gas). The vdW equation is obtained from the ideal gas equation performing the following simple substitutions:
\begin{aligned} P & \;\rightarrow\;\left( P + \dfrac{a}{\overline{V}^2} \right)\ \overline{V} & \;\rightarrow\;\left( \overline{V} - b\right),\ \end{aligned} \label{11.2.1}
which results in:
\begin{aligned} P\overline{V} &=RT \; \rightarrow \; \left( P + \dfrac{a}{\overline{V}^2} \right)\left( \overline{V} - b\right)=RT\ P &=\dfrac{RT}{\overline{V} - b}-\dfrac{a}{\overline{V}^2}. \end{aligned} \label{11.2.2}
The parameter $a$ accounts for the presence of intermolecular interactions, while the parameter $b$ accounts for the non-negligible volume of the gas molecules. Despite the parameters having simple interpretations, their values for each gas must be determined experimentally. Values for these parameters for some significant non-ideal gas are reported below:
| Gas | $a \left[ \dfrac{\mathrm{L}^2\mathrm{bar}}{\mathrm{mol}^2} \right]$ | $b \left[ \dfrac{\mathrm{L}}{\mathrm{mol}} \right]$ |
|---|---|---|
| Ammonia | 4.225 | 0.0371 |
| Argon | 1.355 | 0.03201 |
| Carbon dioxide | 3.640 | 0.04267 |
| Carbon monoxide | 1.505 | 0.03985 |
| Chlorine | 6.579 | 0.05622 |
| Freon | 10.78 | 0.0998 |
| Helium | 0.0346 | 0.0238 |
| Hydrogen | 0.2476 | 0.02661 |
| Mercury | 8.200 | 0.01696 |
| Methane | 2.283 | 0.04278 |
| Neon | 0.2135 | 0.01709 |
| Nitrogen | 1.370 | 0.0387 |
| Oxygen | 1.382 | 0.03186 |
| Radon | 6.601 | 0.06239 |
| Xenon | 4.250 | 0.05105 |
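As an illustration of how the two corrections act, the following sketch (added here; it uses the carbon dioxide parameters from the table above) compares the vdW and ideal-gas pressures at a few molar volumes:

```python
# A sketch comparing the van der Waals and ideal-gas pressures of CO2,
# using a and b from the table above (a = 3.640 L^2 bar/mol^2, b = 0.04267 L/mol).
R = 0.083145   # L bar / (mol K)

def P_vdw(V_m, T, a, b):
    """van der Waals pressure (bar) for molar volume V_m (L/mol) and T (K)."""
    return R * T / (V_m - b) - a / V_m**2

a, b = 3.640, 0.04267   # CO2
T = 298.15
for V_m in (22.4, 1.0, 0.2):    # L/mol: from dilute to fairly compressed
    print(f"V_m = {V_m:5.1f} L/mol: ideal {R*T/V_m:7.2f} bar, "
          f"vdW {P_vdw(V_m, T, a, b):7.2f} bar")
```

The two pressures agree at a large molar volume and diverge as the gas is compressed, which is exactly where the corrections for attraction ($a$) and molecular size ($b$) matter.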
Joule–Thomson effect
We have already met William Thomson, also known as Lord Kelvin, and his seminal work on the second law of thermodynamics. In conjunction with that work, Thomson is famous for developing a sensitive method for measuring the temperature changes related to the expansion of a gas. These experiments improved on the earlier work by James Joule, and Lord Kelvin’s improved instrument depicted in Figure $4$ is named the Joule–Thomson apparatus. The apparatus is composed of two chambers, each with its own mobile piston. The chambers are connected via a valve or a porous plug. The entire equipment is also thermally isolated from the surroundings. This instrument is a more sensitive version of the Joule expansion apparatus that we already described in section 3 (compare with Figure 3.1.1).
Thomson realized that a gas flowing through an obstruction experiences a drop in pressure. If the entire apparatus is insulated, it will not exchange heat with its surroundings ($Q=0$), and each transformation will happen at adiabatic conditions. Let’s consider an initial condition with 1 mol of gas in the left chamber, occupying a volume $V_l$, and a completely closed right chamber, for which $V_r^i=0$. After the process completes, the volume of the left chamber will reduce to $V_l^f=0$, while the volume of the right chamber will be $V_r$. Using the first law of thermodynamics, we can write:
$\Delta U=U_r-U_l=\underbrace{Q}_{=0}+W=W_l+W_r, \label{11.2.3}$
with:
\begin{aligned} W_l &=-\int_{V_l}^0 P_l dV = P_l V_l\ W_r &=-\int_0^{V_r} P_r dV = - P_r V_r. \end{aligned} \label{11.2.4}
Replacing \ref{11.2.4} into Equation \ref{11.2.3}, results in:
\begin{aligned} U_r-U_l &=P_l V_l-P_r V_r \ \underbrace{U_r+P_r V_r}_{H_r} &= \underbrace{U_l + P_l V_l}_{H_l}, \end{aligned} \label{11.2.5}
which, using the definition of enthalpy $H=U+PV$, becomes:
\begin{aligned} H_r &=H_l \ \Delta H &=0, \end{aligned} \label{11.2.6}
or, in other words, the process is isenthalpic. Using the total differential of $H$:
$dH=\left(\dfrac{\partial H}{\partial T} \right)_P dT + \left(\dfrac{\partial H}{\partial P} \right)_T dP = C_P dT + \left(\dfrac{\partial H}{\partial P} \right)_T dP, \label{11.2.7}$
we obtain:
$\Delta H=\int dH = \int C_P dT + \int \left(\dfrac{\partial H}{\partial P} \right)_T dP =0, \label{11.2.8}$
or, in purely differential form:
$dH = C_P dT + \left(\dfrac{\partial H}{\partial P} \right)_T dP =0, \label{11.2.9}$
From Equation \ref{11.2.9} we can define a new coefficient, called the Joule–Thomson coefficient, $\mu_{\mathrm{JT}}$, that measures the rate of change of temperature of a gas with respect to pressure in the Joule–Thomson process:
$\mu_{\mathrm{JT}}=\left( \dfrac{\partial T}{\partial P} \right)_H=-\dfrac{1}{C_P} \left( \dfrac{\partial H}{\partial P} \right)_T \label{11.2.10}$
The value of $\mu_{\mathrm{JT}}$ depends on the type of gas, the temperature and pressure before expansion, and the heat capacity at constant pressure of the gas. The temperature at which $\mu_{\mathrm{JT}}$ changes sign is called the “Joule–Thomson inversion temperature.” Since the pressure decreases during an expansion, $\partial P$ is negative by definition, and the following possibilities are available for $\mu_{\mathrm{JT}}$:
| Gas temperature: | $\partial P$ | $\mu_{\mathrm{JT}}$ | $\partial T$ | The gas will: |
|---|---|---|---|---|
| Below the inversion temperature | – | + | – | cool |
| Above the inversion temperature | – | – | + | warm |
For example, helium has a very low Joule–Thomson inversion temperature at standard pressure $(T=45\;\text{K})$, and it warms when expanded at constant enthalpy at typical room temperatures. The only other gases that have standard inversion temperature lower than room temperature are hydrogen and neon. On the other hand, nitrogen and oxygen have high inversion temperatures ($T=621\;\text{K}$ and $T=764\;\text{K}$, respectively), and they both cool when expanded at room temperature. Therefore, it is possible to use the Joule–Thomson effect in refrigeration processes such as air conditioning.$^2$ As we already discussed in chapter 3, the temperature of an ideal gas stays constant in an adiabatic expansion, therefore its Joule–Thomson coefficient is always equal to zero.
1. For more information on empirical equations for non-ideal gases see this Wikipedia page.
2. Nitrogen and oxygen are the two most abundant gases in the air. A sequence of Joule–Thomson expansions are also used for the industrial liquefaction of air.
Compressibility factors
The compressibility factor is a correction coefficient that describes the deviation of a real gas from ideal gas behaviour. It is usually represented with the symbol $z$, and is calculated as:
$z=\dfrac{\overline{V}}{\overline{V}_{\text{ideal}}} = \dfrac{P \overline{V}}{RT}. \label{11.3.1}$
It is evident from Equation \ref{11.3.1} that the compressibility factor is dependent on the pressure, and for an ideal gas $z=1$ always. For a non-ideal gas at any given pressure, $z$ can be higher or lower than one, separating the behavior of non-ideal gases into two possibilities. The dependence of the compressibility factor against pressure is represented for $\mathrm{H}_2$ and $\mathrm{CO}_2$ in Figure $5$.
The two types of possible behaviors are differentiated based on the initial slope of the compressibility factor as $P\rightarrow 0$. To analyze these situations we can use the vdW equation to calculate the compressibility factor as:
$z= \dfrac{\overline{V}}{RT} \left( \dfrac{RT}{\overline{V}-b} -\dfrac{a}{\overline{V}^2} \right), \label{11.3.2}$
and then we can differentiate this equation at constant temperature with respect to changes in the pressure near $P=0$, to obtain:
$\left. \left( \dfrac{\partial z}{\partial P}\right)_T \right|_{P=0} = \dfrac{1}{RT} \left( b -\dfrac{a}{RT} \right). \label{11.3.3}$
which is then interpreted as follows:
• Type I gases: $b>\dfrac{a}{RT} \; \Rightarrow \; \dfrac{\partial z}{\partial P} > 0$: molecular size dominates ($\mathrm{H}_2$-like behavior).
• Type II gases: $b<\dfrac{a}{RT} \; \Rightarrow \; \dfrac{\partial z}{\partial P} < 0$: attractive forces dominate ($\mathrm{CO}_2$-like behavior).
The dependence of the compressibility factor as a function of temperature (Figure $6$) results in different plots for each of the two types of behavior.
Both type I and type II non-ideal gases will approach the ideal gas behavior as $T\rightarrow \infty$, because $\dfrac{1}{RT}\rightarrow 0$ as $T\rightarrow \infty$. For type II gases, there are three interesting situations:
• At low $T$: $b<\dfrac{a}{RT} \; \Rightarrow \; \dfrac{\partial z}{\partial P} < 0,$ which is the behavior described above.
• At high $T$: $b>\dfrac{a}{RT} \; \Rightarrow \; \dfrac{\partial z}{\partial P} > 0,$ which is the same behavior of type I gases.
• At a very specific temperature, inversion will occur (i.e., at $T=713 \; \mathrm{K}$ for $\mathrm{CO}_2$). This temperature is called the Boyle temperature, $T_{\mathrm{B}}$, and is the temperature at which the attractive and repulsive forces balance out. It can be calculated from the vdW equation, since $b-\dfrac{a}{RT_{\mathrm{B}}}=0 \; \Rightarrow \; T_{\mathrm{B}}=\dfrac{a}{bR}.$ At the Boyle temperature a type II gas shows ideal gas behavior over a large range of pressures, as illustrated in the sketch below.
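To make the two behaviors concrete, the following minimal Python sketch (assuming literature vdW constants for $\mathrm{CO}_2$) solves the vdW cubic $P\overline{V}^3-(Pb+RT)\overline{V}^2+a\overline{V}-ab=0$ for the gas-phase molar volume and evaluates $z=P\overline{V}/RT$, together with the vdW Boyle temperature $T_{\mathrm{B}}=a/(bR)$. Note that the vdW estimate of $T_{\mathrm{B}}$ comes out noticeably higher than the experimental value quoted above, a reminder that the equation is only approximate.

```python
# Compressibility factor of a vdW gas (a sketch; CO2 constants are assumed
# literature values).
import numpy as np

R = 0.08314            # L bar / (mol K)
a, b = 3.640, 0.04267  # vdW constants for CO2

def z_vdw(P, T):
    """Solve the vdW cubic for the molar volume and return z = P*V/(R*T)."""
    roots = np.roots([P, -(P * b + R * T), a, -a * b])
    V = max(r.real for r in roots if abs(r.imag) < 1e-10)  # gas-phase root
    return P * V / (R * T)

print(f"vdW Boyle temperature of CO2: {a / (b * R):.0f} K")
for P in (1, 50, 100):
    print(f"z({P:>3} bar, 350 K) = {z_vdw(P, 350):.3f}")   # z < 1: attraction dominates
```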
Phase diagram of a non-ideal gas
Let’s now turn our attention to the $PV$ phase diagram of a non-ideal gas, reported in Figure $7$.
We can start the analysis from an isotherm at a high temperature. Since every gas will behave as an ideal gas at those conditions, the corresponding isotherms will look similar to those of an ideal gas ($T_5$ and $T_4$ in Figure $3$). Lowering the temperature, we start to see the deviation from ideality getting more prominent ($T_3$ in Figure $3$) until we reach a particular temperature called the critical temperature, $T_c$.
Definition: Critical Temperature
The temperature above which no appearance of a second phase is observed, regardless of how high the pressure becomes.
At the critical temperature and below, the gas liquefies when the pressure is increased. For this reason, the liquefaction of a gas is called a critical phenomenon.
The critical temperature is the coordinate of a unique point, called the critical point, that can be visualized in the three-dimensional $T,P,V$ diagram of each gas (Figure $4$)$^1$.
The critical point has coordinates $\{T_c,P_c, \overline{V}_c\}$. These critical coordinates can be determined from the vdW equation at $T_c$, by imposing that the critical isotherm has a horizontal inflection point at the critical point, $\left(\partial P/\partial \overline{V}\right)_{T_c} = \left(\partial^2 P/\partial \overline{V}^2\right)_{T_c} = 0$, as:
$T_c=\dfrac{8a}{27Rb} \qquad P_c=\dfrac{a}{27b^2} \qquad \overline{V}_c=3b, \label{11.3.4}$
These relations are used, in practice, to determine the vdW constants $a,b$ from the experimentally measured critical isotherms.
The critical compressibility factor, $z_c$, is predicted from the vdW equation as:
$z_c=\dfrac{P_c \overline{V}_c}{R T_c}=\left( \dfrac{a}{27b^2} \right) \left( \dfrac{3b}{R} \right) \left( \dfrac{27Rb}{8a} \right) = \dfrac{3}{8} = 0.375, \label{11.3.5}$
a value that is independent of the gas. Experimentally measured values of $z_c$ for different non-ideal gases are in the range of 0.2–0.3. These values can be used to infer the accuracy of the vdW equation for each non-ideal gas. Since the experimental $z_c$ is usually lower than the one calculated from the vdW equation, we can deduce that the vdW equation overestimates the critical molar volume.
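The relations in Equation \ref{11.3.4} and the universal vdW value $z_c=3/8$ are easy to verify numerically. A minimal sketch, again with assumed literature vdW constants for $\mathrm{CO}_2$:

```python
# Critical constants predicted by the vdW equation (a sketch; CO2 constants
# are assumed literature values).
R = 0.08314            # L bar / (mol K)
a, b = 3.640, 0.04267  # vdW constants for CO2

Tc = 8 * a / (27 * R * b)  # critical temperature, K
Pc = a / (27 * b**2)       # critical pressure, bar
Vc = 3 * b                 # critical molar volume, L/mol
zc = Pc * Vc / (R * Tc)    # always 3/8, independent of a and b

print(f"Tc ~ {Tc:.0f} K, Pc ~ {Pc:.1f} bar, Vc ~ {Vc:.3f} L/mol, zc = {zc:.3f}")
# The experimental zc of CO2 is about 0.27, well below the vdW value of 0.375.
```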
Notice how slicing the $PT\overline{V}$ diagram at constant $T$ results in the $PV$ diagram that we reported in Figure $4$. On the other hand, slicing the $PT\overline{V}$ diagram at constant $P$ results in the $PT$ diagram that we will examine in detail in the next chapter.
11.04: Fugacity
The chemical potential of a pure ideal gas can be calculated using Equation 9.4.5. Since we are not interested in mixtures, we can drop the asterisk in $\mu^*$, and rewrite Equation 9.4.5 as:
$\mu_{\text{ideal}} = \mu^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln \dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}}. \label{11.4.1}$
For a non-ideal gas, the pressure cannot be used in Equation \ref{11.4.1} because the response of each gas to changes in pressure is not universal. We can, however, define a new variable to replace the pressure in Equation \ref{11.4.1} and call it fugacity ($f$).
Definition: Fugacity
The effective pressure of a non-ideal gas that corresponds to the pressure of an ideal gas with the same temperature and chemical potential as the non-ideal one.
Equation \ref{11.4.1} then becomes:
$\mu_{\text{non-ideal}} = \mu^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln \dfrac{f}{P^{-\kern-6pt{\ominus}\kern-6pt-}}. \label{11.4.2}$
Since the fugacity must recover the pressure when the gas behaves ideally, comparing Equation \ref{11.4.2} with Equation \ref{11.4.1} in the limit of low pressure results in the condition:
$\lim_{P\rightarrow 0} \dfrac{f}{P} = 1, \label{11.4.3}$
in other words, any non-ideal gas will approach the ideal gas behavior as $P\rightarrow 0$. This condition, in conjunction with the $T\rightarrow \infty$ behavior obtained in the previous section, results in the following statement:
The highest chances for any gas to behave ideally happen at high temperature and low pressure.
We can now return our attention to the definition of fugacity. Remembering that the chemical potential is the molar Gibbs free energy of a substance, we can write:
$d \mu_{\text{ideal}} = \overline{V}_{\text{ideal}}dP, \label{11.4.4}$
and:
$d \mu_{\text{non-ideal}} = \overline{V}_{\text{non-ideal}}dP, \label{11.4.5}$
Subtracting Equation \ref{11.4.4} from Equation \ref{11.4.5}, we obtain:
$d \mu_{\text{non-ideal}}-d \mu_{\text{ideal}} = \left(\overline{V}_{\text{non-ideal}}-\overline{V}_{\text{ideal}} \right) dP, \label{11.4.6}$
which we can then integrate between $0$ and $P$:
$\mu_{\text{non-ideal}}-\mu_{\text{ideal}} = \int_0^P \left(\overline{V}_{\text{non-ideal}}-\overline{V}_{\text{ideal}} \right) dP. \label{11.4.7}$
Using eqs. \ref{11.4.1} and \ref{11.4.2} we can then replace the definition of chemical potentials, resulting in:
$\ln f - \ln P = \dfrac{1}{RT} \int_0^P \left(\overline{V}_{\text{non-ideal}} - \overline{V}_{\text{ideal}} \right) dP, \label{11.4.8}$
which gives us a mathematical definition of the fugacity, as:
$f = P \cdot \underbrace{\exp\left[ \dfrac{1}{RT} \int_0^P \left(\overline{V}_{\text{non-ideal}}-\overline{V}_{\text{ideal}} \right) dP \right]}_{\text{fugacity coefficient, }\phi(T,P)}. \label{11.4.9}$
The exponential term in Equation \ref{11.4.9} is complicated to write, but it can be interpreted as a coefficient—unique to each non-ideal gas—that can be measured experimentally. Such coefficients are dependent on pressure and temperature and are called the fugacity coefficients. Using letter $\phi$ to represent the fugacity coefficient, we can rewrite Equation \ref{11.4.9} as:
$f = \phi P, \label{11.4.10}$
which gives us a straightforward interpretation of the fugacity as an effective pressure. As such, the fugacity will have the same units as the pressure, while the fugacity coefficients will be dimensionless.
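Since the exponent in Equation \ref{11.4.9} rarely has a closed form, $\phi$ is conveniently evaluated numerically. Substituting $\overline{V}_{\text{non-ideal}}-\overline{V}_{\text{ideal}}=(z-1)RT/P$ turns the exponent into $\int_0^P (z-1)/P' \,dP'$. A minimal Python sketch, reusing the vdW compressibility factor of the earlier sketch (CO2 constants assumed):

```python
# Fugacity coefficient from ln(phi) = int_0^P (z - 1)/P' dP' (a sketch;
# CO2 vdW constants are assumed literature values).
import numpy as np

R = 0.08314
a, b = 3.640, 0.04267

def z_vdw(P, T):
    roots = np.roots([P, -(P * b + R * T), a, -a * b])
    return P * max(r.real for r in roots if abs(r.imag) < 1e-10) / (R * T)

def phi(P, T, n=400):
    """Trapezoidal integration of (z - 1)/P' from (almost) 0 to P."""
    Ps = np.linspace(1e-6, P, n)
    vals = np.array([(z_vdw(p, T) - 1.0) / p for p in Ps])
    ln_phi = float(np.sum((vals[:-1] + vals[1:]) / 2.0) * (Ps[1] - Ps[0]))
    return np.exp(ln_phi)

f_coeff = phi(50.0, 350.0)
print(f"phi(50 bar, 350 K) ~ {f_coeff:.2f}, so f ~ {50 * f_coeff:.0f} bar")
```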
As we already saw in chapter 10, the fugacity can be used to replace the pressure in the definition of the equilibrium constant for reactions that involve non-ideal gases. The new constant is usually called $K_f$, and is obtained from:
$K_f=\prod_i f_{i,\text{eq}}^{\nu_i} = K_P \prod_i \phi_{i}^{\nu_i}. \label{11.38}$
12: Phase Equilibrium

12.01: Phase Stability
We have already encountered the gas, liquid, and solid phases and already discussed some of their properties. These terms are intuitive since these are the three most common states of matter.$^1$ For this reason, we have previously used the terms without the necessity of formally defining their meaning. However, a formal definition of “phase” is necessary to discuss several concepts in this chapter and the following ones:
Definition: Phase
A region of the system with homogeneous chemical composition and physical state.
Let’s now use the total differential of the chemical potential and the definition of molar Gibbs free energy for one component:
\begin{aligned} d\mu &= \left( \dfrac{\partial \mu}{\partial T} \right)_P dT + \left( \dfrac{\partial \mu}{\partial P} \right)_T dP \\ d\mu &= -SdT+\overline{V}dP, \end{aligned} \label{12.1.1}
to write:
$\left( \dfrac{\partial \mu}{\partial T} \right)_P=-S \qquad \left( \dfrac{\partial \mu}{\partial P} \right)_T =\overline{V}. \label{12.1.2}$
We can use these definitions to study the dependence of the chemical potential with respect to changes in pressure and temperature. If we plot $\mu$ as a function of $T$ using the first coefficient in Equation \ref{12.1.2}, we obtain the diagram in Figure $1$. The diagram presents three curves, each corresponding to one of the three most common states of matter: solid, liquid, and gas. As we saw in several previous chapters, the entropy of a phase is almost constant with respect to temperature,$^2$ and therefore the three curves are essentially straight, with negative angular coefficients $-S$. This also explains why the solid phase has a basically flat line: according to the third law, the entropy of a perfect solid is zero, and it remains close to zero if the solid is not perfect. The difference between the three lines’ angular coefficients is explained by the fact that each of these states has a different value of entropy:
$\left( \dfrac{\partial \mu_{\text{solid}}}{\partial T} \right)_P =-S_{\text{s}} \qquad \left( \dfrac{\partial \mu_{\text{liquid}}}{\partial T} \right)_P =-S_{\text{l}} \qquad \left( \dfrac{\partial \mu_{\text{gas}}}{\partial T} \right)_P =-S_{\text{g}}, \label{12.1.3}$
and since the entropy of a gas is always bigger than the entropy of a liquid, which, in turn, is bigger than the entropy of a solid ($S_{\text{g}} \gg S_{\text{l}}>S_{\text{s}}$), we obtain three lines with different angular coefficients that intersect each other. At each temperature, the phase with the lowest chemical potential will be the most stable (see red segments in Figure $1$). At each intersection between two lines, the two phases have the same chemical potential, representing the temperature at which they coexist. This temperature is the temperature at which the phase change happens. Recalling from general chemistry, at the junction between the solid and the liquid lines, the fusion (fus) process occurs, and the corresponding temperature is called the melting point $T_{\text{m}}$. At the junction between the liquid and the gas lines, the vaporization (vap) process happens, and the corresponding temperature is called the boiling point $T_{\text{b}}$. Depending on the substance and the pressure at which the process happens, the solid line might intersect the gas line before the liquid line. When that occurs, the liquid phase is never observed, and only the sublimation (subl) process happens at the sublimation point $T_{\text{subl}}$.
The effects of pressure on this diagram can be studied using the second coefficient in Equation \ref{12.1.2}. For the majority of substances, $\overline{V}_{\text{g}} \gg \overline{V}_{\text{l}} > \overline{V}_{\text{s}}$, hence the curves will shift to lower values when the pressure is reduced, as in Figure $2$. Notice also that since $\overline{V}_{\text{l}} \cong \overline{V}_{\text{s}}$, the shifts for both the solid and liquid lines are much smaller than the shift for the gas line. These shifts also translate to different positions of the junctions, which means the phase changes will occur at different temperatures. Therefore both the melting point and the boiling point in general increase when the pressure is increased (and vice versa). Notice how the change for the melting point is always much smaller than the change for the boiling point. Water is a notable exception to this trend because $\overline{V}_{\mathrm{H}_2\mathrm{O,l}} < \overline{V}_{\text{ice}}$. This explains the experimental observation that increasing the pressure on ice causes the ice to melt.$^3$
Considering the intersections between two lines, two phases are in equilibrium with each other at each of these points, and therefore their chemical potentials must be equal:
For two or more phases to be in equilibrium, their chemical potentials must be equal:
$\mu_{\alpha} = \mu_{\beta}. \label{12.1.4}$
If we now change either the temperature or the pressure, the location of the intersection will be shifted (see again Figure $2$ and the discussion above). For infinitesimal changes in variables, the new location will be:
$\mu_{\alpha} + d\mu_{\alpha}= \mu_{\beta}+d\mu_{\beta}, \label{12.1.5}$
which using Equation \ref{12.1.4}, simply becomes:
$d\mu_{\alpha}= d\mu_{\beta}. \label{12.1.6}$
Replacing the differential with the definition of chemical potential in Equation \ref{12.1.1}, we obtain:
\begin{aligned} -S_{\alpha}dT+\overline{V}_{\alpha}dP &= -S_{\beta}dT+\overline{V}_{\beta}dP \\ \underbrace{\left(S_{\beta}-S_{\alpha}\right)}_{\Delta S} dT &= \underbrace{\left( \overline{V}_{\beta}-\overline{V}_{\alpha}\right)}_{\Delta \overline{V}} dP, \end{aligned} \label{12.1.7}
which can be rearranged into:
$\dfrac{dP}{dT}=\dfrac{\Delta S}{\Delta \overline{V}}. \label{12.1.8}$
This equation is known as the Clapeyron equation, and it is the mathematical relation at the basis of the pressure-temperature phase diagrams. Plotting the results of Equation \ref{12.1.8} on a $PT$ phase diagram for common substances results in three lines representing the equilibrium between two different phases. These diagrams are useful to study the relationship between the phases of a substance.
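As a quick numerical check of Equation \ref{12.1.8}, consider the ice/liquid-water equilibrium, using approximate literature values (assumed here) for the enthalpy of fusion and the molar volumes:

```python
# Clapeyron slope dP/dT = dS/dV for melting ice (a sketch with assumed
# literature values: dHfus = 6.01 kJ/mol, Tm = 273.15 K, molar volumes of
# ice and liquid water of about 19.65 and 18.02 cm^3/mol).
dH_fus = 6010.0                    # J/mol
Tm = 273.15                        # K
V_ice, V_liq = 19.65e-6, 18.02e-6  # m^3/mol

dS = dH_fus / Tm        # entropy of fusion at equilibrium, J/(mol K)
dV = V_liq - V_ice      # negative: water is denser than ice
print(f"dP/dT ~ {dS / dV / 1e5:.0f} bar/K")  # about -135 bar/K: steep and negative
```

The steep, negative slope is the water anomaly discussed again in the next sections.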
1. Other states of matter, such as plasma, are possible, but they are not usually observed at the values of temperature and pressure that classical thermodynamics is usually applied to. Discussion of these extreme cases is beyond the scope of this textbook.
2. Think, for example, of the integral $\int SdT$, for which we can assume $S$ independent of temperature to obtain $S\Delta T$. In practice, the entropy increases slightly with the temperature. Therefore the curves in Figure $1$ are slightly concave downwards (remember that they are obtained from values of $-S$, so if $S$ increases with $T$, the curves bend downwards).
3. Despite the effect being minimal, it is one of the contributing causes to the fact that we can skate on ice, but we can’t on stone. If we increase our pressure on ice by reducing our footprints’ surface area using thin skates, ice will slightly melt under our own weight, creating a thin liquid film on which we can skate because of the reduced friction.
12.02: Gibbs Phase Rule
In chapter 1, we have already seen that the number of independent variables required to describe an ideal gas is two. This number was derived by counting the total number of variables $(3: P,\overline{V},T)$, and reducing it by one because the ideal gas law constrains the value of one of them, once the other two are fixed. For a generic system potentially containing more than one chemical substance in several different phases, however, the number of independent variables can be different from two. For a system composed of $c$ components (chemical substances) and $p$ phases, the number of independent variables, $f$, is given by the Gibbs phase rule:
$f=c-p+2. \label{12.2.1}$
The Gibbs phase rule derives from the fact that different phases are in equilibrium with each other at some conditions, resulting in the reduction of the number of independent variables at those conditions. More rigorously, when two phases are in thermodynamic equilibrium, their chemical potentials are equal (see Equation 12.1.4). For each equality, the number of independent variables (also called the number of degrees of freedom) is reduced by one. For example, the chemical potentials of the liquid and its vapor depend on both $T$ and $P$. But when these phases are in equilibrium with each other, their chemical potentials must be equal. If either the pressure or the temperature is fixed, the other variable will be uniquely determined by the equality relation. In other terms, when a liquid is in equilibrium with its vapor at a given pressure, the temperature is determined by the fact that the chemical potentials of the two phases are the same, and is denoted as the boiling temperature $T_{\text{b}}$. Similarly, at a given temperature, the pressure of the vapor is uniquely determined by the same equality relation and is denoted as the vapor pressure, $P^*$.
The Gibbs phase rule is obtained considering that the number of independent variables is given by the total number of variables minus the constraints. The total number of variables is given by temperature, pressure, plus all the variables required to describe each of the phases. The composition of each phase is determined by $(c-1)$ variables.$^1$ The number of constraints is determined by the number of possible equilibrium relations, which is $c(p-1)$ since the chemical potential of each component must be equal in all phases. The number of degrees of freedom $f$ is then given by
\begin{align*} f &=(c-1)p+2-c(p-1) \\[4pt] &=c-p+2 \end{align*}
which is the Gibbs phase rule, as in Equation \ref{12.2.1}.
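The counting itself is trivial, but it is worth seeing it at work on a few familiar cases (a minimal sketch):

```python
# Gibbs phase rule f = c - p + 2 evaluated for a few textbook cases (a sketch).
def degrees_of_freedom(c: int, p: int) -> int:
    """Independent intensive variables for c components and p phases."""
    return c - p + 2

print(degrees_of_freedom(1, 1))  # pure gas: 2 (both T and P can vary)
print(degrees_of_freedom(1, 2))  # pure liquid boiling: 1 (fixing P fixes T)
print(degrees_of_freedom(1, 3))  # triple point: 0 (unique T and P)
```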
1. For a 1-component system $c-1=1-1=0$, and no additional variable is required to determine the composition of each phase. For a 2-component system, however, each phase will contain both components, hence $c-1=2-1=1$ additional variable will be required to describe it: the mole fraction.
12.03: PT Phase Diagrams
Let’s now discuss the pressure–temperature diagram of a typical substance, as reported in Figure $1$. Each of the lines reported in the diagram represents an equilibrium between two phases, and therefore it represents a condition that reduces the number of degrees of freedom to one. The lines can be determined using the Clapeyron equation, Equation 12.1.8. The interpretation of each line is as follows:
Liquid $\rightleftarrows$ Gas equilibrium
For this equilibrium we can use Trouton’s rule, Equation 7.1.5, and write: $\Delta_{\text{vap}} S = S_{\text{g}}-S_{\text{l}} \cong 88 \; \dfrac{\text{J}}{\text{mol K}} > 0\quad \text{always}, \label{12.3.1}$
where the entropy of vaporization is always positive, even for cases where the Trouton’s rule is violated. The difference in molar volumes is easily obtained, since the volume of the gas is always much greater than the volume of the liquid:
$\overline{V}_{\text{g}} - \overline{V}_{\text{l}} \cong \overline{V}_{\text{g}} = 22.4\; \dfrac{\text{L}}{\text{mol}} >0\quad \text{always}. \label{12.3.2}$
Replacing these values in the Clapeyron equation, we obtain:
$\dfrac{dP}{dT}=\dfrac{88}{22.4}\left( \dfrac{0.0831}{8.31} \right) \approx 0.04\;\dfrac{\text{bar}}{\text{K}} > 0 \quad \text{always}, \label{12.3.3}$
which is always positive, regardless of violations of Trouton’s rule. Notice how small this value is, meaning that the liquid–gas equilibrium curve is relatively flat in a $PT$ diagram.
Solid $\rightleftarrows$ Gas equilibrium
If we look at the signs of each quantity, this case is similar to the previous one: \begin{aligned} \Delta_{\text{subl}} S &> 0 \quad \text{always} \\ \Delta_{\text{subl}} \overline{V} &> 0 \quad \text{always} \\ \dfrac{dP}{dT} &> 0 \quad \text{always}. \end{aligned}\label{12.3.4}
However, Trouton’s rule is not valid for the solid–gas equilibrium, and $\dfrac{dP}{dT}$ will be larger than for the previous case, because the entropy of sublimation is larger than the entropy of vaporization.
Solid $\rightleftarrows$ Liquid equilibrium
The final curve is for the solid-liquid equilibrium, for which we have:
$\Delta_{\text{fus}} S = \dfrac{\Delta_{\text{fus}} H}{T_{\text{m}}} > 0 \quad \text{always}, \label{12.3.5}$
since fusion is always an endothermic process, $(\Delta_{\text{fus}} H>0)$. On the other side:
$\Delta_{\text{fus}} \overline{V} = \overline{V}_{\text{l}} - \overline{V}_{\text{s}} > 0 \quad \text{generally}. \nonumber$
In other words, the difference of the molar volume of the liquid and that of the solid is positive for most substances, but it might be negative (for example for $\mathrm{H}_2\mathrm{O}$). As such:
$\dfrac{dP}{dT} > 0 \quad \text{generally}. \label{12.3.6}$
For $\mathrm{H}_2\mathrm{O}$ and a few other substances, $\dfrac{dP}{dT}<0$, an anomalous behavior that has crucial consequences for the existence of life on earth.$^1$ Because of its importance, this behavior is also depicted in Figure $1$ using a dashed green line.
Since the differences in molar volumes between the solid and the liquid phases are usually small (of the order of $10^{-3}\;\mathrm{L/mol}$), $\dfrac{dP}{dT}$ is always much larger than for the previous two cases. The resulting lines for the solid–liquid equilibria are almost vertical, regardless of the signs of their angular coefficients.
The triple point and the critical point
The only point in the $PT$ diagram where all the three phases coexist is called the triple point. The number of degrees of freedom at the triple point for every 1-component diagram is $f=1-3+2=0$. The fact that the triple point has zero degrees of freedom means that its coordinates, $\{T_{\text{tp}},P_{\text{tp}},\overline{V}_{\text{tp}}\}$, are uniquely determined for each chemical substance. For this reason, the value of the triple point of water was fixed by definition, rather than measured, until 2019. This definition was necessary to establish the base unit of the thermodynamic temperature scale in the SI (the Kelvin).$^2$
In addition to the triple point where the solid, liquid, and gas phases meet, a triple point may involve more than one condensed phase. Triple points are common for substances with multiple solid phases (polymorphs), involving either two solid phases and a liquid one or three solid phases. Helium is a special case that presents a triple point involving two different fluid phases, called the lambda point. Since the number of degrees of freedom cannot be negative, the Gibbs phase rule for a 1-component diagram sets the limit to how many phases can coexist to just three. Therefore, quadruple points (or higher coexistence points) are not possible for pure substances, even for polymorphs.$^3$
Another point with a fixed position in the $PT$ diagram is the critical point, $\{T_{\text{c}},P_{\text{c}},\overline{V}_{\text{c}}\}$. We have already given the definition of the critical temperature in Definition: Critical Temperature. This point represents the end of the liquid–gas equilibrium curve. This point is also important for naming different regions of the phase diagram, as in Figure $1$. A gas whose pressure and temperature are below the critical point is called a vapor. A gas whose temperature is above the critical one and whose pressure is below its critical one is called a supercritical fluid. Finally, a liquid whose pressure is above the critical point is called a compressible liquid.$^4$
1. As is well explained by Wikipedia: “The unusual density curve and lower density of ice than of water is vital to life—if water were most dense at the freezing point, then in winter the very cold water at the surface of lakes and other water bodies would sink, lakes could freeze from the bottom up, and all life in them would be killed. Furthermore, given that water is a good thermal insulator (due to its heat capacity), some frozen lakes might not completely thaw in summer.[34] The layer of ice that floats on top insulates the water below. Water at about 4 °C (39 °F) also sinks to the bottom, thus keeping the temperature of the water at the bottom constant.”︎
2. For more information on the 2019 redefinition of the SI units, see this Wikipedia page.
3. Notice that quadruple points are possible for 2-component diagrams.
4. Notice that the temperature of a liquid must be below the critical point, otherwise it is no longer a liquid but rather a supercritical fluid.
12.04: The Clausius-Clapeyron Equation
Let’s now take a closer look at the equilibrium between a condensed phase and the gas phase. For both the vaporization and sublimation processes, Clausius showed that the Clapeyron equation can be simplified by using:
$\Delta_{\text{vap}} S = \dfrac{\Delta_{\text{vap}} H}{T} \qquad \Delta \overline{V}= \overline{V}_{\mathrm{g}} -\overline{V}_{\mathrm{l}} \cong \overline{V}_{\mathrm{g}}, \label{12.4.1}$
resulting in:
$\dfrac{dP}{dT} = \dfrac{ \Delta_{\text{vap}} S}{\Delta \overline{V}} \cong \dfrac{ \Delta_{\text{vap}} H}{T \overline{V}_{\mathrm{g}}}. \label{12.4.2}$
Using the ideal gas law to replace the molar volume of the gas, we obtain:
$\dfrac{dP}{dT} = \dfrac{P \Delta_{\text{vap}} H}{RT^2}, \label{12.4.3}$
which can be rearranged as:
$\dfrac{dP}{P} = \dfrac{\Delta_{\text{vap}} H}{R} \dfrac{dT}{T^2}. \label{12.4.4}$
Equation \ref{12.4.4} is known as the Clausius–Clapeyron equation, and it describes the dependence of the vapor pressure of a substance on the temperature. The Clausius–Clapeyron equation can be integrated to obtain:
\begin{aligned} \int_{P_i}^{P_f} \dfrac{dP}{P} &= \dfrac{\Delta_{\text{vap}} H}{R} \int_{T_i}^{T_f} \dfrac{dT}{T^2} \\ \ln \dfrac{P_f}{P_i} &=-\dfrac{\Delta_{\text{vap}} H}{R} \left( \dfrac{1}{T_f}-\dfrac{1}{T_i} \right). \end{aligned} \label{12.4.5}
The integrated Clausius–Clapeyron equation shows that the vapor pressure depends exponentially on the temperature. Thus, even a small change in the temperature will result in a significant change in the vapor pressure. In fact, we exploit this strong temperature dependence of the vapor pressure of water every day when we cook. For example, the vapor pressure of water rapidly grows from $P^*=0.02\;\text{bar}$ to $P^*=1\;\text{bar}$ (the boiling condition at an external pressure of 1 bar) when the temperature is increased from $T=293\;\mathrm{K}$ (around room temperature) to $T=373\;\mathrm{K}$ (boiling point). The integrated Clausius–Clapeyron equation is also often used to determine the enthalpy of vaporization from measurements of vapor pressure at different temperatures.
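A minimal Python sketch of Equation \ref{12.4.5}, anchored at the normal boiling point of water and assuming a constant $\Delta_{\text{vap}}H \approx 40.7\;\text{kJ/mol}$ (in reality the enthalpy of vaporization varies with temperature, so the extrapolation is only approximate):

```python
# Integrated Clausius-Clapeyron equation for water (a sketch; dHvap is an
# assumed, temperature-independent literature value).
import math

R = 8.314          # J/(mol K)
dH_vap = 40700.0   # J/mol

def vapor_pressure(T, T_ref=373.15, P_ref=1.0):
    """Vapor pressure in bar, extrapolated from the normal boiling point."""
    return P_ref * math.exp(-dH_vap / R * (1.0 / T - 1.0 / T_ref))

print(f"P*(293 K) ~ {vapor_pressure(293.15):.3f} bar")  # close to the 0.02 bar above
```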
13: Multi-Component Phase Diagrams
We now move from studying 1-component systems to multi-component ones. Systems that include two or more chemical species are usually called solutions. Solutions are possible for all three states of matter:
| Type | Solvent | Solute | Examples |
|---|---|---|---|
| Solid solutions | Solid | Solid | Alloys: brass, bronze |
| Solid solutions | Solid | Liquid | Dental amalgam |
| Solid solutions | Solid | Gas | Hydrogen stored in palladium |
| Liquid solutions | Liquid | Solid | Saltwater, bleach |
| Liquid solutions | Liquid | Liquid | Alcoholic beverages, vinegar |
| Liquid solutions | Liquid | Gas | Carbonated drinks |
| Gaseous solutions | Gas | Solid | Smoke, smog |
| Gaseous solutions | Gas | Liquid | Aerosols and perfumes |
| Gaseous solutions | Gas | Gas | Air |
The number of degrees of freedom for binary solutions (solutions containing two components) is calculated from the Gibbs phase rule at \(f=2-p+2=4-p\). When one phase is present, binary solutions require \(4-1=3\) variables to be described, usually temperature (\(T\)), pressure (\(P\)), and mole fraction (\(y_i\) in the gas phase and \(x_i\) in the liquid phase). Single-phase, 2-component systems therefore require a three-dimensional \(T,P,x_i\) diagram to be described. When two phases are present (e.g., gas and liquid), only two variables are independent, typically pressure and concentration. Thus, we can study the behavior of the partial pressure of a gas–liquid solution in a 2-dimensional plot. If the gas phase in a solution exhibits properties similar to those of a mixture of ideal gases, it is called an ideal solution. The obvious difference between ideal solutions and ideal gases is that the intermolecular interactions in the liquid phase cannot be neglected as they are for the gas phase. The main advantage of ideal solutions is that the interactions between particles in the liquid phase have similar mean strength throughout the entire phase. We will consider ideal solutions first, and then we’ll discuss deviations from ideal behavior and non-ideal solutions.
13.01: Raoult’s Law and Phase Diagrams of Ideal Solutions
The behavior of the vapor pressure of an ideal solution can be mathematically described by a simple law established by François-Marie Raoult (1830–1901). Raoult’s law states that the partial pressure of each component, $i$, of an ideal mixture of liquids, $P_i$, is equal to the vapor pressure of the pure component $P_i^*$ multiplied by its mole fraction in the mixture $x_i$:
$P_i=x_i P_i^*. \label{13.1.1}$
One volatile component
Raoult’s law applied to a system containing only one volatile component describes a line in the $Px_{\text{B}}$ plot, as in Figure $1$.
As is evident from Figure $1$, Raoult’s law divides the diagram into two distinct areas, each with three degrees of freedom.$^1$ Each area contains a phase, with the vapor at the bottom (low pressure), and the liquid at the top (high pressure). Raoult’s law acts as an additional constraint for the points sitting on the line. Therefore, the number of independent variables along the line is only two. Once the temperature is fixed and the vapor pressure is measured, the mole fraction of the volatile component in the liquid phase is determined.
Two volatile components
In an ideal solution, every volatile component follows Raoult’s law. Since the vapors in the gas phase behave ideally, the total pressure can be simply calculated using Dalton’s law as the sum of the partial pressures of the two components $P_{\text{TOT}}=P_{\text{A}}+P_{\text{B}}$. The corresponding diagram is reported in Figure $2$. The total vapor pressure, calculated using Dalton’s law, is reported in red. The Raoult’s behaviors of each of the two components are also reported using black dashed lines.
Exercise $1$
Calculate the mole fraction in the vapor phase of a liquid solution composed of 67% of toluene ($\mathrm{A}$) and 33% of benzene ($\mathrm{B}$), given the vapor pressures of the pure substances: $P_{\text{A}}^*=0.03\;\text{bar}$, and $P_{\text{B}}^*=0.10\;\text{bar}$.
Answer
The data available for the systems are summarized as follows: \begin{aligned} x_{\text{A}}=0.67 \qquad & \qquad x_{\text{B}}=0.33 \\ P_{\text{A}}^* = 0.03\;\text{bar} \qquad & \qquad P_{\text{B}}^* = 0.10\;\text{bar} \\ & P_{\text{TOT}} = ? \\ y_{\text{A}}=? \qquad & \qquad y_{\text{B}}=? \end{aligned}\label{13.1.2} The total pressure of the vapors can be calculated combining Dalton’s and Raoult’s laws: \begin{aligned} P_{\text{TOT}} &= P_{\text{A}}+P_{\text{B}}=x_{\text{A}} P_{\text{A}}^* + x_{\text{B}} P_{\text{B}}^* \\ &= 0.67\cdot 0.03+0.33\cdot 0.10 \\ &= 0.02 + 0.03 = 0.05 \;\text{bar} \end{aligned}\label{13.1.3} We can then calculate the mole fraction of the components in the vapor phase as: \begin{aligned} y_{\text{A}}=\dfrac{P_{\text{A}}}{P_{\text{TOT}}} & \qquad y_{\text{B}}=\dfrac{P_{\text{B}}}{P_{\text{TOT}}} \\ y_{\text{A}}=\dfrac{0.02}{0.05}=0.40 & \qquad y_{\text{B}}=\dfrac{0.03}{0.05}=0.60 \end{aligned}\label{13.1.4} Notice how the mole fraction of toluene is much higher in the liquid phase, $x_{\text{A}}=0.67$, than in the vapor phase, $y_{\text{A}}=0.40$.
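The same arithmetic, carried out without the intermediate rounding used in the exercise (a minimal sketch; the $y$ values differ slightly from the rounded ones above):

```python
# Raoult + Dalton for the toluene/benzene exercise (a sketch, full precision).
x = {"toluene": 0.67, "benzene": 0.33}       # liquid-phase mole fractions
P_star = {"toluene": 0.03, "benzene": 0.10}  # pure vapor pressures, bar

P = {s: x[s] * P_star[s] for s in x}         # Raoult's law
P_tot = sum(P.values())                      # Dalton's law
print(f"P_tot = {P_tot:.4f} bar")            # 0.0531 bar
for s in x:
    print(f"y({s}) = {P[s] / P_tot:.2f}")    # toluene 0.38, benzene 0.62
```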
As is clear from the results of Exercise $1$, the concentrations of the components in the liquid and vapor phases are different. We can also report the mole fraction in the vapor phase as an additional line in the $Px_{\text{B}}$ diagram of Figure $2$. When both concentrations are reported in one diagram (as in Figure $3$), the line reporting $x_{\text{B}}$ is called the liquidus line, while the line reporting $y_{\text{B}}$ is called the Dew point line.
The liquidus and Dew point lines determine a new section in the phase diagram where the liquid and vapor phases coexist. Since the degrees of freedom inside the area are only 2, for a system at constant temperature, a point inside the coexistence area has fixed mole fractions for both phases. We can reduce the pressure on top of a liquid solution with concentration $x^i_{\text{B}}$ (see Figure $3$) until the solution hits the liquidus line. At this pressure, the solution forms a vapor phase with mole fraction given by the corresponding point on the Dew point line, $y^f_{\text{B}}$.
$T_{\text{B}}$ phase diagrams and fractional distillation
We can now consider the phase diagram of a 2-component ideal solution as a function of temperature at constant pressure. The $T_{\text{B}}$ diagram for two volatile components is reported in Figure $4$.
Compared to the $Px_{\text{B}}$ diagram of Figure $3$, the phases are now in reversed order, with the liquid at the bottom (low temperature), and the vapor on top (high temperature). The liquidus and Dew point lines are curved and form a lens-shaped region where liquid and vapor coexist. Once again, there is only one degree of freedom inside the lens. As such, a liquid solution of initial composition $x_{\text{B}}^i$ can be heated until it hits the liquidus line. At this temperature the solution boils, producing a vapor with concentration $y_{\text{B}}^f$. As is clear from Figure $4$, the mole fraction of the $\text{B}$ component in the gas phase is lower than the mole fraction in the liquid phase. This fact can be exploited to separate the two components of the solution. In particular, if we set up a series of consecutive evaporations and condensations, we can distill fractions of the solution with an increasingly lower concentration of the less volatile component $\text{B}$. This is exemplified in the industrial process of fractional distillation, as schematically depicted in Figure $5$.
Each of the horizontal lines in the lens region of the $Tx_{\text{B}}$ diagram of Figure $5$ corresponds to a condensation/evaporation process and is called a theoretical plate. These plates are industrially realized on large columns with several floors equipped with condensation trays. The temperature decreases with the height of the column. A condensation/evaporation process will happen on each level, and a solution concentrated in the most volatile component is collected. The theoretical plates and the $Tx_{\text{B}}$ diagram are crucial for sizing industrial fractional distillation columns.
1. Only two degrees of freedom are visible in the $Px_{\text{B}}$ diagram. Temperature represents the third independent variable.
13.02: Phase Diagrams of Non-Ideal Solutions
Non-ideal solutions follow Raoult’s law for only a narrow range of concentrations. The typical behavior of a non-ideal solution with a single volatile component is reported in the $Px_{\text{B}}$ plot in Figure $1$.
Raoult’s behavior is observed for high concentrations of the volatile component. This behavior is observed at $x_{\text{B}} \rightarrow 0$ in Figure $1$, since the volatile component in this diagram is $\mathrm{A}$. At low concentrations of the volatile component, $x_{\text{B}} \rightarrow 1$ in Figure $1$, the solution follows a behavior along a steeper line, which is known as Henry’s law. William Henry (1774–1836) extensively studied the behavior of gases dissolved in liquids. His studies resulted in a simple law that relates the partial pressure of a dissolved component to its mole fraction through a constant, called the Henry’s law solubility constant:
$P_{\text{B}}=k_{\text{AB}} x_{\text{B}}, \label{13.2.1}$
where $k_{\text{AB}}$ depends on the chemical nature of $\mathrm{A}$ and $\mathrm{B}$. The corresponding diagram for non-ideal solutions with two volatile components is reported on the left panel of Figure $2$. The total pressure is once again calculated as the sum of the two partial pressures. Positive deviations from Raoult’s ideal behavior are not the only possible deviation from ideality, and negative deviation also exists, albeit slightly less common. An example of a negative deviation is reported in the right panel of Figure $2$.
If we move from the $Px_{\text{B}}$ diagram to the $Tx_{\text{B}}$ diagram, the behaviors observed in Figure $2$ will correspond to the diagram in Figure $3$.
The minimum (left plot) and maximum (right plot) points in Figure $3$ represent the so-called azeotrope.
An azeotrope is a constant boiling point solution whose composition cannot be changed by simple distillation. This happens because the liquidus and Dew point lines coincide at this point. Therefore, the liquid and the vapor phases have the same composition, and distillation cannot occur. Two types of azeotropes exist, representative of the two types of non-ideal behavior of solutions. The first type is the positive azeotrope (left plot in Figure $3$). A famous example of this behavior at atmospheric pressure is the ethanol/water mixture, with composition 95.63% ethanol by mass. This positive azeotrope boils at $T=78.2\;^\circ \text{C}$, a temperature that is lower than the boiling points of the pure constituents, since ethanol boils at $T=78.4\;^\circ \text{C}$ and water at $T=100\;^\circ \text{C}$. The second type is the negative azeotrope (right plot in Figure $3$). An example of this behavior at atmospheric pressure is the hydrochloric acid/water mixture with composition 20.2% hydrochloric acid by mass. This negative azeotrope boils at $T=110\;^\circ \text{C}$, a temperature that is higher than the boiling points of the pure constituents, since hydrochloric acid boils at $T=-84\;^\circ \text{C}$ and water at $T=100\;^\circ \text{C}$.
13.03: Phase Diagrams of 2-Components 2-Condensed Phases Systems
We now consider equilibria between two condensed phases: liquid/liquid, liquid/solid, and solid/solid. These equilibria usually occur in the low-temperature region of a phase diagram (or high pressure). Three situations are possible, depending on the constituents and concentration of the mixture.
Totally miscible
We have already encountered the situation where the components of a solution mix entirely in the liquid phase. All the diagrams that we’ve discussed up to this point belong to this category.
Totally immiscible
A more complicated case is that for components that do not mix in the liquid phase. The liquid region of the temperature–composition phase diagram for a solution with components that do not mix in the liquid phase below a specific temperature is reported in Figure $1$.
While the liquid 1 + liquid 2 region (white area in Figure $1$) might seem similar to the liquid region that sits on top of it (blue area in Figure $1$), it is substantially different in nature. To prove this, we can calculate the degrees of freedom in each region using the Gibbs phase rule. For the liquid region at the top of the diagram, at constant pressure, we have $f=2-1+1=2$. In other words, the temperature and the composition are independent, and their values can be changed regardless of each other. In the liquid 1 + liquid 2 region at the bottom, however, we have $f=2-2+1=1$, which means that only one variable is independent of the others. The white region in Figure $1$ is a 2-phase region, and it behaves similarly to the other 2-phase regions that we encountered before, such as the inner portion of the lens in Figure 13.1.4. In other words, since the two components are entirely immiscible, once we set the temperature at a value below the immiscibility line, the concentrations of the two liquids will be determined by tracing a horizontal line and by reading the concentrations on the left and right of the diagram (corresponding to 100% $\mathrm{A}$ and 100% $\mathrm{B}$, respectively).
Partially miscible
The third and final case is undoubtedly the most interesting since several behaviors are possible. In fact, there might be components that are partially miscible at low temperatures but totally miscible at higher temperatures, for which the diagram will assume the general shape depicted in Figure $2$. A typical example of this behavior is the mixture between water and phenol, whose liquids are completely miscible at $T>66\;^\circ \text{C}$, and only partially miscible below this temperature. The composition of the 2-phases region (white area in Figure $2$) is determined by tracing a horizontal line and reading the mole fraction on the line that delimits the area, as for the previous case.$^1$
On the opposite side of the spectrum, the diagram for a mixture whose components are partially miscible at high temperature, but completely miscible at lower temperatures is depicted in Figure $3$. A typical example of this behavior is the mixture between water and triethylamine, whose liquids are completely miscible at $T<18.5\;^\circ \text{C}$, and only partially miscible above this temperature.
Finally, both situations described above are possible simultaneously. For some particular solutions, there exists a range of temperature where the two components are only partially miscible. A typical example of this behavior is given by the water/nicotine mixture, whose liquids are completely miscible at $T>210\;^\circ \text{C}$ and $T<61\;^\circ \text{C}$, but only partially miscible in between these two temperatures, as in the diagram of Figure $4$.
Eutectic systems
For some particular mixtures, the temperature of partial miscibility in the liquid/liquid region might be close to the azeotrope temperature. In some cases, these two regions might even overlap. These characteristic behaviors are reported in Figure $5$.
When the azeotrope and the partial-miscibility temperature overlap, the system forms what is known as a eutectic. Eutectic diagrams are possible at the liquid/gas equilibrium, but they are widespread at the liquid/solid equilibrium, where two components are completely miscible in the liquid phase, but only partially miscible in the solid phase. Eutectics with completely immiscible components in the solid phase are also very common, as in the diagram reported in Figure $6$.
1. The only noticeable difference, in this case, is that the two concentrations will be different than 0 and 100% since the component mix partially.
14: Properties of Solutions
In chapter 13, we have qualitatively described the deviation of real solutions from ideal behavior. In this section, we are discussing it quantitatively. We will be able to do so by using a concept that we have already encountered in chapter 10: Lewis’s activity.
14.01: Activity
For non-ideal gases, we introduced in chapter 11 the concept of fugacity as an effective pressure that accounts for non-ideal behavior. If we extend this concept to non-ideal solutions, we can introduce the activity of a liquid or a solid, $a$, as:
$\mu_{\text{non-ideal}} = \mu^{- {\ominus} } + RT \ln a, \label{14.1.1}$
where $\mu$ is the chemical potential of the substance or the mixture, and $\mu^{-\ominus}$ is the chemical potential at standard state. Comparing this definition to Equation 11.4.2, it is clear that the activity is equal to the fugacity for a non-ideal gas (which, in turn, is equal to the pressure for an ideal gas). However, for a liquid and a liquid mixture, it depends on the chemical potential at standard state. This means that the activity is not an absolute quantity, but rather a relative term describing how “active” a compound is compared to standard state conditions. The choice of the standard state is, in principle, arbitrary, but conventions are often chosen out of mathematical or experimental convenience. We already discussed the convention that standard state for a gas is at $P^{-\ominus}=1\;\text{bar}$, so the activity is equal to the fugacity. The standard state for a component in a solution is the pure component at the temperature and pressure of the solution. This definition is equivalent to setting the activity of a pure component, $i$, at $a_i=1$.
For a component in a solution we can use Equation 11.4.2 to write the chemical potential in the gas phase as:
$\mu_i^{\text{vapor}} = \mu_i^{- {\ominus} } + RT \ln \dfrac{P_i}{P^{-\ominus}}. \label{14.1.2}$
If the gas phase is in equilibrium with the liquid solution, then:
$\mu_i^{\text{solution}} = \mu_i^{\text{vapor}} = \mu_i^{-\kern-6pt{\ominus}\kern-6pt-} + RT \ln \dfrac{P_i}{P^{-\kern-6pt{\ominus}\kern-6pt-}}, \label{14.1.3}$
while the chemical potential of the pure liquid, $\mu_i^*$, is obtained from the same relation written for the vapor in equilibrium with the pure component, $\mu_i^* = \mu_i^{-\kern-6pt{\ominus}\kern-6pt-} + RT \ln \left( P_i^*/P^{-\kern-6pt{\ominus}\kern-6pt-} \right)$. Subtracting the latter expression from Equation \ref{14.1.3}, we obtain:
$\mu_i^{\text{solution}} = \mu_i^* + RT \ln \dfrac{P_i}{P^*_i}. \label{14.1.4}$
For an ideal solution, we can use Raoult’s law, Equation 13.1.1, to rewrite Equation \ref{14.1.4} as:
$\mu_i^{\text{solution}} = \mu_i^* + RT \ln x_i, \label{14.1.5}$
which relates the chemical potential of a component in an ideal solution to the chemical potential of the pure liquid and its mole fraction in the solution. For a non-ideal solution, the partial pressure in Equation \ref{14.1.4} is either larger (positive deviation) or smaller (negative deviation) than the pressure calculated using Raoult’s law. The chemical potential of a component in the mixture is then calculated using:
$\mu_i^{\text{solution}} = \mu_i^* + RT \ln \left(\gamma_i x_i\right), \label{14.1.6}$
where $\gamma_i$ is a positive coefficient that accounts for deviations from ideality. This coefficient is either larger than one (for positive deviations), or smaller than one (for negative deviations). The activity of component $i$ can be calculated as an effective mole fraction, using:
$a_i = \gamma_i x_i, \label{14.1.7}$
where $\gamma_i$ is defined as the activity coefficient. The partial pressure of the component can then be related to its vapor pressure, using:
$P_i = a_i P_i^*. \label{14.1.8}$
Comparing Equation \ref{14.1.8} with Raoult’s law, we can calculate the activity coefficient as:
$\gamma_i = \dfrac{P_i}{x_i P_i^*} = \dfrac{P_i}{P_i^{\text{R}}}, \label{14.1.9}$
where $P_i^{\text{R}}$ is the partial pressure calculated using Raoult’s law. This result also proves that for an ideal solution, $\gamma=1$. Equation \ref{14.1.9} can also be used experimentally to obtain the activity coefficient from the phase diagram of the non-ideal solution. This is achieved by measuring the value of the partial pressure of the vapor of a non-ideal solution. Examples of this procedure are reported for both positive and negative deviations in Figure $1$.
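A minimal sketch of Equation \ref{14.1.9} on a single data point (all numbers below are illustrative, not measured values):

```python
# Activity coefficient from gamma = P_i / (x_i * P_i*) (a sketch with
# purely illustrative numbers).
x_i = 0.50         # liquid-phase mole fraction (hypothetical)
P_star = 0.10      # vapor pressure of the pure component, bar (hypothetical)
P_meas = 0.065     # measured partial pressure, bar (hypothetical)

gamma = P_meas / (x_i * P_star)  # Equation 14.1.9
a_i = gamma * x_i                # activity as effective mole fraction, Eq. 14.1.7
print(f"gamma = {gamma:.2f} (>1: positive deviation), a = {a_i:.3f}")
```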
As we already discussed in chapter 10, the activity is the most general quantity that we can use to define the equilibrium constant of a reaction (or the reaction quotient). The advantage of using the activity is that it’s defined for ideal and non-ideal gases and mixtures of gases, as well as for ideal and non-ideal solutions in both the liquid and the solid phase.$^1$
1. Notice that, since the activity is a relative measure, the equilibrium constant expressed in terms of the activities is also a relative concept. In other words, it measures equilibrium relative to a standard state. This fact, however, should not surprise us, since the equilibrium constant is also related to $\Delta_{\text{rxn}} G^{-\ominus}$ using Gibbs’ relation. This is why the definition of a universally agreed-upon standard state is such an essential concept in chemistry, and why it is defined by the International Union of Pure and Applied Chemistry (IUPAC) and followed systematically by chemists around the globe.
14.02: Colligative Properties
Colligative properties are properties of solutions that depend on the number of particles in the solution and not on the nature of the chemical species. More specifically, a colligative property depends on the ratio between the number of particles of the solute and the number of particles of the solvent. This ratio can be measured using any unit of concentration, such as mole fraction, molarity, and normality. For diluted solutions, however, the most useful concentration for studying colligative properties is the molality, $m$, which measures the ratio between the number of particles of the solute (in moles) and the mass of the solvent (in kg):
$m = \dfrac{n_{\text{solute}}}{m_{\text{solvent}}}. \label{14.2.1}$
Colligative properties usually result from the dissolution of a nonvolatile solute in a volatile liquid solvent, and they are properties of the solvent, modified by the presence of the solute. They are physically explained by the fact that the solute particles displace some solvent molecules in the liquid phase, thereby reducing the concentration of the solvent. This explanation shows how colligative properties are independent of the nature of the chemical species in a solution only if the solution is ideal. For non-ideal solutions, the formulas that we will derive below are valid only in an approximate manner. We will discuss the following four colligative properties: relative lowering of the vapor pressure, elevation of the boiling point, depression of the melting point, and osmotic pressure.
Vapor pressure lowering
As we have already discussed in chapter 13, the vapor pressure of an ideal solution follows Raoult’s law. Its difference with respect to the vapor pressure of the pure solvent can be calculated as:
\begin{aligned} P_{\text{solvent}}^* - P_{\text{solution}} &= P_{\text{solvent}}^* - x_{\text{solvent}} P_{\text{solvent}}^* \\ &= \left( 1-x_{\text{solvent}}\right)P_{\text{solvent}}^* =x_{\text{solute}} P_{\text{solvent}}^*, \end{aligned} \label{14.2.2}
which shows that the vapor pressure lowering depends only on the concentration of the solute. As such, it is a colligative property.
Boiling point elevation and melting point depression
The following two colligative properties are explained by reporting the changes due to the solute molecules in the plot of the chemical potential as a function of temperature (Figure $1$).
At the boiling point, the chemical potential of the solution is equal to the chemical potential of the vapor, and the following relation can be obtained:
\begin{aligned} \mu_{\text{solution}} &=\mu_{\text{vap}}=\mu_{\text{solvent}}^{-\kern-6pt{\ominus}\kern-6pt-} + RT \ln P_{\text{solution}} \\ &= \mu_{\text{solvent}}^{-\kern-6pt{\ominus}\kern-6pt-} + RT \ln \left(x_{\text{solvent}} P_{\text{solvent}}^* \right) \\ &= \underbrace{\mu_{\text{solvent}}^{-\kern-6pt{\ominus}\kern-6pt-} + RT \ln P_{\text{solvent}}^*}_{\mu_{\text{solvent}}^*} + RT \ln x_{\text{solvent}} \\ &= \mu_{\text{solvent}}^* + RT \ln x_{\text{solvent}}, \end{aligned} \label{14.2.3}
and since $x_{\text{solvent}}<1$, the logarithmic term in the last expression is negative, and:
$\mu_{\text{solution}} < \mu_{\text{solvent}}^*. \label{14.2.4}$
Equation \ref{14.2.3} proves that the addition of a solute always stabilizes the solvent in the liquid phase, and lowers its chemical potential, as shown in Figure $1$.
The elevation of the boiling point can be quantified using:
$\Delta T_{\text{b}}=T_{\text{b}}^{\text{solution}}-T_{\text{b}}^{\text{solvent}}=iK_{\text{b}}m, \label{14.2.5}$
where $i$ is the van ’t Hoff factor, a coefficient that measures the number of solute particles for each formula unit, $K_{\text{b}}$ is the ebullioscopic constant of the solvent, and $m$ is the molality of the solution, as introduced in Equation \ref{14.2.1} above. For a solute that does not dissociate in solution, $i=1$. For a solute that dissociates in solution, the number of particles in solutions depends on how many particles it dissociates into, and $i>1$. For example, the strong electrolyte $\mathrm{Ca}\mathrm{Cl}_2$ completely dissociates into three particles in solution, one $\mathrm{Ca}^{2+}$ and two $\mathrm{Cl}^-$, and $i=3$. For cases of partial dissociation, such as weak acids, weak bases, and their salts, $i$ can assume non-integer values.
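A minimal numerical sketch of Equation \ref{14.2.5} for an aqueous $\mathrm{Ca}\mathrm{Cl}_2$ solution, using the ebullioscopic constant of water quoted further below and an illustrative molality:

```python
# Boiling point elevation dTb = i * Kb * m (a sketch; Kb for water as in the
# text, molality chosen for illustration).
i = 3        # CaCl2 dissociates into three particles
Kb = 0.512   # K kg / mol, water
m = 0.10     # mol / kg (illustrative)

print(f"dTb = {i * Kb * m:.3f} K")  # about 0.15 K above 373.15 K
```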
If we assume ideal solution behavior, the ebullioscopic constant can be obtained from the thermodynamic condition for liquid-vapor equilibrium. At the boiling point of the solution, the chemical potential of the solvent in the solution phase equals the chemical potential in the pure vapor phase above the solution:
$\mu_{\text{solution}} (T_{\text{b}}) = \mu_{\text{solvent}}^*(T_b) + RT\ln x_{\text{solvent}}, \label{14.2.6}$
from which we can derive, using the Gibbs–Helmholtz equation, Equation 9.2.4:
$K_{\text{b}}=\dfrac{RMT_{\text{b}}^{2}}{\Delta_{\mathrm{vap}} H}, \label{14.2.7}$
where $R$ is the ideal gas constant, $M$ is the molar mass of the solvent, and $\Delta_{\mathrm{vap}} H$ is its molar enthalpy of vaporization.
The reduction of the melting point is similarly obtained by:
$\Delta T_{\text{m}}=T_{\text{m}}^{\text{solution}}-T_{\text{m}}^{\text{solvent}}=-iK_{\text{m}}m, \label{14.2.8}$
where $i$ is the van ’t Hoff factor introduced above, $K_{\text{m}}$ is the cryoscopic constant of the solvent, $m$ is the molality, and the minus sign accounts for the fact that the melting temperature of the solution is lower than the melting temperature of the pure solvent ($\Delta T_{\text{m}}$ is defined as a negative quantity, while $i$, $K_{\text{m}}$, and $m$ are all positive). Similarly to the previous case, the cryoscopic constant can be related to the molar enthalpy of fusion of the solvent using the equivalence of the chemical potential of the solid and the liquid phases at the melting point, and employing the Gibbs–Helmholtz equation:
$K_{\text{m}}=\dfrac{RMT_{\text{m}}^{2}}{\Delta_{\mathrm{fus}}H}. \label{14.2.9}$
Notice from Figure $1$ how the depression of the melting point is always larger than the elevation of the boiling point. This is because the chemical potential of the solid is essentially flat, while the chemical potential of the gas is steep: closing the same gap in chemical potential requires a larger temperature shift at the melting point than at the boiling point. Consequently, the value of the cryoscopic constant is always bigger than the value of the ebullioscopic constant. For example, for water $K_{\text{m}} = 1.86\; \dfrac{\text{K kg}}{\text{mol}}$, while $K_{\text{b}} = 0.512\; \dfrac{\text{K kg}}{\text{mol}}$. This is also reflected in Equations \ref{14.2.7} and \ref{14.2.9}: the enthalpy of fusion in the denominator of $K_{\text{m}}$ is much smaller than the enthalpy of vaporization in the denominator of $K_{\text{b}}$.
Osmotic pressure
The osmotic pressure of a solution is defined as the difference in pressure between the solution and the pure liquid solvent when the two are in equilibrium across a semi-permeable (osmotic) membrane. The osmotic membrane is made of a porous material that allows the flow of solvent molecules but blocks the flow of the solute ones. The osmosis process is depicted in Figure $2$.
Starting from a solvent at atmospheric pressure in the apparatus depicted in Figure $2$, we can add solute particles to the left side of the apparatus. The increase in concentration on the left causes a net transfer of solvent from the pure-solvent side (right) to the solution side (left) of the membrane. This flow stops when the pressure difference between the two sides equals the osmotic pressure, $\pi$. The formula that governs the osmotic pressure was initially proposed by van ’t Hoff and later refined by Harmon Northrop Morse (1848–1920). The Morse formula reads:
$\pi = imRT, \label{14.2.10}$
where $i$ is the van ’t Hoff factor introduced above, $m$ is the molality of the solution, $R$ is the ideal gas constant, and $T$ the temperature of the solution. As with the other colligative properties, the Morse equation is a consequence of the equality of the chemical potentials of the solvent and the solution at equilibrium.$^1$
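A minimal sketch of the Morse formula follows; the concentration is hypothetical, and the molality of a dilute aqueous solution is treated as numerically equal to its molarity, which is a good approximation near room temperature:

```python
# Osmotic pressure from the Morse formula, Equation 14.2.10
i = 1            # van 't Hoff factor for a non-dissociating solute
m = 0.1          # molality, mol/kg (~0.1 mol/L for dilute aqueous solutions)
R = 8.314        # ideal gas constant, J/(mol K)
T = 298.15       # temperature, K

conc = m * 1000  # convert to mol/m^3 using the molality/molarity equivalence
pi = i * conc * R * T
print(pi / 1e5)  # ~2.48 bar
```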
1. For a derivation, see the osmotic pressure Wikipedia page.
From thermodynamics, we can determine the spontaneity of a reaction and its extent, using $\Delta G$ and $K_{\text{eq}}$, respectively. However, thermodynamics does not provide any information on how fast the reaction is going to happen. For example, while the reaction that converts solid carbon from its diamond allotropic form into hexagonal graphite is thermodynamically spontaneous, it is so slow as to be virtually non-existent. Diamond is effectively a meta-stable phase. The speed of a chemical reaction is the subject of a branch of physical chemistry called chemical kinetics.
A chemical kinetics study aims to find the rate of a reaction and to find the microscopic steps that compose it, determining its mechanism.
• 15.1: Differential and integrated rate laws
The rate law of a chemical reaction is an equation that links the initial rate with the concentrations (or pressures) of the reactants. Rate laws usually include a constant parameter, k, called the rate coefficient, and several parameters that appear as exponents of the reactant concentrations, called reaction orders. The rate coefficient depends on several conditions, including the reaction type, the temperature, the surface area of an adsorbent, light irradiation, and others.
• 15.2: Complex Rate Laws
It is essential to specify that the order of a reaction and its molecularity are equal only for elementary reactions. Reactions that follow complex rate laws are composed of several elementary steps, and they usually have non-integer reaction orders for at least one of the reactants.
• 15.3: Experimental Methods for Determination of Reaction Orders
To experimentally measure the reaction rate, we need a method to measure concentration changes with respect to time. The simplest way to determine the reaction rate is to monitor the entire reaction as it proceeds and then plot the resulting data differently until a linear plot is found.
• 15.4: Temperature Dependence of the Rate Coefficients
The dependence of the rate coefficient, k , on the temperature is given by the Arrhenius equation. This formula was derived by Svante August Arrhenius (1859–1927) in 1889 and is based on the simple experimental observation that every chemical process gets faster when the temperature is increased. Working on data from equilibrium reactions previously reported by van ’t Hoff, Arrhenius proposed the following simple exponential formula to explain the increase of k when T is increased.
15: Chemical Kinetics
The rate law of a chemical reaction is an equation that links the initial rate with the concentrations (or pressures) of the reactants. Rate laws usually include a constant parameter, $k$, called the rate coefficient, and several parameters that appear as exponents of the reactant concentrations, called reaction orders. The rate coefficient depends on several conditions, including the reaction type, the temperature, the surface area of an adsorbent, light irradiation, and others. The rate coefficient is usually represented with the lowercase letter $k$, and it should not be confused with the thermodynamic equilibrium constant that is generally designated with the uppercase letter $K$. Another useful concept in kinetics is the half-life, usually abbreviated with $t_{1/2}$. The half-life is defined as the time required to reach half of the initial reactant concentration.
A reaction that happens in one single microscopic step is called elementary. Elementary reactions have reaction orders equal to the (integer) stoichiometric coefficients for each reactant. As such, only a limited number of elementary reactions are possible (four types are commonly observed), and they are classified according to their overall reaction order. The global reaction order of a reaction is calculated as the sum of each reactant’s individual orders and is, at most, equal to three. We examine in detail the four most common reaction orders below.
Zeroth-order reaction
For a zeroth-order reaction, the reaction rate is independent of the concentration of a reactant. In other words, if we have a reaction of the type:
$\text{A}\longrightarrow\text{products} \nonumber$
the differential rate law can be written:
$- \dfrac{d[\mathrm{A}]}{dt}=k_0 [\mathrm{A}]^0 = k_0, \label{15.1.1}$
which shows that any change in the concentration of $\mathrm{A}$ will have no effect on the speed of the reaction. The minus sign on the left-hand side is required because the rate is always defined as a positive quantity, while the derivative is negative because the concentration of the reactant diminishes with time. Separating the variables $[\mathrm{A}]$ and $t$ of Equation \ref{15.1.1} and integrating both sides, we obtain the integrated rate law for a zeroth-order reaction as:
\begin{aligned} \int_{[\mathrm{A}]_0}^{[A]} d[\mathrm{A}] &= -k_0 \int_{t=0}^{t} dt \ [\mathrm{A}]-[\mathrm{A}]_0 &= -k_0 t \ \ [\mathrm{A}]&=[\mathrm{A}]_0 -k_0 t. \end{aligned} \label{15.1.2}
Using the integrated rate law, we notice that the concentration of the reactant diminishes linearly with respect to time. A plot of $[\mathrm{A}]$ as a function of $t$, therefore, will result in a straight line with an angular coefficient equal to $-k_0$, as in the plot of Figure $1$.
Equation \ref{15.1.2} also suggests that the units of the rate coefficient for a zeroth-order reaction are those of concentration divided by time, typically $\dfrac{\mathrm{M}}{\mathrm{s}}$, with $\mathrm{M}$ being the molar concentration in $\dfrac{\mathrm{mol}}{\mathrm{L}}$ and $\mathrm{s}$ the time in seconds. The half-life of a zeroth-order reaction can be calculated from Equation \ref{15.1.2}, by replacing $[\mathrm{A}]$ with $\dfrac{1}{2}[\mathrm{A}]_0$:
\begin{aligned} \dfrac{1}{2}[\mathrm{A}]_0 &=[\mathrm{A}]_0 -k_0 t_{1/2} \ t_{1/2} &= \dfrac{[\mathrm{A}]_0}{2k_0}. \end{aligned} \label{15.1.3}
Zeroth-order reactions are common in several biochemical processes catalyzed by enzymes, such as the oxidation of ethanol to acetaldehyde in the liver by the alcohol dehydrogenase enzyme, which is zero-order in ethanol.
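A minimal numerical sketch of the zeroth-order results, with hypothetical rate coefficient and initial concentration:

```python
import numpy as np

k0 = 0.05    # hypothetical rate coefficient, M/s
A0 = 1.0     # initial concentration, M

t = np.linspace(0, 15, 6)    # times, s
A = A0 - k0 * t              # integrated rate law, Equation 15.1.2
t_half = A0 / (2 * k0)       # half-life, Equation 15.1.3

print(A)        # linear decay: 1.0, 0.85, 0.7, ...
print(t_half)   # 10.0 s
```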
First-order reaction
A first-order reaction depends on the concentration of only one reactant, and is therefore also called a unimolecular reaction. As for the previous case, if we consider a reaction of the type:
$\mathrm{A}\rightarrow \text{products} \nonumber$
the differential rate law for a first-order reaction is:
$- \dfrac{d[\mathrm{A}]}{dt}=k_1 [\mathrm{A}]. \label{15.1.4}$
Following the usual blueprint of separating the variables, and integrating both sides, we obtain the integrated rate law as:
\begin{aligned} \int_{[\mathrm{A}]_0}^{[A]} \dfrac{d[\mathrm{A}]}{[\mathrm{A}]} &= -k_1 \int_{t=0}^{t} dt \ \ln \dfrac{[\mathrm{A}]}{[\mathrm{A}]_0}&=-k_1 t\ \ [\mathrm{A}] &= [\mathrm{A}]_0 \exp(-k_1 t). \end{aligned} \label{15.1.5}
Using the integrated rate law to plot the concentration of the reactant, $[\mathrm{A}]$, as a function of time, $t$, we obtain an exponential decay, as in Figure $2$.
However, if we plot the logarithm of the concentration, $\ln[\mathrm{A}]$, as a function of time, we obtain a line with angular coefficient $-k_1$, as in the plot of Figure $3$. From Equation \ref{15.1.5}, we can also obtain the units for the rate coefficient for a first-order reaction, which typically is $\dfrac{1}{\mathrm{s}}$, independent of concentration. Since the rate coefficient for first-order reactions has units of inverse time, it is sometimes called the frequency rate.
The half-life of a first-order reaction is:
\begin{aligned} \ln \dfrac{\dfrac{1}{2}[\mathrm{A}]_0}{[\mathrm{A}]_0}&=-k_1 t_{1/2}\ t_{1/2} &= \dfrac{\ln 2}{k_1}. \end{aligned} \label{15.1.6}
The half-life of a first-order reaction is independent of the initial concentration of the reactant. Therefore, the half-life can be used in place of the rate coefficient to describe the reaction rate. Typical examples of first-order reactions are radioactive decays. For radioactive isotopes, it is common to report their rate of decay in terms of their half-life. For example, the most stable uranium nuclide, $^{238}\mathrm{U}$, has a half-life of $4.468\times 10^9$ years, while the most common fissile isotope of uranium, $^{235}\mathrm{U}$, has a half-life of $7.038\times 10^8$ years.$^1$ Other examples of first-order reactions in chemistry are the class of SN1 nucleophilic substitution reactions in organic chemistry.
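Using the integrated first-order rate law, a minimal sketch of the $^{238}\mathrm{U}$ decay (half-life from the text):

```python
import numpy as np

t_half = 4.468e9             # half-life of 238U, years
k1 = np.log(2) / t_half      # rate coefficient from Equation 15.1.6, 1/years

t = 1.0e9                    # elapsed time, years
fraction_left = np.exp(-k1 * t)   # [A]/[A]0 from Equation 15.1.5
print(k1, fraction_left)     # ~1.55e-10 1/years, ~0.86 of the sample remains
```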
Second-order reaction
A reaction is second-order when the sum of the reaction orders is two. Elementary second-order reactions are also called bimolecular reactions. There are two possibilities: a simple one, in which the reaction order of a single reagent is two, and a more complicated one, in which two reagents each have a reaction order of one.
• For the simple case, we can write the reaction as: $2\mathrm{A}\rightarrow \text{products} \nonumber$ the differential rate law for a second-order reaction is: $-\dfrac{d[\mathrm{A}]}{dt}=k_2 [\mathrm{A}]^2. \label{15.1.7}$ Following the same procedure used for the two previous cases, we can obtain the integrated rate law as: \begin{aligned} \int_{[\mathrm{A}]_0}^{[A]} \dfrac{d[\mathrm{A}]}{[\mathrm{A}]^2} &= -k_2 \int_{t=0}^{t} dt \ \dfrac{1}{[\mathrm{A}]}-\dfrac{1}{[\mathrm{A}]_0} &= k_2 t\ \ \dfrac{1}{[\mathrm{A}]}&=\dfrac{1}{[\mathrm{A}]_0} + k_2 t. \end{aligned} \label{15.1.8} As for first-order reactions, the plot of the concentration as a function of time shows a non-linear decay. However, if we plot the inverse of the concentration, $\dfrac{1}{[\mathrm{A}]}$, as a function of time, $t$, we obtain a line with angular coefficient $+k_2$, as in the plot of Figure $4$.
Notice that the line has a positive angular coefficient, in contrast with the previous two cases, for which the angular coefficients were negative. The units of $k$ for a simple second-order reaction are calculated from Equation \ref{15.1.8} and typically are $\dfrac{1}{\mathrm{M}\cdot \mathrm{s}}$. The half-life of a simple second-order reaction is: \begin{aligned} \dfrac{1}{\dfrac{1}{2}[\mathrm{A}]_0}-\dfrac{1}{[\mathrm{A}]_0} &= k_2 t_{1/2} \ t_{1/2} &= \dfrac{1}{k_2 [\mathrm{A}]_0}, \end{aligned} \label{15.1.9} which, perhaps not surprisingly, depends on the initial concentration of the reactant, $[\mathrm{A}]_0$. Therefore, if we start with a higher concentration of the reactant, the half-life will be shorter, and the reaction will be faster. An example of simple second-order behavior is the reaction $\mathrm{NO}_2 + \mathrm{CO} \rightarrow \mathrm{NO} + \mathrm{CO}_2$, which is second-order in $\mathrm{NO}_2$ and zeroth-order in $\mathrm{CO}$. A numerical sketch of this simple second-order case is reported after this list.
• For the complex second-order case, the reaction is: $\mathrm{A}+\mathrm{B}\rightarrow \text{products} \nonumber$ and the differential rate law is: $-\dfrac{d[\mathrm{A}]}{dt}=k'_2 [\mathrm{A}][\mathrm{B}]. \label{15.1.10}$ The differential equation in Equation \ref{15.1.10} has two variables, and cannot be solved exactly unless an additional relationship is specified. If we assume that the initial concentrations of the two reactants are equal, then $[\mathrm{A}]=[\mathrm{B}]$ at any time $t$, and Equation \ref{15.1.10} reduces to Equation \ref{15.1.7}. If the concentrations of the reactants are different, then the integrated rate law will assume the following shape: $\dfrac{\mathrm{[A]}}{\mathrm{[B]}} = \dfrac{\mathrm{[A]_0}}{\mathrm{[B]_0}} \exp \left\{ \left(\mathrm{[A]_0} - \mathrm{[B]_0}\right) k'_2t \right\}. \label{15.1.11}$ The units of $k$ for a complex second-order reaction can be calculated from Equation \ref{15.1.11}, and are the same as those for the simple case, $\dfrac{1}{\mathrm{M}\cdot \mathrm{s}}$. The half-life of a complex second-order reaction cannot be easily written, since two different half-lives could, in principle, be defined for each of the corresponding reactants.
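As anticipated above, a minimal numerical sketch of the simple second-order case, with hypothetical values:

```python
import numpy as np

k2 = 0.3    # hypothetical rate coefficient, 1/(M s)
A0 = 0.5    # initial concentration, M

t = np.linspace(0, 60, 7)          # times, s
A = 1.0 / (1.0 / A0 + k2 * t)      # integrated rate law, Equation 15.1.8
t_half = 1.0 / (k2 * A0)           # half-life, Equation 15.1.9

print(A)
print(t_half)   # ~6.67 s; doubling A0 would halve the half-life
```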
Third- and higher-order reactions
Although elementary reactions with order higher than two are possible, they are in practice infrequent, and only very few experimental third-order reactions are observed. Fourth-order or higher reactions have never been observed, because the probability of a simultaneous interaction between four molecules is essentially zero. Third-order elementary reactions are also called termolecular reactions. While termolecular reactions with three identical reactants are possible in principle, there is no known experimental example. Some complex third-order reactions are known, such as:
$2\text{NO}_{(g)}+\text{O}_{2(g)}\longrightarrow 2\text{NO}_{2(g)} \nonumber$
for which the differential rate law can be written as:
$-\dfrac{dP_{\mathrm{O}_2}}{dt}=k_3 P_{\mathrm{NO}}^2 P_{\mathrm{O}_2}. \label{15.1.12}$
1. Notice how large these numbers are for uranium. To put them in perspective, we can compare them with the half-life of a much less stable isotope of plutonium, $^{241}\mathrm{Pu}$, which is $t_{1/2}=14.1$ years.
It is essential to specify that the order of a reaction and its molecularity are equal only for elementary reactions. Reactions that follow complex rate laws are composed of several elementary steps, and they usually have non-integer reaction orders for at least one of the reactants.
Consecutive reactions
A reaction that happens following a sequence of two elementary steps can be written as follows:
$\text{A}\xrightarrow{\;k_1\;}\text{B}\xrightarrow{\;k_2\;}\text{C} \nonumber$
Assuming that each of the steps follows a first-order kinetic law, and that only the reagent $\mathrm{A}$ is present at the beginning of the reaction, we can write the differential change in concentration of each species with respect to an infinitesimal time $dt$, using the following formulas:
\begin{aligned} -\dfrac{d[\mathrm{A}]}{dt}&=k_1 [\mathrm{A}] \Rightarrow [\mathrm{A}] = [\mathrm{A}]_0 \exp(-k_1 t) \ \dfrac{d[\mathrm{B}]}{dt} &=k_1 [\mathrm{A}]-k_2 [\mathrm{B}] \ \dfrac{d[\mathrm{C}]}{dt} &=k_2 [\mathrm{B}]. \end{aligned}\label{15.2.1}
These three equations represent a system of differential equations with three unknown variables. Unfortunately, these equations are linearly dependent on each other, and they are not sufficient to solve the system for each variable. To do so, we need to include a fourth equation, coming from the conservation of mass:
$[\mathrm{A}]_0=[\mathrm{A}]+[\mathrm{B}]+[\mathrm{C}]. \label{15.2.2}$
Using the first equation in Equation \ref{15.2.1}, we can now replace the concentration $[\mathrm{A}]$ in the second equation and solve for $[\mathrm{B}]$:
$\dfrac{d[\mathrm{B}]}{dt}+k_2 [\mathrm{B}]=k_1 [\mathrm{A}]_0 \exp(-k_1 t), \label{15.2.3}$
which can be simplified by multiplying both sides by $\exp (k_2t)$:
\begin{aligned} \left( \dfrac{d[\mathrm{B}]}{dt}+k_2 [\mathrm{B}] \right) \exp (k_2t) &= k_1 [\mathrm{A}]_0 \exp[(k_2-k_1) t] \ \Rightarrow \dfrac{d\left\{[\mathrm{B}]\exp (k_2t)\right\}}{dt} &= k_1 [\mathrm{A}]_0 \exp[(k_2-k_1) t], \end{aligned}\label{15.2.4}
which can then be integrated remembering that $[B]_0=0$, and $\int \exp(kx)\,dx=\dfrac{1}{k}\exp(kx)$:
$[\mathrm{B}] = \dfrac{k_1}{k_2-k_1} [\mathrm{A}]_0 [\exp(-k_1t)-\exp(-k_2t)]. \label{15.2.5}$
We can then use both $[\mathrm{A}]$, from Equation \ref{15.2.1}, and $[\mathrm{B}]$, from Equation \ref{15.2.5}, in Equation \ref{15.2.2} to solve for $[\mathrm{C}]$:
\begin{aligned} \left[\mathrm{C}\right] &= [\mathrm{A}]_0-[\mathrm{A}]-[\mathrm{B}] \ &= [\mathrm{A}]_0-[\mathrm{A}]_0 \exp(-k_1 t)-\dfrac{k_1}{k_2-k_1} [\mathrm{A}]_0 [\exp(-k_1t)-\exp(-k_2t)] \ &= [\mathrm{A}]_0\left\{1+\dfrac{-k_2 \exp(-k_1t)+ k_1 \exp(-k_2t)}{k_2-k_1} \right\}. \end{aligned}\label{15.2.6}
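The closed-form solutions above can be checked numerically; a minimal sketch with hypothetical rate coefficients:

```python
import numpy as np

# Consecutive first-order reactions A -> B -> C
k1, k2 = 0.5, 0.2    # hypothetical rate coefficients, 1/s
A0 = 1.0             # initial concentration of A, M

t = np.linspace(0, 30, 301)
A = A0 * np.exp(-k1 * t)                                      # Eq. 15.2.1
B = A0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t)) # Eq. 15.2.5
C = A0 * (1 + (-k2 * np.exp(-k1 * t)
               + k1 * np.exp(-k2 * t)) / (k2 - k1))           # Eq. 15.2.6

# Mass conservation, Equation 15.2.2, holds at every time point:
assert np.allclose(A + B + C, A0)
```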
From these results, we can distinguish two extreme behaviors. The first one is observed when $k_1 \cong k_2$, and it produces the plot of the concentration of each species with respect to time reported in Figure $1$. This behavior is observed when a process undergoing a series of consecutive reactions presents a rate-determining step in the middle of the sequence (the second reaction, in the simple case analyzed above). Once the process is established, its rate will equal the rate of the slowest step.
The second behavior is observed when $k_1\ll k_2$, and it produces the plot in Figure $2$. In this case, the concentration of the intermediate species $B$ stays small throughout the process, and the rate-determining step is the first reaction. As such, the process has the same rate law as an elementary reaction going directly from $A$ to $C$.
Since the concentration of $B$ is small and relatively constant throughout the process, $\dfrac{d[\mathrm{B}]}{dt}=0$. We can then simplify the mathematical treatment of these reactions by eliminating it from the process altogether. This simplification is known as the steady-state approximation. It is used in chemical kinetics to study processes that undergo a series of reactions producing intermediate species whose concentrations are constant throughout the entire process.
\begin{aligned} \text{A} &\xrightarrow{\;k_1\;} \text{I}_1 \xrightarrow{\;k_2\;} \text{I}_2 \xrightarrow{\quad} \cdots \xrightarrow{\;k_n\;}\text{products} \ & \text{Steady State Approximation:} \ \text{A}&\xrightarrow{\qquad\qquad\qquad\qquad\quad\quad\;\;}\text{products} \end{aligned}\label{15.2.7}
Competitive reactions
A process where two elementary reactions happen in parallel, competing with each other, can be written as follows:
$\begin{matrix} &_{k_1} & B\ &\nearrow & \ A & & \ &\searrow& \ &_{k_2} & C \end{matrix} \nonumber$
Assuming that each step follows first-order kinetics, we can write:
\begin{aligned} -\dfrac{d[\mathrm{A}]}{dt} &=k_1 [\mathrm{A}]+k_2 [\mathrm{A}] \Rightarrow [\mathrm{A}]=[\mathrm{A}]_0\exp \left[ -(k_1+k_2)t \right] \ \dfrac{d[\mathrm{B}]}{dt} &=k_1 [\mathrm{A}] \Rightarrow [\mathrm{B}]=\dfrac{k_1}{k_1+k_2}[\mathrm{A}]_0 \left\{ 1-\exp \left[ -(k_1+k_2)t \right] \right\} \ \dfrac{d[\mathrm{C}]}{dt} &=k_2 [\mathrm{A}]\Rightarrow [\mathrm{C}]=\dfrac{k_2}{k_1+k_2}[\mathrm{A}]_0 \left\{ 1-\exp \left[ -(k_1+k_2)t \right] \right\}. \end{aligned}\nonumber
The concentration of each of the species can then be plotted against time, obtaining the diagram reported in Figure $3$. The final concentrations of the products, $[\mathrm{B}]_f$ and $[\mathrm{C}]_f$, will depend on the values of the two rate coefficients. For example, if $k_1>k_2$, $[\mathrm{B}]_f>[\mathrm{C}]_f$, as in Figure $3$, but if $k_1<k_2$, $[\mathrm{B}]_f<[\mathrm{C}]_f$.
An important relationship that can be derived from the equations above is that:
$\dfrac{[\mathrm{B}]}{[\mathrm{C}]} =\dfrac{k_1}{k_2}. \nonumber$
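A quick numerical check of this relationship, with hypothetical rate coefficients:

```python
import numpy as np

# Parallel (competitive) first-order reactions A -> B and A -> C
k1, k2 = 0.4, 0.1    # hypothetical rate coefficients, 1/s
A0 = 1.0

t = 25.0
decay = np.exp(-(k1 + k2) * t)
B = k1 / (k1 + k2) * A0 * (1 - decay)
C = k2 / (k1 + k2) * A0 * (1 - decay)

print(B / C, k1 / k2)   # both 4.0: the ratio [B]/[C] equals k1/k2 at any t
```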
Opposed reactions
Another case of complex kinetic law happens when a pair of forward and reverse reactions occur simultaneously:
$\mathrm{A}\ce{<=>[k_1][k_{-1}]}\mathrm{B} \nonumber$
where the rate coefficients for the forward and backwards reaction, $k_1$ and $k_{-1}$ respectively, are not necessarily equal to each other, but comparable in magnitude. We can write the rate laws for each of these elementary steps as:
\begin{aligned} -\dfrac{d[\mathrm{A}]}{dt} &=k_1 [\mathrm{A}]-k_{-1} [\mathrm{B}] = k_1 [\mathrm{A}]-k_{-1}\left([\mathrm{A}]_0-[\mathrm{A}]\right) \ \dfrac{d[\mathrm{A}]}{dt} &=-(k_1+k_{-1})[\mathrm{A}] + k_{-1}[\mathrm{A}]_0, \end{aligned}\label{15.2.8}
which can then be integrated to:
\begin{aligned} \left[\mathrm{A}\right] &=[\mathrm{A}]_0\dfrac{k_{-1}+k_1\exp[-(k_1+k_{-1})t]}{k_1+k_{-1}} \ \left[\mathrm{B}\right] &=[\mathrm{A}]_0\left\{ 1-\dfrac{k_{-1}+k_1\exp[-(k_1+k_{-1})t]}{k_1+k_{-1}}\right\}. \end{aligned}\label{15.2.9}
These formulas can then be used to obtain the plots in Figure $4$.
As can be seen from the plots in Figure $4$, after a sufficiently long time, the system reaches a dynamic equilibrium, where the concentrations of $\mathrm{A}$ and $\mathrm{B}$ don’t change. These equilibrium concentrations can be calculated by replacing $t=\infty$ in Equation \ref{15.2.9}:
\begin{aligned} \left[\mathrm{A} \right] _{\mathrm{eq}} &= [\mathrm{A}]_0 \dfrac{k_{-1}}{k_1+k_{-1}} \ [\mathrm{B}]_{\mathrm{eq}} &= [\mathrm{A}]_0 \dfrac{k_{1}}{k_1+k_{-1}}. \end{aligned}\label{15.2.10}
Considering that the concentrations of the species don’t change at equilibrium:
\begin{aligned} -\dfrac{d[\mathrm{A}]_{\mathrm{eq}}}{dt} &= \dfrac{d[\mathrm{B}]_{\mathrm{eq}}}{dt} = 0\ & \Rightarrow \; k_1[\mathrm{A}]_{\mathrm{eq}} = k_{-1}[\mathrm{B}]_{\mathrm{eq}} \ & \Rightarrow \; \dfrac{k_1}{k_{-1}} = \dfrac{[\mathrm{B}]_{\mathrm{eq}}}{[\mathrm{A}]_{\mathrm{eq}}} = K_C, \ \end{aligned} \label{15.2.11}
where $K_C$ is the equilibrium constant as defined in chapter 10. This is a rare link between kinetics and thermodynamics and appears only for opposed reactions after sufficient time has passed so that the system can reach the dynamic equilibrium.
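A minimal sketch verifying the equilibrium results above, with hypothetical coefficients:

```python
import numpy as np

# Opposed reactions A <=> B
k1, km1 = 0.6, 0.2    # hypothetical forward and reverse coefficients, 1/s
A0 = 1.0

t = 100.0             # long enough to reach dynamic equilibrium
A = A0 * (km1 + k1 * np.exp(-(k1 + km1) * t)) / (k1 + km1)  # Eq. 15.2.9
B = A0 - A

print(A, B)             # ~0.25, ~0.75, as predicted by Equation 15.2.10
print(B / A, k1 / km1)  # both 3.0 = K_C, Equation 15.2.11
```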
To experimentally measure the reaction rate, we need a method to measure concentration changes with respect to time. The simplest way to determine the reaction rate is to monitor the entire reaction as it proceeds, and then plot the resulting data in different ways until a linear plot is found. A summary of the results obtained in section 15.1 that is useful for this task is reported in the following table:
| | Zeroth-Order | First-Order | Simple Second-Order | Complex Second-Order |
|---|---|---|---|---|
| Differential Rate Law | $-\dfrac{d[\mathrm{A}]}{dt}=k_0 [\mathrm{A}]^0 = k_0$ | $-\dfrac{d[\mathrm{A}]}{dt}=k_1 [\mathrm{A}]$ | $-\dfrac{d[\mathrm{A}]}{dt}=k_2 [\mathrm{A}]^2$ | $-\dfrac{d[\mathrm{A}]}{dt}=k'_2 [\mathrm{A}][\mathrm{B}]$ |
| Integrated Rate Law | $[\mathrm{A}]=[\mathrm{A}]_0 -k_0 t$ | $[\mathrm{A}]=[\mathrm{A}]_0 e^{-k_1 t}$ | $\dfrac{1}{[\mathrm{A}]}=\dfrac{1}{[\mathrm{A}]_0} + k_2 t$ | $\dfrac{\mathrm{[A]}}{\mathrm{[B]}}=\dfrac{\mathrm{[A]_0}}{\mathrm{[B]_0}}e^{\left(\mathrm{[A]_0}-\mathrm{[B]_0}\right)k'_2t}$ |
| Units of $k$ | $\dfrac{\mathrm{M}}{\mathrm{s}}$ | $\dfrac{1}{\mathrm{s}}$ | $\dfrac{1}{\mathrm{M}\cdot \mathrm{s}}$ | $\dfrac{1}{\mathrm{M}\cdot \mathrm{s}}$ |
| Linear Plot vs. $t$ | $[\mathrm{A}]$ | $\ln [\mathrm{A}]$ | $\dfrac{1}{[\mathrm{A}]}$ | $\ln \dfrac{[\mathrm{A}]_0[\mathrm{B}]}{[\mathrm{B}]_0[\mathrm{A}]}$ |
| Half-life | $t_{1/2}=\dfrac{[\mathrm{A}]_0}{2k_0}$ | $t_{1/2}=\dfrac{\ln 2}{k_1}$ | $t_{1/2}=\dfrac{1}{k_2 [\mathrm{A}]_0}$ | not easily defined |
However, this method works only if the reaction has few reactants, and it requires several measurements, each of which might be complicated to make. More useful methods to determine the reaction rate are the initial rate and the isolation methods that we describe below.
Initial rates method
The initial rates method involves measuring the rate of a reaction as soon as it starts, before any significant change in the concentrations of the reactants occurs. The initial rate method is practical only if the reaction is reasonably slow, but it can measure the rate unambiguously when more than one reactant is involved. For example, if we have a reaction with the following stoichiometry:
$\alpha \mathrm{A} + \beta \mathrm{B} \xrightarrow{k} \text{products} \nonumber$
the initial rate method can be used to determine the coefficients of the rate law:
$\text{Rate}=k[\mathrm{A}]^{\alpha}[\mathrm{B}]^{\beta} \label{15.3.1}$
by designing three experiments, where the initial concentrations of $\mathrm{A}$ and $\mathrm{B}$ are appropriately changed. For example, let’s consider the following experimental data from three different experiments:
| | $[\mathrm{A}]_0 \; (\text{M})$ | $[\mathrm{B}]_0 \; (\text{M})$ | $\text{initial rate}\;\left(\dfrac{M}{s}\right)$ |
|---|---|---|---|
| Experiment 1 | 0.10 | 0.10 | 4.32 |
| Experiment 2 | 0.15 | 0.10 | 9.70 |
| Experiment 3 | 0.10 | 0.20 | 4.29 |
we can calculate $\alpha$ by taking the ratio of the rates measured in experiment 1 and 2:
\begin{aligned} \dfrac{\text{Rate}(1)}{\text{Rate}(2)}&=\dfrac{k(0.10\;\text{M})^\alpha(0.10\;\text{M})^\beta}{k(0.15\;\text{M})^\alpha(0.10\;\text{M})^\beta} \ \dfrac{4.32}{9.70}&=\dfrac{(0.10\;\text{M})^\alpha}{(0.15\;\text{M})^\alpha} \ 0.445&=0.667^\alpha \;\rightarrow\; \ln0.445=\alpha \ln0.667 \ \alpha &= \dfrac{-0.81}{-0.405}=2. \end{aligned} \label{15.3.2}
$\beta$ can be calculated similarly by taking the ratio between experiments 1 and 3. Alternatively, we can also notice that the reaction rate does not change when the initial concentration $[\mathrm{B}]_0$ is doubled, therefore $\beta=0$.
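The same arithmetic can be automated; a minimal sketch using the data in the table above:

```python
import numpy as np

# Initial rates method: data from the three experiments above
A0 = np.array([0.10, 0.15, 0.10])
B0 = np.array([0.10, 0.10, 0.20])
rate = np.array([4.32, 9.70, 4.29])

# Ratio of experiments 1 and 2 isolates alpha; 1 and 3 isolates beta:
alpha = np.log(rate[0] / rate[1]) / np.log(A0[0] / A0[1])
beta = np.log(rate[0] / rate[2]) / np.log(B0[0] / B0[2])

print(round(alpha), round(beta))   # 2 0
```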
Isolation method
Another method that is widely used to determine reaction orders is the isolation method. This method is performed by using large excess concentrations of all reactants but one. For example, if we have the following reaction with three reagents and unknown rate law:
$\alpha \mathrm{A} + \beta \mathrm{B} + \gamma \mathrm{C} \xrightarrow{k} \text{products} \nonumber$
we can perform three different experiments, in each of which we use a large excess of all the reagents except one, such as:
• Experiment 1: $[\mathrm{A}]_0=1\;\text{M},\quad [\mathrm{B}]_0=1000\;\text{M}, \quad [\mathrm{C}]_0=1000\;\text{M},$ in which the reaction order with respect to $\mathrm{A}$ is measured.
• Experiment 2: $[\mathrm{A}]_0=1000\;\text{M},\quad [\mathrm{B}]_0=1\;\text{M}, \quad [\mathrm{C}]_0=1000\;\text{M},$ in which the reaction order with respect to $\mathrm{B}$ is measured.
• Experiment 3: $[\mathrm{A}]_0=1000\;\text{M},\quad [\mathrm{B}]_0=1000\;\text{M}, \quad [\mathrm{C}]_0=1\;\text{M},$ in which the reaction order with respect to $\mathrm{C}$ is measured.
From each experiment we can determine the pseudo-order of the reaction with respect to the reagent that is in minority concentration. For example, for the reaction above, we can write the rate law as:
$\text{Rate}=k[\mathrm{A}]^{\alpha}[\mathrm{B}]^{\beta}[\mathrm{C}]^{\gamma} \label{15.3.3}$
and we can write the initial concentrations, $[X]_0$, and the final concentrations, $[X]_f$, of each of the species in experiment 1, as:
\begin{aligned} \left[\mathrm{A}\right]_0 =1\;\text{M}\;\longrightarrow &[\mathrm{A}]_f=0\;\text{M} \qquad &\text{(100\% change)} \ \left[\mathrm{B}\right]_0 =1000\;\text{M}\;\longrightarrow &[\mathrm{B}]_f=1000-1=999\;\text{M}\cong [\mathrm{B}]_0\qquad &\text{(0.1\% change)}\ \left[\mathrm{C}\right]_0 =1000\;\text{M}\;\longrightarrow &[\mathrm{C}]_f=1000-1=999\;\text{M} \cong \left[\mathrm{C}\right]_0. \qquad &\text{(0.1\% change)} \end{aligned} \nonumber
The coefficient $\alpha$ can then be determined by incorporating the concentration of the reactants in excess into the rate constant as:
\begin{aligned} \text{rate}&=k[\mathrm{A}]^{\alpha}\underbrace{[\mathrm{B}]^{\beta}[\mathrm{C}]^{\gamma}}_{\text{constant}} \ &= k'[\mathrm{A}]^{\alpha} \end{aligned} \nonumber
and then determine $\alpha$ by verifying which order the data collected for $[\mathrm{A}]$ at various times fit. This can simply be achieved by using the zeroth-, first-, and second-order kinetic plots, as reported in the table above. We can determine $\beta$ and $\gamma$ by repeating the same procedure for the data from the other two experiments. For example, if we find for a specific reaction that $\alpha=1$, $\beta=2$, and $\gamma=0$, we can then say that the reaction is pseudo-order one in $\mathrm{A}$, pseudo-order two in $\mathrm{B}$, and pseudo-order zero in $\mathrm{C}$, with an overall reaction order of three.
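A minimal sketch of this last step, using synthetic data for a hypothetical pseudo-first-order case:

```python
import numpy as np

# With B and C in excess, A decays with an unknown pseudo-order.
# Synthetic data generated with k' = 0.2 1/s (pseudo-first-order):
t = np.linspace(0, 20, 11)
A = 1.0 * np.exp(-0.2 * t)

# Try the three linear plots from the table in this section:
for label, y in [("zeroth: [A]", A),
                 ("first: ln[A]", np.log(A)),
                 ("second: 1/[A]", 1.0 / A)]:
    slope, intercept = np.polyfit(t, y, 1)
    residual = np.sum((y - (slope * t + intercept)) ** 2)
    print(label, residual)   # the smallest residual identifies the order
```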
15.04: Temperature Dependence of the Rate Coefficients
The dependence of the rate coefficient, $k$, on the temperature is given by the Arrhenius equation. This formula was derived by Svante August Arrhenius (1859–1927) in 1889 and is based on the simple experimental observation that every chemical process gets faster when the temperature is increased. Working on data from equilibrium reactions previously reported by van ’t Hoff, Arrhenius proposed the following simple exponential formula to explain the increase of $k$ when $T$ is increased:
$k=A\exp\left( -\dfrac{E_a}{RT}\right), \label{15.4.1}$
where $A$ is the so-called Arrhenius pre-exponential factor, and $E_a$ is the activation energy. Both of these terms are independent of temperature,$^1$ and they represent experimental quantities that are unique to each individual reaction. Since there is no known exception to the fact that a temperature increase speeds up chemical reactions, both $A$ and $E_a$ are always positive. The units of the pre-exponential factor are the same as those of the rate coefficient, and will vary depending on the order of the reaction. As suggested by its name, the activation energy has units of energy per mole of substance, $\dfrac{\mathrm{J}}{\mathrm{mol}}$ in SI.
The Arrhenius equation is experimentally useful in its linearized form, which is obtained from two Arrhenius experiments, taken at different temperatures. Applying Equation \ref{15.4.1} to two different experiments, and taking the ratio between the results, we obtain:
$\ln \dfrac{k_{T_2}}{k_{T_1}}=-\dfrac{E_a}{R}\left(\dfrac{1}{T_2}-\dfrac{1}{T_1}\right), \label{15.4.2}$
which gives the plot of Figure $1$, from which $E_a$ can be determined.
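A minimal sketch of the two-experiment determination of $E_a$; the rate coefficients and temperatures are hypothetical:

```python
import numpy as np

R = 8.314                  # ideal gas constant, J/(mol K)

T1, k_T1 = 300.0, 1.0e-3   # hypothetical experiment 1: K, 1/s
T2, k_T2 = 320.0, 4.0e-3   # hypothetical experiment 2: K, 1/s

# Two-point form of the linearized Arrhenius equation, Equation 15.4.2:
Ea = -R * np.log(k_T2 / k_T1) / (1.0 / T2 - 1.0 / T1)
print(Ea / 1000)           # ~55.3 kJ/mol
```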
From empirical arguments, Arrhenius proposed the idea that reactants must acquire a minimum amount of energy before they can form any product. He called this minimum amount of energy the activation energy. We can motivate this assumption by plotting the energy of a reaction along the reaction coordinate, as in Figure $3$.$^2$ The reaction coordinate is defined as the minimum energy path that connects the reactants with the products.
1. In theory, both $A$ and $E_a$ show a weak temperature dependence. However, they can be considered constants at most experimental conditions, since kinetic studies are usually performed in a small temperature range.︎
2. This plot is taken from Wikipedia, and has been generated and distributed by author Grimlock under a CC BY-SA license.
• 16.1: Introduction
Quantum mechanics is an important intellectual achievement of the 20th century. It is one of the more sophisticated fields in physics that has affected our understanding of nanometer-scale systems important for chemistry, materials, optics, and electronics. The existence of orbitals and energy levels in atoms can only be explained by quantum mechanics.
• 16.2: Quantum Mechanics is Bizarre
The development of quantum mechanics is a great intellectual achievement, but at the same time, it is bizarre. The reason is that quantum mechanics is quite different from classical physics. The development of quantum mechanics is likened to watching two players having a game of chess, but the watchers have not a clue as to what the rules of the game are.
• 16.3: The Ultraviolet Catastrophe
The ultraviolet (UV) catastrophe, also called the Rayleigh–Jeans catastrophe, is the prediction of classical electromagnetism that the intensity of the radiation emitted by an ideal black body at thermal equilibrium goes to infinity as wavelength decreases.
• 16.4: The Photoelectric Effect
In 1886 and 1887, Heinrich Hertz discovered that ultraviolet light can cause electrons to be ejected from a metal surface. According to the classical wave theory of light, the intensity of the light determines the amplitude of the wave, and so a greater light intensity should cause the electrons on the metal to oscillate more violently and to be ejected with a greater kinetic energy.
• 16.5: Wave-Particle Duality
Einstein had shown that the momentum of a photon is p=h/λ.
Thumbnail: The photoelectric effect requires quantum mechanics to be described accurately (CC BY-SA-NC 3.0; anonymous via LibreTexts).
16: The Motivation for Quantum Mechanics
Quantum mechanics is an important intellectual achievement of the 20th century. It is one of the more sophisticated fields in physics that has affected our understanding of nanometer-scale systems important for chemistry, materials, optics, and electronics. The existence of orbitals and energy levels in atoms can only be explained by quantum mechanics. Quantum mechanics can explain the behaviors of insulators, conductors, semiconductors, and giant magnetoresistance. It can explain the quantization of light and its particle nature in addition to its wave nature. Quantum mechanics can also explain the radiation of a hot body and its change of color with respect to temperature. It explains the presence of holes and the transport of holes and electrons in electronic devices. Quantum mechanics has played an important role in photonics, quantum electronics, and micro-electronics. But many more emerging technologies require the understanding of quantum mechanics, and hence it is important that scientists and engineers understand quantum mechanics better.

One such area is nanotechnology: with the recent advent of nano-fabrication techniques, nanometer-size systems have become commonplace. In electronics, as transistor devices become smaller, how the electrons move through the device is quite different from when the devices are bigger: nano-electronic transport is quite different from micro-electronic transport. The quantization of the electromagnetic field is important in the areas of nano-optics and quantum optics, as it explains how photons interact with atomic systems or materials, and it also allows the use of electromagnetic or optical fields to carry quantum information. Moreover, quantum mechanics is also needed to understand the interaction of photons with materials in solar cells, as well as many topics in material science. When two objects are placed close together, they experience a force called the Casimir force that can only be explained by quantum mechanics. This is important for the understanding of micro/nano-electromechanical sensor systems (M/NEMS). The understanding of spins is important in spintronics, another emerging technology where giant magnetoresistance, tunneling magnetoresistance, and spin transfer torque are being used. Quantum mechanics is also giving rise to the areas of quantum information, quantum communication, quantum cryptography, and quantum computing. It is clear that the richness of quantum physics will greatly affect future generations of technologies in many aspects.
The development of quantum mechanics is a great intellectual achievement, but at the same time, it is bizarre. The reason is that quantum mechanics is quite different from classical physics. The development of quantum mechanics is likened to watching two players having a game of chess, where the watchers have not a clue as to what the rules of the game are. Through observations and conjectures, the rules of the game are finally outlined. Often, equations are conjectured like conjurors pulling tricks out of a hat to match experimental observations. It is the interpretations of these equations that can be quite bizarre. Quantum mechanics equations were postulated to explain experimental observations, but the deeper meanings of the equations often confused even the most gifted. Even though Einstein received the Nobel prize for his work on the photoelectric effect that confirmed that light energy is quantized, he himself was not totally at ease with the development of quantum mechanics as charted by the younger physicists. He was never comfortable with the probabilistic interpretation of quantum mechanics by Born and the Heisenberg uncertainty principle: “God doesn’t play dice,” was his statement assailing the probabilistic interpretation. He proposed “hidden variables” to explain the random nature of many experimental observations, and he was thought of as the “old fool” by the younger physicists of his time. Schrödinger came up with the bizarre “Schrödinger cat paradox,” which showed the struggle that physicists had with the interpretation of quantum mechanics. But with today’s understanding of quantum mechanics, the paradox is a thing of yesteryear. The latest twist to the interpretation of quantum mechanics is the parallel universe view, which explains the multitude of outcomes of the predictions of quantum mechanics: all outcomes are possible, but each outcome occurs in a different universe that exists in parallel with respect to the others.$^1$
The development of quantum mechanics was initially motivated by two observations which demonstrated the inadequacy of classical physics. These are the “ultraviolet catastrophe” and the photoelectric effect.
1. This section was adapted in part from Prof. Weng Cho CHEW’s Quantum Mechanics Made Simple Lecture Notes available here.
16.03: The Ultraviolet Catastrophe
The ultraviolet (UV) catastrophe, also called the Rayleigh–Jeans catastrophe, is the prediction of classical electromagnetism that the intensity of the radiation emitted by an ideal black body at thermal equilibrium goes to infinity as the wavelength decreases (see Figure $1$).$^1$
A black body is an idealized object that absorbs and emits radiation at all frequencies. Classical physics can be used to derive an approximate equation describing the intensity of black-body radiation as a function of frequency for a fixed temperature. The result is known as the Rayleigh–Jeans law, which, for wavelength $\lambda$, is:
$B_{\lambda }(T)={\dfrac {2ck_{\mathrm {B} }T}{\lambda ^{4}}} \label{17.3.1}$
where $B_{\lambda }$ is the intensity of the radiation —expressed as the power emitted per unit emitting area, per steradian, per unit wavelength (spectral radiance)— $c$ is the speed of light, $k_{\mathrm{B}}$ is the Boltzmann constant, and $T$ is the temperature in kelvins. The paradox —or rather the breakdown of the Rayleigh–Jeans formula— happens at small wavelength $\lambda$. If we take the limit for $\lambda \rightarrow 0$ in Equation \ref{17.3.1}, we obtain that $B_{\lambda } \rightarrow \infty$. In other words, as the wavelength of the emitted light gets smaller (approaching the UV range), the intensity of the radiation approaches infinity, and the black body emits an infinite amount of energy. This divergence for low wavelength (high frequencies) is called the ultraviolet catastrophe, and it is clearly unphysical.
Max Planck explained the black-body radiation in 1900 by assuming that the energies of the oscillations of the electrons responsible for the radiation must be integral multiples of $h\nu$, i.e.,
$E = n h \nu = n h \dfrac{c}{\lambda} \label{17.3.2}$
Planck’s assumptions led to the correct form of the spectral function for a black body: $B_{\lambda }(\lambda ,T)={\dfrac {2hc^{2}}{\lambda ^{5}}}{\dfrac {1}{e^{hc/(\lambda k_{\mathrm {B} }T)}-1}}. \label{17.3.3}$
If we now take the limit for $\lambda \rightarrow 0$ of Equation \ref{17.3.3}, it is easy to prove that $B_{\lambda }$ goes to zero, in agreement with the experimental results and our intuition. Planck also found that for $h = 6.626 \times 10^{-34} \; \text{J s}$, the experimental data could be reproduced exactly. Nevertheless, Planck could not offer a good justification for his assumption of energy quantization. Physicists did not take this energy quantization idea seriously until Einstein invoked a similar assumption to explain the photoelectric effect.
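A minimal numerical comparison of the two laws (an arbitrary temperature is assumed), showing the divergence of the classical result at short wavelengths:

```python
import numpy as np

h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K
T = 5000.0       # assumed temperature, K

lam = np.array([100e-9, 500e-9, 2000e-9])   # wavelengths, m

B_RJ = 2 * c * kB * T / lam**4                          # Rayleigh-Jeans law
B_Planck = (2 * h * c**2 / lam**5
            / (np.exp(h * c / (lam * kB * T)) - 1))     # Planck distribution

print(B_RJ / B_Planck)   # ratio explodes at short wavelengths: UV catastrophe
```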
1. This picture is taken from Wikipedia by user Darth Kule, and is in the Public Domain.
In 1886 and 1887, Heinrich Hertz discovered that ultraviolet light can cause electrons to be ejected from a metal surface. According to the classical wave theory of light, the intensity of the light determines the amplitude of the wave, and so a greater light intensity should cause the electrons on the metal to oscillate more violently and to be ejected with a greater kinetic energy. In contrast, the experiment showed that the kinetic energy of the ejected electrons depends on the frequency of the light. The light intensity affects only the number of ejected electrons and not their kinetic energies. Einstein tackled the problem of the photoelectric effect in 1905. Instead of assuming that the electronic oscillators had energies given by Planck’s formula, Equation 17.3.2, Einstein assumed that the radiation itself consisted of packets of energy $E = h \nu$, which are now called photons. Einstein successfully explained the photoelectric effect using this assumption, and he calculated a value of $h$ close to that obtained by Planck.
Two years later, Einstein showed that not only is light quantized, but so are atomic vibrations. Classical physics predicts that the molar heat capacity at constant volume ($C_V$) of a crystal is $3 R$, where $R$ is the molar gas constant. This works well for high temperatures, but for low temperatures $C_V$ actually falls to zero. Einstein was able to explain this result by assuming that the oscillations of atoms about their equilibrium positions are quantized according to $E = n h \nu$, Planck’s quantization condition for electronic oscillators. This demonstrated that the energy quantization concept was important even for a system of atoms in a crystal, which should be well-modeled by a system of masses and springs (i.e., by classical mechanics).
16.05: Wave-Particle Duality
Einstein had shown that the momentum of a photon is
$p = \dfrac{h}{\lambda}. \label{17.5.1}$
This can be easily shown as follows. Assuming $E = h \nu$ for a photon and $\lambda \nu = c$ for an electromagnetic wave, we obtain
$E = \dfrac{h c}{\lambda} \label{17.5.2}$
Now we use Einstein’s relativity result, $E = m c^2$, and the definition of momentum, $p=mc$, to find: $\lambda = \dfrac{h}{p}, \label{17.5.3}$
which is equivalent to Equation \ref{17.5.1}. Note that $m$ refers to the relativistic mass, not the rest mass, since the rest mass of a photon is zero. Since light can behave both as a wave (it can be diffracted, and it has a wavelength), and as a particle (it contains packets of energy $h \nu$), de Broglie reasoned in 1924 that matter also can exhibit this wave-particle duality. He further reasoned that matter would obey the same Equation \ref{17.5.3} as light. In 1927, Davisson and Germer observed diffraction patterns by bombarding metals with electrons, confirming de Broglie’s proposition.$^1$
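A minimal sketch of Equation \ref{17.5.3} for a matter wave; the accelerating voltage is hypothetical, but typical of a Davisson–Germer-type experiment:

```python
import numpy as np

h = 6.626e-34     # Planck constant, J s
m_e = 9.109e-31   # electron rest mass, kg
eV = 1.602e-19    # J

E_kin = 100 * eV                 # electron accelerated through 100 V
p = np.sqrt(2 * m_e * E_kin)     # non-relativistic momentum
lam = h / p                      # de Broglie wavelength
print(lam)                       # ~1.2e-10 m, comparable to atomic spacings
```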
Rewriting the previous equations in terms of the wave vector, $k=\dfrac{2\pi}{\lambda}$, and the angular frequency, $\omega=2\pi\nu$, we obtain the following two equations
\begin{aligned} p &= \hbar k \ E &= \hbar \omega, \end{aligned} \label{17.5.4}
which are known as de Broglie’s equations. We will use these equations to develop wave mechanics in the next chapters.
1. The previous 3 sections were adapted in part from Prof. C. David Sherrill’s A Brief Review of Elementary Quantum Chemistry Notes available here.
Quantum mechanics cannot be derived from classical mechanics, but classical mechanics can inspire quantum mechanics. Quantum mechanics is richer and more sophisticated than classical mechanics, and it was developed during a period when physicists had a rich knowledge of classical mechanics. In order to better understand how quantum mechanics was developed in this environment, it is helpful to understand some fundamental concepts of classical mechanics. Classical mechanics can be considered a special case of quantum mechanics, and we will review some of its concepts here.
• 17.1: Newtonian Formulation
Classical mechanics as formulated by Isaac Newton (1642–1727) is all about forces. Newtonian mechanics works well for problems where we know the forces and have a reasonable coordinate system.
• 17.2: Lagrangian Formulation
Another way to derive the equations of motion for classical mechanics is via the use of the Lagrangian and the principle of least action.
• 17.3: Hamiltonian Mechanics
A third way of obtaining the equation of motion is Hamiltonian mechanics, which uses the generalized momentum in place of velocity as a coordinate.
17: Classical Mechanics
Classical mechanics as formulated by Isaac Newton (1642–1727) is all about forces. Newtonian mechanics works well for problems where we know the forces and have a reasonable coordinate system. In these cases, the net force acting on a system at position $q$ is simply:
$F_{\mathrm{net}}(q) = m\ddot{q} = m \dfrac{d^2 q}{dt^2}. \label{18.1.1}$
Or, in other words, if we know the net force acting on a system of mass $m$ at position $q$ at some time $t_0$ (together with the position and velocity at that time), we can use Equation \ref{18.1.1} to calculate the position of the system at any future (or past) time. We have completely determined the dynamical evolution of the system.$^1$
Example $1$
A ball of mass $m$ is at ground level and tossed straight up from an initial position $q_0$ with an initial velocity $\dot{q}_0$ and subject to gravity alone. Calculate the equation of motion for the ball (i.e., where is the ball going to be after some time $t$?).$^2$
Answer
Since the only force acting on the ball is gravity, we can use the equation for the gravitational force to start our derivation:
$F_{\mathrm{gravity}}=-mG, \nonumber$
with $G$ the gravitational acceleration at the surface of the Earth ($G=9.8\; \mathrm{m}/\mathrm{s}^{2}$). We can then replace this expression into Equation \ref{18.1.1}, to obtain:
\begin{aligned} -mG &=m \ddot{q} \ -G &=\ddot{q} \ -G &=\dfrac{d\dot{q}}{dt}, \ \end{aligned} \nonumber
which can then be integrated with respect to time, to obtain:
\begin{aligned} -G\int_{t=0}^{t} dt &=\int_{\dot{q}_0}^{\dot{q}} d\dot{q}\ \dot{q} &= \dot{q}_0-Gt\ \dfrac{dq}{dt} &= \dot{q}_0-Gt,\ \end{aligned} \nonumber
which can be further integrated with respect to time, to give:
\begin{aligned} \int_{q_0}^{q} dq &= \int_{t=0}^{t} \dot{q}_0 dt -G \int_{t=0}^{t}tdt\ q &= q_0 + \dot{q}_0 t -\dfrac{1}{2}Gt^2. \end{aligned} \nonumber
This final equation is the equation of motion for the ball, from which we can calculate the position of the ball at any time $t$. Notice how the equation of motion does not depend on the mass of the ball!
Exercise $2$
How much time will a ball ejected from a height of $1 \;\mathrm{m}$ at an initial velocity of $10 \;\mathrm{m/s}$ take to hit the floor?
Answer
We can use the equation of motion obtained above to solve this problem, and obtain for this specific case $t\simeq 2.14 \;\mathrm{s}$.$^3$
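Footnote 3 invites a quick calculation; a minimal Python sketch using the equation of motion derived above:

```python
import numpy as np

G = 9.8      # gravitational acceleration, m/s^2
q0 = 1.0     # initial height, m
v0 = 10.0    # initial velocity, m/s

# Solve q0 + v0*t - 0.5*G*t**2 = 0 for the positive root:
t_hit = (v0 + np.sqrt(v0**2 + 2 * G * q0)) / G
print(t_hit)   # ~2.14 s
```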
The formulas of Newtonian mechanics are not the only ones we can use to solve a problem in classical mechanics. We have at least two other equivalent approaches to the same problem that might end up being more useful in certain situations.
1. Notice that, in principle, $q \in \mathbb{R}^{n}$ is the position vector and $\dot{q} \in \mathbb{R}^{n}$ is the velocity vector. As such, all the equations of classical mechanics are vector equations, and not just simple numerical equations, as we present them here! For our purposes, we can restrict ourselves to a 1-dimensional space, avoiding the complications of vector algebra.
2. This example is based on Rhett Allain’s blog post that can be found at https://rhettallain.com/2018/10/31/classical-mechanics-newtonian-lagrangian-and-hamiltonian/
3. Can you write a python program to do this calculation?
Another way to derive the equations of motion for classical mechanics is via the use of the Lagrangian and the principle of least action. The Lagrangian formulation is obtained by starting from the definition of the Lagrangian of the system:
$L = K - V, \label{18.2.1}$
where $K$ is the kinetic energy, and $V$ is the potential energy. Both are expressed in terms of the coordinates $(q,\dot{q})$. Notice that for a fixed time, $t$, $q$ and $\dot{q}$ are independent variables, since $\dot{q}$ cannot be derived from $q$ alone.
The time integral of the Lagrangian is called the action, and is defined as:
$S = \int_{t_1}^{t_2} L\, dt, \label{18.2.2}$
which is a functional: it takes in the Lagrangian function for all times between $t_1$ and $t_2$ and returns a scalar value. The equations of motion can be derived from the principle of least action,$^1$ which states that the true evolution of a system $q(t)$ described by the coordinate $q$ between two specified states $q_1 = q(t_1)$ and $q_2 = q(t_2)$ at two specified times $t_1$ and $t_2$ is a minimum of the action functional. For a minimum point:
$\delta S = 0. \label{18.2.3}$
Requiring that the true trajectory $q(t)$ minimizes the action functional $S$, we obtain the equation of motion (Figure $1$).$^2$ This can be achieved applying classical variational calculus to the variation of the action integral $S$ under perturbations of the path $q(t)$, Equation \ref{18.2.3}. The resulting equation of motion (or set of equations in the case of many dimensions) is sometimes also called the Euler—Lagrange equation:$^3$
$\dfrac{d}{dt}\left(\dfrac{\partial L}{\partial\dot q}\right)=\dfrac{\partial L}{\partial q}. \label{18.2.4}$
Example $1$
Let’s apply the Lagrangian mechanics formulas to the same problem as in the previous Example.
Solution
The expression of the kinetic energy, the potential energy, and the Lagrangian for our system are:
\begin{aligned} K &= \dfrac{1}{2}m\dot{q}^2 \ V &= mGq \ L &= K-V = \dfrac{1}{2}m\dot{q}^2 - mGq. \end{aligned} \nonumber
To get the equation of motion using Equation \ref{18.2.4}, we need to first take the partial derivative of $L$ with respect to $q$ (right hand side):
$\dfrac{\partial L}{\partial q}=-mG, \nonumber$
and then we need to take the time derivative of the partial derivative of the Lagrangian with respect to $\dot{q}$ (left-hand side):
$\dfrac{d}{dt}\dfrac{\partial L}{\partial \dot{q}} = \dfrac{d\left(m\dot{q}\right)}{dt}= m\ddot{q}. \nonumber$
Putting this together, we get:
\begin{aligned} m\ddot{q}&=-mG \ \ddot{q} &= -G \ \end{aligned} \nonumber
which is the same result as obtained from the Newtonian method. Integrating twice, we get exactly the same formulas, which we can use in the same way.
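The same derivation can be reproduced symbolically; a minimal sketch using the sympy Python library:

```python
import sympy as sp

t, m, G = sp.symbols('t m G', positive=True)
q = sp.Function('q')

# Lagrangian of the tossed ball: L = K - V
L = sp.Rational(1, 2) * m * sp.diff(q(t), t)**2 - m * G * q(t)

# Euler-Lagrange equation: d/dt (dL/d(qdot)) - dL/dq = 0
qdot = sp.diff(q(t), t)
eom = sp.diff(sp.diff(L, qdot), t) - sp.diff(L, q(t))
print(sp.simplify(eom))   # m*(G + Derivative(q(t), (t, 2))), i.e. q'' = -G
```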
The advantage of Lagrangian mechanics is that it is not constrained to any particular coordinate system. For example, if we have a bead moving along a wire, we can choose as coordinate the distance along the wire, making the formulas much simpler than in Newtonian mechanics. Also, since the Lagrangian depends on the kinetic and potential energies, it handles constraint forces much better.
1. Sometimes also called principle of stationary action, or variational principle, or Hamilton’s principle.︎
2. This diagram is taken from Wikipedia by user Maschen, and distributed under CC0 license︎
3. The mathematical derivation of the Euler—Lagrange equation is rather long and unimportant at this stage. For the curious, it can be found here.
17.03: Hamiltonian Mechanics
A third way of obtaining the equation of motion is Hamiltonian mechanics, which uses the generalized momentum in place of velocity as a coordinate. The generalized momentum is defined in terms of the Lagrangian and the coordinates $(q,\dot{q})$:
$p = \dfrac{\partial L}{\partial\dot{q}}. \label{18.3.1}$
The Hamiltonian is defined from the Lagrangian by applying a Legendre transformation as:$^1$
$H(p,q) = p\dot{q} - L(q,\dot{q}). \label{18.3.2}$
The Lagrangian equation of motion becomes a pair of equations known as the Hamiltonian system of equations:
\begin{aligned} \dot{p}=\dfrac{dp}{dt} &= -\dfrac{\partial H}{\partial q} \ \dot{q}=\dfrac{dq}{dt} &= +\dfrac{\partial H}{\partial p}, \end{aligned} \label{18.3.3}
where $H=H(q,p,t)$ is the Hamiltonian of the system, which often corresponds to its total energy. For a closed system, it is the sum of the kinetic and potential energy in the system:
$H = K + V. \label{18.3.4}$
Notice the difference between the Hamiltonian, Equation \ref{18.3.4}, and the Lagrangian, Equation 18.2.1. In Newtonian mechanics, the time evolution is obtained by computing the total force being exerted on each particle of the system, and from Newton’s second law, the time evolutions of both position and velocity are computed. In contrast, in Hamiltonian mechanics, the time evolution is obtained by computing the Hamiltonian of the system in the generalized momenta and inserting it into Hamilton’s equations. This approach is equivalent to the one used in Lagrangian mechanics, since the Hamiltonian is the Legendre transform of the Lagrangian. The main motivation to use Hamiltonian mechanics instead of Lagrangian mechanics comes from the simpler description of complex dynamic systems.
Example $1$
Let’s apply the Hamiltonian mechanics formulas to the same problem as in the previous examples.
Solution
Using Equation \ref{18.3.2}, the Hamiltonian can be written as:
$H = m\dot{q}\dot{q} - \dfrac{1}{2} m \dot{q}^2+m G q = \dfrac{1}{2}m\dot{q} ^2+mGq. \label{18.3.5}$
Since the Hamiltonian really depends on position and momentum, we need to express it in terms of $q$ and $p$, using $p = m\dot{q}$ for the momentum. This simple relation between momentum and velocity is not always valid, since it depends on the choice of coordinate system. For the trivial coordinate system of our simple 1-dimensional problem, we have:
$H=\dfrac{p^2}{2m}+mGq, \nonumber$
from which we can use eqs. \ref{18.3.3} to get:
\begin{aligned} \dot{q} &= \dfrac{\partial H}{\partial p} = \dfrac{p}{m} \ \dot{p} &=-\dfrac{\partial H}{\partial q} = -mG. \end{aligned} \nonumber
These equations represent a major difference of the Hamiltonian method, since we describe the system using two first-order differential equations, rather than one second-order differential equation. In order to get the equation of motion, we need to take the derivative of $\dot{q}$:
$\ddot{q} = \dfrac{d}{dt} \left( \dfrac{p}{m} \right) = \dfrac{\dot{p}}{m}, \nonumber$
and then replacing the definition of $\dot{p}$ obtained above, we get:
$\ddot{q} = \dfrac{-mG}{m} = -G \nonumber$
which—once again—is the same result obtained for the two previous cases. Integrating this twice, we get the familiar equation of motion for our problem.
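Hamilton's equations also lend themselves to direct numerical integration; a minimal sketch using the semi-implicit (symplectic) Euler method, with a hypothetical mass:

```python
# Numerically integrating Hamilton's equations for the tossed ball
m, G = 0.5, 9.8        # hypothetical mass (kg) and gravitational acceleration
q, p = 1.0, m * 10.0   # initial position (m) and momentum (kg m/s)
dt = 1e-4              # time step, s

t = 0.0
while q > 0.0:         # step until the ball hits the floor
    p += -m * G * dt   # dp/dt = -dH/dq, Equation 18.3.3
    q += p / m * dt    # dq/dt = +dH/dp, Equation 18.3.3
    t += dt

print(t)   # ~2.14 s, matching the analytical result (mass-independent)
```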
1. We have already encountered the Legendre transform in The Live Textbook of Physical Chemistry 1, when transforming from the thermodynamic energy to any of the other thermodynamic potentials.
In 1925, Erwin Schrödinger and Werner Heisenberg independently developed the new quantum theory. Schrödinger’s method involves partial differential equations, whereas Heisenberg’s method employs matrices; however, a year later the two methods were shown to be mathematically equivalent. Most textbooks begin with Schrödinger’s equation, since it seems to have a better physical interpretation via the classical wave equation. Indeed, the Schrödinger equation can be viewed as a form of the wave equation applied to matter waves.
• 18.1: The Time-Independent Schrödinger Equation
We can start the derivation of the single-particle time-independent Schrödinger equation (TISEq) from the equation that describes the motion of a wave in classical mechanics: ψ(x,t)=exp[i(kx−ωt)]
• 18.2: The Time-Dependent Schrödinger Equation
Unfortunately, the analogy with the classical wave equation that allowed us to obtain the TISEq in the previous section cannot be extended to the time domain by considering the equation that involves the partial first derivative with respect to time. Schrödinger himself presented his time-independent equation first, and then went back and postulated the more general time-dependent equation.
18: The Schrodinger Equation
18.01: The Time-Independent Schrodinger Equation
We can start the derivation of the single-particle time-independent Schrödinger equation (TISEq) from the equation that describes the motion of a wave in classical mechanics:
$\psi(x,t)=\exp[i(kx-\omega t)], \label{19.1.1}$
where $x$ is the position, $t$ is time, $k=\dfrac{2\pi}{\lambda}$ is the wave vector, and $\omega=2\pi\nu$ is the angular frequency of the wave. If we are not concerned with the time evolution, we can consider uniquely the derivatives of Equation \ref{19.1.1} with respect to the location, which are:
\begin{aligned} \dfrac{\partial \psi}{\partial x} &=ik\exp[i(kx-\omega t)] = ik\psi, \\ \dfrac{\partial^2 \psi}{\partial x^2} &=i^2k^2\exp[i(kx-\omega t)] = -k^2\psi, \end{aligned} \label{19.1.2}
where we have used the fact that $i^2=-1$.
Assuming that particles behave as waves—as shown by de Broglie—we can now use de Broglie’s relation, Equation 17.5.4, to replace $k=\dfrac{p}{\hbar}$ and obtain:
$\dfrac{\partial^2 \psi}{\partial x^2} = -\dfrac{p^2\psi}{\hbar^2}, \label{19.1.3}$
which can be rearranged to:
$p^2 \psi = -\hbar^2 \dfrac{\partial^2 \psi}{\partial x^2}. \label{19.1.4}$
The total energy associated with a wave moving in space is simply the sum of its kinetic and potential energies:
$E = \dfrac {p^{2}}{2m} + V(x), \label{19.1.5}$
from which we can obtain:
$p^2 = 2m[E - V(x)], \label{19.1.6}$
which we can then replace into Equation \ref{19.1.4} to obtain:
$2m[E-V(x)]\psi = - \hbar^2 \dfrac{\partial^2 \psi}{\partial x^2}, \label{19.1.7}$
which can then be rearranged to the famous time-independent Schrödinger equation (TISEq):
$- \dfrac{\hbar^2}{2m} \dfrac{\partial^2 \psi}{\partial x^2} + V(x) \psi = E\psi. \label{19.1.8}$
A two-body problem can also be treated by this equation if the mass $m$ is replaced with a reduced mass $\mu = \dfrac{m_1 m_2}{m_1+m_2}$.
18.02: The Time-Dependent Schrodinger Equation
Unfortunately, the analogy with the classical wave equation that allowed us to obtain the TISEq in the previous section cannot be extended to the time domain by considering the equation that involves the partial first derivative with respect to time. Schrödinger himself presented his time-independent equation first, and then went back and postulated the more general time-dependent equation. We are following here the same strategy, and simply give the time-dependent equation as a postulate. The single-particle time-dependent Schrödinger equation is:
$i\hbar\dfrac{\partial \psi(x,t)}{\partial t}=-\dfrac{\hbar^2}{2m} \dfrac{\partial^2 \psi(x,t)}{\partial x^2}+V(x)\psi(x,t) \label{19.2.1}$
where $V(x)$ represents the potential energy of the system (a real-valued function of the coordinates). Obviously, the time-dependent equation can be used to derive the time-independent equation. If we write the wavefunction as a product of spatial and temporal terms, $\psi(x, t) = \psi(x) f(t)$, then Equation \ref{19.2.1} becomes:
$\psi(x) i \hbar \dfrac{df(t)}{dt} = f(t) \left[-\dfrac{\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2} + V(x) \right] \psi(x), \label{19.2.2}$
which can be rearranged to:
$\dfrac{i \hbar}{f(t)} \dfrac{df(t)}{dt} = \dfrac{1}{\psi(x)} \left[-\dfrac{\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2} + V(x) \right] \psi(x). \label{19.2.3}$
Since the left-hand side of Equation \ref{19.2.3} is a function of $t$ only and the right hand side is a function of $x$ only, the two sides must equal a constant. If we tentatively designate this constant $E$ (since the right-hand side clearly must have the dimensions of energy), then we extract two ordinary differential equations, namely:
$\dfrac{1}{f(t)} \dfrac{df(t)}{dt} = - \dfrac{i E}{\hbar} \label{19.2.4}$
and: $-\dfrac{\hbar^2}{2m} \dfrac{\partial^2\psi(x)}{\partial x^2} + V(x) \psi(x) = E \psi(x). \label{19.2.5}$
The latter equation is the TISEq. The former equation is easily solved to yield
$f(t) = e^{-iEt / \hbar} \label{19.2.6}$
The solution in Equation \ref{19.2.6}, $f(t)$, is purely oscillatory, since $f(t)$ never changes in magnitude. Thus if:
$\psi(x, t) = \psi(x) \exp\left(\dfrac{-iEt}{\hbar}\right), \label{19.2.7}$
then the total wave function $\psi(x, t)$ differs from $\psi(x)$ only by a phase factor of constant magnitude. There are some interesting consequences of this. First of all, the quantity $\vert \psi(x, t) \vert^2$ is time independent, as we can easily show:
$\vert \psi(x, t) \vert^2 = \psi^{*}(x, t) \psi(x, t)= \psi^{*}(x)\exp\left(\dfrac{iEt}{\hbar}\right)\psi(x)\exp\left(\dfrac{-iEt}{\hbar}\right)= \psi^{*}(x) \psi(x). \label{19.2.8}$
Wave functions of the form of Equation \ref{19.2.7} are called stationary states. The state $\psi(x, t)$ is “stationary,” but the particle it describes is not! Of course Equation \ref{19.2.7} represents only a particular solution to the time-dependent Schrödinger equation. The general solution is much more complicated, and the factorization of the temporal part is often not possible:$^1$
$\psi({\bf r}, t) = \sum_i c_i e^{-iE_it / \hbar} \psi_i({\bf r}) \nonumber$
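To make the notion of a stationary state concrete, the following minimal sketch (in Python, assuming $\hbar=m=1$ and borrowing the particle-in-a-box eigenfunctions that we will derive in the next chapter, purely for illustration) checks numerically that $\vert \psi(x,t)\vert^2$ is constant in time for a single eigenstate, but not for a superposition of eigenstates with different energies:

```python
import numpy as np

# Box of length a = 1 with hbar = m = 1:
# eigenfunctions psi_n(x) = sqrt(2) sin(n pi x), energies E_n = (n pi)^2 / 2
x = np.linspace(0.0, 1.0, 500)
psi = lambda n: np.sqrt(2.0) * np.sin(n * np.pi * x)
E   = lambda n: (n * np.pi)**2 / 2.0

def density(t, c1=1.0, c2=0.0):
    """|c1 e^{-i E1 t} psi_1 + c2 e^{-i E2 t} psi_2|^2 on the grid."""
    wf = c1 * np.exp(-1j*E(1)*t) * psi(1) + c2 * np.exp(-1j*E(2)*t) * psi(2)
    return np.abs(wf)**2

print(np.allclose(density(0.0), density(1.0)))              # True: stationary
c = 1.0 / np.sqrt(2.0)
print(np.allclose(density(0.0, c, c), density(1.0, c, c)))  # False: it moves
```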
1. This section was adapted in part from Prof. C. David Sherrill’s A Brief Review of Elementary Quantum Chemistry Notes available here.
The TISEq can be solved analytically only in a few special cases. In this section, we will analyze the four main ones. Luckily, we can use these solutions to explain most of the effects in chemistry, since we can combine them to describe the hydrogen atom, upon which we can build more complex chemical systems, as we will show in the next chapters.
Thumbnail: The quantum wavefunction of a particle in a 2D infinite potential well of dimensions \(L_x\) and \(L_y\). The wavenumbers are \(n_x=2\) and \(n_y=2\). (Public Domain; Inductiveload).
19: Analytically Soluble Models
19.01: The Free Particle
By definition, the particle does not feel any external force, therefore $V(x)=0$, and the TISEq is written simply:
$- \dfrac{\hbar^2}{2m} \dfrac{d^2\psi}{dx^2} = E \psi(x). \label{20.1.1}$
This equation can be rearranged to:
$\dfrac{d^2\psi}{dx^2} =- \dfrac{2mE}{\hbar^2} \psi(x), \label{20.1.2}$
which corresponds to a mathematical problem where the second derivative of a function should be equal to a constant, $- \dfrac{2mE}{\hbar^2}$, multiplied by the function itself. Such a problem is easily solved by the function:
$\psi(x) = A \exp(\pm ikx). \label{20.1.3}$
The first and second derivatives of this function are:
\begin{aligned} \dfrac{d \psi(x)}{dx} &= \pm ik A \exp(\pm ikx) = \pm ik \psi(x) \\ \dfrac{d^2 \psi(x)}{dx^2} &= i^2k^2 A \exp(\pm ikx) = -k^2 \psi(x). \end{aligned} \label{20.1.4}
Comparing the second derivative in Equation \ref{20.1.4} with Equation \ref{20.1.2}, we immediately see that if we set:
$k^2 = \dfrac{2mE}{\hbar^2}, \label{20.1.5}$
we solve the original differential equation. Considering de Broglie’s equation, Equation 17.5.4, we can replace $k=\dfrac{p}{\hbar}$, to obtain:
$E = \dfrac{k^2 \hbar^2}{2m} = \dfrac{p^2}{2m}, \label{20.1.6}$
which is exactly the classical value of the kinetic energy of a free particle moving in one direction of space. Since the function in Equation \ref{20.1.3} solves the Schrödinger equation for the free particle, it is called an eigenfunction (or eigenstate) of the TISEq. The energy result of Equation \ref{20.1.6} is called the eigenvalue of the TISEq. Notice that, since $k$ can take any real value in the eigenfunction, the energy eigenvalue is also continuous (i.e., all values of $E$ are acceptable).
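This algebra is easy to verify symbolically. Here is a minimal sketch using the sympy library (assumed available), which applies the free-particle Hamiltonian to the trial function of Equation \ref{20.1.3} and recovers the eigenvalue of Equation \ref{20.1.6}:

```python
import sympy as sp

x, k, m, hbar = sp.symbols('x k m hbar', positive=True)
A = sp.symbols('A')
psi = A * sp.exp(sp.I * k * x)        # trial eigenfunction, Equation 20.1.3

# Apply -hbar^2/(2m) d^2/dx^2 and divide by psi to extract the eigenvalue
E = sp.simplify(-hbar**2 / (2*m) * sp.diff(psi, x, 2) / psi)
print(E)                              # hbar**2*k**2/(2*m), Equation 20.1.6
```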
19.02: The Particle in a Box
Now we can consider a particle constrained to move in a single dimension, under the influence of a potential $V(x)$ which is zero for $0 \leq x \leq a$ and infinite elsewhere. Since the wavefunction is not allowed to become infinite, it must have a value of zero where $V(x)$ is infinite, so $\psi(x)$ is nonzero only within $[0,a]$. The Schrödinger equation is thus:
$- \dfrac{\hbar^2}{2m} \dfrac{d^2\psi}{dx^2} = E \psi(x) \qquad 0 \leq x \leq a. \label{20.2.1}$
In other words, inside the box $\psi(x)$ describes a free particle, but outside the box $\psi(x)=0$. Since the Schrödinger equation involves derivatives, the function that solves it, $\psi(x)$, must be continuous. Continuity then requires the value of the wave function at the two extremes of the box to match the zero value outside:
$\psi(0)=\psi(a)=0. \label{20.2.2}$
Inside the box we can use Euler’s formula to write the wave function as a linear combination of the positive and negative solutions:
$\psi(x)=C_{+} \exp(+ikx) + C_{-} \exp(-ikx)=A \sin kx + B \cos kx, \label{20.2.3}$
where $A$ and $B$ are constants that we need to determine using the two constraints in Equation \ref{20.2.2}. For $B$ it is straightforward to see that:
$\psi(0)= 0 + B =0 \; \implies \; B=0. \label{20.2.4}$
For $A$ we have:
$\psi(a)= A\sin ka = 0, \label{20.2.5}$
which is trivially solved by $A=0$, or by the more interesting condition of $ka=n\pi$. The trivial solution corresponds to a wave function uniformly equal to zero everywhere. This wave function is uninteresting, since it describes no particles in no boxes. The second set of solutions, however, is very interesting, since we can write it as:
$\psi_n(x)= A\sin\left(\dfrac{n\pi x}{a} \right)\quad n=1,2,\ldots,\infty, \label{20.2.6}$
which represents an infinite set of functions, $\psi_n(x)$, determined by a positive integer number $n$, called a quantum number. Since these functions solve the TISEq, they are also called eigenfunctions, but they are not a continuous set, unlike in the previous case. To calculate the energy eigenvalues, we can substitute Equation \ref{20.2.6} into Equation \ref{20.2.1}, to obtain:
$E_n = \dfrac{h^2 n^2}{8 m a^2} \quad n=1,2,\ldots,\infty. \label{20.2.7}$
A few interesting considerations can be made from the results of Equation \ref{20.2.7}. First, although there is an infinite number of acceptable values of the energy (eigenvalues), these values are not continuous. Second, the lowest value of the energy is not zero, and it depends on the size of the box, $a$, since:
$E_1 = \dfrac{h^2 }{8 m a^2} \neq 0. \label{20.2.8}$
This value is called the zero-point energy (ZPE), and it is a purely quantum mechanical effect. Notice that we did not solve for the constant $A$. This task is not straightforward, and it can be achieved by requiring the wave function to describe one particle exclusively (we will come back to this task after chapter 23). Extending the problem to three dimensions is relatively straightforward, resulting in a set of three separate quantum numbers, one for each of the three Cartesian coordinates: $n_x,n_y,n_z$.
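The spectrum of Equation \ref{20.2.7} can also be checked numerically. The following minimal sketch (assuming $\hbar=m=a=1$; the grid size is an arbitrary illustrative choice) discretizes the second derivative on a grid with $\psi(0)=\psi(a)=0$ and diagonalizes the resulting matrix:

```python
import numpy as np

# Finite-difference Hamiltonian for the box, with hbar = m = a = 1
N = 1000
dx = 1.0 / (N + 1)                      # interior grid spacing
diag = np.full(N, 1.0 / dx**2)          # -1/2 * (-2/dx^2) from psi''
off  = np.full(N - 1, -0.5 / dx**2)     # -1/2 * (1/dx^2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E_num = np.linalg.eigvalsh(H)[:3]
n = np.arange(1, 4)
E_exact = (n * np.pi)**2 / 2.0          # = h^2 n^2 / (8 m a^2) in these units
print(np.round(E_num, 3))               # approx. [ 4.935  19.739  44.413]
print(np.round(E_exact, 3))
```

The numerical eigenvalues converge to the analytic ones of Equation \ref{20.2.7} as the grid is refined, and the spectrum is discrete, in contrast with the free-particle case.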
19.03: The Harmonic Oscillator
We now consider a particle subject to a restoring force $F = -kx$, as might arise for a mass-spring system obeying Hooke’s Law. The potential is then:
$V(x) = - \int (-kx)\, dx = V_0 + \dfrac{1}{2} kx^2. \label{20.3.1}$
If we choose the energy scale such that $V_0 = 0$ then $V(x) = \dfrac{1}{2}kx^2$, and the TISEq becomes:
$- \dfrac{\hbar^2}{2 \mu} \dfrac{d^2\psi}{dx^2} + \dfrac{1}{2} kx^2 \psi(x) = E \psi(x) \label{20.3.2}$
After some effort, the eigenfunctions are:
$\psi_n(x) = N_n H_n(\alpha^{1/2} x) e^{-\alpha x^2 / 2} \quad n=0,1,2,\ldots,\infty, \label{20.3.3}$
where $H_n$ is the Hermite polynomial of degree $n$, and $\alpha$ and $N_n$ are defined by
$\alpha = \sqrt{\dfrac{k \mu}{\hbar^2}} \hspace{1.5cm} N_n = \dfrac{1}{\sqrt{2^n n!}} \left( \dfrac{\alpha}{\pi} \right)^{1/4}. \label{20.3.4}$
The eigenvalues are:
$E_n = \hbar \omega \left(n + \dfrac{1}{2} \right), \label{20.3.5}$
with $\omega = \sqrt{k/ \mu}$. Notice how, once again, the eigenfunctions and eigenvalues are not continuous. In this case, however, the first eigenvalue corresponds to $n=0$, but because of the $\dfrac{1}{2}$ factor in Equation \ref{20.3.5}, the lowest energy state is, once again, not zero. In other words, the two masses of a quantum harmonic oscillator are always in motion. The energies at which they vibrate do not form a continuous spectrum: the vibrational energy cannot take any value that we can think of, but only those given by Equation \ref{20.3.5}. The lowest possible energy (the ZPE) will be $E_0 = \dfrac{1}{2} \hbar \omega$.
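The same finite-difference strategy used for the particle in a box works here, since we only need to add the potential on the diagonal. A minimal sketch (assuming $\hbar=\mu=k=1$, so that $\omega=1$, with an arbitrary illustrative grid) reproduces the ladder of Equation \ref{20.3.5}:

```python
import numpy as np

# Finite-difference Hamiltonian for V(x) = x^2/2, with hbar = mu = k = 1
N, L = 1000, 12.0                       # grid points, box [-L/2, L/2]
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
off = np.full(N - 1, -0.5 / dx**2)
H = (np.diag(1.0 / dx**2 + 0.5 * x**2)  # kinetic diagonal + potential
     + np.diag(off, 1) + np.diag(off, -1))

print(np.round(np.linalg.eigvalsh(H)[:4], 4))
# approximately [0.5 1.5 2.5 3.5] = (n + 1/2) * hbar * omega
```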
19.04: The Rigid Rotor
The rigid rotor is a simple model of a rotating stick in three dimensions (or, if you prefer, of a molecule). We consider the stick to consist of two point-masses at a fixed distance. We then reduce the model to a one-dimensional system by considering the rigid rotor to have one mass fixed at the origin, which is orbited by the reduced mass $\mu$, at a distance $r$. The cartesian coordinates, $x,y,z$, are then replaced by three spherical polar coordinates: the co-latitude (zenith) angle $\theta$, the longitudinal (azimuth) angle $\phi$, and the distance $r$. The TISEq of the system in spherical coordinates is:
$- \dfrac{\hbar^2}{2I} \left[ \dfrac{1}{\sin \theta} \dfrac{\partial}{\partial \theta} \left(\sin\theta\dfrac{\partial}{\partial \theta} \right) + \dfrac{1}{\sin^2 \theta} \dfrac{\partial^2}{\partial \phi^2} \right] \psi(\theta,\phi) = E_{\ell} \psi(\theta,\phi), \label{20.4.1}$
where $I=\mu r^2$ is the moment of inertia. After a little effort, the eigenfunctions can be shown to be the spherical harmonics $Y_{\ell}^{m_{\ell}}(\theta, \phi)$.$^1$ The eigenvalues are simply:
$E_{\ell} = \dfrac{\hbar^2}{2I} \ell(\ell+1), \label{20.4.2}$
where $\ell=0,1,2,\ldots$ is the azimuthal quantum number, and $m_{\ell}=-\ell, -\ell+1, \ldots, \ell-1, \ell$ is the magnetic quantum number. Notice that the energy does not depend on the second index $m_{\ell}$: each energy level $E_{\ell}$ is $(2\ell+1)$-fold degenerate, since all functions with the same $\ell$ but different $m_{\ell}$ have the same energy. Since this problem was, in fact, a one-dimensional problem, it results in just one quantum number $\ell$, similarly to the previous two cases. The index $m_{\ell}$ that appears in the spherical harmonics will assume some importance in future chapters.
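Since the spectrum in Equation \ref{20.4.2} depends only on $I$ and $\ell$, it is straightforward to tabulate, together with its degeneracies. The following minimal sketch uses order-of-magnitude values for a generic diatomic molecule (the reduced mass and bond length below are assumptions chosen for illustration only):

```python
# Rigid-rotor levels E_l = hbar^2/(2I) * l(l+1) and their degeneracies
hbar = 1.054571817e-34      # J s
mu = 1.6e-27                # kg  (order of one atomic mass unit, assumed)
r = 1.0e-10                 # m   (order of one angstrom, assumed)
I = mu * r**2               # moment of inertia

for l in range(4):
    E = hbar**2 / (2 * I) * l * (l + 1)
    print(f"l = {l}: E = {E:.3e} J, degeneracy = {2*l + 1}")
```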
1. For a description of the spherical harmonics, see here.
20: The Hydrogen Atom
In this chapter we will consider the hydrogen atom as a proton fixed at the origin, orbited by an electron of reduced mass $\mu$. The potential due to electrostatic attraction is:
$V(r) = - \dfrac{e^2}{4 \pi \varepsilon_0 r}, \label{21.1}$
where $\varepsilon_0$ is the constant permittivity of vacuum. The kinetic energy term in the Hamiltonian is
$K = - \dfrac{\hbar^2}{2 \mu} \nabla^2, \label{21.2}$
where $\nabla^2$ is the Laplace operator (Laplacian), representing the divergence of the gradient of a function. Recall that in one dimension the kinetic energy is proportional to the second derivative of the wave function with respect to the position. In three dimensions, the vector of first derivatives along the three directions of space is called the gradient, which is written in cartesian coordinates as $\nabla = \left(\dfrac{\partial}{\partial x},\dfrac{\partial}{\partial y},\dfrac{\partial}{\partial z} \right)$. The Laplacian is the divergence $\nabla \cdot$ of the gradient (effectively, it replaces the second derivative of the 1-D case), and can be written in cartesian coordinates as $\nabla^2=\nabla\cdot\nabla=\dfrac{\partial^2}{\partial x^2}+\dfrac{\partial^2}{\partial y^2}+\dfrac{\partial^2}{\partial z^2}$. The TISEq for the hydrogen atom is therefore:
${\displaystyle \left(-{\dfrac {\hbar ^{2}}{2\mu }}\nabla ^{2}-{\dfrac {e^{2}}{4\pi \varepsilon _{0}r}}\right)\psi (r,\theta ,\phi )=E\psi (r,\theta ,\phi )}, \label{21.3}$
which, replacing the Laplacian in spherical coordinates, becomes:
$-{\dfrac {\hbar ^{2}}{2\mu }}\left[{\dfrac {1}{r^{2}}}{\dfrac {\partial }{\partial r}}\left(r^{2}{\dfrac {\partial \psi }{\partial r}}\right)+{\dfrac {1}{r^{2}\sin \theta }}{\dfrac {\partial }{\partial \theta }}\left(\sin \theta {\dfrac {\partial \psi }{\partial \theta }}\right)+{\dfrac {1}{r^{2}\sin ^{2}\theta }}{\dfrac {\partial ^{2}\psi }{\partial \phi ^{2}}}\right]-{\dfrac {e^{2}}{4\pi \varepsilon _{0}r}}\psi =E\psi. \label{21.4}$
This equation seems very complicated, but comparing the term in between square brackets with the TISEq of the rigid rotor, we immediately see some connections. Equation \ref{21.4} is a separable, partial differential equation that can be solved by factorizing the wave function $\psi(r, \theta, \phi)$ into $R_{nl}(r)Y_{\ell}^{m_{\ell}}(\theta, \phi)$, where $Y_{\ell}^{m_{\ell}}(\theta, \phi)$ are again the spherical harmonics that solved the TISEq for the rigid rotor. The radial part $R(r)$ obeys the equation:
$- \dfrac{\hbar^2}{2 \mu r^2} \dfrac{d}{dr} \left( r^2 \dfrac{dR}{dr} \right) + \left[\dfrac{\hbar^2 \ell(\ell+1)}{2 \mu r^2} + V(r) - E \right] R(r) = 0, \label{21.5}$
which is called the radial equation for the hydrogen atom. The solutions of the radial part are:
$R_{n\ell}(r) = - \left[ \dfrac{(n - \ell - 1)!}{2n[(n+\ell)!]^3} \right]^{1/2}\left(\dfrac{2}{na_0}\right)^{\ell+3/2}r^{\ell} e^{-r/na_0} L_{n+\ell}^{2\ell+1} \left( \dfrac{2r}{n a_0} \right) \label{21.6}$
where $0 \leq \ell \leq n - 1$, and $a_0 = \dfrac{\varepsilon_0 h^2}{\pi \mu e^2}$ is the Bohr radius. The functions $L_{n+\ell}^{2\ell+1}\left(\dfrac{2r}{na_0}\right)$ are the associated Laguerre functions.
The hydrogen atom eigenfunctions are:
\begin{aligned} \psi_{n\ell m_{\ell}}(r,\theta,\phi) &= R_{n\ell}(r)Y_{\ell}^{m_{\ell}}(\theta,\phi) \\ &= - \left[ \dfrac{(n - \ell - 1)!}{2n[(n+\ell)!]^3} \right]^{1/2}\left(\dfrac{2}{na_0}\right)^{\ell+3/2}r^{\ell} e^{-r/na_0} L_{n+\ell}^{2\ell+1} \left( \dfrac{2r}{n a_0} \right) Y_{\ell}^{m_{\ell}}(\theta,\phi) \end{aligned} \label{21.7}
The quantum numbers $n,\ell,m_{\ell}$ can take the following values:
• $n=1,2,3,\ldots,\infty$ (principal quantum number),
• $\ell =0,1,2,\ldots ,n-1$ (azimuthal quantum number),
• $m_{\ell}=-\ell ,\ldots ,\ell$ (magnetic quantum number).
These functions are called the hydrogen atom orbitals, and are usually first encountered in introductory chemistry textbooks. Notice that—by definition—an orbital is a complex function (i.e., it has both a real and an imaginary component) that describes exclusively one electron. Spherical harmonics are orthogonal to each other, and they can be linearly combined to form new solutions to the TISEq where the imaginary part is removed. (Because of the orthogonality of spherical harmonics, the energy spectrum is not affected by this operation.) The corresponding real orbitals are three-dimensional functions that are not easily visualized in a three-dimensional space, since they would require a four-dimensional one.$^1$ Since there is no real consensus on what a wave function represents, the interpretation of orbitals is not straightforward.$^2$ We will return to the interpretation problem in future chapters, but for now it is important to keep in mind the following facts:
• The shape of every hydrogen atom orbital—complex or real—is that of a function on the surface of a sphere (yes, this is true for every single one of them, since they all come from spherical harmonics, which are special functions defined on the surface of a sphere. Hydrogen $2p$ orbitals in real space do not have the shape of a dumbbell, as often is said in general chemistry textbooks. Same goes for $d, f, \ldots$ orbitals.)
• Each orbital is the mathematical description of one—and only one—electron (in other words, orbitals do not “contain” electrons, they “are” the functions that describe each electron.)
• Hydrogen orbitals are defined only for systems that contain one electron. When more than one electron is present in an atom, the TISEq in Equation \ref{21.3} does not describe the system anymore. In these more complicated situations the TISEq cannot be solved analytically, and orbitals cannot be easily defined (we will see in chapter 26 how we can circumvent this issue in an approximate manner, and why general chemistry textbooks talk of orbitals for every atom and molecule.)
The hydrogen atom eigenvalues are:
$E_n = - \dfrac{e^2}{8 \pi \varepsilon_0 a_0 n^2} \quad n=1,2,\ldots,\infty. \label{21.8}$
Notice how the eigenvalues (i.e., the energy spectrum) do not depend on the azimuthal and magnetic quantum numbers, $\ell$ and $m_{\ell}$. Energy levels with the same $n$, but different $\ell$ and/or $m_{\ell}$, are called degenerate states, since they all have the same energy. This is, once again, a source of misinterpretation in most general chemistry textbooks (see also the numerical sketch after the following point):
• According to the TISEq, the $2s$ and $2p$ orbitals of the hydrogen atom have the same energy.$^3$
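A minimal numerical sketch of Equation \ref{21.8} makes both points explicit (the constants are standard CODATA values; the degeneracy count ignores spin):

```python
import numpy as np

# Hydrogen energy levels, Equation 21.8, converted from joules to eV
e = 1.602176634e-19         # C, elementary charge
eps0 = 8.8541878128e-12     # F/m, vacuum permittivity
a0 = 5.29177210903e-11      # m, Bohr radius

for n in range(1, 4):
    E = -e**2 / (8 * np.pi * eps0 * a0 * n**2) / e   # divide by e: J -> eV
    print(f"n = {n}: E = {E:.3f} eV, degeneracy = {n**2}")
# n = 2 contains one 2s and three 2p orbitals, all at the same energy
```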
1. If it is not clear why you need a 4-D space to visualize a 3-D function, consider that we use a 2-D space (the Cartesian plane) to visualize a 1-D function, $f(x)$.
2. At least not as straightforward as it is presented in introductory chemistry textbooks.
3. In practice, this is not true, because of a tiny effect called the Lamb shift. The description of this effect requires going beyond the Schrödinger equation—and essentially beyond quantum mechanics—into the field of quantum electrodynamics. The Lamb shift, however, is not what is usually depicted in general chemistry textbooks as the $2s-2p$ energy difference. The difference that is usually discussed in the context of the aufbau principle is a many-electron effect, as we will discuss in chapter 10, and does not apply to hydrogen.
Thumbnail: Hydrogen atom. (Public Domain; Bensaccount via Wikipedia)
So far, we have seen a few simple examples of how to solve the TISEq. For the general case, the mathematical formulation of quantum mechanics is built upon the concept of an operator. An operator is a function that maps a space of physical states onto another space of physical states. Operators do not exist exclusively in quantum mechanics; they can also be used in classical mechanics. In chapter 2, we have seen at least a couple of them, namely the Lagrangian, \(L\), and the Hamiltonian, \(H\). In quantum mechanics, however, the concept of an operator is the basis of the complex mathematical treatment that is necessary for more complicated cases. In this chapter, we will discuss the mathematics of quantum mechanical operators, and we will recast the results for the analytical cases in light of the new framework. As we will see, this framework is even simpler than what we have seen in the previous chapter. This simplicity, however, will open the door to the “stranger” side of quantum mechanics.
• 21.1: Operators in Quantum Mechanics
The central concept in this new framework of quantum mechanics is that every observable (i.e., any quantity that can be measured in a physical experiment) is associated with an operator. To distinguish between classical mechanics operators and quantum mechanical ones, we use a hat symbol ^ on top of the latter.
• 21.2: Eigenfunctions and Eigenvalues
As we have already seen, an eigenfunction of an operator A^ is a function f such that the application of A^ on f gives f again, times a constant.
• 21.3: Common Operators in Quantum Mechanics
Some common operators occurring in quantum mechanics are collected in the table below.
21: Operators and Mathematical Background
The central concept in this new framework of quantum mechanics is that every observable (i.e., any quantity that can be measured in a physical experiment) is associated with an operator. To distinguish between classical mechanics operators and quantum mechanical ones, we use a hat symbol $\hat{}$ on top of the latter. Physical pure states in quantum mechanics are represented as unit-norm (probabilities are normalized to one) vectors in a special complex Hilbert space. Following the definition, an operator is a function that projects a vector in the Hilbert space onto the space of physical observables. Since observables are values that come up as the result of the experiment, quantum mechanical operators must yield real eigenvalues.$^1$ Operators that possess this property are called Hermitian. In the wave mechanics formulation of quantum mechanics that we have seen so far, the wave function varies with space and time—or equivalently momentum and time—and observables are differential operators. A completely analogous formulation is possible in terms of matrices. In the matrix formulation of quantum mechanics, the norm of the physical state should stay fixed, so the evolution operator should be unitary, and the operators can be represented as matrices.
The expectation value of an operator $\hat{A}$ for a system with wave function $\psi(\mathbf{r})$ defined over a Hilbert space with position vector $\mathbf{r}$ (i.e., in three-dimensional Cartesian space $\mathbf{r} = \left\{ x,y,z \right\}$), is given by:
$\langle A \rangle = \int \psi^{*}({\bf r}) \hat{A} \psi({\bf r}) d{\bf r}, \label{22.1.1}$
and if $\hat{A}$ is a Hermitian operator, all physical observables are represented by such expectation values. It is easy to show that if $\hat{A}$ is a linear operator with an eigenfunction $g$, then any multiple of $g$ is also an eigenfunction of $\hat{A}$.
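As a concrete illustration of Equation \ref{22.1.1}, the following minimal sketch evaluates the expectation value of the position operator for the particle-in-a-box ground state of chapter 19 (with $a=1$ and a real wave function, so the complex conjugation is trivial); by symmetry, the result should be the center of the box:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]
psi1 = np.sqrt(2.0) * np.sin(np.pi * x)     # normalized ground state

x_avg = np.sum(psi1 * x * psi1) * dx        # <x> = int psi* x psi dx
print(f"<x> = {x_avg:.4f}")                 # 0.5000: the center of the box
```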
Basic Properties of Operators
Most of the properties of operators are obvious, but they are summarized below for completeness. The sum and difference of two operators $\hat{A}$ and $\hat{B}$ are given by:
\begin{aligned} (\hat{A} + \hat{B}) f &= \hat{A} f + \hat{B} f \\ (\hat{A} - \hat{B}) f &= \hat{A} f - \hat{B} f. \end{aligned}\label{22.1.2}
The product of two operators is defined by:
$\hat{A} \hat{B} f \equiv \hat{A} [ \hat{B} f ] \label{22.1.3}$
Two operators are equal if $\hat{A} f = \hat{B} f \label{22.1.4}$
for all functions $f$. The identity operator $\hat{1}$ does nothing (or multiplies by 1):
${\hat 1} f = f \label{22.1.5}$
The associative law holds for operators:
$\hat{A}(\hat{B}\hat{C}) = (\hat{A}\hat{B})\hat{C} \label{22.1.6}$
The commutative law does not generally hold for operators. In general, $\hat{A} \hat{B} \neq \hat{B} \hat{A}$. It is convenient to define the quantity:
$[\hat{A}, \hat{B}]\equiv \hat{A} \hat{B} - \hat{B} \hat{A} \label{22.1.7}$
which is called the commutator of $\hat{A}$ and $\hat{B}$. Note that the order matters, so that $[ \hat{A}, \hat{B}] = - [ \hat{B}, \hat{A}]$. If $\hat{A}$ and $\hat{B}$ happen to commute, then $[\hat{A}, \hat{B}] = 0$.
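The most famous example of a nonzero commutator is the one between position and momentum. The following minimal symbolic sketch (using sympy, with the standard representations $\hat{x}f = xf$ and $\hat{p}f = -i\hbar\, df/dx$, which we will encounter among the common operators later in this chapter) shows that $[\hat{x},\hat{p}] = i\hbar$:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)                      # arbitrary test function

X = lambda g: x * g                          # position operator
P = lambda g: -sp.I * hbar * sp.diff(g, x)   # momentum operator

commutator = sp.simplify(X(P(f)) - P(X(f)))
print(commutator)                            # I*hbar*f(x): [x, p] = i*hbar
```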
Linear Operators
Almost all operators encountered in quantum mechanics are linear. A linear operator is any operator $\hat{A}$ satisfying the following two conditions:
\begin{aligned} \hat{A} (f + g) &= \hat{A} f + \hat{A} g, \\ \hat{A} (c f) &= c \hat{A} f, \end{aligned}\label{22.1.8}
where $c$ is a constant and $f$ and $g$ are functions. As an example, consider the operators $\dfrac{d}{dx}$ and $()^2$. We can see that $\dfrac{d}{dx}$ is a linear operator because:
\begin{aligned} \dfrac{d}{dx}[f(x) + g(x)] &=\dfrac{d}{dx}f(x) + \dfrac{d}{dx}g(x), \\ \dfrac{d}{dx}[c f(x)] &= c \dfrac{d}{dx} f(x). \end{aligned}\label{22.1.9}
However, $()^2$ is not a linear operator because:
$(f(x) + g(x))^2 \neq (f(x))^2 + (g(x))^2 \label{22.1.10}$
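Both conditions are easy to probe symbolically. A minimal sympy sketch with two arbitrary test functions confirms that differentiation passes the linearity tests while squaring fails them:

```python
import sympy as sp

x, c = sp.symbols('x c')
f, g = sp.sin(x), sp.exp(x)              # arbitrary test functions

# d/dx is linear: both differences simplify to zero
print(sp.simplify(sp.diff(f + g, x) - (sp.diff(f, x) + sp.diff(g, x))))  # 0
print(sp.simplify(sp.diff(c * f, x) - c * sp.diff(f, x)))                # 0

# Squaring is not linear: the difference is the cross term 2*f*g
print(sp.expand((f + g)**2 - (f**2 + g**2)))      # 2*exp(x)*sin(x) != 0
```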
Hermitian Operators
Hermitian operators are characterized by the self-adjoint property:
$\int \psi_a^{*} (\hat{A} \psi_a)d{\bf r} = \int \psi_a (\hat{A} \psi_a)^{*}d{\bf r}, \label{22.1.11}$
where the integral is performed over all space. This property guarantees that all the eigenvalues of the operators are real. Defining $a$ as the eigenvalue of operator $\hat{A}$ using:
$\hat{A} \psi({\bf r}) = a \psi({\bf r}), \label{22.1.12}$
we can prove that $a$ is real by substituting Equation \ref{22.1.12} into Equation \ref{22.1.11}:
\begin{aligned} a \int \psi_a^{*} \psi_a d{\bf r}&= a^{*} \int \psi_a \psi_a^{*} d{\bf r}\\ (a - a^{*}) \int \vert\psi_a\vert^2 d{\bf r} &= 0, \end{aligned}\label{22.1.13}
and since $\vert\psi_a\vert^2$ is never negative, either $a = a^{*}$ or $\psi_a = 0$. Since $\psi_a = 0$ is not an acceptable wavefunction, $a = a^{*}$, and $a$ is real.
The following additional properties of Hermitian operators can also be proven with some work:
$\int \psi^{*}\hat{A} \psi d{\bf r} = \int (\hat{A} \psi)^{*} \psi d{\bf r}, \label{22.1.14}$
and for any two states $\psi_1$ and $\psi_2$:
$\int \psi_1^{*} \hat{A} \psi_2 d{\bf r}= \int (\hat{A} \psi_1)^{*} \psi_2 d{\bf r}. \label{22.1.15}$
Taking $\psi_a$ and $\psi_b$ as eigenfunctions of $\hat{A}$ with eigenvalues $a$ and $b$ with $a \neq b$, and using Equation \ref{22.1.15}, we obtain:
\begin{aligned} \int \psi_a^{*} \hat{A} \psi_b d{\bf r} &= \int (\hat{A} \psi_a)^{*} \psi_b d{\bf r}\\ b \int \psi_a^{*} \psi_b d{\bf r} &= a^{*} \int \psi_a^{*} \psi_b d{\bf r}\\ (b - a) \int \psi_a^{*} \psi_b d{\bf r} &= 0. \end{aligned}\label{22.1.16}
Thus, since $a = a^{*}$, and since we assumed $b \neq a$, we must have $\int \psi_a^{*} \psi_b d{\bf r} = 0$, i.e. $\psi_a$ and $\psi_b$ are orthogonal. In other words, eigenfunctions of a Hermitian operator with different eigenvalues are orthogonal (or can be chosen to be so).
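This orthogonality is easy to confirm numerically for a concrete case. The following minimal sketch reuses the particle-in-a-box eigenfunctions of chapter 19 (with $a=1$), which are eigenfunctions of the Hermitian Hamiltonian with distinct eigenvalues, and evaluates their overlap integrals on a grid:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]
psi = lambda n: np.sqrt(2.0) * np.sin(n * np.pi * x)   # box eigenfunctions

overlap = lambda m, n: np.sum(psi(m) * psi(n)) * dx    # crude grid integral
print(f"<1|2> = {overlap(1, 2):.2e}")   # ~0: distinct eigenvalues, orthogonal
print(f"<1|1> = {overlap(1, 1):.4f}")   # ~1: each function is normalized
```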
1. But they might not be strictly real.