This page takes an introductory look at how you can get useful information from a C-13 NMR spectrum.

Typical chemical shifts in C-13 NMR spectra

In the table, the "R" groups won't necessarily be simple alkyl groups. In each case there will be a carbon atom attached to the one shown in red, but there may well be other things substituted into the "R" group.

carbon environment          chemical shift (ppm)
C=O (in ketones)            205 - 220
C=O (in aldehydes)          190 - 200
C=O (in acids and esters)   170 - 185
C in aromatic rings         125 - 150
C=C (in alkenes)            115 - 140
RCH2OH                      50 - 65
RCH2Cl                      40 - 45
RCH2NH2                     37 - 45
R3CH                        25 - 35
CH3CO-                      20 - 30
R2CH2                       16 - 25
RCH3                        10 - 15

If a substituent is very close to the carbon in question, and very electronegative, that might affect the values given in the table slightly. For example, ethanol has a peak at about 60 because of the CH2OH group. It also has a peak due to the RCH3 group. The "R" group this time is CH2OH. The electron-pulling effect of the oxygen atom increases the chemical shift slightly from the one shown in the table to a value of about 18.

Example 1: Ethanol

Remember that each peak identifies a carbon atom in a different environment within the molecule. In this case there are two peaks because there are two different environments for the carbons. The carbon in the CH3 group is attached to 3 hydrogens and a carbon. The carbon in the CH2 group is attached to 2 hydrogens, a carbon and an oxygen. So which peak is which? You might remember from the introductory page that the external magnetic field experienced by the carbon nuclei is affected by the electronegativity of the atoms attached to them. The effect of this is that the chemical shift of the carbon increases if you attach an atom like oxygen to it. That means that the peak at about 60 (the larger chemical shift) is due to the CH2 group, because it has a more electronegative atom attached.

Example 2: But-3-en-2-one

This is also known as 3-buten-2-one (amongst many other things!)
Here is the structure for the compound: You can pick out all the peaks in this compound using the simplified table above.

• The peak at just under 200 is due to a carbon-oxygen double bond.
• The two peaks at 137 and 129 are due to the carbons at either end of the carbon-carbon double bond.
• The peak at 26 is the methyl group which, of course, is joined to the rest of the molecule by a carbon-carbon single bond.

If you want to use the more accurate table, you have to put a bit more thought into it - and, in particular, worry about the values which don't always exactly match those in the table!

• The carbon-oxygen double bond peak for the ketone group has a slightly lower value than the table suggests for a ketone. There is an interaction between the carbon-oxygen and carbon-carbon double bonds in the molecule which affects the value slightly. This isn't something which we need to look at in detail for the purposes of this topic. You must be prepared to find small discrepancies of this sort in more complicated molecules - but don't worry about this for exam purposes at this level. Your examiners should give you shift values which exactly match the compound you are given.
• The two peaks for the carbons in the carbon-carbon double bond are exactly where they would be expected to be. Notice that they aren't in exactly the same environment, and so don't have the same shift values. The one closer to the carbon-oxygen double bond has the larger value.
• And the methyl group on the end has exactly the sort of value you would expect for one attached to C=O. The table gives a range of 20 - 30, and that's where it is.

One final important thing to notice: there are four carbons in the molecule and four peaks because they are all in different environments. But they aren't all the same height. In C-13 NMR, you cannot draw any simple conclusions from the heights of the various peaks.
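The kind of table lookup used in these examples is easy to automate. Here is a minimal Python sketch (the function name and `tolerance` parameter are my own; the ranges are copied from the detailed table above). Because ranges overlap, a shift can match more than one environment - just as the methyl peak at 26 could, on the table alone, be either R3CH or CH3CO-.

```python
# Chemical shift ranges (ppm) transcribed from the detailed table above.
SHIFT_TABLE = [
    ("C=O (in ketones)", 205, 220),
    ("C=O (in aldehydes)", 190, 200),
    ("C=O (in acids and esters)", 170, 185),
    ("C in aromatic rings", 125, 150),
    ("C=C (in alkenes)", 115, 140),
    ("RCH2OH", 50, 65),
    ("RCH2Cl", 40, 45),
    ("RCH2NH2", 37, 45),
    ("R3CH", 25, 35),
    ("CH3CO-", 20, 30),
    ("R2CH2", 16, 25),
    ("RCH3", 10, 15),
]

def candidate_environments(shift_ppm, tolerance=0.0):
    """Return every tabulated environment whose range contains shift_ppm.

    The ranges are approximate, so a tolerance (in ppm) can widen the
    match; nearby electronegative atoms push real peaks outside them,
    as with ethanol's methyl carbon at about 18 ppm.
    """
    return [name for name, lo, hi in SHIFT_TABLE
            if lo - tolerance <= shift_ppm <= hi + tolerance]

print(candidate_environments(60))               # ['RCH2OH'] - ethanol's CH2 peak
print(candidate_environments(18, tolerance=3))  # includes 'RCH3' (ethanol's shifted CH3)
```

A lookup like this only narrows the possibilities; deciding between overlapping candidates still takes the sort of chemical reasoning worked through in the examples.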
Example 3: Isopropyl propanoate

1-Methylethyl propanoate is also known as isopropyl propanoate or isopropyl propionate. Here is the structure for 1-methylethyl propanoate:

Two simple peaks

There are two very simple peaks in the spectrum which could be identified easily from the second table above.

1. The peak at 174 is due to a carbon in a carbon-oxygen double bond. (Looking at the more detailed table, this peak is due to the carbon in a carbon-oxygen double bond in an acid or ester.)
2. The peak at 67 is due to a different carbon singly bonded to an oxygen.

Those two peaks are therefore due to: If you look back at the more detailed table of chemical shifts, you will find that a carbon singly bonded to an oxygen has a range of 50 - 65. 67 is, of course, a little bit higher than that. As before, you must expect these small differences. No table can account for all the fine differences in environment of a carbon in a molecule. Different tables will quote slightly different ranges. At this level, you can just ignore that problem! Before we go on to look at the other peaks, notice the heights of these two peaks we've been talking about. They are both due to a single carbon atom in the molecule, and yet they have different heights. Again, you can't read any reliable information directly from peak heights in these spectra.

The three right-hand peaks

From the simplified table, all you can say is that these are due to carbons attached to other carbon atoms by single bonds. But because there are three peaks, the carbons must be in three different environments. The more detailed table is more helpful. The easiest peak to sort out is the one at 28. If you look back at the table, that could well be a carbon attached to a carbon-oxygen double bond. The table quotes the group as CH3CO-, but replacing one of the hydrogens by a simple CH3 group won't make much difference to the shift value. The right-hand peak is also fairly easy.
This is the left-hand methyl group in the molecule. It is attached to an admittedly complicated R group (the rest of the molecule). It is the bottom value given in the detailed table. The tall peak at 22 must be due to the two methyl groups at the right-hand end of the molecule - because that's all that's left. These combine to give a single peak because they are both in exactly the same environment. If you are looking at the detailed table, you need to think very carefully which of the environments you should be looking at. Without thinking, it is tempting to go for the R2CH2 with peaks in the 16 - 25 region. But you would be wrong! The carbons we are interested in are the ones in the methyl group, not in the R groups. These carbons are again in the environment RCH3. The R is the rest of the molecule. The table says that these should have peaks in the range 10 - 15, but our peak is a bit higher. This is because of the presence of the nearby oxygen atom. Its electronegativity is pulling electrons away from the methyl groups - and, as we've seen above, this tends to increase the chemical shift slightly. Once again, don't worry about the discrepancies. In an exam, perhaps your examiners will just want you to have learnt the simple table above - in which case, they can't expect you to work out which peak is which in a complicated spectrum of this sort. Or they will give you tables of chemical shifts - in which case, they will give you values which match the peaks in the spectra.

Working out structures from C-13 NMR spectra

So far, we've just been trying to see the relationship between carbons in particular environments in a molecule and the spectrum produced. We've had all the information necessary. Now let's make it a little more difficult - but we'll work from much easier examples! In each example, try to work it out for yourself before you read the explanation.
Example 1

How could you tell from just a quick look at a C-13 NMR spectrum (and without worrying about chemical shifts) whether you had propanone or propanal (assuming those were the only options)? Because these are isomers, each has the same number of carbon atoms, but there is a difference between the environments of the carbons which will make a big impact on the spectra. In propanone, the two carbons in the methyl groups are in exactly the same environment, and so will produce only a single peak. That means that the propanone spectrum will have only 2 peaks - one for the methyl groups and one for the carbon in the C=O group. However, in propanal, all the carbons are in completely different environments, and the spectrum will have three peaks.

Example 2

There are four alcohols with the molecular formula C4H10O. Which one produced the C-13 NMR spectrum below? You can do this perfectly well without referring to chemical shift tables at all. In the spectrum there are a total of three peaks - that means that there are only three different environments for the carbons, despite there being four carbon atoms. In A and B, there are four totally different environments. Both of these would produce four peaks. In D, there are only two different environments - all the methyl groups are exactly equivalent. D would only produce two peaks. That leaves C. Two of the methyl groups are in exactly the same environment - attached to the rest of the molecule in exactly the same way. They would only produce one peak. With the other two carbon atoms, that would make a total of three. The alcohol is C.

Example 3

This follows on from Example 2, and also involves an isomer of C4H10O but which isn't an alcohol. Its C-13 NMR spectrum is below. Work out what its structure is. Because we don't know what sort of structure we are looking at, this time it would be a good idea to look at the shift values.
The approximations are perfectly good, and we will work from this table:

carbon environment    chemical shift (ppm)
C-C                   0 - 50
C-O                   50 - 100
C=C                   100 - 150
C=O                   150 - 200

There is a peak for carbon(s) in a carbon-oxygen single bond and one for carbon(s) in a carbon-carbon single bond. That would be consistent with C-C-O in the structure. It isn't an alcohol (you are told that in the question), and so there must be another carbon on the right-hand side of the oxygen in the structure in the last paragraph. The molecular formula is C4H10O, and there are only two peaks. The only solution to that is to have two identical ethyl groups either side of the oxygen. The compound is ethoxyethane (diethyl ether), CH3CH2OCH2CH3.

Example 4

Using the simplified table of chemical shifts above, work out the structure of the compound with the following C-13 NMR spectrum. Its molecular formula is C4H6O2. Let's sort out what we've got.

• There are four peaks and four carbons. No two carbons are in exactly the same environment.
• The peak at just over 50 must be a carbon attached to an oxygen by a single bond.
• The two peaks around 130 must be the two carbons at either end of a carbon-carbon double bond.
• The peak at just less than 170 is the carbon in a carbon-oxygen double bond.

Putting this together is a matter of playing around with the structures until you have come up with something reasonable. But you can't be sure that you have got the right structure using this simplified table. In this particular case, the spectrum was for the compound: If you refer back to the more accurate table of chemical shifts towards the top of the page, you will get some better confirmation of this. The relatively low value of the carbon-oxygen double bond peak suggests an ester or acid rather than an aldehyde or ketone. It can't be an acid because there has to be a carbon attached to an oxygen by a single bond somewhere - apart from the one in the -COOH group.
We've already accounted for that carbon atom from the peak at about 170. If it was an acid, you would already have used up both oxygens in the structure in the -COOH group. Without this information, though, you could probably come up with reasonable alternative structures. If you were working from the simplified table in an exam, your examiners would have to allow any valid alternatives.

Contributors and Attributions

Jim Clark (Chemguide.co.uk)
This page describes what a proton NMR spectrum is and how it tells you useful things about the hydrogen atoms in organic molecules.

The background to NMR spectroscopy

Nuclear magnetic resonance is concerned with the magnetic properties of certain nuclei. On this page we are focusing on the magnetic behaviour of hydrogen nuclei - hence the term proton NMR or 1H-NMR. 1H NMR spectroscopy is used more often than 13C NMR, partly because proton spectra are much easier to obtain than carbon spectra. The 13C isotope is only present in about 1% of carbon atoms, and that makes it difficult to detect. The 1H isotope is more than 99% abundant, which helps make it easier to observe. Another advantage is that 1H NMR spectroscopy gives more information than 13C NMR, as you will find out later. Note that in this discussion, the word "proton" is used for "hydrogen atom", because it is the proton in the nucleus of the 1H isotope that is observed in these experiments. Although 2H (deuterium) and 3H (tritium) are also NMR-active, they absorb at frequencies that are different from the ones used in 1H NMR. The 1H isotope is also much more common than the other two, so 1H NMR spectroscopy is more conveniently done than 2H NMR spectroscopy.

Hydrogen atoms as little magnets

If you have a compass needle, it normally lines up with the Earth's magnetic field with the north-seeking end pointing north. Provided it isn't sealed in some sort of container, you could twist the needle around with your fingers so that it pointed south - lining it up opposed to the Earth's magnetic field. It is very unstable opposed to the Earth's field, and as soon as you let it go again, it will flip back to its more stable state. Hydrogen nuclei also behave as little magnets, and a hydrogen nucleus can also be aligned with an external magnetic field or opposed to it. Again, the alignment where it is opposed to the field is less stable (at a higher energy).
It is possible to make it flip from the more stable alignment to the less stable one by supplying exactly the right amount of energy. The energy needed to make this flip depends on the strength of the external magnetic field used, but is usually in the range of energies found in radio waves - at frequencies of about 60 - 100 MHz. (BBC Radio 4 is found between 92 - 95 MHz!) It's possible to detect this interaction between the radio waves of just the right frequency and the proton as it flips from one orientation to the other as a peak on a graph. This flipping of the proton from one magnetic alignment to the other by the radio waves is known as the resonance condition. The importance of the hydrogen atom's environment What we've said so far would apply to an isolated proton, but real protons have other things around them - especially electrons. The effect of the electrons is to cut down the size of the external magnetic field felt by the hydrogen nucleus. Suppose you were using a radio frequency of 90 MHz, and you adjusted the size of the magnetic field so that an isolated proton was in the resonance condition. If you replaced the isolated proton with one that was attached to something, it wouldn't be feeling the full effect of the external field any more and so would stop resonating (flipping from one magnetic alignment to the other). The resonance condition depends on having exactly the right combination of external magnetic field and radio frequency. How would you bring it back into the resonance condition again? You would have to increase the external magnetic field slightly to compensate for the effect of the electrons. Now suppose that you attached the hydrogen to something more electronegative. The electrons in the bond would be further away from the hydrogen nucleus, and so would have less effect on the magnetic field around the hydrogen. 
The external magnetic field needed to bring the hydrogen into resonance will be smaller if it is attached to a more electronegative element, because the hydrogen nucleus feels more of the field. Even small differences in the electronegativities of the attached atom or groups of atoms will make a difference to the magnetic field needed to achieve resonance.

Summary

For a given radio frequency (say, 90 MHz) each hydrogen atom will need a slightly different magnetic field applied to it to bring it into the resonance condition, depending on what exactly it is attached to - in other words, the magnetic field needed is a useful guide to the hydrogen atom's environment in the molecule.

Features of an NMR spectrum

A simple NMR spectrum looks like this:

The peaks

There are two peaks because there are two different environments for the hydrogens - in the CH3 group and attached to the oxygen in the COOH group. They are in different places in the spectrum because they need slightly different external magnetic fields to bring them into resonance at a particular radio frequency. The sizes of the two peaks give important information about the numbers of hydrogen atoms in each environment. It isn't the height of the peaks that matters, but the ratio of the areas under the peaks. If you could measure the areas under the peaks in the diagram above, you would find that they were in the ratio of 3 (for the larger peak) to 1 (for the smaller one). That shows a ratio of 3:1 in the number of hydrogen atoms in the two environments - which is exactly what you would expect for CH3COOH.

The need for a standard for comparison - TMS

Before we can explain what the horizontal scale means, we need to explain the fact that it has a zero point - at the right-hand end of the scale. The zero is where you would find a peak due to the hydrogen atoms in tetramethylsilane - usually called TMS. Everything else is compared with this.
You will find that some NMR spectra show the peak due to TMS (at zero), and others leave it out. Essentially, if you have to analyse a spectrum which has a peak at zero, you can ignore it because that's the TMS peak. TMS is chosen as the standard for several reasons. The most important are:

• It has 12 hydrogen atoms, all of which are in exactly the same environment. They are joined to exactly the same things in exactly the same way. That produces a single peak, but it's also a strong peak (because there are lots of hydrogen atoms).
• The net effect of this is that TMS produces a peak on the spectrum at the extreme right-hand side. Almost everything else produces peaks to the left of it.

The chemical shift

The horizontal scale is shown as δ (ppm). δ is called the chemical shift and is measured in parts per million - ppm. A peak at a chemical shift of, say, 2.0 means that the hydrogen atoms which caused that peak need a magnetic field two millionths less than the field needed by TMS to produce resonance. A peak at a chemical shift of 2.0 is said to be downfield of TMS. The further to the left a peak is, the more downfield it is.

Solvents for NMR spectroscopy

NMR spectra are usually measured using solutions of the substance being investigated. It is important that the solvent itself does not contain any simple hydrogen atoms, because they would produce confusing peaks in the spectrum. There are two ways of avoiding this. You can use a solvent such as tetrachloromethane, CCl4, which does not contain any hydrogen, or you can use a solvent in which any ordinary hydrogen atoms are replaced by its isotope, deuterium - for example, CDCl3 instead of CHCl3. All the NMR spectra used on this site involve CDCl3 as the solvent. Deuterium atoms have sufficiently different magnetic properties from ordinary hydrogen that they do not produce peaks in the area of the spectrum that we are looking at.

Contributors and Attributions

Jim Clark (Chemguide.co.uk)
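In the equivalent frequency picture, the chemical shift is simply the frequency difference between a peak and TMS, expressed as a fraction of the spectrometer's operating frequency. A small sketch (the function name and the example numbers are my own, chosen to match the "two millionths" illustration above):

```python
def chemical_shift_ppm(peak_hz, tms_hz, spectrometer_mhz):
    """Chemical shift (ppm) of a peak relative to TMS.

    ppm is a fractional difference, so dividing the frequency difference
    in Hz by the operating frequency in MHz gives parts per million
    directly: Hz / MHz = 1e-6.
    """
    return (peak_hz - tms_hz) / spectrometer_mhz

# Hypothetical numbers: on a 90 MHz instrument, a peak 180 Hz downfield
# of TMS sits at "two millionths" of the operating frequency, i.e. 2.0 ppm.
print(chemical_shift_ppm(180.0, 0.0, 90.0))  # -> 2.0
```

Because the shift is a ratio, the same peak comes out at 2.0 ppm whatever the spectrometer frequency, which is exactly why ppm (rather than Hz) is the scale used.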
This page describes how you interpret simple low resolution nuclear magnetic resonance (NMR) spectra. It assumes that you have already read the background page on NMR so that you understand what an NMR spectrum looks like and the use of the term "chemical shift".

Difference between high and low resolution spectra

A low resolution spectrum looks much simpler because it cannot distinguish between the individual peaks in the various groups of peaks. The numbers against the peaks represent the relative areas under each peak. That information is extremely important in interpreting the spectra.

Interpreting a low resolution spectrum

Using the total number of peaks

Each peak represents a different environment for hydrogen atoms in the molecule. In the methyl propanoate spectrum above, there are three peaks because there are three different environments for the hydrogens. Remember that methyl propanoate is CH3CH2COOCH3. The hydrogens in the CH2 group are obviously in a different environment from those in the CH3 groups. The two CH3 groups aren't in the same environment either. One is attached to a CH2 group, the other to an oxygen.

Using the areas under the peaks

The ratio of the areas under the peaks tells you the ratio of the numbers of hydrogens in the various environments. In the methyl propanoate case, the areas were in the ratio of 3:2:3, which is exactly what you want for the two differently placed CH3 groups and the CH2 group. You will probably be told the relative areas under the peaks - especially if you are only looking at low resolution spectra, but it is just possible that you might have to work them out. NMR spectrometers have a device which draws another line on the spectrum called an integrator trace (or integration trace). You can measure the relative areas from this trace.

Using chemical shifts

The position of the peaks tells you useful things about what groups the various hydrogen atoms are in.
The important shifts for the groups present in methyl propanoate are: Showing these groups on the low resolution spectrum gives: Questions 1. An organic compound was known to be one of the following. Use its low resolution NMR spectrum to decide which it is. Notice that there are three peaks showing three different environments for the hydrogens. That eliminates methyl ethanoate as a possibility because that would only give two peaks - due to the two differently situated CH3 group hydrogens. Does the ratio of the areas under the peaks help? Not in this case - both the other compounds would have three peaks in the ratio of 1:2:3. Now you need to look at the chemical shifts: Checking the positions of the various hydrogens in the two possible compounds against the chemical shift table gives you this pattern of shifts: Comparing these with the actual spectrum means that the substance was propanoic acid, CH3CH2COOH. 2. How would you use low resolution NMR to distinguish between the isomers propanone and propanal? The propanone would only give one peak in its NMR spectrum because both CH3 groups are in an identical environment - both are attached to -COCH3. The propanal would give three peaks with the areas underneath in the ratio 3:2:1. You could refer to the chemical shift table above to decide where the peaks are likely to be found, but it isn't really necessary. 3. How many peaks would there be in the low resolution NMR spectrum of the following compound, and what would be the ratio of the areas under the peaks? All the CH3 groups are exactly equivalent so would only produce 1 peak. There would also be peaks for the hydrogens in the CH2 group and the COOH group. There would be three peaks in total with areas in the ratio 9:2:1. Contributors and Attributions Jim Clark (Chemguide.co.uk)
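The arithmetic of turning integrator-trace areas into a whole-number hydrogen ratio can be sketched in a few lines of Python (the function name and the example areas are my own; it assumes the measured areas are already near-integer multiples of one hydrogen's worth of area):

```python
from math import gcd
from functools import reduce

def hydrogen_ratio(areas):
    """Reduce integrator-trace areas to the smallest whole-number ratio.

    Real traces are noisy, so the raw measurements would need rounding
    to sensible integers before this step.
    """
    divisor = reduce(gcd, areas)
    return [a // divisor for a in areas]

# Methyl propanoate, CH3CH2COOCH3: three environments in a 3:2:3 ratio.
print(hydrogen_ratio([45, 30, 45]))  # -> [3, 2, 3]
```

The reduced ratio is what you compare against the candidate structures, as in the propanoic acid and propanal questions above.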
Methoxybenzene or anisole has six carbons, but only four peaks in the spectrum because of symmetry. These peaks are all above 100 ppm, but some peaks are as far downfield as 160 ppm.

Figure NMR9. 13C NMR spectrum of methoxybenzene (anisole). Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 15 August 2008)

• Benzaldehyde has peaks between 130 and 140 ppm, as well as one near 190 ppm. Just as in the sp3 region of the spectrum, when a carbon is attached to an electronegative element, it moves further downfield, and since the carbonyl (or C=O) carbon in the aldehyde has two bonds to oxygen, it shows up considerably downfield. The carbonyl carbon in some ketones can show up as far as 210 ppm.

Figure NMR10. 13C NMR spectrum of benzaldehyde. Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 15 August 2008)

Multiplicity

Another type of additional data available from 1H NMR spectroscopy is called multiplicity or coupling. Coupling is useful because it reveals how many hydrogens are on the next carbon in the structure. That information helps to put an entire structure together piece by piece. In ethanol, CH3CH2OH, the methyl group is attached to a methylene group. The 1H spectrum of ethanol shows this relationship through the shape of the peaks. The peak near 3.5 ppm is the methylene group with an integral of 2H.

• The integral of 2H means that this group is a methylene, so it has two hydrogens. The carbon bearing these two hydrogens can have two other bonds. There could be two hydrogens on one neighbouring carbon and one on another. Otherwise, all three hydrogens could be on one neighbouring carbon. However, the shift of 3.5 ppm means that this carbon is attached to an oxygen. Multiplicity usually only works with hydrogens on neighbouring carbons.
If there is an oxygen on one side of the methylene, all three neighbouring hydrogens must be on a carbon on the other side. Alternatively, look at the spectrum the other way around. The peak at 1 ppm is the methyl group with an integral of 3H.

• The neighbouring hydrogens could be on two different neighbouring carbons or both on the same one.

The number of lines in a peak is always one more than the number of hydrogens on the neighbouring carbon. The triplet for the methyl peak means that there are two neighbours on the next carbon (3 - 1 = 2H); the quartet for the methylene peak indicates that there are three hydrogens on the next carbon (4 - 1 = 3H). Table NMR 1 summarizes coupling patterns that arise when protons have different numbers of neighbours.

# of lines   ratio of lines            term for peak   # of neighbors
1            -                         singlet         0
2            1:1                       doublet         1
3            1:2:1                     triplet         2
4            1:3:3:1                   quartet         3
5            1:4:6:4:1                 quintet         4
6            1:5:10:10:5:1             sextet          5
7            1:6:15:20:15:6:1          septet          6
8            1:7:21:35:35:21:7:1       octet           7
9            1:8:28:56:70:56:28:8:1    nonet           8

The third peak in the ethanol spectrum is usually a "broad singlet." This is the peak due to the OH. You would expect it to be a triplet because it is next to a methylene. Under very specific circumstances, it does appear that way. However, coupling is almost always lost on hydrogens bound to heteroatoms (OH and NH). The lack of communication between an OH or NH and its neighbours is related to rapid proton transfer, in which that proton can trade places with another OH or NH in solution. This exchange happens quite easily if there are even tiny traces of water in the sample. In summary, multiplicity or coupling is what we call the appearance of a group of symmetric peaks representing one hydrogen in NMR spectroscopy.

• There are limitations on coupling:
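The line ratios in Table NMR 1 are binomial coefficients: each equivalent neighbour's spin is independently "up" or "down", so the intensities follow Pascal's triangle. A short sketch that regenerates any row of the table (the function name is my own):

```python
from math import comb

def multiplet(n_neighbors):
    """Line-intensity ratios for n equivalent neighbouring protons.

    This is the n + 1 rule: n neighbours give n + 1 lines, with
    binomial (Pascal's triangle) intensities.
    """
    return [comb(n_neighbors, k) for k in range(n_neighbors + 1)]

print(multiplet(2))  # -> [1, 2, 1]    triplet, like ethanol's CH3
print(multiplet(3))  # -> [1, 3, 3, 1] quartet, like ethanol's CH2
```

Remember this only applies when all the neighbouring protons couple equally; inequivalent neighbours are treated on the next page.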
The n + 1 rule (number of lines in a multiplet = number of neighboring H + 1) will work for the majority of problems you may encounter. Occasionally, you may see more complicated coupling. The spectrum of methyl acrylate is a good example. There are a couple of points to note in this spectrum, beginning with the number of peaks. Figure NMR21. 1H NMR spectrum of methyl acrylate. Source: Simulated spectrum. • In addition, there is a problem with coupling in the vinyl region. • This pattern is called a "doublet of doublets." The two symmetry-inequivalent neighbors on the other end of the double bond each act as if the other one isn't there. They couple to the proton next to the carbonyl independently, each one splitting the peak for this proton into a separate doublet. There are a few cases in which this independent coupling will occur rather than the (n+1) type coupling we saw first. Generally, independent coupling occurs when protons are not freely rotating. That can happen if one of the protons is attached to a double-bonded carbon, because we can't rotate around a double bond. It may also happen with protons that are directly attached to the carbons of a ring. • Sometimes coupling information is depicted as an arrow. This arrow stands for the coupling constant between two protons. The coupling constant is related to the spin of a hydrogen atom. The spin (related to magnetic moment) can be aligned with the external magnetic field (we will show it pointing up) or else against it; no other possibilities are allowed. If there are two neighboring hydrogens, both spins could be aligned with the external field, both could be aligned against it, or one could be aligned each way. That means there are three different magnetic combinations that will each have a different effect on the observed proton: increased magnetic field, decreased magnetic field, and no net effect (canceling out).
These three combinations result in the observed proton absorbing at three different frequencies, because the frequency it absorbs is sensitive to the magnetic field it experiences. Note that there are two ways to arrive at the middle possibility, with one neighbour spin up and the other spin down. Statistically this possibility is twice as likely as either both spins up or both spins down. It is thus twice as likely that the observed proton experiences that effect, and so the middle line in a triplet is twice as high as the other two lines. Figure NMR23. Effects of neighboring protons on an observed peak. This case assumes all the neighboring protons have an equivalent effect (they have the same coupling constant with the observed proton). However, the size of that arrow, the coupling constant, is only the same for two neighboring hydrogens if they have the same spatial relationship with the observed hydrogen. That isn't always true. • As a result, the two coupling constants are different. We can depict that situation using arrows of different lengths for the two neighboring proton spins. Each spin can be either up or down, but now two opposing spins do not cancel out. The result is four spin combinations of equal probability, not just three. Figure NMR23. Effects of neighboring protons on an observed peak. This case assumes neighboring protons have inequivalent effects (they have differing coupling constants with the observed proton). The dihedral angle is limited in only a few specific cases:
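The spin-counting argument in the figures above can be reproduced with a short enumeration: give each neighbouring proton an offset of +J/2 or -J/2 and count how often each total occurs. The function name and the J values (17 and 10 Hz for the two inequivalent vinyl couplings) are my own illustrative choices:

```python
from itertools import product
from collections import Counter

def line_pattern(coupling_constants_hz):
    """Peak offsets (Hz) and relative intensities from neighbouring spins.

    Each neighbouring proton shifts the observed line by +J/2 or -J/2;
    counting all equally likely up/down combinations gives the multiplet.
    """
    offsets = Counter()
    for spins in product((+0.5, -0.5), repeat=len(coupling_constants_hz)):
        total = sum(s * j for s, j in zip(spins, coupling_constants_hz))
        offsets[round(total, 6)] += 1
    return dict(sorted(offsets.items()))

# Two equivalent neighbours (same J): the two mixed combinations overlap
# in the middle, giving the 1:2:1 triplet.
print(line_pattern([7.0, 7.0]))    # {-7.0: 1, 0.0: 2, 7.0: 1}
# Two inequivalent neighbours (different Js): nothing cancels, so there
# are four equally intense lines - a doublet of doublets.
print(line_pattern([17.0, 10.0]))  # {-13.5: 1, -3.5: 1, 3.5: 1, 13.5: 1}
```

The overlap (or not) of the middle combinations is exactly the difference between the equal-arrow and unequal-arrow diagrams above.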
Butane shows two different peaks in the 13C NMR spectrum, below. Note that the chemical shifts of these peaks are not very different from methane. The carbons in butane are in a similar environment to the one in methane.

• In the 13C NMR spectrum of pentane (below), you can see three different peaks, even though pentane just contains methyl carbons and methylene carbons like butane. As far as the NMR spectrometer is concerned, pentane contains three different kinds of carbon, in three different environments. That result comes from symmetry.

Figure NMR3. 13C NMR spectrum of pentane. Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 15 August 2008)

Symmetry is an important factor in spectroscopy. To see it at work, take a model of pentane: Wire Frame Ball & Stick Animation NMR1. A three-dimensional model of pentane. Grab the model with the mouse and rotate it so that you are convinced that the first and fifth carbons are symmetry-equivalent, but the third carbon is not. By the same process, you can see that the second and fourth carbons along the chain are also symmetry-equivalent. However, the middle carbon is not; it never switches places with the other carbons if you rotate the model. There are three different sets of inequivalent carbons; these three groups are not the same as each other according to symmetry.

NMR8. Chemical Shifts

The trends here are exactly the same as in carbon spectra. Wherever the carbon goes, it takes the proton with it, so proton shifts can be predicted by analogy with carbon spectra.

Figure NMR12. 1H NMR spectrum of 1-hexene. Source: Simulated spectrum. Figure NMR13. 1H NMR spectrum of butanal. Source: Simulated spectrum.

As before, there are also hydrogens on linear carbons, although they are much less common than tetrahedral or trigonal carbons. Remember, these are general rules that you should know.
There will occasionally be exceptions; the proton in a carboxylic acid may be seen at 12 ppm, and the proton in chloroform shows up at 7 ppm even though it is attached to a tetrahedral carbon. (World-record shifts occur for hydrogens attached to transition metals: "late" metals like ruthenium or rhodium can move hydrogen peaks as far upfield as -20 ppm, while "early" metals like tantalum can move them as far downfield as 25 ppm.)

NMR Appendix

Table of 13C NMR Frequencies Common in Organic Compounds. Note that effects are additive: two or more electron-withdrawing groups move the absorbance further to the left than just one group.

Table of 1H NMR Frequencies Common in Organic Compounds. This chart shows the frequencies of protons that are attached to carbons. In general, protons follow the trend seen in the carbon to which they are attached. Note again the additive effects of multiple attached groups. This table does not include OH (or NH) protons. Protons attached to heteroatoms are more difficult to pinpoint because their locations in the spectrum are much less specific; instead, they may be found across a very broad range.

Table of 1H NMR Frequencies of OH Common in Organic Compounds.
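As a quick sketch of how such shift tables are used, the typical 13C ranges tabulated earlier in this chapter can be encoded for a rough environment lookup. The helper name is hypothetical, and a range match alone never proves an assignment:

```python
# A small lookup helper built from the typical 13C shift ranges tabulated
# earlier in this chapter (values in ppm). The helper name is hypothetical,
# and real assignments need more evidence than a range match.
RANGES_13C = {
    "C=O (ketone)": (205, 220),
    "C=O (aldehyde)": (190, 200),
    "C=O (acid/ester)": (170, 185),
    "aromatic C": (125, 150),
    "C=C (alkene)": (115, 140),
    "RCH2OH": (50, 65),
    "RCH2Cl": (40, 45),
    "RCH2NH2": (37, 45),
    "R3CH": (25, 35),
    "CH3CO-": (20, 30),
    "R2CH2": (16, 25),
    "RCH3": (10, 15),
}

def candidate_environments(shift_ppm):
    """Every tabulated environment whose range covers the observed shift."""
    return [env for env, (lo, hi) in RANGES_13C.items() if lo <= shift_ppm <= hi]

print(candidate_environments(60))    # the CH2OH carbon of ethanol
print(candidate_environments(130))   # ambiguous: aromatic or alkene
```

Note that several ranges overlap, which is exactly why peaks such as the 130 ppm region cannot be assigned from the chemical shift alone.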
Page Under Construction!

Introduction

This section is devoted to the basic math and physics commonly encountered in NMR theory. These explanations are deliberately simple and concise, and the reader is encouraged to consult the references and other wiki pages for fuller discussions.

Spin Angular Momentum

When talking about spin angular momentum, the nucleus can be pictured as a point mass moving on a circular path. While the momentum of a point mass moving along a straight path is defined as $\vec{p}=m\vec{v}$, angular momentum is used to describe circular motion:

$\vec{L}=\vec{r} \times \vec{p}$

where $\vec{L}$ is the angular momentum and $\vec{r}$ is the radius vector of the circular path. Since $\vec{r}$ and $\vec{p}$ are perpendicular to each other,

$L=rp \sin 90^o=rp=rmv$

The direction of $\vec{L}$ is given by the right-hand rule, so it is perpendicular to the plane of the circle.

Spherical Coordinates

For more advanced concepts in NMR, a spherical basis set is easier to use. Let's first consider a vector in Cartesian space, which can be described by

$A=A_xe_x+A_ye_y+A_ze_z$

where $A_{x,y,z}$ are the projections of $A$ onto the x, y, and z axes, and $e_{x,y,z}$ are the basis vectors. The length of the vector is

$|A|=\sqrt{A_x^2+A_y^2+A_z^2}$

Switching to a spherical basis set, $A$ becomes

$A=A^{+1}e_{+1}+A^{0}e_{0}+A^{-1}e_{-1}=A_{+1}e^{+1}+A_{0}e^{0}+A_{-1}e^{-1}$

where the spherical basis is related to the Cartesian basis by

$e_{\pm 1}=-e^{\mp 1}=\mp\frac{1}{\sqrt{2}}(e_x \pm ie_y)$

$e_0=e^0=e_z$

$A_{\pm 1}=-A^{\mp 1}=\mp \frac{1}{\sqrt{2}}(A_x \pm iA_y)$

$A_0=A^0=A_z$

The $A^p$ and $A_p$ are called the contravariant and covariant components, while $e^p$ and $e_p$ are the contravariant and covariant basis vectors, respectively.

Euler Angles

Euler angles are a set of three angles that transform reference frames; they are commonly employed in NMR to switch between frames.
For example, one set of Euler angles takes you from the laboratory frame to the rotating frame; a second set can take you from the laboratory frame to the CSA tensor frame. Below I've shown the ranges and how the rotations work. More information may be found in the Rotations section.

insert figure

$\gamma$ and $\alpha$ range over the full $2\pi$ radians, while $\beta$ ranges over $\pi$ radians.

Unitary Evolution

According to the 4th postulate of quantum mechanics, the evolution of a closed system is unitary (reversible) and is given by the time-dependent Schrödinger equation

$i\hbar \frac{d|\psi\rangle}{dt}=H|\psi\rangle$

where $H$ is the Hamiltonian of the system and $\hbar$ is the reduced Planck constant. We can then express the evolution of a state using a unitary operator known as the propagator,

$\psi(x,t)=\hat{U}(t) \psi(x,0)$

where the propagator times its adjoint equals the identity operator. We can show that this is equivalent to the time-dependent Schrödinger equation:

$i \hbar \frac{d[\hat{U} \psi(x,0)]}{dt}=H \hat{U} \psi(x,0)$

Since this equation must hold for any wave function, it must hold for $\psi(x,0)$ as well, giving

$i \hbar\frac{d \hat{U}}{dt}=H \hat{U}$

If the Hamiltonian is time-independent, this is solved by

$\hat{U}(t)=e^{\frac{-iHt}{\hbar}}$

Fourier Transform

The Fourier transform is the mathematical operation that takes you between the time domain and the frequency domain; conventionally,

$F(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt$

Direct (Kronecker) Product of Two Matrices

Consider two 2x2 matrices of the form

$A=\begin{bmatrix} a_{11}&a_{12}\\a_{21}&a_{22} \end{bmatrix}$

$B=\begin{bmatrix} b_{11}&b_{12}\\b_{21}&b_{22} \end{bmatrix}$

The direct (Kronecker) product, sometimes loosely called the cross product in this context, is then defined as

$A \otimes B=\begin{bmatrix} a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11}& a_{12}b_{12}\\ a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21}& a_{12}b_{22}\\ a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11}& a_{22}b_{12}\\ a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21}& a_{22}b_{22}\end{bmatrix}$
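The block-matrix "product" written out above is the Kronecker (direct) product: each entry $a_{ij}$ of $A$ is replaced by the block $a_{ij}B$. A minimal pure-Python sketch for 2x2 inputs:

```python
# Kronecker (direct) product of two 2x2 matrices, following the block
# layout in the text: each entry a_ij is replaced by the block a_ij * B.
def kron2(A, B):
    n = 2
    K = [[0] * (n * n) for _ in range(n * n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    K[n * i + k][n * j + l] = A[i][j] * B[k][l]
    return K

A = [[1, 2],
     [3, 4]]
B = [[0, 5],
     [6, 7]]

for row in kron2(A, B):
    print(row)
```

For arbitrary matrix sizes, numpy's `numpy.kron` implements the same operation.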
Nuclear magnetic resonance plays an important role in the fields of chemistry, materials science, physics, and engineering, and it is becoming a more and more useful method to probe the structure of molecules. The primary object of this module is to understand the fundamental concepts of NMR. It is assumed that the reader already understands the quantum numbers associated with electrons. For a more basic understanding of how NMR works, the reader is directed to the NMR introduction page.

Introduction

Nuclear magnetic resonance, NMR, is a physical phenomenon of resonant transitions between magnetic energy levels that occur when atomic nuclei are immersed in an external magnetic field and exposed to electromagnetic radiation of a specific frequency. By detecting the absorption signals, one acquires an NMR spectrum. From the positions, intensities, and fine structure of the resonance peaks, the structures of molecules can be studied quantitatively. The size of the molecules of interest varies from small organic molecules, to medium-sized biological molecules, and even to macromolecules such as nucleic acids and proteins. Apart from these common applications to organic compounds, NMR also plays an important role in analyzing inorganic molecules, which makes NMR spectroscopy a powerful technique. But a major question still remains: why does NMR work? This module will begin by developing the concept of nuclear spin, then move into a discussion of energy levels, their relative populations, and the interactions of a nucleus with the magnetic field.

Nuclear Spin Origins

The concept of spin is regularly addressed in subatomic particle physics, yet to most people spin seems like an abstract concept, because it has no macroscopic equivalent. However, anyone who has taken an introductory chemistry course has seen the concept of spin in electrons.
Electrons are subatomic particles with intrinsic spin, and the nucleus is not much different; spin is just another form of angular momentum. The nucleus consists of protons and neutrons, which are themselves comprised of subatomic particles known as quarks and gluons. The neutron has two quarks with a -e/3 charge and one quark with a +2e/3 charge, for a total charge of 0. The proton has two quarks with a +2e/3 charge and only one quark with a -e/3 charge, giving it a net positive charge. Both protons and neutrons are spin-1/2 particles.

For any system consisting of $n$ parts, each with an angular momentum, the total angular momentum can be described by $J$, where

$J=|J_1+J_2+...+J_n|,\; |J_1+J_2+...+J_n|-1,\; ...,\; |J_1-J_2-...-J_n|$

Here are some examples using the isotopes of hydrogen:

• $^1H$: 1 proton, so J=1/2
• $^2H$: 1 proton and 1 neutron, so $J=1$ or 0

For larger nuclei, it is not immediately evident what the spin should be, as there are a multitude of possible values. For the remainder of the discussion we will treat the spin of the nucleus, $I$, as an intrinsic value. There are some rules that nuclei do follow with respect to nuclear spin. They are summarized in the table below.

Table 1. General rules for determination of nuclear spin quantum numbers

Mass Number | Number of Protons | Number of Neutrons | Spin (I) | Example
Even | Even | Even | 0 | $^{16}O$
Even | Odd | Odd | Integer (1, 2, ...) | $^{2}H$
Odd | Even | Odd | Half-Integer (1/2, 3/2, ...) | $^{13}C$
Odd | Odd | Even | Half-Integer (1/2, 3/2, ...) | $^{15}N$

Nuclear Spin Angular Momentum and Quantum Numbers

As mentioned above, spin is a type of angular momentum. Nuclear spin angular momentum was first reported by Pauli in 1924 and is described here. Analogous to the orbital angular momentum of the electron, the spin angular momentum is a vector described by a magnitude $L$ and a direction, $m$.
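The angular-momentum addition rule above can be sketched by coupling the spins pairwise: two momenta $j_1$ and $j_2$ combine to give every value from $|j_1-j_2|$ up to $j_1+j_2$ in integer steps. The helper names below are hypothetical:

```python
from fractions import Fraction

# Pairwise angular-momentum coupling: j1 and j2 combine to give every
# value from |j1 - j2| up to j1 + j2 in integer steps.
def couple(j1, j2):
    j1, j2 = Fraction(j1), Fraction(j2)
    j = abs(j1 - j2)
    values = []
    while j <= j1 + j2:
        values.append(j)
        j += 1
    return values

def couple_many(spins):
    """Every total J reachable by coupling a list of spins in sequence."""
    totals = {Fraction(spins[0])}
    for s in spins[1:]:
        totals = {j for t in totals for j in couple(t, s)}
    return sorted(totals)

# 2H: one proton and one neutron, each spin 1/2  ->  J = 0 or 1
print(couple_many([Fraction(1, 2), Fraction(1, 2)]))
```

This reproduces the hydrogen-isotope examples: a single spin-1/2 gives J=1/2, and two spin-1/2 particles give J=0 or 1.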
The magnitude is given by

$L=\hbar\sqrt{I(I+1)}$

The projection of the vector on the z axis (arbitrarily chosen) takes on discretized values according to $m$, where

$m=-I, -I+1, -I+2, ..., +I$

The angular momentum along the z-axis is then

$I_z=m\hbar$

Pictorially, this is represented in the figure below for three values of $I$. The quantum numbers of the nucleus are summarized below.

Interaction | Symbol | Quantum Numbers
Nuclear Spin Angular Momentum | I | 0 to 9/2 in steps of 1/2
Spin Angular Momentum Magnitude | L | $L=\hbar \sqrt{I(I+1)}$
Spin Angular Momentum Direction | m | $m=-I, -I+1, -I+2, ..., +I$

Magnetic Moment of a Nucleus

We have now established that the nucleus has a spin which can be denoted using specific quantum numbers. As with any circulating charge, a spinning nucleus generates a magnetic field. The magnetic moment $\mu$ is related to the angular momentum of the nucleus by

$\mu=\gamma I$

where $\gamma$ is the gyromagnetic ratio, a proportionality constant unique to each nucleus. The table below shows the spin properties of some commonly studied nuclei.

Nucleus | Unpaired Protons | Unpaired Neutrons | Net Spin
1H | 1 | 0 | 1/2
2H | 1 | 1 | 1
31P | 1 | 0 | 1/2
23Na | 1 | 2 | 3/2
14N | 1 | 1 | 1
13C | 0 | 1 | 1/2
19F | 1 | 0 | 1/2

The net, or bulk, magnetization of the sample, $M$, is the sum of the individual magnetic moments:

$M=\sum{\mu}$

Since these magnetic moments are vectors and are randomly oriented, the bulk magnetization arising from the nuclei is zero. (There may also be unpaired electrons, which give rise to paramagnetic, antiferromagnetic, or ferromagnetic properties.) However, if an external magnetic field is applied, the nuclei align either with or against the field, resulting in a non-zero bulk magnetization.

Nuclear Energy Levels in a Magnetic Field

We now understand why the nucleus has a magnetic moment associated with it. Now we are getting to the crux of NMR: the use of an external magnetic field.
Initially, the nucleus is in its ground state, which is $(2I+1)$-fold degenerate. The application of a magnetic field splits these degenerate $2I+1$ nuclear energy levels. The energy of a particular level is

$E=-\mu\cdot B_0$

where $B_0$ is the external magnetic field. Taking the field along the z-direction and substituting $\mu_z = \gamma I_z = \gamma m\hbar$,

$E=-m\hbar\gamma B_0$

The magnitude of the splitting therefore depends on the size of the magnetic field; in most labs this field is somewhere between 1 and 21 T. Those spins which align with the magnetic field are lower in energy, while those that align against the field are higher in energy.

Energy Level Spin Distribution

In the absence of a magnetic field the magnetic dipoles are oriented randomly and there is no net magnetization (the vector sum of $\mu$ is zero). Application of an external magnetic field, as shown above, creates distinct energy levels based on the spin angular momentum of the nucleus. Each energy level is populated by the spins which have the same angular momentum. To illustrate this, consider an I=1/2 system. There are two energy levels, m=+1/2 and m=-1/2, populated by spins aligned with or against the external magnetic field, respectively. The energy separation between these states is relatively small, and the energy from thermal collisions is sufficient to place many nuclei into the higher-energy spin state. The number of nuclei in each spin state can be described by the Boltzmann distribution:

$\large \frac{N_{upper}}{N_{lower}}=e^{\frac{-\Delta{E}}{kT}} = e^{\frac{-h\nu}{kT}}$

where $N_{upper}$ and $N_{lower}$ represent the populations of nuclei in the upper and lower energy states, $\Delta E$ is the energy difference between the spin states, k is the Boltzmann constant (1.3805x10^-23 J/K), and T is the temperature in K.
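The Boltzmann expression above can be evaluated numerically. This sketch uses 1H spins at 9.4 T and room temperature; the gyromagnetic ratio is taken from standard tables (an assumption, not a value quoted in this page):

```python
import math

# Boltzmann population ratio for 1H at 9.4 T and room temperature.
# gamma for 1H comes from standard tables (illustrative, not quoted here).
h_bar = 1.054571817e-34    # J s
k_B = 1.380649e-23         # J/K
gamma_1H = 2.675e8         # rad s^-1 T^-1
B0, T = 9.4, 298.0         # field (tesla) and temperature (kelvin)

delta_E = gamma_1H * h_bar * B0          # Zeeman gap between m = +/- 1/2
ratio = math.exp(-delta_E / (k_B * T))   # N_upper / N_lower

print(ratio)   # barely below 1: the population excess is tiny
```

The ratio comes out extremely close to 1, which is why NMR is an inherently insensitive technique.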
At room temperature, the number of spins in the lower energy level, $N_{lower}$, slightly outnumbers the number in the upper level, $N_{upper}$.

Selection Rules

The selection rule in NMR is

$\Delta m=\pm1$

For a nucleus with I=1/2 there is only one allowed transition. For nuclei with $I>1/2$, multiple transitions can take place. Consider the case of I=3/2; the following transitions can occur:

$-\dfrac{3}{2}\leftrightarrow-\dfrac{1}{2}$

$-\dfrac{1}{2}\leftrightarrow\dfrac{1}{2}$

$\dfrac{1}{2}\leftrightarrow\dfrac{3}{2}$

as illustrated below. The $-\dfrac{3}{2}\leftrightarrow-\dfrac{1}{2}$ and $\dfrac{1}{2}\leftrightarrow\dfrac{3}{2}$ transitions are known as satellite transitions, while the $-\dfrac{1}{2}\leftrightarrow\dfrac{1}{2}$ transition is known as the central transition. The central transition is the one primarily observed in an NMR experiment. For more information about satellite transitions, please look at quadrupole interactions.

The NMR Experiment

During the NMR experiment several things happen to the nucleus: the bulk magnetization is rotated from the z axis into the xy plane and then allowed to relax back along the z-axis. A full theoretical description for a single atom was developed by Bloch into a set of equations known as the Bloch equations. Depending on the NMR experiment chosen, a variety of information can be gleaned by studying different interactions.

Contributors

• Derrick Kaseman (UC Davis), Sureyya Ozcan, Siyi Du

NMR - Theory

In 1946 Felix Bloch, co-discoverer of NMR, proposed a set of equations to describe the time dependence of the net magnetization during the course of the NMR experiment. These equations are known as the Bloch equations and give insight into many processes in NMR. The Bloch equations follow first-order kinetics, and the derivations are first-order differential equations.

Bloch Equations in the Lab Frame

It has been shown that nuclei placed in a magnetic field precess with a characteristic Larmor frequency, $\omega$.
$\omega_0=-\gamma (1-\sigma) B_0$

which can be made time-dependent by considering a time-dependent magnetic field $B(t)$. We have also seen that there is a bulk magnetization, based on Boltzmann statistics, which lies along the direction of the applied magnetic field, $B_0$. This bulk magnetization can then be thought of as precessing at the Larmor frequency. Assuming that $B_0$ is along the z-axis, we can describe the time dependence of the magnetization $M$ by

$\dfrac{dM}{dt}=\omega(t) \times M(t)-[R][M(t)-M_{eq}]$

where $R$ is the relaxation matrix and $M_{eq}$ is the equilibrium magnetization along the z-axis. The relaxation matrix is given by

$R= \begin{pmatrix} \dfrac{1}{T_2} & 0 & 0 \\ 0 & \dfrac{1}{T_2} & 0 \\ 0 & 0 & \dfrac{1}{T_1} \end{pmatrix}$

The $[R][M(t)-M_{eq}]$ term describes the decay of the magnetization in the x-y plane and the growth of the equilibrium magnetization along z, due to $T_2$ and $T_1$ effects, respectively. Expansion of the equation yields the magnetization in each direction:

$\begin{pmatrix} \dfrac{dM_x(t)}{dt} \\ \dfrac{dM_y(t)}{dt} \\ \dfrac{dM_z(t)}{dt} \end{pmatrix}= \begin{pmatrix} [\omega_y(t)M_z(t)-\omega_z(t)M_y(t)]- \dfrac{M_x(t)}{T_2} \\ [\omega_z(t)M_x(t)-\omega_x(t)M_z(t)]- \dfrac{M_y(t)}{T_2} \\ [\omega_x(t)M_y(t)-\omega_y(t)M_x(t)]-\dfrac{M_z(t)-M_{eq}}{T_1} \end{pmatrix}$

Assuming a static magnetic field, these equations simplify to

$\begin{pmatrix} \dfrac{dM_x(t)}{dt} \\ \dfrac{dM_y(t)}{dt} \\ \dfrac{dM_z(t)}{dt} \end{pmatrix} = \begin{pmatrix} -\omega_0 M_y(t)-\dfrac{M_x(t)}{T_2} \\ \omega_0 M_x(t)-\dfrac{M_y(t)}{T_2} \\ -\dfrac{M_z(t)-M_{eq}}{T_1} \end{pmatrix}$

As the magnetization precesses about the z axis while it recovers from the x-y plane, the time dependence follows an oscillatory pattern which can be described using trigonometric functions.
Therefore, at a given time the magnetization in each direction is

$\begin{pmatrix} M_x(t) \\ M_y(t) \\ M_z(t) \end{pmatrix} = \begin{pmatrix} [M_x(0)\cos \omega_0 t - M_y(0)\sin \omega_0 t]e^{-t/T_2} \\ [M_y(0)\cos \omega_0 t + M_x(0)\sin \omega_0 t]e^{-t/T_2} \\ M_z(0)e^{-t/T_1} + M_{eq}(1-e^{-t/T_1})\end{pmatrix}$

where $M_{x,y,z}(0)$ is the magnetization at t=0.

What Do The Equations Describe?

From this derivation we are able to describe the motion of the bulk magnetization of the nuclei after a pulse as a function of time, accounting for relaxation.

Bloch Equations in the Rotating Frame

One can imagine that calculating the effect of a $\dfrac{\pi}{2}$ pulse, in which the B field becomes time-dependent, would be challenging using the Bloch equations in the lab frame. Therefore we now shift our discussion to the rotating frame. From here on, a $^*$ denotes the projection of the magnetization onto the rotating frame. The frequency of rotation, $\omega_{rot}$, is taken equal to the spins' precession frequency, which is the Larmor frequency.
From this we can describe the change of the magnetization as

$\dfrac{d^*M}{dt}=\dfrac{dM}{dt}-\omega_{rot}\times M$

We know that $\dfrac{dM}{dt}$ is given by the Bloch equations derived for the lab frame, that is,

$\dfrac{dM}{dt}=\omega(t) \times M(t)-[R][M(t)-M_{eq}]$

Substituting this into our equation, we obtain

$\dfrac{d^*M}{dt}=\omega(t)\times M-\omega_{rot}\times M-[R][M-M_{eq}]$

which simplifies to

$\dfrac{d^*M}{dt}=\omega_{eff}(t)\times M-[R][M-M_{eq}]$

where

$\omega_{eff}(t)=\omega - \omega_{rot}$

Noting that the lab frame and the rotating frame share a common z-axis, we can write the Bloch equations in the rotating frame:

$\begin{pmatrix} \dfrac{dM^*_x(t)}{dt} \\ \dfrac{dM^*_y(t)}{dt} \\ \dfrac{dM^*_z(t)}{dt} \end{pmatrix}= \begin{pmatrix} \gamma [M^*_y(t)B^*_z(t)-M^*_z(t)B^*_y(t)]-\dfrac{M^*_x(t)}{T_2} \\ \gamma [M^*_z(t)B^*_x(t)-M^*_x(t)B^*_z(t)]-\dfrac{M^*_y(t)}{T_2} \\ \gamma [M^*_x(t)B^*_y(t)-M^*_y(t)B^*_x(t)]-\dfrac{M^*_z(t)-M^*_{eq}}{T_1} \end{pmatrix}$

which can be rewritten as

$\begin{pmatrix} \dfrac{dM^*_x(t)}{dt} \\ \dfrac{dM^*_y(t)}{dt} \\ \dfrac{dM^*_z(t)}{dt} \end{pmatrix}= \begin{pmatrix} -\Omega M^*_y(t)-\gamma M^*_z(t)B^*_y(t)-\dfrac{M^*_x(t)}{T_2} \\ \Omega M^*_x(t)+\gamma M^*_z(t)B^*_x(t)-\dfrac{M^*_y(t)}{T_2} \\ \gamma [M^*_x(t)B^*_y(t)-M^*_y(t)B^*_x(t)]-\dfrac{M^*_z(t)-M^*_{eq}}{T_1} \end{pmatrix}$

where $\Omega=\omega_0-\omega_{rot}$.

Effects of RF Pulses

During the NMR experiment we can choose along which direction to apply the magnetic field of the pulse. For simplicity, assume we apply the pulse along the +x axis. The nutation frequency is then defined as

$\omega_1=|\gamma B_1(t)|$

where $B_1$ is the magnetic field applied along the x-axis.
Applying this to the Bloch equations in the rotating frame, a pulse of phase $\phi$ (with 0 = +x axis and 270° = -y axis) gives

$\begin{pmatrix} \dfrac{dM^*_x(t)}{dt} \\ \dfrac{dM^*_y(t)}{dt} \\ \dfrac{dM^*_z(t)}{dt} \end{pmatrix}= \begin{pmatrix} -\omega_1 M^*_z\sin\phi - \dfrac{M^*_x}{T_2} \\ \omega_1 M^*_z\cos\phi - \dfrac{M^*_y}{T_2} \\ \omega_1 M^*_x\sin\phi - \omega_1 M^*_y\cos\phi-\dfrac{M^*_z-M^*_{eq}}{T_1} \end{pmatrix}$

The RF pulse is much shorter than $T_1$ or $T_2$, so during the pulse the relaxation terms can be neglected. The equations then simplify to

$\begin{pmatrix} \dfrac{dM^*_x(t)}{dt} \\ \dfrac{dM^*_y(t)}{dt} \\ \dfrac{dM^*_z(t)}{dt} \end{pmatrix}= \begin{pmatrix} -\omega_1 M^*_z\sin\phi \\ \omega_1 M^*_z\cos\phi \\ \omega_1 M^*_x\sin\phi - \omega_1 M^*_y\cos\phi \end{pmatrix}$

Thus, application of a pulse of phase zero (along +x) gives

$\begin{pmatrix} \dfrac{dM^*_x(t)}{dt} \\ \dfrac{dM^*_y(t)}{dt} \\ \dfrac{dM^*_z(t)}{dt} \end{pmatrix}= \begin{pmatrix} 0 \\ \gamma B_1(t) M^*_z \\ -\gamma B_1(t) M^*_y\end{pmatrix}$

or, more generally, as a function of time,

$\begin{pmatrix} M^*_x(t) \\ M^*_y(t) \\ M^*_z(t) \end{pmatrix}= \begin{pmatrix} M^*_x \\ M^*_y\cos\omega_1 t + M^*_z\sin\omega_1 t \\ M^*_z\cos\omega_1 t - M^*_y\sin\omega_1 t \end{pmatrix}$

We can also describe the effect of a pulse of any phase on a Cartesian basis:

$\begin{pmatrix} M^*_x(t) \\ M^*_y(t) \\ M^*_z(t) \end{pmatrix}=\begin{pmatrix}\dfrac{1}{2}M^*_x(1+\cos\omega_1 t)+\dfrac{1}{2}(M^*_x \cos 2\phi +M^*_y\sin 2\phi)(1-\cos\omega_1 t)-M^*_z\sin\phi \sin\omega_1 t \\ \dfrac{1}{2}M^*_y(1+\cos\omega_1 t)-\dfrac{1}{2}(M^*_y \cos 2\phi -M^*_x\sin 2\phi)(1-\cos\omega_1 t)+M^*_z\cos\phi \sin\omega_1 t \\ M^*_z\cos\omega_1 t+(M^*_x\sin\phi -M^*_y\cos\phi)\sin\omega_1 t \end{pmatrix}$

Steady State Approximation

If the motion of the magnetization is slow, then

$\dfrac{dM^*_x}{dt}=\dfrac{dM^*_y}{dt}=\dfrac{dM^*_z}{dt}=0$

We can then solve for the magnetization in each direction:

$M^*_x=-\dfrac{2\pi \gamma B_1 M_0 T^2_2 \Omega}{1+4\pi ^2T^2_2 \Omega ^2+\gamma ^2 B_1^2T_1T_2}$

$M^*_y=-\dfrac{\gamma B_1 M_0 T_2}{1+4\pi ^2T^2_2 \Omega ^2+\gamma ^2 B_1^2T_1T_2}$

$M^*_z=\dfrac{M_0[1+4\pi ^2T^2_2\Omega ^2]}{1+4\pi ^2T^2_2 \Omega ^2+\gamma ^2 B_1^2T_1T_2}$

Free Precession

In the absence of any field applied in the x-y plane, the magnetization evolves only under the resonance offset and the relaxation effects of $T_1$ and $T_2$. The Bloch equations in the rotating frame then reduce to

$\begin{pmatrix} \dfrac{dM^*_x(t)}{dt} \\ \dfrac{dM^*_y(t)}{dt} \\ \dfrac{dM^*_z(t)}{dt} \end{pmatrix}= \begin{pmatrix} -\Omega M^*_y(t)-\dfrac{M^*_x(t)}{T_2} \\ \Omega M^*_x(t)-\dfrac{M^*_y(t)}{T_2} \\ -\dfrac{M^*_z(t)-M^*_{eq}}{T_1} \end{pmatrix}$

which can be solved to give

$\begin{pmatrix} M^*_x(t) \\ M^*_y(t) \\ M^*_z(t) \end{pmatrix}= \begin{pmatrix} [M^*_x(0) \cos\Omega t - M^*_y(0) \sin\Omega t] e^{-t/T_2} \\ [M^*_y(0) \cos\Omega t + M^*_x(0) \sin\Omega t]e^{-t/T_2} \\ M^*_z(0)e^{-t/T_1} + M_{eq}(1-e^{-t/T_1})\end{pmatrix}$

Bloch Equations in the Spherical Reference Frame

The effect of an RF pulse on the bulk magnetization is also an important concept in a spherical basis set.
Assuming an arbitrary pulse phase $\phi$, the effect of the pulse is

$M^{+1*}=\dfrac{M^{+1*}(1+\cos \omega_1 t)}{2}-\dfrac{M^{0*}e^{i\phi}(i\sin\omega_1 t)}{\sqrt{2}}+\dfrac{M^{-1*}e^{2i\phi}(1-\cos\omega_1 t)}{2}$

$M^{0*}=\dfrac{M^{+1*}e^{-i\phi}(i\sin\omega_1 t)}{\sqrt{2}}+M^{0*}\cos\omega_1 t-\dfrac{M^{-1*}e^{i\phi}(i\sin\omega_1 t)}{\sqrt{2}}$

$M^{-1*}=\dfrac{M^{+1*}e^{-2i\phi}(1-\cos \omega_1 t)}{2}+\dfrac{M^{0*}e^{-i\phi}(i\sin\omega_1 t)}{\sqrt{2}}+\dfrac{M^{-1*}(1+\cos\omega_1 t)}{2}$

or, more compactly,

$M^{p_i*}=\sum_{p_f=-1}^{1} x_{p_i,p_f}(t)\, M^{p_f*}\,e^{-i \Delta p \phi}$

where $x_{p_i,p_f}(t)$ is the efficiency of the transfer of magnetization from $p_i$ to $p_f$ and $\Delta p = p_f-p_i$. Including the effects of relaxation, the evolution of the magnetization is

$M^{p*}=y(t)\,M^{p*}\, e^{ip \Omega t}$

where $y(t)$ describes the decay of the coherence.
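The free-precession behaviour discussed in this section can be checked numerically. This sketch integrates the rotating-frame equations with simple Euler steps, assuming the common sign convention dMx/dt = -Ω·My - Mx/T2, dMy/dt = +Ω·Mx - My/T2, dMz/dt = -(Mz - Meq)/T1; all parameter values are arbitrary illustrative choices:

```python
import math

# Euler integration of rotating-frame free precession with relaxation,
# compared against the closed-form damped-oscillation solution.
Omega = 2 * math.pi * 50.0   # resonance offset, rad/s
T1, T2, Meq = 0.5, 0.1, 1.0
Mx, My, Mz = 1.0, 0.0, 0.0   # magnetization just after a 90-degree pulse
dt, t_end = 1e-6, 0.05
steps = round(t_end / dt)

for _ in range(steps):
    dMx = -Omega * My - Mx / T2
    dMy = Omega * Mx - My / T2
    dMz = -(Mz - Meq) / T1
    Mx, My, Mz = Mx + dt * dMx, My + dt * dMy, Mz + dt * dMz

# Closed-form solution for the same initial conditions
decay = math.exp(-t_end / T2)
Mx_exact = math.cos(Omega * t_end) * decay
My_exact = math.sin(Omega * t_end) * decay
Mz_exact = Meq * (1.0 - math.exp(-t_end / T1))

print(Mx, Mx_exact)   # transverse components precess while decaying with T2
print(Mz, Mz_exact)   # longitudinal component recovers with T1
```

The numerical and analytic values agree to a few parts per thousand, the residual being Euler discretization error.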
When placed in a magnetic field, charged particles precess about the field. In NMR, the charged nucleus exhibits precessional motion at a characteristic frequency known as the Larmor frequency. The Larmor frequency is specific to each nucleus and is measured during the NMR experiment, as it depends on the magnetic field that the nucleus experiences.

Spinning Top Analogy

It is often difficult in NMR to picture the microscopic processes that are occurring. However, precession is easily observed on the macroscopic scale in toy tops. When a top is spun, it rotates about a central axis (Figure $1$), and the angular momentum of the top ($L$) is aligned along this axis. If the top is set at an angle, the central axis moves in a circle: the top, spinning about its own axis, precesses around the direction of Earth's gravitational field.

Atomic nuclei possess intrinsic spin. The nucleus, like a top, spins along an axis, which is the direction of the angular momentum of the nucleus. The spin of the nucleus can be related to its magnetic moment through

$\mu=\gamma I$

where

• $\mu$ is the magnetic moment and
• $\gamma$ is a proportionality constant known as the gyromagnetic ratio. This constant may be positive or negative, corresponding to clockwise or counterclockwise precession, respectively.

The nuclear magnetic moment couples to the external magnetic field, which produces a torque on the nucleus and causes the precession about the field. This is analogous to the macroscopic top, where gravity couples to the mass of the top. In the absence of friction, the top would precess forever!
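To get a feel for the numbers, the precession frequencies set by these gyromagnetic ratios can be computed from the Larmor relation ω0 = γB0 developed below. The γ values here are taken from standard tables and should be treated as illustrative rather than authoritative:

```python
import math

# Larmor frequencies nu0 = gamma * B0 / (2*pi) at a 9.4 T field.
# Gyromagnetic ratios (rad s^-1 T^-1) from standard tables; illustrative.
gamma = {"1H": 267.522e6, "13C": 67.283e6, "31P": 108.394e6}
B0 = 9.4  # tesla

freqs_MHz = {n: g * B0 / (2 * math.pi) / 1e6 for n, g in gamma.items()}
for nucleus, nu0 in freqs_MHz.items():
    print(f"{nucleus}: {nu0:.1f} MHz")   # 1H lands near 400 MHz
```

This is why a 9.4 T instrument is called a "400 MHz" spectrometer: the label refers to the 1H Larmor frequency.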
The frequency of precession is known as the Larmor frequency,

$\omega_0=\gamma B_0$

(in rad s$^{-1}$; in Hz, $\nu_0=\gamma B_0/2\pi$). The effect is illustrated below:

Ensemble Effects

The net magnetization for a sample is the sum of the individual magnetic moments in the sample,

$M=\sum_i{\mu_i}$

and since $\mu=\gamma I$ for each nucleus, the magnetization can be written as

$M=\gamma J$

where $J$ is the net spin angular momentum. The torque, $T$, on the sample is the rate of change of its angular momentum,

$T=\dfrac{dJ}{dt}$

and the torque exerted by the field is

$T=M\times B$

Substituting $J=M/\gamma$, we obtain

$\dfrac{dM}{dt}=\gamma M\times{}B$

The magnitude of this cross product is $\gamma MB\sin\theta$, where $\theta$ is the angle between $M$ and $B$; the change in $M$ is always perpendicular to both $M$ and $B$, so $M$ precesses around $B$. Solving this equation of motion for a field applied only along the z direction, i.e. $B=(0, 0, B_0)$, gives precession at the Larmor frequency

$\omega_0=\gamma B_0$

Chemical Shift

When an atomic nucleus is placed in a magnetic field, the ground state splits into different energy levels in proportion to the strength of the magnetic field. This effect is known as Zeeman splitting. While the Zeeman interaction is useful for identifying different types of nuclei placed in magnetic fields, structural and dynamic information may be obtained by considering other magnetic and electronic interactions coupling with the nucleus. These interactions are perturbations on the Zeeman interaction. The full NMR Hamiltonian may therefore be expressed as

$\hat{H}=\hat{H}_{Zeeman}+\hat{H}_{J}+\hat{H}_{CS}+\hat{H}_{DD}+\hat{H}_Q$

where $\hat{H}_{Zeeman}$ is the Zeeman interaction, $\hat{H}_J$ is the J coupling, $\hat{H}_{CS}$ is the chemical shift coupling, $\hat{H}_{DD}$ is the dipolar coupling, and $\hat{H}_Q$ is the quadrupolar coupling. The relative magnitudes of these interactions are shown in the table below.
The Zeeman interaction is the largest, followed by the quadrupolar interaction, which is on the order of MHz. The chemical shift and the dipolar coupling are on the order of kHz, while the scalar (J) coupling is the smallest, at only tens of Hz. Clearly, some of these interactions are more pronounced than others.

Table 1. Magnitude of different NMR interactions

Interaction | Magnitude (Hz)
Zeeman | $10^8$
Quadrupolar | $10^6$
Chemical Shift | $10^3$
Dipole | $10^3$
J | 10

In the liquid state, the dipolar coupling and the anisotropic contribution to the chemical shift are averaged away by the molecular reorientation occurring in liquids. The averaging of these interactions gives the characteristically narrow isotropic peaks. Additionally, liquid-state NMR primarily looks at spin-1/2 nuclei ($^{13}C$, $^1H$), which eliminates any quadrupole interactions; only the J coupling and the isotropic part of the chemical shift remain. In the solid state, molecular reorientation does not occur, and solids may have a variety of bond lengths and angles at a given chemical site. These factors broaden the NMR spectrum, with the broadest peaks over 1 MHz wide!

NMR Interactions

The chemical shift in NMR is extremely important, as it gives vital information about the local structure surrounding the nucleus of interest. For a majority of scientists, the chemical shift is used exclusively to determine structure, especially in organic systems. Additional information may be gained by examining the anisotropy of the chemical shift. This section is devoted to the chemical shift from a mathematical standpoint, including a full treatment of the chemical shift tensor and its relation to the NMR lineshape.

Shielding and Chemical Shift

As electrons orbit the nucleus, they slightly alter the magnetic field that the nucleus experiences, which slightly changes the differences between the energy levels that give the resulting spectrum. These changes are measured from a chosen reference and are therefore relative.
The resulting change in the energy levels is on the order of Hz, while the Zeeman interaction is on the order of MHz, so we can develop a scale based on the relative changes in the energy levels. Beginning with the equation for the Zeeman splitting,

$\Delta E=-\gamma \hbar B_0$

the effect of shielding, $\sigma$, results in

$\Delta E=-\gamma \hbar (1-\sigma) B_0$

in which the quantity $(1-\sigma)B_0$ is known as the effective field experienced by the nucleus, $B_{eff}$. The change in energy is related to a frequency using

$E=h\nu$

The value of the shielding is measured against the resonant frequency of some reference sample, such as tetramethylsilane (TMS), with shielding $\sigma_{ref}$. This can cause confusion, as the exact resonance frequency depends on the strength of the magnetic field. To make the scale independent of the magnetic field, it is customary to divide by the resonant frequency of the given nucleus, which gives values on the order of $10^{-6}$, or ppm. This value is known as the chemical shift, $\delta$, given by

$\delta=10^6(\sigma_{ref}-\sigma_{sample})$

Chemical Shift Tensor

Consider a system consisting of a single spin-1/2 nucleus, I. $\hat{H}_{CS}$ may then be represented as

$\hat{H}_{cs}=-\dfrac{\gamma h \hat{I} \sigma B_0}{2\pi}$

where $\gamma$ is the gyromagnetic ratio, h is Planck's constant, $\hat{I}$ is the spin operator, and $\sigma$ is the chemical shift tensor. This tensor may be represented as

$\sigma= \begin{pmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{pmatrix}$

The principal axis system (PAS), denoted by the axes $X^{PAS}$, $Y^{PAS}$, $Z^{PAS}$, is represented in the figure below. The PAS gives the x, y, and z coordinates with respect to the nucleus, whereas X, Y, and Z of the rotating frame are defined with $B_0$ along the +Z direction.
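The field-independence of the ppm scale defined above can be illustrated with a short sketch, using the equivalent frequency form of the shift definition (the function name is hypothetical):

```python
# The ppm scale: a frequency offset from the reference is divided by the
# reference frequency, making the shift independent of the magnet.
def chemical_shift_ppm(nu_sample_hz, nu_ref_hz):
    """delta = 10^6 * (nu_sample - nu_ref) / nu_ref"""
    return 1e6 * (nu_sample_hz - nu_ref_hz) / nu_ref_hz

# The same resonance measured on two different magnets: the Hz offset
# doubles with the field, but the ppm value does not change.
print(chemical_shift_ppm(400.0e6 + 2000.0, 400.0e6))   # "400 MHz" magnet
print(chemical_shift_ppm(800.0e6 + 4000.0, 800.0e6))   # "800 MHz" magnet
```

Both calls give the same shift, which is exactly why chemical shifts are reported in ppm rather than Hz.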
The chemical shift tensor describes the electronic environment surrounding the nucleus through its principal components $\sigma_{xx}$, $\sigma_{yy}$, $\sigma_{zz}$. In a solid, the sample is not a single spin oriented along $B_0$; rather, the sample contains orientations sampling all directions in all 3 dimensions. The PAS of each magnetic moment in the system can be related to the Z axis by an angle $\theta$, with its position in the x-y plane given by the angle $\phi$. The magnetic field expressed in the PAS of the nucleus is $B_0^{PAS}=B_0(\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)$. The chemical shift frequency, $\omega_{cs}$, is then

$\omega_{cs}=-\omega_0(\sigma_{xx} \sin^2\theta \cos^2\phi+\sigma_{yy} \sin^2\theta \sin^2\phi+\sigma_{zz} \cos^2\theta)$

which reduces to

$\omega_{cs}=-\omega_0\sigma_{iso}-\dfrac{1}{2} \omega_0 \Delta[3\cos^2\theta-1+\eta \sin^2\theta \cos 2\phi]$

Here $-\omega_0 \sigma_{iso}$ is the isotropic frequency ($\nu_{iso}=\omega_{iso}/2\pi$) routinely observed in liquid spectra.

Chemical Shift Anisotropy (CSA)

If a polycrystalline sample is placed in a spectrometer, a lineshape similar to these may be observed:

This is due to the asymmetry of the local electronic environment surrounding the nucleus. If the electron cloud were symmetric, then $\sigma_{xx}=\sigma_{yy}=\sigma_{zz}$, and $\Delta$ and $\eta$ would be zero, leaving only the isotropic peak. Note how broad this pattern is compared with liquid spectra! CSA is not a large factor in liquids because rapid molecular tumbling averages the CSA away on the experimental timescale. The above patterns can be deconstructed into the principal components of the chemical shift tensor, as indicated in the figure. This is especially useful as NMR directly probes the immediate local electronic structure surrounding the nucleus. Take for example a compound which has $\sigma_{xx}=\sigma_{yy}$; such a tensor is said to be axially symmetric and $\eta=0$.
It is immediately obvious that there is symmetry about the nucleus, which is reflected in the static lineshape (black line in the above figure). CSA Conventions There are three major conventions used to describe the CSA tensor. They are outlined below; one should always keep in mind the difference between the shielding and chemical shift scales. IUPAC Convention The IUPAC convention describes the CSA tensor by three values, called the principal components of the CSA tensor, $\delta_{11}, \delta_{22}, \delta_{33}$. These values are ordered such that $\delta_{11} \geqslant \delta_{22} \geqslant \delta_{33}$. The average value of these is then the isotropic chemical shift, $\delta_{iso}=\dfrac{\delta_{11}+\delta_{22}+\delta_{33}}{3}$ Herzfeld-Berger Convention The Herzfeld-Berger convention uses the IUPAC definition of the principal components, but they are represented using a different notation, the span, $\Omega$, and the skew, $\kappa$. They are defined as follows: $\delta_{iso}=\dfrac{\delta_{11}+\delta_{22}+\delta_{33}}{3}$ $\Omega=\delta_{11}-\delta_{33}$ $\kappa=\dfrac{3(\delta_{22}-\delta_{iso})}{\Omega}$ $\Omega$ will always be greater than or equal to 0, and $\kappa$ ranges from -1 to 1. Haeberlen Convention The Haeberlen convention uses different combinations of the principal components to describe the CSA. They are $|\delta_{zz}-\delta_{iso}| \geqslant |\delta_{xx}-\delta_{iso}| \geqslant |\delta_{yy}-\delta_{iso}|$ $\delta_{iso}=\dfrac{\delta_{11}+\delta_{22}+\delta_{33}}{3}$ $\Delta=\delta_{zz}-\delta_{iso}$ $\delta=\dfrac{3\Delta}{2}$ $\eta=\dfrac{\delta_{yy}-\delta_{xx}}{\delta_{zz}-\delta_{iso}}$ Conversion Between Conventions Converting between the standard and Haeberlen conventions is often needed to compare values of the CSA. There are two cases: $\Delta>0$ and $\Delta<0$.
When $\Delta>0$: $\delta_{11}=\delta_{iso}+\Delta$ $\delta_{22}=\delta_{iso}-\dfrac{\Delta(1-\eta)}{2}$ $\delta_{33}=\delta_{iso}-\dfrac{\Delta(1+\eta)}{2}$ When $\Delta<0$: $\delta_{33}=\delta_{iso}+\Delta$ $\delta_{22}=\delta_{iso}-\dfrac{\Delta(1-\eta)}{2}$ $\delta_{11}=\delta_{iso}-\dfrac{\Delta(1+\eta)}{2}$ Dynamics and CSA The CSA is very sensitive to any type of motion occurring in the system. A great example of this is a liquid. Liquids exhibit only a narrow line at the isotropic chemical shift, indicating that any CSA is averaged. The rotational correlation time of a liquid is on the order of picoseconds, corresponding to rates many orders of magnitude faster than the width of the CSA pattern. For solids the case is more interesting, as increasing the temperature can give rise to molecular motion. There are two types of motion: isotropic tumbling and rotation about a fixed axis. We will consider the implications for the CSA lineshape in each case. Isotropic Tumbling Isotropic rotation, or isotropic tumbling, occurs when the molecule rotates in all possible directions. At one instant a molecule's spin may be aligned with the magnetic field, and at the next instant it may be oriented perpendicular to the external magnetic field. Correspondingly, the process of isotropic tumbling closely mirrors chemical exchange, in which each orientation of the spin may be represented as a single peak. As the orientation of the spin changes, the peak position changes. Thus, once the spins change their orientation faster than the width of the CSA, the peaks collapse to a single average position, which is the isotropic chemical shift. Mathematically this can be described as follows. Assume there is a single crystal with a single CSA. The rotational jump is random and occurs with a single rate constant, $\kappa$. The jump is from site $\Omega \rightarrow \Omega'$, and the spin spends an average time $\tau$ at any given site.
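A minimal sketch of the conversion formulas above, going from the Haeberlen parameters $(\delta_{iso}, \Delta, \eta)$ to the ordered principal components and back. The back conversion assigns $\delta_{zz}, \delta_{xx}, \delta_{yy}$ by distance from $\delta_{iso}$, per the Haeberlen ordering; the example values are arbitrary.

```python
# Haeberlen <-> principal-component conversions, following the formulas in the text.

def haeberlen_to_principal(d_iso, Delta, eta):
    """Return (d11, d22, d33) ordered so that d11 >= d22 >= d33."""
    a = d_iso + Delta                      # component farthest from d_iso
    b = d_iso - Delta * (1 - eta) / 2
    c = d_iso - Delta * (1 + eta) / 2
    return tuple(sorted((a, b, c), reverse=True))

def principal_to_haeberlen(d11, d22, d33):
    d_iso = (d11 + d22 + d33) / 3
    # Haeberlen ordering: zz farthest from d_iso, then xx, then yy.
    dzz, dxx, dyy = sorted((d11, d22, d33), key=lambda d: abs(d - d_iso), reverse=True)
    Delta = dzz - d_iso
    eta = (dyy - dxx) / Delta
    return d_iso, Delta, eta

p = haeberlen_to_principal(10.0, 50.0, 0.4)
print(tuple(round(x, 6) for x in p))                          # (60.0, -5.0, -25.0)
print(tuple(round(x, 6) for x in principal_to_haeberlen(*p))) # (10.0, 50.0, 0.4)
```

Running the round trip for a negative anisotropy (e.g. $\Delta=-50$) exercises the second case, where $\delta_{33}$ is the component farthest from $\delta_{iso}$.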
We also assume there is no correlation between $\Omega$ and $\Omega'$. Assuming a Markov process for the time dependence, $\kappa^{-1}=\tau$. The probability of finding a spin at orientation $\Omega$ after a time t, starting from an initial orientation $\Omega_0$, evolves as $\dfrac{d}{dt}P(\Omega_0|\Omega,t)=\int \pi(\Omega', \Omega) P(\Omega_0|\Omega',t)\,d\Omega'$ where the initial condition is $P(\Omega_0|\Omega,0) = \delta(\Omega-\Omega_0)$, $\delta$ is the Dirac delta function, and $\pi(\Omega', \Omega)$ is the transition kernel that carries the history of $\Omega$. As expected, as the time approaches zero the probability approaches the initial condition. The kernel is the transition probability per unit time, $\pi(\Omega', \Omega)=\left[\dfrac{d}{dt}P(\Omega'|\Omega,t)\right]_{t=0}$ For a short time t, the probability that $\Omega'$ rotates to $\Omega$ is $\kappa t$, so $P(\Omega'|\Omega,t)= (1-\kappa t)\,\delta(\Omega'-\Omega) + W(\Omega)\kappa t$ where $W(\Omega)$ is the probability of finding the molecule in orientation $\Omega$. It can then be shown that $\pi(\Omega', \Omega)=\kappa[W(\Omega)-\delta(\Omega'-\Omega)]$ and for a finite number, k, of sites $\pi(\Omega_m, \Omega_n)=\kappa[W(\Omega_n)-\delta_{mn}]$ The NMR spectrum is the Fourier transform of the FID, G(t), given by $G(t)=\dfrac{1}{N} \int_{\Omega} d\Omega\, G(t,\Omega)$ where $G(t,\Omega)=\sum_k Q_k$ for k sites. Assuming a line is generated for each site k at $\omega_k=\omega(\Omega_k)$, the equation of motion is $\dfrac{d}{dt}Q_k=i\omega(\Omega_k) Q_k+\sum_j \pi(\Omega_j, \Omega_k) Q_j$ with $Q_k=Q(t,\Omega_k)$. The equation of motion can be expressed in terms of a k-dimensional vector $\mathbf{K}$ as $\dfrac{d}{dt}\mathbf{K}=(i\omega+\pi)\mathbf{K}$ where $\pi$ is the previously defined jump matrix and $\omega$ is the diagonal matrix with elements $\omega_{kj}=\delta_{kj}\omega_{j}$. This can be solved to give $\mathbf{K}(t)=\mathbf{K}(0)\exp[(i\omega+\pi)t]$
$\mathbf{K}(0)=\mathbf{W}=(W(\Omega_1),W(\Omega_2),W(\Omega_3),\ldots,W(\Omega_n))$ Then $G(t,\Omega)=\mathbf{W} \exp[(i\omega+\pi)t]$ and the Fourier-transformed spectrum is $I(\omega,\Omega)=\mathrm{Re}\{\mathbf{W}A^{-1}\}$ with $A=i(\boldsymbol{\omega}-\omega E)+\pi$ where E is the identity matrix and $\omega$ the observation frequency. Rotation about an axis In contrast to isotropic tumbling, a molecule may rotate only about a single fixed axis. Once the rotation is fast enough, this changes the CSA to a uniaxial pattern. Depending on $\Delta$ and $\eta$, the averaged pattern may even switch the sign of $\Delta$! Let us explore the mathematics of this case. It is easiest to consider symmetry-related jumps, where a molecule rotates at rate $1/\tau_{jk}$ between symmetry-equivalent positions with frequencies $\omega_j$. The exchange between sites may then be written as $A\mathbf{g}=i\omega_1 M_0 \mathbf{1}$ where A is a coupling matrix, $g_j$ is the magnetization at site j, $\omega_1$ is the RF field strength, and $M_0$ is the thermal equilibrium magnetization. The exchange matrix has diagonal elements $A_{jj}=i(\omega-\omega_j)-\dfrac{1}{T_2}- \dfrac{1}{\tau_j}$ We can then solve for $\mathbf{g}$ and Fourier transform to obtain the intensity (an exercise left for the reader), giving the equation for the lineshape $I(\omega)=\int_\Omega g(\omega,\Omega)\,d\Omega$
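The jump formalism above can be tried out on the smallest possible case: two sites (a stand-in for two crystallite orientations) with made-up frequencies and jump rate. The evolution matrix is $A=i\,\mathrm{diag}(\omega)+\pi$, and its least-damped eigenvalue gives the position (imaginary part) and width (real part) of the motionally averaged line.

```python
import numpy as np

# Two-site exchange sketch of the jump model: A = i*diag(omega) + Pi.
delta = 2 * np.pi * 1000.0        # site frequencies are +/- delta (rad/s)
k = 1.0e6                          # jump rate (1/s), fast compared with 2*delta
W = np.array([0.5, 0.5])           # equal site populations

omega = np.array([+delta, -delta])
Pi = k * (np.tile(W, (2, 1)) - np.eye(2))   # Pi_mn = k * (W_n - delta_mn)
A = 1j * np.diag(omega) + Pi

evals = np.linalg.eigvals(A)
slow = evals[np.argmax(evals.real)]          # least-damped mode = observed line

print(slow.imag)                 # ~0 rad/s: the line sits at the average frequency
print(-slow.real, delta**2 / k)  # linewidth ~ delta^2 / k (motional narrowing)
```

In the fast limit ($\kappa \gg \Delta\omega$) the surviving line is centered at the population-weighted average frequency with a width of order $\Delta\omega^2/\kappa$, which is exactly the collapse to the isotropic shift described in the text.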
J-Coupling Introduction A J-coupling is an interaction between nuclei containing spin. J-couplings are also known as scalar couplings. This interaction is mediated through bonds, in contrast to dipolar interactions, which are mediated through space. Typically, we consider the J-coupling to be a weak interaction in comparison to the Zeeman interaction. J-couplings are typically used in combination with chemical shifts to deduce the through-bond connectivity in small molecules and proteins. While typically a liquid-state phenomenon, solid-state J-couplings are also observable. J-coupling values range from 0.1 Hz in organic compounds to kHz in transition metal complexes. The J-coupling typically decreases in magnitude as the number of bonds between the coupled nuclei increases. Furthermore, J-couplings may be either homonuclear (i.e., between hydrogens with different chemical shifts) or heteronuclear (i.e., between hydrogen and carbon). Pascal's Triangle Nuclei in different chemical environments up to nine bonds away (though 1-4 bond couplings are the ones typically readily measurable) can influence one another's effective local magnetic field by carrying spin orientation information through the bonding electrons. This effect is most prominent among chemically equivalent nuclei, giving rise to the N+1 rule for equivalent protons: a proton with N protons on contiguous carbon atoms splits into N+1 peaks, with an intensity pattern given by Pascal's triangle. The splitting pattern for A when it is coupled to a number of X nuclides follows the relation represented by Pascal's triangle. To be more specific, take AX2 as an example. AX2 represents a spin system that contains three nuclei, two of which have the same chemical shift and one of which is different (e.g., ClCH2CHCl2). Here A is the CH proton and X are the CH2 protons.
According to Table 4 and Figure 12, the CH proton is split into a 1:2:1 triplet by the two CH2 protons, while the CH2 protons give a 1:1 doublet. The spacing between peaks is defined as the coupling constant J, which describes the degree of coupling. Table 4. Pascal's triangle according to AX configurations Pascal's Triangle Construction Pascal's triangle is a graphical device used to predict the ratio of heights of lines in a split NMR peak. To construct Pascal's triangle, use the following procedure. Step 1: Draw a short, vertical line and write number one next to it. Step 2: Draw two vertical lines underneath it symmetrically. Step 3: Connect each of them to the line above using broken lines. Step 4: Each of the two lines is connected to the single line above, which carries the number one. Therefore, write number one next to each line. Step 5: Draw three vertical lines symmetrically underneath the two lines. Step 6: Connect each of them to the nearest line(s) above. Step 7: Each terminal line is connected to one line above, which carries the number one. Therefore, write number one next to each of them. The internal line is connected to two lines above, each carrying the number one. 1 + 1 = 2; write number two next to the internal line. Continue the process as far down as necessary. Strong Coupling and the Roof Effect When the high field approximation breaks down, there may be strong coupling between homonuclear spins. Strong coupling refers to the scenario in which the chemical shift difference (in Hz) is on the order of the J-coupling. As the chemical shift in Hz scales with the magnetic field, chemical shift differences are smaller at low magnetic fields, making this scenario more likely there. First we define the strong coupling parameter, $\theta$: $\tan 2 \theta=\dfrac{2 \pi J}{\omega_A-\omega_B}$ where $\omega$ is in radians per second and J is in Hz.
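The step-by-step construction above can be condensed into a few lines of code: each new row of Pascal's triangle is the previous row added to a shifted copy of itself, and row N gives the relative intensities of a multiplet split by N equivalent spin-1/2 neighbours.

```python
# Sketch: multiplet intensities from Pascal's triangle (the N+1 rule).

def pascal_row(n):
    """Relative intensities for coupling to n equivalent spin-1/2 nuclei (n+1 lines)."""
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

print(pascal_row(1))  # [1, 1]: doublet (one neighbouring proton)
print(pascal_row(2))  # [1, 2, 1]: triplet (two neighbouring protons)
print(pascal_row(3))  # [1, 3, 3, 1]: quartet
```

For the AX2 system discussed above, `pascal_row(2)` gives the A intensities and `pascal_row(1)` gives the X intensities.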
Then the line intensities of the J-coupled doublet (A1, A2) become $I_{A1} \propto 1-\sin 2 \theta$ $I_{A2} \propto 1+\sin 2 \theta$ Just as with ordinary J-couplings, having multiple J-couplings leads to a complex splitting pattern, where each intensity is modified by the angle in this way. Expansion of the J-Coupling Hamiltonian Typically, the J-coupling is given in the weak coupling limit. The weak coupling limit is defined by $\nu_I-\nu_S \gg J$, which is typically valid at high magnetic fields of several tesla. However, here we will derive the J-coupling Hamiltonian for a 2-spin system without any assumptions about the coupling limit. First we begin with the Hamiltonian containing the pertinent interactions, Zeeman and scalar: $\hat{H}=\omega_I I_z+\omega_S S_z+2\pi J\, \mathbf{I} \cdot \mathbf{S}$ The factor of 2$\pi$ in front of the J-coupling keeps all units in angular frequency, as J is measured in Hz. Remember that for a 2-spin system, each nucleus has an $\alpha$ and a $\beta$ state. Thus the potential wave functions are products of the $\alpha$ and $\beta$ states.
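The "roof effect" intensities above are easy to evaluate. The sketch below uses made-up values of J and the shift difference; note that $\tan 2\theta = 2\pi J/(\omega_A-\omega_B)$ reduces to $J/\Delta\nu$ when both are expressed in Hz.

```python
import math

# Sketch: strong-coupling (roof effect) doublet intensities, I ~ 1 -/+ sin(2*theta).

def roof_intensities(J_hz, delta_nu_hz):
    """Return (inner, outer) relative intensities of one doublet of an AB pair."""
    theta = 0.5 * math.atan2(2 * math.pi * J_hz, 2 * math.pi * delta_nu_hz)
    return 1 + math.sin(2 * theta), 1 - math.sin(2 * theta)

inner, outer = roof_intensities(10.0, 20.0)   # J comparable to the shift difference
print(round(inner / outer, 3))                # ~2.6: inner lines clearly taller

w_in, w_out = roof_intensities(10.0, 5000.0)  # weak coupling: J << delta_nu
print(round(w_in / w_out, 3))                 # ~1.0: essentially symmetric doublet
```

As the shift difference grows relative to J, $\theta \to 0$ and the familiar symmetric multiplets of the weak-coupling limit are recovered.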
In matrix form this would be represented for a 2-spin system as $\begin{bmatrix} |\alpha \alpha> & |\alpha \beta> \\ |\beta \alpha> & |\beta \beta> \end{bmatrix}$ Now the $\alpha$ and $\beta$ states also have matrix representations, $|\alpha>=\begin{bmatrix} 1\\0 \end{bmatrix}$ $|\beta>=\begin{bmatrix} 0\\1 \end{bmatrix}$ and the combined states are given by the direct product of the two vectors, so $|\alpha\alpha>=\begin{bmatrix} 1\\0 \end{bmatrix} \times \begin{bmatrix} 1\\0 \end{bmatrix}=\begin{bmatrix} 1\\0\\0\\0\end{bmatrix}$ $|\beta\beta>=\begin{bmatrix} 0\\1 \end{bmatrix} \times \begin{bmatrix} 0\\1 \end{bmatrix}=\begin{bmatrix} 0\\0\\0\\1 \end{bmatrix}$ $|\alpha\beta>=\begin{bmatrix} 1\\0 \end{bmatrix} \times \begin{bmatrix} 0\\1 \end{bmatrix}=\begin{bmatrix} 0\\1\\0\\0 \end{bmatrix}$ $|\beta\alpha>=\begin{bmatrix} 0\\1 \end{bmatrix} \times \begin{bmatrix} 1\\0 \end{bmatrix}=\begin{bmatrix} 0\\0\\1\\0 \end{bmatrix}$ Then, without further proof (at the moment), the eigenfunctions of the 2-spin system are $\psi_1=|\alpha\alpha>$ $\psi_2=\cos\theta|\alpha\beta>+\sin\theta|\beta\alpha>$ $\psi_3=\cos\theta|\beta\alpha>-\sin\theta|\alpha\beta>$ $\psi_4=|\beta\beta>$ Remember that according to Schrödinger's equation, $\hat{H} \psi= E \psi$; thus if we apply the Hamiltonian operator to each of the eigenstates, we can deduce the energy levels. Now looking at the Hamiltonian, there are I and S operators which have the following matrix representations: $I_z=S_z=\dfrac{1}{2}\begin{bmatrix} 1&0\\0&-1\end{bmatrix}$ $I_x=S_x=\dfrac{1}{2}\begin{bmatrix} 0&1\\1&0\end{bmatrix}$ $I_y=S_y=\dfrac{i}{2}\begin{bmatrix} 0&-1\\1&0\end{bmatrix}$ However, since we are considering a 2-spin system, 2x2 matrices won't cut it. Rather, we need to transform these matrices into the product basis by calculating the direct products of the wavefunctions. We recognize that we can obtain $2^N$ wavefunctions, where N is the total number of spins: $\psi_k=|m_1> \times |m_2> ...
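The product-basis construction described above maps directly onto Kronecker products. The sketch below builds the 4x4 two-spin Hamiltonian from the 2x2 operators and diagonalizes it; the frequencies and J value are made-up numbers.

```python
import numpy as np

# Sketch: H = wI*Iz + wS*Sz + 2*pi*J*(I.S) in the 4-dimensional product basis.
Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5j * np.array([[0, -1], [1, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
E2 = np.eye(2)

wI, wS, J = 2 * np.pi * 500.0, 2 * np.pi * 300.0, 10.0   # rad/s, rad/s, Hz

IzS = np.kron(Iz, E2)            # operator on spin I in the product basis
SzI = np.kron(E2, Iz)            # operator on spin S in the product basis
IdotS = (np.kron(Ix, Ix) + np.kron(Iy, Iy) + np.kron(Iz, Iz))
H = wI * IzS + wS * SzI + 2 * np.pi * J * IdotS

E = np.sort(np.linalg.eigvalsh(H))
# |alpha alpha> is an eigenstate with energy (wI + wS)/2 + pi*J/2:
print(E[-1], (wI + wS) / 2 + np.pi * J / 2)
```

The two middle eigenvalues come from diagonalizing the $|\alpha\beta\rangle, |\beta\alpha\rangle$ block, which is where the mixing angle $\theta$ of the eigenfunctions $\psi_2, \psi_3$ originates.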
\times |m_N>$ Thus for $\psi_1=|\alpha\alpha>$, applying $\hat{H}=\omega_I I_z+\omega_S S_z+2\pi J\, \mathbf{I} \cdot \mathbf{S}$ gives the energy $E_1=\dfrac{\omega_I}{2}+\dfrac{\omega_S}{2}+\dfrac{\pi J}{2}$. Limits of J-Couplings There exist certain limits in which the J-coupling can be calculated. We begin our discussion by investigating 13C-labeled methanol (13C-MeOH). Let us assume that we are in the high-temperature regime, such that the OH proton is uncoupled from the methyl spins on the experimental timescale. The J-coupling between the 3 methyl protons and the 13C nucleus is 1JHC=140.5 Hz. High Field Limit At high magnetic fields, the line width is governed by the magnetic field inhomogeneity. When a superconducting magnet is built, the field is not completely uniform over the sample volume. Therefore, there is a distribution of resonance frequencies for a given site. Even though the field inhomogeneity is typically small (parts per billion, ppb), this leads to a broader peak than a perfectly homogeneous field would give. This is the reason shimming is performed prior to data collection; shimming makes the field more homogeneous. If we assume that the inhomogeneity (Binhomo) of the external magnetic field (B0) is 25 ppb over the sample volume after shimming, then we can calculate the line width $\Delta \nu$ of a single peak: $\Delta \nu =\nu_{0} B_{inhomo}$ Assuming B0 is 11.74 T (a 500 MHz spectrometer), $\Delta \nu$ is 12.5 Hz. Of course there are geometrical considerations regarding the field inhomogeneity, such as the fraction of the sample that sits in the inhomogeneous region, but for now we assume the whole sample experiences the same spread. Thus, J-couplings smaller than about 12.5 Hz could not be resolved. Of course, this is unreasonably poor for a 500 MHz instrument with good shimming; a much more realistic value is used in the figure below. Figure x: Magnitude of J-couplings that are unresolvable as a function of magnetic field, assuming Binhomo=1.5 ppb. The regions of couplings that are resolvable are marked.
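The line-width estimate above is just a product of the resonance frequency and the fractional inhomogeneity. The sketch below evaluates it for the 500 MHz, 25 ppb example; treat the result as an order-of-magnitude estimate, since the real answer depends on the field map over the sample.

```python
# Sketch: linewidth from field inhomogeneity, Delta_nu = nu0 * (fractional inhomogeneity).

def linewidth_hz(nu0_hz, inhomo_ppb):
    """Linewidth in Hz for a resonance at nu0_hz with inhomogeneity in ppb."""
    return nu0_hz * inhomo_ppb * 1e-9

dv = linewidth_hz(500e6, 25.0)
print(dv)  # 12.5 Hz: couplings smaller than this are buried in the linewidth
```

The same function shows why well-shimmed magnets matter: at 1.5 ppb the 500 MHz linewidth drops to 0.75 Hz, and sub-Hz couplings become resolvable.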
Note that for 1H, resolving J-couplings smaller than 0.2 Hz would require a field inhomogeneity of about 0.10 ppb. Currently, the largest sustained magnetic fields are on the order of 100 T (pulsed, National High Magnetic Field Laboratory) and ~30 T (static). Therefore, the high-field limit ($B_{HF}$) can be described as $B_{HF}=J/(B_0 \Delta \nu \gamma)$ or $B_{HF}=J/(B_0^2 B_{inhomo} \gamma)$ where $\gamma$ is the gyromagnetic ratio in MHz/T and $\Delta \nu$ is in ppm. Thus, for the same field inhomogeneity, the resolvable J-couplings are smaller for low-$\gamma$ nuclei. Figure x. Resolvable J-couplings for 1H (black) and 13C (blue) for $\Delta \nu$ of 1.0 ppb (solid) and 2.0 ppb (dashed). High Field Weak Coupling Limit When the magnetic field is less than BHF, the heteronuclear J-couplings can be resolved. This leads to the well-known (2I+1) multiplet splitting. For spin I=1/2 (i.e. 1H, 13C, 31P) this leads to the well-known Pascal's triangle. However, for coupling to spins I>1/2 (i.e. 14N, 2H, 7Li, 51V) the splitting patterns become more complex. As J<<$\nu$, we are in the weak coupling limit. Low Field Strong Coupling Limit $B_{1 \rightarrow 2}^{Low Field}=\dfrac{J^2}{2 \Delta \nu (\gamma_I - \gamma_S)}$ First-order perturbation: $B_{2 \rightarrow 3}^{Low Field}=\sqrt{\dfrac{3J^2}{4 \Delta \nu (\gamma_I - \gamma_S)^2}}$ $B_{3 \rightarrow exact}^{Low Field}=\dfrac{(1+\dfrac{1}{\sqrt{2}})J}{(\gamma_I - \gamma_S)}$ Ultra-Low Field Strong Coupling Limit $B_{exact \rightarrow 2}^{Ultra-Low Field}=\dfrac{4J}{(7\gamma_I - \gamma_S)}$ Ultra-Low Field Weak Coupling Limit $B_{2 \rightarrow 1}^{Ultra-Low Field}=\sqrt{\dfrac{2 \Delta \nu J}{ (\gamma_I - \gamma_S)^2}}$ Zero Field Weak Coupling Limit $B_{1 \rightarrow 0}^{Zero Field}=\dfrac{2 \Delta \nu J}{ (\gamma_I + \gamma_S)}$
Problems 1) Calculate the multiplicity for CH3 for both C and H. 2) Calculate the multiplicity for HCl for both H and Cl.
Nuclei with spin $I > \dfrac{1}{2}$ comprise more than 2/3 of the NMR-active nuclei. These nuclei exhibit a quadrupole moment which couples to the electric field gradient, resulting in extensive peak broadening. This page is dedicated to understanding the origins of the quadrupole moment and its effects on the NMR line shape. It is developed through a heavy mathematical treatment; however, the illustrations and captions provide a pictorial representation of the mathematics. The Quadrupole Moment Understanding a Quadrupole To fully understand the quadrupole interaction we must first establish what a quadrupole is. Quite simply, a quadrupole can be thought of as two dipoles. Unlike a dipole, however, the quadrupole will not couple to a uniform field, as the forces and subsequent torques on the quadrupole cancel. A Quadrupole If there is a non-uniform field, i.e., an electric field gradient, there will be a net force on the quadrupole. We can then define the quadrupole moment as the tendency of the quadrupole to rotate about an axis. Due to the 3D nature of a quadrupole, it may be described by a second-rank tensor Q, where $Q=\begin{bmatrix} Q_{xx} & Q_{xy} & Q_{xz} \\ Q_{yx} & Q_{yy} & Q_{yz} \\ Q_{zx} & Q_{zy} & Q_{zz} \end{bmatrix}$ The quadrupole can then couple to an electric field gradient (EFG). The electric field gradient is denoted V and is also described by a second-rank tensor, $V=\begin{bmatrix} V_{xx} & V_{xy} & V_{xz} \\ V_{yx} & V_{yy} & V_{yz} \\ V_{zx} & V_{zy} & V_{zz} \end{bmatrix}$ EFGs are generated in solids and liquids by the electrons in the sample. Quadrupolar Nuclei Within the nucleus of an atom, the protons, and hence the charge of the nucleus, can be distributed symmetrically or asymmetrically. If the charge distribution is symmetric, the spin, I, of the nucleus is 1/2 and the interaction of the nucleus with electric field gradients is direction-independent.
However, if the charge distribution is asymmetric, then I>1/2 and the electric field gradient can interact with the nucleus and exert a torque on it. These nuclei are known as quadrupolar nuclei. It is worth mentioning that the electric field gradient is generated by the electrons present in the sample. Consequently, these nuclei exhibit a quadrupole moment, Q. While calculations of the quadrupole moment for a given nucleus are beyond the scope of this page, the moments have been calculated, and a few examples are listed below.

Nucleus | Spin | Q (barns) × 10³
2H | 1 | 2.86
6Li | 1 | 0.83
7Li | 3/2 | -40.6
10B | 3 | 84.7
17O | 5/2 | -25.7
87Rb | 3/2 | 127.1

Q can be considered a friction coefficient between the nucleus and rotations of the electric field of the molecule. The larger the Q value, the more strongly the asymmetric nucleus interacts with a non-uniform electric field gradient. This leads to nuclear spin reorientation. The exception is cubic symmetry (Td or Oh), where the electric field gradient vanishes by symmetry, resulting in no net effect on the non-spherical nucleus. Spin Energy Levels As the spin of a quadrupolar nucleus is larger than I=1/2, there are multiple energy levels (2I+1) and therefore multiple transitions are expected. In half-integer spin systems (e.g. 3/2, 5/2, 7/2) there is still a -1/2 to 1/2 transition, known as the central transition. The other transitions are known as satellite transitions. A good example is given below: Boron-11 (11B) is a classic example of the spectral changes caused by the quadrupole moment. With a nuclear spin of I =$\frac{3}{2}$ and only Δm = ±1 transitions allowed, 4 states are produced: m = $\frac{3}{2}$, $\frac{1}{2}$, -$\frac{1}{2}$, -$\frac{3}{2}$.
However, only 3 transitions are possible: $\dfrac{3}{2}\leftrightarrow\dfrac{1}{2}\,$ Satellite Transition $\dfrac{1}{2}\leftrightarrow-\dfrac{1}{2}$ Central Transition $-\dfrac{1}{2}\leftrightarrow-\dfrac{3}{2}$ Satellite Transition While it might be expected that these transitions occur at the same energy, we will see in a later section that this is not the case! Derivation of the Hamiltonian Consider the picture below. We can describe the electrostatic interaction between the electron, which has a non-spherical charge distribution, and the protons of the nucleus through a Coulombic interaction such that $U=-\sum_{p=1}^Z \dfrac{e^2}{| \vec{r_p} -\vec{r_e}|}$ where Z is the atomic number, and rp and re are the proton and electron positions. Most likely, the PAS of rp and re will not coincide with the lab frame, and they should instead be expressed through the angles $\theta_p, \phi_p, \theta_e, \phi_e$ denoted in the figure above. Converting to a reference frame with its origin at the center of the nucleus, we can expand $\dfrac{1}{| \vec{r_p} -\vec{r_e}|} = 4 \pi \sum_{l=0}^{\infty} \sum_{m=-l}^l \dfrac{1}{2l+1} \dfrac{r_<^l}{r_>^{l+1}} Y_m^{(l)*}(\theta_p, \phi_p)Y_m^{(l)}(\theta_e, \phi_e)$ where r< is the smaller of rp and re, r> is the larger, and Y denotes the spherical harmonics. In the case that the electrons do not penetrate the nucleus, re>rp and $U=-4 \pi e^2 \sum_{p=1}^Z \sum_{l=0}^{\infty} \sum_{m=-l}^l \dfrac{1}{2l+1} \dfrac{r_p^l}{r_e^{l+1}} Y_m^{(l)*}(\theta_p, \phi_p) Y_m^{(l)}(\theta_e, \phi_e)$ From the above equation, we will derive the Hamiltonian for the quadrupolar interaction, HQ.
In order to do this we first note the symmetry relation $Y_m^{(l)*}=(-1)^mY_{-m}^{(l)}$ Then, $\sum_{m=-l}^{l}Y_m^{(l)*}(\theta_p, \phi_p)Y_m^{(l)}(\theta_e, \phi_e)=\sum_{m=-l}^{l}(-1)^m Y_{-m}^{(l)}(\theta_p, \phi_p)Y_m^{(l)}(\theta_e, \phi_e)=Y^{(l)}(\theta_p, \phi_p) \cdot Y^{(l)}(\theta_e, \phi_e)$ The dot product implies that we can separate the potential energy into two pieces, one for the nuclear part, denoted Q(l), and one for the electronic part, k(l). These are given as $Q^{(l)}=e\sum_{p=1}^Z \sqrt{\dfrac{4\pi}{2l+1}} r_p^l Y^{(l)}(\theta_p, \phi_p)$ $k^{(l)}=-e\sqrt{\dfrac{4\pi}{2l+1}} \dfrac{1}{r_e^{l+1}} Y^{(l)}(\theta_e, \phi_e)$ The total potential energy is then $U=\sum_l Q^{(l)} \cdot k^{(l)}$ For l=0, $U=-\dfrac{Ze^2}{r_e}$ which is the expected Coulombic interaction between the electron and the nucleus. The next order (l=1) corresponds to the interaction between a nuclear electric dipole moment and the electric field generated by the electrons. Since rp is an odd operator, the expectation value of Q(1) is zero, and this contribution to the potential energy vanishes. The next interaction, l=2, corresponds to the electric quadrupole interaction: $H_Q=U^{(2)}=Q^{(2)} \cdot k^{(2)}$ In order to fully expand this Hamiltonian, we want to relate Q and k to the molecular parameters. Q will be related only to operators involving the nuclear spin, such that $Q^{(2)}=AT^{(2)}$ where T is a spherical tensor operator. k will be related to the EFG. Let us begin by expanding Q. Consider the effect of Q on the spins: $\langle I, m_{I}=I | Q_0^{(2)}|I, m_{I}=I \rangle =e\sqrt{\dfrac{4 \pi}{5}} \sum_p r_p^2 \langle I, I| Y_0^{(2)}(\theta_p, \phi_p)|I, I\rangle = \dfrac{1}{2} e \langle I,I| \sum_p (3z_p^2-r_p^2)|I,I\rangle$ in which $e\langle I,I|\sum_p (3z_p^2-r_p^2)|I,I\rangle = eQ$ defines the quadrupole moment.
Therefore this term reduces to $\dfrac{1}{2}eQ$ Using the irreducible tensor operators, $\langle I,I|Q_0^{(2)}|I,I\rangle=A\langle I,I|T_0^{(2)}|I,I\rangle =A\langle I,I|\dfrac{1}{\sqrt{6}}(3I_z^2-I^2)|I,I\rangle =\dfrac{A}{\sqrt{6}}(3I^2-I(I+1)) =\dfrac{A}{\sqrt{6}}I(2I-1)$ We can then solve for A, $A=\sqrt{\dfrac{3}{2}} \dfrac{eQ}{I(2I-1)}$ which yields $Q^{(2)}=\sqrt{\dfrac{3}{2}}\dfrac{eQ}{I(2I-1)}T^{(2)}$ Adopting a similar approach for k, $\dfrac{d^2}{dr_i dr_j}\left(\dfrac{-e}{r}\right)=-e\dfrac{3r_ir_j-r^2 \delta_{ij}}{r^5}=V_{ij}$ Then $k_0^{(2)}=-e\sqrt{\dfrac{4 \pi}{5}} \dfrac{1}{r_e^3} Y_0^{(2)}(\theta_e, \phi_e) =-\dfrac{e}{2}\dfrac{3z_e^2-r_e^2}{r_e^5} =\dfrac{1}{2}V_{zz}$ We can expand the remaining components of k(2) and find $k_{\pm 1}^{(2)}=\mp \dfrac{1}{2} \sqrt{\dfrac{2}{3}}(V_{zx} \pm iV_{yz})$ $k_{\pm 2}^{(2)}=\dfrac{1}{4} \sqrt{\dfrac{2}{3}}(V_{xx}-V_{yy} \pm 2iV_{xy})$ Substituting the expressions for k and Q, we could obtain HQ in an arbitrary frame. However, it is much more convenient to express HQ in the principal axis system of the EFG. Therefore, we choose the condition $V_{ij}=0$ unless i=j.
Resulting in $k_{\pm 1}^{(2)}=0$, and we obtain our expression for HQ: $H_Q=\dfrac{eQ}{I(2I-1)}\left[\dfrac{1}{4}I_{+1}^2(V_{xx}-V_{yy})+\dfrac{1}{4}(3I_0^2-I^2)V_{zz}+\dfrac{1}{4}I_{-1}^2 (V_{xx}-V_{yy})\right] =\dfrac{eQ}{4I(2I-1)}\left[(3I_0^2-I^2)V_{zz}+(I_{+1}^2+I_{-1}^2) (V_{xx}-V_{yy})\right]$ We can further simplify this expression by defining $V_{zz}=eq$ and $V_{xx}-V_{yy}=\eta eq$ yielding $H_Q=\dfrac{e^2qQ}{4I(2I-1)}\left[(3I_0^2-I^2)+(I_{+1}^2+I_{-1}^2) \eta\right]$ Furthermore, $I_{+1}^2+I_{-1}^2=I_x^2-I_y^2$ and $I^2=I_x^2+I_y^2+I_z^2$ Then $H_Q^{PAS}=\dfrac{e^2qQ}{4I(2I-1)}[2I_z^2-I_x^2-I_y^2+\eta(I_x^2-I_y^2)] =\dfrac{e^2qQ}{4I(2I-1)}(I_x,I_y,I_z)\begin{bmatrix} \eta-1 &0&0 \\ 0&-\eta-1&0 \\ 0&0&2 \end{bmatrix} \begin{pmatrix} I_x\\I_y\\I_z \end{pmatrix}$ Then HQ reduces to $H_Q^{PAS}=\dfrac{e^2qQ}{4I(2I-1)}\vec{I} \cdot Q \cdot \vec{I}$ Expansion of the Hamiltonian Classical Expansion For those who wish not to delve into the complex treatment of the quadrupolar Hamiltonian, we can treat the Hamiltonian semi-classically and derive an expression for it. The interaction of a quadrupole with a field gradient in an arbitrary frame may be described by $\hat{H}_{Q}=\dfrac{eQ}{2I(2I-1)\hbar} \hat{I} \cdot V \cdot \hat{I}$ We can re-write this expression in terms of the Cartesian components of the EFG if we change the spin operators to their Cartesian analogues.
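The operator form $2I_z^2-I_x^2-I_y^2+\eta(I_x^2-I_y^2)$ derived above is easy to verify numerically for a spin-1 nucleus. The prefactor C and asymmetry η below are placeholder numbers; the point is the operator structure, in particular that the quadrupolar Hamiltonian is Hermitian and traceless.

```python
import numpy as np

# Sketch: spin-1 quadrupolar Hamiltonian in the EFG principal axis system,
# H_Q ~ C * [2*Iz^2 - Ix^2 - Iy^2 + eta*(Ix^2 - Iy^2)].
s = 1 / np.sqrt(2)
Ix = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Iy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Iz = np.array([[1, 0, 0], [0, 0, 0], [0, 0, -1]], dtype=complex)

C, eta = 1.0, 0.3   # placeholder prefactor and asymmetry parameter
HQ = C * (2 * Iz @ Iz - Ix @ Ix - Iy @ Iy + eta * (Ix @ Ix - Iy @ Iy))

print(np.trace(HQ).real)             # 0.0: the quadrupolar interaction is traceless
print(np.allclose(HQ, HQ.conj().T))  # True: Hermitian, as an observable must be
```

Tracelessness is what guarantees that the quadrupolar interaction shifts the individual levels but leaves their average, the center of gravity of the Zeeman manifold, untouched at first order.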
This gives the result $\hat{H}_{Q}=\dfrac{eQ}{6I(2I-1)\hbar} \sum_{\alpha, \beta = x, y, z} V_{\alpha \beta} \left[\frac{3}{2} (\hat {I}_\alpha \hat {I}_\beta + \hat {I}_\beta \hat {I}_\alpha)- \delta_{\alpha \beta} \hat{I}^2\right]$ The EFG is traceless and can be described using an asymmetry parameter defined as $\eta_Q= \dfrac{V_{xx}^{PAS}-V_{yy}^{PAS}} {V_{zz}^{PAS}}$ and the magnitude of the EFG is then given by $eq=V_{zz}^{PAS}$ We must at this point recognize that we are in the reference frame of the electric field gradient. HQ in the reference frame of the EFG is then $\hat{H}_{Q}=\dfrac{e^2Qq}{4I(2I-1)\hbar} \left [3\hat{I}_{z}^{2\, PAS}-\hat{I}^2+\eta_Q(\hat{I}_{x}^{2\, PAS}-\hat{I}_{y}^{2\, PAS}) \right]$ For the remainder of the discussion we abbreviate the constant term as $\chi=\frac{e^2qQ}{\hbar}$ This is known as the quadrupole coupling constant and is the accepted term in the NMR literature. Readers should be wary, however, as several different definitions circulate in the literature. The expansion of the Hamiltonian is done using perturbation theory. It has been shown that experimental spectra may be accurately calculated using the first- and second-order perturbations to the Hamiltonian. Using this information, the Hamiltonian may be expanded in terms of polar coordinates and raising and lowering operators.
This is done to transform the Hamiltonian from the PAS of the EFG into the laboratory frame: $\hat{H}_Q=\dfrac{\chi}{4I(2I-1)} \Big\{ \dfrac{1}{2}(3\cos^2 \theta-1)(3 \hat{I}_z^2- \hat{I}^2) +\dfrac{3}{2}\sin\theta \cos\theta \left[\hat{I}_z(\hat{I}_++\hat{I}_-)+(\hat{I}_++\hat{I}_-)\hat{I}_z\right] +\dfrac{3}{4} \sin^2 \theta (\hat{I}_+^2+\hat{I}_-^2)\Big\} +\eta_{Q}\dfrac{\chi}{4I(2I-1)}\Big\{ \dfrac{1}{2} \cos 2\phi \left[(1-\cos^2 \theta)(3\hat{I}_z^2- \hat{I}^2) +(\cos^2 \theta+1)(\hat{I}_+^2+\hat{I}_-^2)\right] +\dfrac{1}{2} \sin \theta \left[(\cos \theta \cos 2\phi -i\sin 2\phi)(\hat{I}_+\hat{I}_z+\hat{I}_z\hat{I}_+) +(\cos \theta \cos 2\phi+i\sin 2\phi)(\hat{I}_-\hat{I}_z+\hat{I}_z\hat{I}_-)\right] +\dfrac{i}{4}\sin 2\phi \cos \theta(\hat{I}_+^2-\hat{I}_-^2)\Big\}$ From this we obtain the first- and second-order corrections to the energy levels: $E_m^{(1)}=\dfrac{\chi}{4I(2I-1)}(I(I+1)-3m^2) \left[\dfrac{1}{2}(3\cos^2\theta-1)-\eta_Q \cos 2\phi(\cos^2\theta-1) \right]$ $E_m^{(2)}=-\dfrac{\chi^2 m}{4I(2I-1) \omega_0}\Big[ - \dfrac{1}{5} (I(I+1)-3m^2)(3+\eta_Q^2) +\dfrac{1}{28}(8I(I+1)-12m^2-3)\left[(\eta_Q^2-3)(3\cos^2 \theta -1)+6\eta_Q\sin^2 \theta \cos 2\phi\right] +\dfrac{1}{8}(18I(I+1)-34m^2-5)\Big[\dfrac{1}{140}(18+\eta_Q^2)(35\cos^4\theta-30\cos^2\theta+3) +\dfrac{3}{7}\eta_Q\sin^2\theta(7\cos^2\theta-1)\cos 2\phi+\dfrac{1}{4}\eta_Q^2\sin^4\theta \cos 4\phi\Big]\Big]$ An illustrative figure showing how the energy levels change according to the Zeeman, first-order, and second-order quadrupolar interactions is shown below for a spin-3/2 nucleus. Interestingly, the first-order correction does not affect the central transition, while the second-order correction is inversely proportional to the Larmor frequency. With increasing field, the second-order effect on the central transition therefore decreases. Quadrupolar Lineshapes The signal from a quadrupolar nucleus exhibits a very characteristic powder pattern. Looking at the energy levels, it is easy to see there are multiple transitions which can occur.
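The statement that the first-order correction leaves the central transition untouched follows directly from the form of $E_m^{(1)}$: it depends on m only through $m^2$, so $E_{1/2}^{(1)}=E_{-1/2}^{(1)}$. A minimal numerical check, with placeholder values for χ and the orientation factor:

```python
# Sketch: first-order quadrupolar energies for spin 3/2.
# E_m^(1) = chi/(4I(2I-1)) * (I(I+1) - 3m^2) * angular_factor

def E1(m, I, chi, angular):
    return chi / (4 * I * (2 * I - 1)) * (I * (I + 1) - 3 * m**2) * angular

I, chi, ang = 1.5, 1.0e6, 0.7   # spin 3/2; chi (Hz) and orientation factor are arbitrary

central = E1(0.5, I, chi, ang) - E1(-0.5, I, chi, ang)
satellite = E1(1.5, I, chi, ang) - E1(0.5, I, chi, ang)
print(central)    # 0.0: the central transition is unshifted at first order
print(satellite)  # nonzero: the satellites are shifted
```

This is why, for half-integer quadrupolar nuclei, the narrow central transition dominates the observed spectrum while the satellites spread over a huge first-order width.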
For a spin-1 nucleus, the transitions are from -1 to 0 and 0 to 1. These two transitions manifest themselves as a double-horned powder pattern, each horn representative of a transition. The difference in intensities is due to the alignment of the crystallites with respect to the magnetic field. If a crystallite is aligned with the B0 field, then after application of a $\frac{\pi}{2}$ pulse its magnetization lies in the x-y plane and consequently contributes fully to the signal. If the crystallite is oriented at some other angle relative to the B0 axis, its magnetization does not precess in the detection plane for as long, is not detected for as long, and gives less signal. The frequency at which the transitions occur is given by the quadrupolar frequency, defined as $\omega_Q(\theta)=\omega_0-\dfrac{3}{8} \left(\dfrac{2m-1}{I(2I-1)}\right)\chi (3\cos^2\theta-1)$ From this it is easy to see why two horns are observed: labeling each transition m → m-1 by its upper level, m is either 1 (red) or 0 (blue), which changes the sign of the quadrupolar perturbation to the Larmor frequency. For nuclei such as 87Rb, which have multiple transitions, the powder patterns are more complex. Similar to the CSA, the lineshape depends on the magnitude of $\chi$ and $\eta$, as shown in the figure below. Typically, the satellite transitions are not observed in quadrupolar spectra. The frequency of a symmetric transition, such as -1/2 to 1/2, may be represented by a sum of 0th, 2nd, and 4th rank Legendre polynomials, or mathematically as $\nu_{m,-m}=\sum \limits_{l=0,2,4} \nu_l(\alpha,\beta,\gamma)\,C_l(I,m)\,P_l(\cos\theta_{MA})$ where $C_0(I,m)=2m[I(I+1)-3m^2]$ $C_2(I,m)=2m[8I(I+1)-12m^2-3]$ $C_4(I,m)=2m[18I(I+1)-34m^2-5]$ $P_2(\cos\theta)=\dfrac{1}{2}(3\cos^2\theta-1)$ $P_4(\cos\theta)=\dfrac{1}{8}(35\cos^4\theta-30\cos^2\theta+3)$ The second- and fourth-rank interactions also have associated frequencies.
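The two-horned pattern can be seen directly from the $\omega_Q(\theta)$ expression above: labeling each transition m → m-1 by its upper level m, the two spin-1 branches carry factors $2m-1=\pm 1$ and are mirror images about the Larmor frequency. The χ value below is a placeholder, and the calculation is done in the rotating frame ($\omega_0=0$).

```python
import numpy as np

# Sketch: the two spin-1 transition branches of omega_Q(theta).

def omega_Q(theta, m, I, w0, chi):
    return w0 - (3 / 8) * ((2 * m - 1) / (I * (2 * I - 1))) * chi * (3 * np.cos(theta)**2 - 1)

I, w0, chi = 1.0, 0.0, 2 * np.pi * 100e3   # spin 1, rotating frame, placeholder chi
theta = np.linspace(0, np.pi / 2, 200)
branch_a = omega_Q(theta, 1, I, w0, chi)   # m = 1 -> 0 transition
branch_b = omega_Q(theta, 0, I, w0, chi)   # m = 0 -> -1 transition
print(np.allclose(branch_a, -branch_b))    # True: mirror-image branches, two horns
```

The horns themselves sit at the $\theta=\pi/2$ orientation, which is the most heavily weighted angle in a powder, while the pattern edges come from crystallites with $\theta=0$.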
These are given below:

$\nu_2^Q=\dfrac{1}{192\nu_L}\left[\dfrac{\chi}{I(2I-1)h}\right]^2\left[I(I+1)-\dfrac{3}{4}\right]F_2(\theta,\phi)$

$\nu_4^Q=\dfrac{1}{3360\nu_L}\left[\dfrac{\chi}{I(2I-1)h}\right]^2\left[I(I+1)-\dfrac{3}{4}\right]F_4(\theta,\phi)$

$F_2(\theta,\phi)=\dfrac{35}{4}(3-\eta_Q\cos2\phi)^2\sin^4\theta-5(18+\eta_Q^2-9\eta_Q\cos2\phi)\sin^2\theta+18+\eta_Q^2$

$F_4(\theta,\phi)=2\left(3\eta_Q\cos2\phi-\dfrac{2}{3}\eta_Q^2-3\right)\sin^2\theta+\dfrac{1}{21}\left(22\eta_Q^2-90\eta_Q\cos2\phi+120\right)$

These equations are more useful mathematically and become important in the discussion of pulse sequences for quadrupolar nuclei.

Magnetic Field Effects and the Center of Gravity

The nature of the quadrupolar interaction is heavily influenced by the magnetic field. Below is a figure that shows the field dependence of a Ga resonance in $\beta$-Ga2O3. The reader should take note of two things. First, note how the spectrum narrows as the field is increased; this shows that the second order effect on the central transition is inversely proportional to $\omega_0$. Second, note the shift in the center of gravity of the peak. As the field increases, the peak shifts progressively upfield, although the isotropic peak position, denoted by the dotted line, is constant.

Quadrupole Moments: Effects in NMR Spectra

As mentioned earlier, a quadrupolar nucleus is efficiently relaxed by a non-uniform electric field that is a product of the solute molecule's interaction with the dipolar solvent. This relaxation depends on the magnitude of the electric field gradient at the nucleus. When the nucleus is in a molecule that is surrounded by a non-spherical electron density distribution, that distribution creates a gradient. The field gradient, q, describes the electron charge cloud's deviation from spherical symmetry. The value of q is found to equal zero if the groups around the quadrupolar nucleus have cubic symmetry, such as in the Td point group.
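The inverse field dependence can be checked numerically. In this sketch the spin, coupling constant, asymmetry parameter, and orientation are all assumed illustration values (none come from the original page); the only point being made is that $\nu_2^Q \propto 1/\nu_L$:

```python
import numpy as np

# Sketch: the second-order frequency nu_2^Q above scales as 1/nu_L,
# so quadrupling the field divides the second-order broadening by four.
# Cq (playing the role of chi/h) and eta are assumed illustration values.
def F2(theta, phi, eta):
    s2, s4 = np.sin(theta)**2, np.sin(theta)**4
    return (35.0 / 4.0) * (3 - eta * np.cos(2 * phi))**2 * s4 \
        - 5 * (18 + eta**2 - 9 * eta * np.cos(2 * phi)) * s2 + 18 + eta**2

def nu2Q(nu_L, I, Cq, eta, theta, phi):
    return (Cq / (I * (2 * I - 1)))**2 * (I * (I + 1) - 0.75) \
        * F2(theta, phi, eta) / (192.0 * nu_L)

I, Cq, eta = 1.5, 3.0e6, 0.2           # spin-3/2, Cq = 3 MHz (assumed)
theta, phi = np.deg2rad(30.0), np.deg2rad(10.0)
ratio = nu2Q(100e6, I, Cq, eta, theta, phi) / nu2Q(400e6, I, Cq, eta, theta, phi)
```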
However, if a non-cubic molecule has a threefold or higher symmetry axis, the deviation from spherical symmetry is expressed as a magnitude of q. The two parameters, q, the field gradient, and η, the asymmetry parameter, become necessary only if the highest symmetry axis of the molecule's point group is threefold or lower. Depending on the molecule, certain cancellations can take place, leading the asymmetry parameter, η, to equal zero. This is caused by a combination of very specific bond angles and charge distribution in the molecule being analyzed. Ultimately, the effectiveness of the relaxation depends on the magnitude of the electric field gradient, q.

Linewidth broadening in the NMR spectrum is a consequence of the rapid nuclear quadrupole relaxation of the quadrupolar nucleus. Consider an analogous situation: chemical exchange. It is known that when a nucleus's spin state rapidly changes, it causes broadening in the spectrum. Similarly, the nuclear quadrupole relaxation rate of a quadrupolar nucleus corresponds to an intermediate rate of chemical exchange. The apparent broadening effect also influences the spectra of the other nuclei attached to the quadrupolar nucleus, including protons. In some cases, the rapid nuclear quadrupole relaxation times (T1) can cause extensive homogeneous broadening (a consequence of readily relaxing nuclei, seen in Figure 2), rendering the signal of protons attached to the quadrupolar nucleus completely unobservable in the 1H NMR spectrum. T1 is determined by two factors: the electric quadrupole moment (Q) and the presence of the electric field gradient (q) across the nucleus.

A common approach to resolving quadrupolar effects on the spectra in solution state NMR is elevating the temperature while collecting NMR data. The molecular reorientational correlation times are then shorter than the normal time scale, so the homogeneous broadening of the line can be reduced.
Unfortunately, the temperature required to achieve this motional narrowing is unfeasibly high for many of the samples for which this technique would be necessary.

Contributors and Attributions

• Derrick C. Kaseman (UC Davis) and Megan McKenney (UC Davis)
Introduction

NMR employs strong magnetic fields which interact with the intrinsic spin of the nucleus to split otherwise degenerate energy levels. Treating this phenomenon classically is mathematically tedious and/or impossible. Here we outline the fundamentals of NMR in quantum mechanical terms.

Zeeman Interaction

If we begin with the familiar Schrödinger equation

$H|\psi \rangle =E|\psi \rangle$

we can immediately realize that defining the Hamiltonian of the system will give the corresponding energy levels of the wavefunction. The Hamiltonian for a magnetic moment ($\vec{\mu}$, in J/T) in a magnetic field ($\vec{B}$, in T) is given by the expression

$H=- \vec{\mu} \cdot \vec{B}$

We then see that the Hamiltonian has units of energy (J). Therefore operating the Hamiltonian on a wavefunction will result in an energy eigenvalue! We can further refine the Hamiltonian by relating the magnetic moment of a nucleus to the particle's angular momentum arising from the intrinsic spin, I, of the nucleus, thus

$\vec{\mu}=\gamma \vec{I}$

where the gyromagnetic ratio $\gamma$ is the ratio of the magnetic moment to the angular momentum. Then the Hamiltonian may be rewritten as

$H=-\gamma \vec{I} \cdot \vec{B}$

Assuming that the magnetic field lies along the z-axis, then

$H_z =- \gamma (I_x\hat{i} +I_y \hat{j} + I_z \hat{k}) \cdot (B_0 \hat{k})=- \gamma B_0I_z=-\omega_0I_z$

where $\omega_0$ is the Larmor frequency. Therefore if $|\psi \rangle$ is an eigenfunction of H, then the $|\pm \rangle$ spin states are eigenfunctions of the total and z-component of angular momentum.
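The Zeeman result can be verified with a two-line numerical check (a sketch; the 400 MHz Larmor frequency is an assumed illustration value):

```python
import numpy as np

# Sketch: the Zeeman Hamiltonian H_z = -omega_0 I_z for a spin-1/2,
# in angular-frequency units; its eigenvalues are -/+ omega_0/2.
omega0 = 2 * np.pi * 400e6        # e.g. a 400 MHz Larmor frequency (assumed)
Iz = np.array([[0.5, 0.0],
               [0.0, -0.5]])
Hz = -omega0 * Iz
E = np.sort(np.linalg.eigvalsh(Hz))   # [-omega0/2, +omega0/2]
splitting = E[1] - E[0]               # the Zeeman splitting, = omega0
```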
We can describe this in three ways in a DC magnetic field:

Operator:

$H_z|+ \rangle=- \omega_0 I_z|+ \rangle=-\dfrac{\omega_0}{2}|+ \rangle$

$H_z|-\rangle=- \omega_0 I_z|-\rangle=+\dfrac{\omega_0}{2}|-\rangle$

Matrix:

$H_z=-\omega_0\begin{bmatrix} \dfrac{1}{2} & 0 \\ 0 & -\dfrac{1}{2} \end{bmatrix}=\begin{bmatrix} -\dfrac{\omega_0}{2} & 0 \\ 0 & \dfrac{\omega_0}{2} \end{bmatrix}$

Energy level: (figure not shown)

These representations rely on the fact that spin states are eigenfunctions of the angular momentum and therefore have the following properties, with I and m defining the state of the system.

$I^2|I,m \rangle=I(I+1)|I,m \rangle$

$I_z|I,m \rangle=m|I,m \rangle$

$I_{+1}|I,m \rangle=-\sqrt{\dfrac{1}{2}(I(I+1)-m(m+1))}\,|I,m+1 \rangle$

$I_{-1}|I,m \rangle=\sqrt{\dfrac{1}{2}(I(I+1)-m(m-1))}\, |I,m-1 \rangle$

where

$I_x=\dfrac{1}{\sqrt{2}}(I_{-1}-I_{+1})$

$I_y=\dfrac{i}{\sqrt{2}}(I_{+1}+I_{-1})$

Note that in contrast to the harmonic oscillator, the $|I,m \rangle$ spin basis is orthogonal and complete. This means that for spin $I$ there are only $2I+1$ spin states, and the overlap integral $\langle I,m|I',m' \rangle=\delta_{mm'} \delta_{II'}$ holds. This formalism holds equally well for $I>1/2$ particles.

Effects of Radio Frequency

While it is relatively straightforward to understand the Zeeman interaction, understanding the effects of radio frequency pulses on the Hamiltonian needs careful consideration. To examine this we will look at a modification to the Stern-Gerlach experiment made by Rabi in the 1930's. Since no one has bothered to discuss the Stern-Gerlach experiment in any detail on the chemwiki, a brief glimpse into the initial experiment is necessary. Essentially, a beam of silver atoms was shot through a magnetic field, and instead of a single line of atoms or a single spot (depending on the dispersion of the atoms), two spots were observed. This of course was due to the fact that Zeeman splitting had occurred, due to silver's spin-1/2 nature. A figure is provided below for reference.
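These operator properties can be checked numerically for I = 1/2 (a sketch; the basis ordering |+> = (1,0), |-> = (0,1) and the sign convention, chosen so that [I_x, I_y] = i I_z, are assumptions of the example):

```python
import numpy as np

# Sketch: build I_{+1} and I_{-1} for I = 1/2 from the matrix elements
# quoted above, then verify I^2 = I(I+1)*1 and [I_x, I_y] = i I_z.
I = 0.5
# I_{+1}|I,m> = -sqrt((1/2)(I(I+1) - m(m+1))) |I,m+1>, acting on m = -1/2
Ip1 = np.array([[0.0, -np.sqrt(0.5 * (I * (I + 1) + 0.25))],
                [0.0, 0.0]], dtype=complex)
# I_{-1}|I,m> = +sqrt((1/2)(I(I+1) - m(m-1))) |I,m-1>, acting on m = +1/2
Im1 = np.array([[0.0, 0.0],
                [np.sqrt(0.5 * (I * (I + 1) + 0.25)), 0.0]], dtype=complex)
Iz = np.diag([0.5, -0.5]).astype(complex)

Ix = (Im1 - Ip1) / np.sqrt(2)        # recovers the familiar (1/2) sigma_x
Iy = 1j * (Ip1 + Im1) / np.sqrt(2)   # recovers the familiar (1/2) sigma_y

Isq = Ix @ Ix + Iy @ Iy + Iz @ Iz    # should be I(I+1) times the identity
comm = Ix @ Iy - Iy @ Ix             # should equal i * I_z
```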
Rabi modified this experiment by adding a coil of wire which generated a magnetic field perpendicular to the applied magnetic field from the Stern-Gerlach experiment. By selecting only one of the spin states (either +1/2 or -1/2) and running these particles through the radio frequency coil, Rabi noted a characteristic nutation of the magnetization, shown below. (figure not shown)

We can see then that the number of particles appearing on the screen is characteristic of the applied frequency. This behavior is denoted as the Rabi or nutation frequency. This experiment serves as a good bridge into the NMR experiment.

First, we must calculate the appropriate Hamiltonian for the situation. We have two magnetic fields: one is the large external magnetic field which splits the degenerate ground states into energy levels, and the second is the oscillating magnetic field from the coil. As such the magnetic field experienced by the spins is

$\vec{B}=B_0 \hat{k} + 2B_1 \cos(\omega_0 t) \hat{i}$

Here we define the coil to be along the x direction and the external magnetic field along the z direction. Since the field is oscillating in time, a cosine function is the appropriate choice to model the behavior of this magnetic field. We may now express the Hamiltonian in either operator or matrix form as

$H=-\gamma \vec{I} \cdot \vec{B} = -\gamma B_0I_z - 2 \gamma B_1 \cos(\omega_0 t)I_x=-\omega_0 I_z-2 \omega_1 \cos(\omega_0 t)I_x$

$H=\begin{bmatrix} -\dfrac{\omega_0}{2} & -\omega_1 \cos \omega_0 t \\ -\omega_1 \cos \omega_0 t & \dfrac{\omega_0}{2} \end{bmatrix}$

We now see that our Hamiltonian has become time dependent. In order to explain the observation made by Rabi we must use the time dependent Schrödinger equation

$\dfrac{d}{dt}|\psi (t) \rangle = -iH(t)| \psi (t) \rangle$

where H(t) is in frequency units and $|\psi(t=0)\rangle=|+\rangle$. If the Hamiltonian were time independent, we could use a unitary time evolution operator $\hat{U}$ defined as

$|\psi (t) \rangle=\hat{U}(t)| \psi (0)\rangle$

$\hat{U}(t)=e^{-iHt}$

that transforms the initial wavefunction $|\psi (0)\rangle$ to the wavefunction at a given time, $|\psi (t) \rangle$.
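For a time-independent Hamiltonian the propagator can be evaluated directly by matrix exponentiation. A minimal sketch (arbitrary units; the eigendecomposition route works for any Hermitian H, though for this diagonal H it is just a pair of phases):

```python
import numpy as np

# Sketch: the propagator U(t) = exp(-iHt) for the static Zeeman Hamiltonian
# H = -omega_0 I_z (frequency units), built via eigendecomposition.
omega0 = 2 * np.pi * 1.0
Iz = np.diag([0.5, -0.5])
H = -omega0 * Iz

def U(t):
    w, V = np.linalg.eigh(H)                  # H is Hermitian
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)    # start in one basis state
psi_t = U(0.37) @ psi0
pops = np.abs(psi_t)**2          # populations are unchanged: H only adds phases
unitary_check = U(0.37) @ U(0.37).conj().T    # should be the identity
```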
The issue with the case above is that we assumed the Hamiltonian is time-independent, which is clearly not our case. Therefore, we need to solve this problem by transforming the time-dependent Schrödinger equation into a reference frame that renders the Hamiltonian time independent. Consider the development of the time evolution of $|Q(t) \rangle$ under the following conditions:

$|\psi (t) \rangle=R(t)|Q(t) \rangle$

$|Q(t) \rangle=R^{\dagger}(t)| \psi (t) \rangle$

$R(t)R^{\dagger}(t)=R^{\dagger}(t)R(t)=1$

As you can see, R times its adjoint gives the unit operator; R is unitary. We can then rewrite the Schrödinger equation as

$\dfrac{d}{dt}| \psi (t) \rangle=\dfrac{d}{dt}R(t)|Q(t) \rangle=\left[\dfrac{d}{dt}R(t)\right]|Q(t) \rangle + R(t)\dfrac{d}{dt}|Q(t) \rangle$

$=-iH(t)| \psi (t) \rangle=-iH(t)R(t)|Q(t) \rangle$

Now if we multiply both sides of the equation by the adjoint, we can rearrange to obtain a new form of the Schrödinger equation

$\left[R^{\dagger}(t) \dfrac{d}{dt}R(t)\right]|Q(t) \rangle + \dfrac{d}{dt}|Q(t) \rangle=-iR^{\dagger}(t)H(t)R(t)|Q(t) \rangle$

$\dfrac{d}{dt}|Q(t) \rangle=-iH_{int}(t)|Q(t) \rangle$

where

$H_{int}=R^{\dagger}(t)H(t)R(t)-iR^{\dagger}(t)\dfrac{d}{dt}R(t)$

To apply this to our situation, consider the case where

$R(t)=e^{-iH_z t}=e^{i \omega_0 I_zt}$

$R^{\dagger}(t)=e^{-i \omega_0 I_z t}$

Then we can develop the following identities:

$R(t)=\begin{bmatrix} e^{\dfrac{i \omega_0 t}{2}} &0\\0& e^{\dfrac{-i \omega_0 t}{2}} \end{bmatrix}$

$R^{\dagger}(t)=\begin{bmatrix} e^{\dfrac{-i \omega_0 t}{2}} &0\\0& e^{\dfrac{i \omega_0 t}{2}} \end{bmatrix}$

of which we can easily take derivatives, leading to

$\dfrac{d}{dt}R(t)=\begin{bmatrix}\dfrac{d}{dt}e^{\dfrac{i \omega_0 t}{2}} &0 \\ 0 & \dfrac{d}{dt}e^{\dfrac{-i \omega_0 t}{2}} \end{bmatrix}$

$\dfrac{d}{dt}R(t)=\begin{bmatrix}\dfrac{i \omega_0}{2}e^{\dfrac{i \omega_0 t}{2}} &0 \\ 0 & \dfrac{-i \omega_0}{2}e^{\dfrac{-i \omega_0 t}{2}} \end{bmatrix}$

$\dfrac{d}{dt}R(t)=i \omega_0 \begin{bmatrix} \dfrac{1}{2} &0\\0&-\dfrac{1}{2}\end{bmatrix} \cdot \begin{bmatrix}e^{\dfrac{i
\omega_0 t}{2}} &0\\0& e^{\dfrac{-i \omega_0 t}{2}} \end{bmatrix}$

$\dfrac{d}{dt}R(t)=i\omega_0 I_z \cdot e^{i \omega_0 tI_z}$

We can also calculate

$R^{\dagger} (t) I_x R(t)= \begin{bmatrix} e^{\dfrac{-i \omega_0 t}{2}} &0 \\ 0& e^{\dfrac{i \omega_0 t}{2}} \end{bmatrix} \cdot \begin{bmatrix} 0& \dfrac{1}{2} \\ \dfrac{1}{2} &0 \end{bmatrix} \cdot \begin{bmatrix} e^{\dfrac{i \omega_0 t}{2}} &0\\0& e^{\dfrac{-i \omega_0 t}{2}} \end{bmatrix}$

$R^{\dagger}(t)I_xR(t)=\begin{bmatrix} 0&\dfrac{1}{2}e^{-i \omega_0 t}\\ \dfrac{1}{2}e^{i \omega_0 t}&0 \end{bmatrix}$

$R^{\dagger}(t)I_xR(t)=\cos \omega_0 t \begin {bmatrix} 0& \dfrac{1}{2} \\ \dfrac{1}{2} &0 \end{bmatrix} +\sin \omega_0 t \begin {bmatrix} 0 & \dfrac{-i}{2} \\ \dfrac{i}{2} & 0\end{bmatrix} =\cos \omega_0 t\, I_x +\sin \omega_0 t\, I_y$

Similarly we can show (left as an exercise for the reader)

$R^{\dagger}(t)I_yR(t)=\cos \omega_0 t\, I_y -\sin \omega_0 t\, I_x$

We now have all the pieces we need to solve the Hamiltonian. We will show this in multiple ways.

Matrix Approach

$R^{\dagger}(t) \dfrac{d}{dt} R(t) =\begin{bmatrix} e^{\dfrac{-i \omega_0 t}{2}} &0\\0& e^{\dfrac{i \omega_0 t}{2}} \end{bmatrix} \cdot \begin{bmatrix}\dfrac{i \omega_0}{2}e^{\dfrac{i \omega_0 t}{2}} &0 \\ 0 & \dfrac{-i \omega_0}{2}e^{\dfrac{-i \omega_0 t}{2}} \end{bmatrix} =\begin{bmatrix} \dfrac{i \omega_0}{2} & 0 \\ 0 & \dfrac{-i \omega_0}{2} \end{bmatrix}$

$R^{\dagger}(t) H(t) R(t) =\begin{bmatrix} e^{\dfrac{-i \omega_0 t}{2}} &0\\0& e^{\dfrac{i \omega_0 t}{2}} \end{bmatrix} \cdot \begin{bmatrix} \dfrac{-\omega_0}{2} & - \omega_1 \cos \omega_0 t \\ -\omega_1 \cos \omega_0 t & \dfrac{\omega_0}{2} \end{bmatrix} \cdot \begin{bmatrix}e^{\dfrac{i \omega_0 t}{2}} &0 \\ 0 & e^{\dfrac{-i \omega_0 t}{2}} \end{bmatrix} = \begin{bmatrix} \dfrac{-\omega_0}{2} & - \omega_1 \cos \omega_0 t\, e^{-i \omega_0 t} \\ -\omega_1 \cos \omega_0 t\, e^{i \omega_0 t} & \dfrac{\omega_0}{2} \end{bmatrix}$

Then

$H_{int} =-\omega_1 \begin{bmatrix} 0& 1/2 \\ 1/2 &0 \end{bmatrix}
- \omega_1 \cos2 \omega_0 t \begin{bmatrix} 0 &1/2\\1/2&0 \end{bmatrix} - \omega_1 \sin 2 \omega_0 t \begin{bmatrix} 0 &-i/2 \\ i/2 &0 \end{bmatrix}$

$H_{int}=-\omega_1 I_x - \omega_1 \cos 2 \omega_0 t\, I_x - \omega_1 \sin 2 \omega_0 t\, I_y$

Operator Approach

We now wish to solve the Hamiltonian using the I operators we have defined previously.

$R^{\dagger}(t)\dfrac{d}{dt}R(t)=e^{-i\omega_0 t I_z} \cdot i \omega_0 I_z \cdot e^{i \omega_0 t I_z}=i \omega_0 I_z$

$R^{\dagger}(t)H(t)R(t)=e^{-i\omega_0 t I_z} (-\omega_0 I_z - 2 \omega_1 \cos \omega_0 t\, I_x) e^{i \omega_0 t I_z}$

$= -\omega_0 I_z -2 \omega_1 \cos \omega_0 t\, e^{-i \omega_0 t I_z} I_x e^{i \omega_0 t I_z}$

$=-\omega_0 I_z - 2 \omega_1 \cos^2 \omega_0 t\, I_x - 2\omega_1 \cos \omega_0 t \sin \omega_0 t\, I_y$

$=-\omega_0 I_z -\omega_1 I_x -\omega_1 \cos2 \omega_0 t\,I_x -\omega_1 \sin2 \omega_0 t\, I_y$

$H_{int}(t)=-\omega_1 I_x -\omega_1 \cos2 \omega_0 t\, I_x - \omega_1 \sin 2\omega_0 t\, I_y$

This is the exact expression we obtained using the matrix approach!

Rotating Wave Approximation

It can be shown that the time dependent part of the Hamiltonian modifies the first term in the above equation only on the order of $\frac{\omega_1^2}{\omega_0}$ (the Bloch-Siegert shift). Standard NMR spectrometers typically output $\omega_1$ on the order of kHz, whereas the Larmor frequency is on the order of MHz, so we can neglect the time dependent part of the Hamiltonian. This is known as the Rotating Wave Approximation. The Hamiltonian becomes

$H_{int} = -\omega_1 I_x =-\omega_1 \begin{bmatrix}0&\frac{1}{2} \\ \frac{1}{2}&0 \end{bmatrix}$

We now need to describe the initial state of the system:

$|Q(0)\rangle=R^{\dagger}(0)|\psi(0)\rangle=\begin{bmatrix} 1 &0 \\ 0&1 \end{bmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} =\begin{pmatrix} 0 \\1 \end{pmatrix}$

Now, if we diagonalize and exponentiate the Hamiltonian, we can find the state of the system at any time t.
The resulting propagator is then

$U(t)=e^{-iH_{int}t}=\begin{bmatrix} \cos \frac{\omega_1 t}{2} & i\sin \frac{\omega_1 t}{2} \\ i\sin \frac{\omega_1 t}{2} & \cos \frac{\omega_1 t}{2}\end{bmatrix}$

We can now evaluate $|Q(t) \rangle$ as $|Q(t) \rangle=U(t)|Q(0)\rangle$:

$|Q(t) \rangle=\begin{bmatrix} \cos \frac{\omega_1 t}{2} & i\sin \frac{\omega_1 t}{2} \\ i \sin \frac{\omega_1 t}{2} & \cos \frac{\omega_1 t}{2}\end{bmatrix} \begin{pmatrix} 0 \\1 \end{pmatrix} = \begin{pmatrix} i\sin \frac{\omega_1 t}{2} \\ \cos \frac{ \omega_1 t}{2}\end{pmatrix}$

Finally, recognizing that $|\Psi(t) \rangle=R(t)|Q(t)\rangle$, we have

$|\Psi(t) \rangle=\begin{bmatrix} e^{\frac{i \omega_0 t}{2}} & 0 \\ 0& e^{\frac{-i \omega_0 t}{2}}\end{bmatrix} \begin{pmatrix} i\sin \frac{\omega_1 t}{2} \\ \cos \frac{ \omega_1 t}{2}\end{pmatrix} = \begin{pmatrix} i\sin \frac{\omega_1 t}{2} e^{\frac{i \omega_0 t}{2}}\\ \cos \frac{\omega_1 t}{2} e^{\frac{-i \omega_0 t}{2}}\end{pmatrix}$

$= i\sin \frac{\omega_1 t}{2} e^{\frac{i \omega_0 t}{2}} |-\rangle + \cos \frac{\omega_1 t}{2} e^{\frac{-i \omega_0 t}{2}} |+\rangle$

$=c_-(t)|-\rangle+c_+(t)|+\rangle$

The probabilities that Rabi observed are given by multiplying the coefficients by their complex conjugates, which gives

$P_{|-\rangle}=\frac{1}{2}-\frac{1}{2}\cos \omega_1t$

$P_{|+\rangle}=\frac{1}{2}+\frac{1}{2}\cos \omega_1t$

The Hamiltonian we found to govern Rabi's experiment is the exact Hamiltonian used in the NMR experiment, with two slight modifications. First, Rabi was able to select an initial state, while in NMR we have a thermal equilibrium population. Secondly, we measure a transverse magnetization instead of the $|+\rangle$ and $|-\rangle$ populations.

Liouville-von Neumann Equation

Since we are dealing with a system that can be described by both statistical and quantum mechanical aspects, we need a way to unify these descriptions. The solution is the Liouville-von Neumann equation, which describes how the density operator evolves in time.
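The Rabi probabilities can be reproduced by brute-force propagation (a sketch; the 25 kHz nutation frequency is an assumed illustration value):

```python
import numpy as np

# Sketch: propagate |Q(0)> = (0, 1)^T with the rotating-frame propagator
# U(t) and take |c|^2 of each component, then compare with the
# closed-form Rabi probabilities P = 1/2 -/+ (1/2) cos(omega_1 t).
omega1 = 2 * np.pi * 25e3          # rf nutation frequency, assumed 25 kHz

def U(t):
    c, s = np.cos(omega1 * t / 2), np.sin(omega1 * t / 2)
    return np.array([[c, 1j * s],
                     [1j * s, c]])

Q0 = np.array([0.0, 1.0], dtype=complex)
t = np.linspace(0.0, 8e-5, 161)    # two nutation periods
P_minus = np.array([abs((U(ti) @ Q0)[0])**2 for ti in t])   # flipped state
expected_minus = 0.5 - 0.5 * np.cos(omega1 * t)

# a "pi pulse" of length pi/omega_1 inverts the spin completely
P_pi = abs((U(np.pi / omega1) @ Q0)[0])**2
```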
More specifically, it describes the time evolution of a mixed state, which is directly applicable to NMR, as we want to know how the off-diagonal elements of the density matrix evolve after an applied rf pulse. The density matrix evolves in time, thus we must derive a differential equation to describe this behavior and then integrate it for each step of the NMR experiment. As usual we begin with the time-dependent Schrödinger equation

$i \hbar \frac{d}{dt} |\Psi (t)\rangle = H(t)|\Psi (t)\rangle$

Let's also look at the thermal equilibrium of the spins. Using Boltzmann statistics, the initial condition of our spins is described by

$c_+(0)c^*_+(0)=\frac{Ne^{-\frac{E_+}{kT}}}{Z}$

$c_-(0)c^*_-(0)=\frac{Ne^{-\frac{E_-}{kT}}}{Z}$

where

$Z=e^{-\frac{E_-}{kT}}+e^{-\frac{E_+}{kT}}$

and

$c_+(0)c^*_-(0)=0$

$c_-(0)c^*_+(0)=0$

Defining

$| \Psi(t) \rangle=c_+(t)|+\rangle +c_-(t)|-\rangle$

$\langle \Psi(t)|=c^*_+(t)\langle+| +c^*_-(t)\langle-|$

we immediately realize that we have a density operator!

$\rho(t)=|\Psi (t) \rangle\langle\Psi (t) |=c_+(t)c^*_+(t)|+\rangle\langle+| + c_-(t)c^*_-(t)|-\rangle\langle-|+c_+(t)c^*_-(t)|+\rangle\langle-|+c_-(t)c^*_+(t)|-\rangle\langle+|$

Then the time evolution is

$\frac{d}{dt}\rho(t)=\left[\frac{d}{dt}|\Psi (t) \rangle\right]\langle\Psi (t) | +|\Psi (t) \rangle\frac{d}{dt}\langle \Psi (t)|=-iH(t)|\Psi (t) \rangle\langle\Psi (t)|+i|\Psi (t) \rangle\langle\Psi (t)|H(t)$

which gives the Liouville-von Neumann equation as

$\frac{d}{dt}\rho(t)=-i[H(t),\rho (t)]$

Let's now look at how this evolves in a simple NMR experiment. Initially we can define the density matrix at thermal equilibrium as

$\rho (0)=\begin{bmatrix} c_+(0)c^*_+(0)&c_+(0)c^*_-(0)\\c_-(0)c^*_+(0)&c_-(0)c^*_-(0) \end{bmatrix} = \frac{N}{Z} \begin{bmatrix}e^{-\frac{E_+}{kT}}&0 \\ 0&e^{-\frac{E_-}{kT}} \end{bmatrix}$

High Temperature Approximation

We can further refine the initial density operator by substituting in the equations for the energy levels E+ and E-.
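A quick finite-difference check of the Liouville-von Neumann equation (a sketch with a static Zeeman Hamiltonian and arbitrary units):

```python
import numpy as np

# Sketch: verify d(rho)/dt = -i[H, rho] numerically for H = -omega_0 I_z,
# with rho(t) = U(t) rho(0) U(t)^dagger and U(t) = exp(-iHt).
w0 = 2 * np.pi * 3.0
Iz = np.diag([0.5, -0.5]).astype(complex)
Ix = np.array([[0.0, 0.5], [0.5, 0.0]], dtype=complex)
H = -w0 * Iz

def rho(t):
    U = np.diag([np.exp(1j * w0 * t / 2), np.exp(-1j * w0 * t / 2)])  # exp(-iHt)
    return U @ Ix @ U.conj().T      # start from a transverse state rho(0) = I_x

t, dt = 0.2, 1e-6
lhs = (rho(t + dt) - rho(t - dt)) / (2 * dt)    # numerical d(rho)/dt
rhs = -1j * (H @ rho(t) - rho(t) @ H)           # -i [H, rho(t)]
```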
$E_{\pm}=\mp \frac{\hbar \omega_0}{2}$

but since we know that $kT \gg \hbar\omega_0$, we can use the high temperature approximation

$e^{\pm\frac{\hbar \omega_0}{2kT}}\approx 1 \pm \frac{\hbar \omega_0}{2kT}$

and $Z\approx 2$, resulting in a new density operator

$\rho (0)=\frac{N}{2}\begin{bmatrix} 1 +\frac{\hbar \omega_0}{2kT} & 0 \\ 0 & 1- \frac{\hbar \omega_0}{2kT} \end{bmatrix}=\frac{N}{2}\mathbf{1} + \frac{N \hbar \omega_0}{2kT}I_z$

Normally the magnetization constant is taken to be a scaling factor, and the unit operator is invariant to applied fields, which further reduces the density matrix to

$\rho(0)=I_z$

NMR Observables

In a simple NMR experiment, the magnetization along the externally applied magnetic field is knocked into the x-y plane using an rf pulse. As the spins recover along the external field direction (z direction) they precess at their Larmor frequencies inside a coil, which in turn produces an EMF. In order for an EMF to be generated, the precessing magnetization must be perpendicular to the coil. By Faraday's law of induction

$EMF=-\mu_0 \eta A \frac{d}{dt} M_x$

As is immediately evident, the magnetization is time dependent, and our problem essentially reduces to

$M_x(t)=\langle I_x\rangle=\langle\Psi (t)| I_x| \Psi (t) \rangle$

which is the expectation value of Ix. If instead we consider a generic operator O, then

$| \Psi (t) \rangle= \sum_n c_n(t)|n\rangle$

$\langle \Psi (t)|= \sum_n c^*_n(t)\langle n|$

$\langle O\rangle=\langle\Psi (t)| O| \Psi (t) \rangle=\sum_{n,m}c_n(t)c^*_m(t)\langle m|O|n\rangle$

and recalling that $\langle n|\rho (t)|m\rangle=c_n(t)c^*_m(t)$ results in

$\langle O\rangle= \sum_{n,m} \langle n|\rho (t)|m\rangle\langle m|O|n\rangle=Tr[\rho(t)O]$

Now for Mx:

$M_x=\langle I_x\rangle=Tr[I_x \rho (t)]$

$\rho (0)=I_z$

$H(t)=-\omega_0I_z-2\omega_1 \cos\omega_0 t\, I_x$

Then all we need to do is calculate $\rho (t)$ and $\langle I_x\rangle$!

Matrix Approach

As we did in the treatment of the rotating wave above, we need to move $\rho (t)$ into a different representation.
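How good is the high-temperature approximation? A sketch with an assumed 400 MHz Larmor frequency at room temperature (the point being that $\hbar\omega_0/2kT$ is of order $10^{-5}$, so the linearization is essentially exact):

```python
import numpy as np

# Sketch: exact Boltzmann populations vs. the high-temperature expansion.
hbar, k = 1.054571817e-34, 1.380649e-23
omega0 = 2 * np.pi * 400e6        # assumed 400 MHz Larmor frequency
T = 298.0
x = hbar * omega0 / (2 * k * T)   # ~3e-5: the small expansion parameter

Z = np.exp(x) + np.exp(-x)
p_plus_exact = np.exp(x) / Z      # lower level |+>, since E_+ = -hbar*omega0/2
p_plus_approx = 0.5 * (1 + x)     # high-temperature form with Z ~ 2
polarization = (np.exp(x) - np.exp(-x)) / Z   # net polarization, ~x
```

The tiny net polarization is why NMR is an intrinsically insensitive technique.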
$\sigma(t)=R^{\dagger}(t) \rho(t)R(t)$

$\rho(t)=R(t) \sigma(t)R^{\dagger}(t)$

$\frac{d}{dt} \sigma(t)=-i[H_{int}, \sigma(t)]$

and remembering that

$R(t)=e^{i \omega_0 t I_z}$

$\sigma(0)=\rho(0)$

we get

$U(t)=\begin{bmatrix} \cos \frac{\omega_1 t}{2}&i\sin \frac{\omega_1 t}{2}\\ i\sin \frac{\omega_1 t}{2}&\cos \frac{\omega_1 t}{2}\end{bmatrix}$

$U^{\dagger}(t)=\begin{bmatrix} \cos \frac{\omega_1 t}{2}&-i\sin \frac{\omega_1 t}{2}\\ -i\sin \frac{\omega_1 t}{2}&\cos \frac{\omega_1 t}{2}\end{bmatrix}$

which gives

$\sigma(t)= U(t) \sigma(0) U^{\dagger}(t)$

$=\begin{bmatrix} \cos \frac{\omega_1 t}{2}&i\sin \frac{\omega_1 t}{2}\\ i\sin \frac{\omega_1 t}{2}&\cos \frac{\omega_1 t}{2}\end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & -\frac{1}{2} \end{bmatrix} \begin{bmatrix} \cos \frac{\omega_1 t}{2}&-i\sin \frac{\omega_1 t}{2}\\ -i\sin \frac{\omega_1 t}{2}&\cos \frac{\omega_1 t}{2}\end{bmatrix}=\frac{1}{2}\begin{bmatrix} \cos \omega_1 t & -i\sin \omega_1 t\\ i\sin \omega_1 t&-\cos \omega_1 t\end{bmatrix}$

Now we can use $\sigma(t)$ to find $\rho (t)$:

$\rho(t)=R(t) \sigma(t) R^{\dagger}(t)=\frac{1}{2} \begin{bmatrix} e^{\frac{i \omega_0 t}{2}} & 0 \\ 0 &e^{\frac{-i \omega_0 t}{2}} \end{bmatrix} \begin{bmatrix} \cos \omega_1 t & -i\sin \omega_1 t\\ i\sin \omega_1 t&-\cos \omega_1 t\end{bmatrix} \begin{bmatrix} e^{\frac{-i \omega_0 t}{2}} & 0 \\ 0 &e^{\frac{i \omega_0 t}{2}} \end{bmatrix} =\frac{1}{2}\begin{bmatrix} \cos \omega_1 t & -i\sin \omega_1 t\, e^{i \omega_0 t}\\ i\sin \omega_1 t\, e^{-i \omega_0 t}&-\cos \omega_1 t \end{bmatrix}$

Now let's look at the observables Ix and Iy.
$\langle I_x\rangle = Tr[I_x \rho(t)]=Tr\left[ \begin{bmatrix} 0 & 1/2 \\ 1/2 & 0 \end {bmatrix}\, \frac{1}{2}\begin{bmatrix} \cos \omega_1 t & -i\sin \omega_1 t\, e^{i \omega_0 t}\\ i\sin \omega_1 t\, e^{-i \omega_0 t}&-\cos \omega_1 t \end{bmatrix}\right]$

$=\frac{\sin \omega_1t \,\sin \omega_0 t}{2}$

Then $\langle I_y\rangle$ is

$\langle I_y\rangle=\frac{\sin \omega_1 t \,\cos \omega_0 t}{2}$

and $\langle I_z\rangle$ is

$\langle I_z\rangle=\frac{\cos \omega_1 t}{2}$

Operator Approach

Alternatively we can use the operator approach, which is much less mathematically intensive.

$\sigma(0)=\rho (0) =I_z$

$H_{int}=-\omega_1 I_x$

$\sigma (t)=e^{-i H_{int} t} \sigma(0) e^{i H_{int} t}=\cos \omega_1 t\, I_z +\sin \omega_1 t\, I_y$

$\rho (t) = R(t) \sigma(t) R^{\dagger} (t) = \cos \omega_1 t\, I_z + \sin \omega_1 t \cos\omega_0t\, I_y + \sin\omega_1 t \sin \omega_0 t\, I_x$

Using $Tr [I_n I_m] =\frac{1}{2} \delta_{nm}$ we obtain the same expectation values as before.
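Both routes can be checked by brute force (a sketch; the frequency values are arbitrary illustration choices):

```python
import numpy as np

# Sketch: build rho(t) from the propagators and evaluate <I_n> = Tr[I_n rho(t)],
# then compare against the closed-form expectation values derived above.
w0, w1 = 2 * np.pi * 5.0, 2 * np.pi * 0.5     # arbitrary illustration values
Ix = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Iy = np.array([[0, -0.5j], [0.5j, 0]])
Iz = np.diag([0.5, -0.5]).astype(complex)

def rho(t):
    c, s = np.cos(w1 * t / 2), np.sin(w1 * t / 2)
    U = np.array([[c, 1j * s], [1j * s, c]])          # rotating-frame propagator
    R = np.diag([np.exp(1j * w0 * t / 2), np.exp(-1j * w0 * t / 2)])
    return R @ (U @ Iz @ U.conj().T) @ R.conj().T     # rho(0) = I_z

t = 0.123
r = rho(t)
mx = np.trace(Ix @ r).real     # sin(w1 t) sin(w0 t) / 2
my = np.trace(Iy @ r).real     # sin(w1 t) cos(w0 t) / 2
mz = np.trace(Iz @ r).real     # cos(w1 t) / 2
```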
Relaxation in NMR is a fundamental concept which describes the coherence loss of the magnetization in the x-y plane and the recovery of magnetization along the z-axis. One can easily imagine that in the absence of any other effects, the magnetization in the x-y plane will recover along the z-axis as the external magnetic field forces the spins to align with it. This is known as T1 relaxation. The loss of coherence of the magnetization in the x-y plane is due to spins "forgetting" their orientation with respect to the bulk magnetization. This is known as T2 relaxation. There are many factors that contribute to the relaxation processes. This page will examine the different types of relaxation at a basic level. For those interested in a deeper discussion, please see the article on T1 and T2 relaxation.

T1 Relaxation

T1 relaxation, also known as spin-lattice or longitudinal relaxation, is characterized by the time constant at which ~63% of the magnetization has recovered to equilibrium. The T1 of a given spin is dictated by field fluctuations (both magnetic and electric) that occur in the sample. Consequently, T1 measurements can tell us important information regarding inter- and intramolecular dynamics of the system. Several factors may cause this fluctuating field: molecular motion, J-coupling, dipolar coupling, chemical shift anisotropy, and quadrupole-phonon interactions.

We can also look at relaxation from the energy level standpoint. Initially, we have a Boltzmann distribution of spins (for I=1/2) across two energy levels, with the lower energy level slightly more populated than the higher energy level. After an inversion pulse these spin populations are inverted, and the higher energy level has more spins in it. Eventually, these spins will go back to their lower energy state due to relaxation. The timescale on which this occurs is T1.
From the Bloch equations, we know that the magnetization along the z-axis recovers as

$M_z=M_0\left(1-e^{-t/T_1}\right)$

T1$\rho$ Relaxation

T1$\rho$ is the relaxation of the magnetization during a spin lock. This becomes particularly important in cross-polarization experiments.

T2 Relaxation

T2 relaxation is also known as spin-spin or transverse relaxation. T2 relaxation involves energy transfer between interacting spins via dipole and exchange interactions; in spin-spin relaxation, energy is transferred to a neighboring nucleus. The time constant for this process is called the spin-spin relaxation time (T2). The relaxation rate is proportional to the concentration of paramagnetic ions in the sample. This mechanism is largely temperature independent. T2 values are generally much less dependent on field strength, B, than T1 values.

In the process of relaxation, the component of magnetization in the xy-plane decays to zero as M0 returns to the z-axis. The time constant T2 describes spin-spin relaxation with the following function:

$M_{xy}=M_{xy,0}\,e^{-t/T_2}$

T2* Relaxation

In an ideal NMR spectrometer, the external magnetic field is completely homogeneous. However, all magnets have small inhomogeneities in them. Consequently, nuclei experience different magnetic fields, which changes the precessional frequency of each nucleus. The spread in Larmor frequencies results in dephasing in the transverse plane and is known as T2*.

Magnetic Field Dependence

So far, we have only described what the relaxation parameters are, but have largely omitted the equations that give us the results. All relaxation measurements are based on the idea of a correlation time, $t_c$. The correlation time is based on the environment that the spin is in.
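The two exponential laws above can be sketched numerically (the time constants are assumed illustration values, not from the text):

```python
import numpy as np

# Sketch of the T1 recovery and T2 decay curves from this section.
M0, T1, T2 = 1.0, 1.5, 0.08        # arbitrary units / seconds (assumed)

def Mz(t):
    return M0 * (1.0 - np.exp(-t / T1))   # longitudinal recovery

def Mxy(t):
    return M0 * np.exp(-t / T2)           # transverse decay

# after t = T1, Mz has recovered to 1 - 1/e ~ 63% of M0 (cf. Problem 1 below)
frac_at_T1 = Mz(T1) / M0
```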
For $T_1$ relaxation, the correlation time must be on the order of the inverse of the Larmor frequency of the nucleus, while the correlation time for $T_2$ must be on the order of the inverse of the linewidth to have an effect. The correlation time characterizes the random fluctuations in the local magnetic field that couple to the nucleus. The equations that govern $T_1$ and $T_2$ are given below without proof:

\begin{align*} \frac{1}{T_{1}} &=C \left(\frac{t_{c}}{1+w^{2}t_{c}^{2}}+\frac{2t_{c}}{1+4w^{2}t_{c}^{2}}\right) \\[4pt] \frac{1}{T_{2}} &=\frac{C}{2} \left(3t_{c}+\frac{5t_{c}}{1+w^{2}t_{c}^{2}}+\frac{2t_{c}}{1+4w^{2}t_{c}^{2}}\right) \end{align*}

where $w$ is the Larmor frequency and $C$ is a constant that contains temperature- and frequency-independent terms.

Problems

1. Derive why only ~63% of spins have recovered to equilibrium after T1.
2. Explain the differences between T1, T1$\rho$, T2, and T2*.

Nuclear magnetic resonance (NMR) is an analytical technique used in chemistry to help identify chemical compounds, obtain information on the geometry and orientation of molecules, and study the chemical equilibrium of species undergoing physical changes of composition, among many other applications. Capitalizing on the ability to manipulate the magnetization through different pulse programs in NMR allows for the study and understanding of the kinetics of a system. The exchange rates between two sites can be evaluated through dynamic nuclear magnetic resonance (DNMR) experiments. 17O is a common NMR-active nucleus that is used in the study of kinetics.

Introduction

NMR uses radio frequency radiation to change the direction of nuclear spins that have been placed in a static magnetic field, and measures the change of magnetization as a function of time. Since its discovery, NMR has gone through many advancements that have enabled it to become a very useful analytical technique.
Fourier transform NMR has enabled more complicated studies through the ability to create pulse programs that manipulate the spectra, such as saturating one species' magnetization so that no peak is produced. These pulse programs can also be used to tip the spins of certain nuclei while keeping others along the z-axis. This is useful for many applications, including being able to quench signals, change the sign (positive or negative) of a signal, and track relaxation, to name a few examples.

Using different pulse programs allows for the study of exchange rates between species. This is done by monitoring the changes in the environment of the NMR-active nuclei as a result of exchange between the sites. Because of the exchange, spins (magnetization) will be transferred, leading to changes in the bulk magnetization at both sites. Any NMR-active nucleus can be used to study exchange rates, such as 13C, 1H, or 17O, but 17O kinetic studies are often performed. This is because 17O-enriched water can be used as one of the exchange sites, normally the bulk solvent site.

17O NMR for Kinetics Studies

Background and Equations

Oxygen-17 nuclei have a spin of 5/2, making them amenable to nuclear magnetic resonance. This isotope of oxygen is only 0.0373% naturally abundant, but using isotopically labeled oxygen compounds can yield useful information. Studying these nuclei in the presence of a magnetic field will provide information about the structure and environment of the oxygens in the molecule. Using dynamic NMR (DNMR), 17O NMR experiments can be performed to understand the chemical reactivity and kinetics of compounds. DNMR studies the effect of a chemical exchange between two sites that have either a different chemical shift or a different coupling constant. These studies are done by obtaining NMR spectra over time and analyzing the increase and/or decrease of the signals.
Unlike other methods that are used to study kinetics, NMR studies can acquire information about the effects of the exchange on the molecules. To utilize NMR spectra to establish kinetic information, the Bloch equations must be adapted to include terms that take into account relaxation as a result of chemical reactivity. While investigating the exchange of 17O water between two sites, the bulk water and water bound to a metal, it is assumed that the kinetics are first order, such that:

$\dfrac{du_M}{dt}=-\overrightarrow{k}_M u_M +\overleftarrow{k} _W u_W$

$\dfrac{du_W}{dt}=-\overleftarrow{k}_W u_W + \overrightarrow{k}_M u_M$

where $\overrightarrow{k}_M$ and $\overleftarrow{k}_W$ represent the rates of exchange between the bulk water and the bound water. These two sites can be said to be coupled because the isotopically enriched oxygen is exchanging between the metal site and the bulk water site. As exchange occurs, the magnetization of the 17O metal ensemble and the 17O water ensemble will change, not only due to magnetization relaxation, but also due to the exchange. The exchange rate terms can be added into the Bloch equations to take this into account. With the addition of these terms, the equations are known as the Bloch-McConnell equations. Since there are two sites and three Bloch equations per site, there is a total of six equations for the change in magnetization of the system. Equations 1-3 are for the metal site, while Equations 4-6 are for the bulk water site.
Bloch-McConnell Equations for Metal Site (Equations 1-3) $\dfrac{du_M}{dt} = v_M(\omega_{rf} - \omega_o)-\dfrac{u_M}{T_{2M}} -\overrightarrow{k}_M u_M + \overleftarrow{k}_W u_W$ $\dfrac{dv_M}{dt} = -u_M(\omega_{rf} - \omega_o)-\dfrac{v_M}{T_{2M}} - \overrightarrow{k}_M v_M + \overleftarrow{k}_W v_W$ $\dfrac{dm_{zM}}{dt} = v_M \omega_1-\dfrac{(m_{zM}-m_o)}{T_{1M}} - \overrightarrow{k}_M m_{zM} + \overleftarrow{k}_W m_{zW}$ Bloch-McConnell Equations for Bulk Water Site (Equations 4-6) $\dfrac{du_W}{dt} = v_W(\omega_{rf} - \omega_o)-\dfrac{u_W}{T_{2W}} - \overleftarrow{k}_W u_W + \overrightarrow{k}_M u_M$ $\dfrac{dv_W}{dt} = -u_W(\omega_{rf} - \omega_o)-\dfrac{v_W}{T_{2W}} - \overleftarrow{k}_W v_W + \overrightarrow{k}_M v_M$ $\dfrac{dm_{zW}}{dt} = v_W \omega_1-\dfrac{(m_{zW}-m_o)}{T_{1W}} - \overleftarrow{k}_W m_{zW} + \overrightarrow{k}_M m_{zM}$ Analyzing the NMR spectrum, which is obtained by measuring the magnetization in the x-y plane, requires an equation that describes how the magnetization in the x-y plane changes as a function of time. In the rotating frame, the total magnetization in the x-y plane comprises two components, the "real" part u and the "imaginary" part v. The total magnetization in the x-y plane can therefore be expressed as $m_{xy}=u+iv$, or $u=m_{xy}-iv$. Taking the derivative of this equation with respect to time leads to $\frac{dm_{xy}}{dt}=\frac{du}{dt}+i\frac{dv}{dt}$. Using these relationships, the Bloch equations for the two sites can be simplified and rearranged to give the magnetization in the x-y plane as a function of time. Invoking the law of detailed balance, which states that the exchange rate of the metal site times the amount of 17O at that site is equal to the exchange rate of the bulk water site times the amount of 17O at that site, eliminates one of the rate coefficients and simplifies the equations even further, giving Equation 7. Equation 7:
Introduction The Solomon equations describe the relaxation between two coupled spins, I and S. This page develops a theoretical treatment beginning from the Bloch equations and investigates the ramifications of the relaxation and the consequences for the overall relaxation rates. Theoretical Treatment We begin by describing a system of two spins, I and S. Each spin has two states, low energy $\alpha$ and high energy $\beta$, so we can construct a 4-level energy system for the two spins. This is shown in the figure below. A spin can undergo allowed relaxation-induced transitions between the $\alpha$ and $\beta$ states, which occur at rates $W_I^1$ and $W_S^1$, as well as forbidden relaxation-induced transitions ($W^2$ and $W^0$), which are also shown in the figure below. Of the forbidden transitions, $W^2$ corresponds to both spins flipping in the same sense, known as a flip-flip transition, while $W^0$ is where the two spins flip in opposite senses, known as a flip-flop transition. The superscripts denote the order of the transition (zero-, single-, or double-quantum). We can now define the change in the population of each level, with 1 = $\alpha \alpha$, 2 = $\beta \alpha$, 3 = $\alpha \beta$, and 4 = $\beta \beta$, during a given amount of time. For example, for energy level 1, the population is initially $n_1^0$. As time progresses, some of this population will go to energy levels 2, 3, and 4 at rates $W_I^1$, $W_S^1$, and $W^2$, respectively. Meanwhile the state will gain population from states 2, 3, and 4 at the same rates. It may seem that this would give a net result of zero, but the populations of the states are given by the Boltzmann distribution, and each rate of transfer depends on the number of spins in the given energy level. Therefore we may now write: $\dfrac{dn_1}{dt}=-W_S^1n_1-W_I^1n_1-W^2n_1+W_S^1n_3+W_I^1n_2+W^2n_4$ (eq. 1) The rates for the other energy levels may then be written as: $\dfrac{dn_2}{dt}=+W_I^1n_1+W_S^2n_4+W^0n_3-W_S^2n_2-W_I^1n_2-W^0n_2$ (eq. 2) $\dfrac{dn_3}{dt}=+W_S^1n_1+W_I^2n_4+W^0n_2-W_I^2n_3-W_S^1n_3-W^0n_3$ (eq. 3) $\dfrac{dn_4}{dt}=-W_S^2n_4-W_I^2n_4-W^2n_4+W_I^2n_3+W_S^2n_2+W^2n_1$ (eq. 4) We can then calculate the spin magnetization for the I and S spins along the z axis. We use the z axis because we are concerned with the recovery of the magnetization to its equilibrium value. The net magnetization for spin I is the difference in populations across the two I-spin transitions, expressed as $I_z=n_1-n_2+n_3-n_4$ (eq. 5), and for spin S, $S_z=n_1-n_3+n_2-n_4$ (eq. 6). Further, we can define the difference in populations between the 2 I-spin transitions, $n_1-n_2-n_3+n_4$, which is the same as for the 2 S-spin transitions, and denote it 2IzSz: $2I_zS_z=n_1-n_2-n_3+n_4$ (eq. 7). The total population of the system is then given by $T=n_1+n_2+n_3+n_4$ (eq. 8). Each $n_i$ can then be re-written as a combination of Iz, Sz, 2IzSz, and T such that $n_1=\dfrac{1}{4}(T+I_z+S_z+2I_zS_z)$ (eq. 9) $n_2=\dfrac{1}{4}(T-I_z+S_z-2I_zS_z)$ (eq. 10) $n_3=\dfrac{1}{4}(T+I_z-S_z-2I_zS_z)$ (eq. 11) $n_4=\dfrac{1}{4}(T-I_z-S_z+2I_zS_z)$ (eq. 12) which can be verified by plugging equations 5-8 into equations 9-12. Derivation of the Solomon Equations The Solomon equations express the time derivatives of the operators Iz, Sz, and 2IzSz in terms of the transition rates W and the operators themselves. We take the following approach to derive each one: 1) Take the time derivative of the operator. 2) Express the derivative in terms of W by substituting equations 1-4 for $\dfrac{dn_i}{dt}$. 3) Group the $n_i$ terms. 4) Substitute equations 9-12 for $n_i$ and group the like W terms. This gives the final equations. Each step below is numbered according to this list.
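Before stepping through the algebra, the population relations (eqs. 5-12) can be checked numerically with arbitrary populations:

```python
import numpy as np

# Numerical check that eqs. 9-12 invert eqs. 5-8, using random populations.
rng = np.random.default_rng(0)
n1, n2, n3, n4 = rng.random(4)

Iz = n1 - n2 + n3 - n4   # eq. 5
Sz = n1 - n3 + n2 - n4   # eq. 6
X  = n1 - n2 - n3 + n4   # eq. 7, the quantity denoted 2IzSz
T  = n1 + n2 + n3 + n4   # eq. 8

m1 = 0.25 * (T + Iz + Sz + X)   # eq. 9
m2 = 0.25 * (T - Iz + Sz - X)   # eq. 10
m3 = 0.25 * (T + Iz - Sz - X)   # eq. 11
m4 = 0.25 * (T - Iz - Sz + X)   # eq. 12

# The reconstructed populations match the originals exactly.
assert np.allclose([m1, m2, m3, m4], [n1, n2, n3, n4])
```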
Derivation of Iz 1) $\dfrac{dI_z}{dt}=\dfrac{dn_1}{dt}-\dfrac{dn_2}{dt}+\dfrac{dn_3}{dt}-\dfrac{dn_4}{dt}$ 2) $=-W_S^1n_1-W_I^1n_1-W^2n_1+W_S^1n_3+W_I^1n_2+W^2n_4$ $-(+W_I^1n_1+W_S^2n_4+W^0n_3-W_S^2n_2-W_I^1n_2-W^0n_2)$ $+(+W_S^1n_1+W_I^2n_4+W^0n_2-W_I^2n_3-W_S^1n_3-W^0n_3)$ $-(-W_S^2n_4-W_I^2n_4-W^2n_4+W_I^2n_3+W_S^2n_2+W^2n_1)$ 3) $\dfrac{dI_z}{dt}=-2n_1[W_I^1+W^2]+2n_2[W_I^1+W^0]-2n_3[W_I^2+W^0]+2n_4[W_I^2+W^2]$ 4) $W^2[-2\dfrac{1}{4}(T+I_z+S_z+2I_zS_z)+2\dfrac{1}{4}(T-I_z-S_z+2I_zS_z)]=W^2(-I_z-S_z)$ $W_I^1[-2\dfrac{1}{4}(T+I_z+S_z+2I_zS_z)+2\dfrac{1}{4}(T-I_z+S_z-2I_zS_z)]=W_I^1(-I_z-2I_zS_z)$ $W^0[2\dfrac{1}{4}(T-I_z+S_z-2I_zS_z)-2\dfrac{1}{4}(T+I_z-S_z-2I_zS_z)]=W^0(-I_z+S_z)$ $W_I^2[-2\dfrac{1}{4}(T+I_z-S_z-2I_zS_z)+2\dfrac{1}{4}(T-I_z-S_z+2I_zS_z)]=W_I^2(-I_z+2I_zS_z)$ $\dfrac{dI_z}{dt}=-(W_I^1+W_I^2+W^2+W^0)I_z-(W^2-W^0)S_z-(W_I^1-W_I^2)2I_zS_z$ Derivation of Sz $S_z=n_1-n_3+n_2-n_4.$ 1) $\dfrac{dS_z}{dt}=\dfrac{dn_1}{dt}-\dfrac{dn_3}{dt}+\dfrac{dn_2}{dt}-\dfrac{dn_4}{dt}$ 2) $\dfrac{dS_z}{dt}=[-W_S^1n_1-W_I^1n_1-W^2n_1+W_S^1n_3+W_I^1n_2+W^2n_4]$ $-[+W_S^1n_1+W_I^2n_4+W^0n_2-W_I^2n_3-W_S^1n_3-W^0n_3]$ $+[W_I^1n_1+W_S^2n_4+W^0n_3-W_S^2n_2-W_I^1n_2-W^0n_2]$ $-[-W_S^2n_4-W_I^2n_4-W^2n_4+W_I^2n_3+W_S^2n_2+W^2n_1]$ 3) $\dfrac{dS_z}{dt}=-2n_1[W_S^1+W^2]-2n_2[W^0+W_S^2]+2n_3[W_S^1+W^0]+2n_4[W^2+W_S^2]$ 4) $2\dfrac{1}{4}W_S^1[(-T-I_z-S_z-2I_zS_z)+(T+I_z-S_z-2I_zS_z)]=W_S^1(-S_z-2I_zS_z)$ $2\dfrac{1}{4}W^2[(-T-I_z-S_z-2I_zS_z)+(T-I_z-S_z+2I_zS_z)]=-W^2(I_z+S_z)$ $2\dfrac{1}{4}W^0[(-T+I_z-S_z+2I_zS_z)+(T+I_z-S_z-2I_zS_z)]=W^0(I_z-S_z)$ $2\dfrac{1}{4}W_S^2[(-T+I_z-S_z+2I_zS_z)+(T-I_z-S_z+2I_zS_z)]=W_S^2(-S_z+2I_zS_z)$ $\dfrac{dS_z}{dt}=-S_z(W_S^1+W^2+W^0+W_S^2)+2I_zS_z(W_S^2-W_S^1)+I_z(W^0-W^2)$ Deriving 2IzSz $2I_zS_z=n_1-n_2-n_3+n_4.$ 1) $\dfrac{d(2I_zS_z)}{dt}=\dfrac{dn_1}{dt}-\dfrac{dn_2}{dt}-\dfrac{dn_3}{dt}+\dfrac{dn_4}{dt}$ 2) $\dfrac{d(2I_zS_z)}{dt}=[-W_S^1n_1-W_I^1n_1-W^2n_1+W_S^1n_3+W_I^1n_2+W^2n_4]$ $-[W_I^1n_1+W_S^2n_4+W^0n_3-W_S^2n_2-W_I^1n_2-W^0n_2]$
$-[W_S^1n_1+W_I^2n_4+W^0n_2-W_I^2n_3-W_S^1n_3-W^0n_3]$ $+[-W_S^2n_4-W_I^2n_4-W^2n_4+W_I^2n_3+W_S^2n_2+W^2n_1]$ 3) $\dfrac{d(2I_zS_z)}{dt}=-2n_1(W_S^1+W_I^1)+2n_2(W_I^1+W_S^2)+2n_3(W_S^1+W_I^2)-2n_4(W_S^2+W_I^2)$ 4) $W_S^1[-2\dfrac{1}{4}(T+I_z+S_z+2I_zS_z)+2\dfrac{1}{4}(T+I_z-S_z-2I_zS_z)]=-W_S^1[S_z+2I_zS_z]$ $W_I^1[-2\dfrac{1}{4}(T+I_z+S_z+2I_zS_z)+2\dfrac{1}{4}(T-I_z+S_z-2I_zS_z)]=-W_I^1[I_z+2I_zS_z]$ $W_S^2[2\dfrac{1}{4}(T-I_z+S_z-2I_zS_z)-2\dfrac{1}{4}(T-I_z-S_z+2I_zS_z)]=W_S^2[S_z-2I_zS_z]$ $W_I^2[2\dfrac{1}{4}(T+I_z-S_z-2I_zS_z)-2\dfrac{1}{4}(T-I_z-S_z+2I_zS_z)]=W_I^2[I_z-2I_zS_z]$ $\dfrac{d(2I_zS_z)}{dt}=S_z(W_S^2-W_S^1)+I_z(W_I^2-W_I^1)-2I_zS_z(W_S^2+W_S^1+W_I^2+W_I^1)$ Relaxation Mechanisms of the Solomon Equations The Solomon equations for the operators Iz, Sz, and 2IzSz are collected below (in case the reader skipped the derivation): $\dfrac{dI_z}{dt}=-(W_I^1+W_I^2+W^2+W^0)I_z-(W^2-W^0)S_z-(W_I^1-W_I^2)2I_zS_z$ $\dfrac{dS_z}{dt}=-S_z(W_S^1+W^2+W^0+W_S^2)+2I_zS_z(W_S^2-W_S^1)+I_z(W^0-W^2)$ $\dfrac{d(2I_zS_z)}{dt}=S_z(W_S^2-W_S^1)+I_z(W_I^2-W_I^1)-2I_zS_z(W_S^2+W_S^1+W_I^2+W_I^1)$ These equations are not quite complete: the treatment above describes relaxation toward zero rather than toward thermal equilibrium. Re-writing each spin operator as its deviation from the equilibrium value, with $I_z^0$ and $S_z^0$ the equilibrium magnetizations (2IzSz relaxes toward zero), gives $\dfrac{d(I_z-I_z^0)}{dt}=-(W_I^1+W_I^2+W^2+W^0)(I_z-I_z^0)-(W^2-W^0)(S_z-S_z^0)-(W_I^1-W_I^2)2I_zS_z$ $\dfrac{d(S_z-S_z^0)}{dt}=-(S_z-S_z^0)(W_S^1+W^2+W^0+W_S^2)+2I_zS_z(W_S^2-W_S^1)+(I_z-I_z^0)(W^0-W^2)$ $\dfrac{d(2I_zS_z)}{dt}=(S_z-S_z^0)(W_S^2-W_S^1)+(I_z-I_z^0)(W_I^2-W_I^1)-2I_zS_z(W_S^2+W_S^1+W_I^2+W_I^1)$ Self Relaxation The Solomon equation for a given spin, written here for Iz but equally applicable to Sz and 2IzSz, shows that the rate of relaxation depends on Iz, Sz, and 2IzSz.
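The deviation-form equations can be integrated numerically. The following sketch uses arbitrary, illustrative rate constants and the simplification $W_I^1=W_I^2$ and $W_S^1=W_S^2$ (which makes the 2IzSz terms vanish); holding S saturated, it recovers the steady-state enhancement of I produced by the cross-relaxation term.

```python
# Steady-state cross-relaxation sketch: spin S is held saturated (Sz = 0)
# while Iz evolves. Rates are illustrative, not measured values.
WI, W2, W0 = 1.0, 0.4, 0.1   # assumed: W_I^1 = W_I^2 = WI
R_I   = 2 * WI + W2 + W0     # self-relaxation rate of spin I
sigma = W2 - W0              # cross-relaxation rate sigma_IS

Iz0, Sz0 = 1.0, 1.0          # equilibrium z-magnetizations
Iz = 1.0                     # I starts at its equilibrium value

dt = 1e-3
for _ in range(20000):       # integrate dIz/dt to t = 20 s
    Iz += (-R_I * (Iz - Iz0) - sigma * (0.0 - Sz0)) * dt

eta = (Iz - Iz0) / Iz0       # steady-state enhancement of I
print(eta)                   # -> sigma / R_I = 0.12
```

The fixed point of the equation, Iz = Iz0 + (sigma/R_I) Sz0, is the familiar steady-state nuclear Overhauser enhancement: it vanishes when W2 = W0, as discussed below.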
The rate constant multiplying Iz corresponds to the self-relaxation of the spin magnetization on its own, with overall rate $W_I^1+W_I^2+W^2+W^0$. Note that although the rates $W^2$ and $W^0$ describe cross-relaxation processes that require a spin S, this term depends only on $I_z$ and not on the state of any other spin in the system. The sum of these rate constants is denoted $R_I$. Cross-Relaxation The term $(W^0-W^2)S_z$ in the Solomon equation for Iz corresponds to the rate at which Sz magnetization is transferred to Iz magnetization, a phenomenon known as cross-relaxation. To illustrate the effect, take $W^2=W^0=0$: in this case, relaxation of spin I is independent of spin S. On the other hand, if $W^2$ and/or $W^0\neq0$ but spin S is not perturbed (i.e., $S_z-S_z^0=0$), cross-relaxation will not occur either. The cross-relaxation rate is denoted $\sigma_{IS}$. 2IzSz Relaxation The relaxation associated with $W_I^1-W_I^2$ is the transfer of magnetization from 2IzSz to I, denoted $\Delta_{I}$, which occurs only if the two rate constants differ. Meanwhile the operator 2IzSz also shows self-relaxation, with rate $R_{IS}=W_I^1+W_I^2+W_S^1+W_S^2$. Spin-Spin Relaxation Page under construction.
Spin-Lattice Relaxation Page under construction!
Page under construction! Irreducible tensor operators are extremely valuable for reducing the complex mathematical problems commonly found in NMR. This page is devoted to deriving the tensors and showing how to use them in calculations. Cartesian Rotations Irreducible tensor operators operate on angular momenta, which is a fairly abstract concept. We begin this discussion by investigating rotations in 2 and 3 dimensions, so that the reader has a physical picture before moving into a complex quantum mechanical treatment. 2D Rotations Let's begin with the simple example of the hand of a clock. The clock hand is a vector which has a magnitude (the length of the hand) and a direction (the direction the arrow is pointing). The vector rotates in a 2D plane (the clock face). We define our Cartesian axes so that the y axis lies along 12 and the x axis along 3 on the clock. Initially, let's say it is 3:00 (t = 0) and the vector lies along the x-axis and rotates counter-clockwise; then some time later (t = 2.2 hrs) the vector points along some arbitrary angle $\theta$ from the initial x-axis. Taking a snapshot of the vector at this time, we can see that the vector can now be described by $\vec{r}=a\hat{i}+b\hat{j}=r\cos(\theta)\,\hat{i} + r\sin(\theta)\,\hat{j}$ We can describe this vector's position using any pair of axes for x and y, as long as they remain orthogonal. If, for example, the axes are rotated by $\phi$, then the vector at t = 2.2 hrs is $\vec{r}=c\hat{i}'+d\hat{j}'=r\cos(\theta-\phi)\,\hat{i}' + r\sin(\theta-\phi)\,\hat{j}'$ Using the following trigonometric identities, we can solve for a, b, c, and d.
$\cos(\theta-\phi)=\cos\theta \cos\phi + \sin\theta \sin\phi$ $\sin(\theta-\phi)=\sin\theta \cos\phi - \cos\theta \sin\phi$ It then follows that $a=r\cos\theta$ $b=r\sin\theta$ $c=r\cos(\theta-\phi)=r\cos\theta \cos\phi+r\sin\theta \sin\phi= a\cos\phi +b\sin\phi$ $d=r\sin(\theta-\phi)=r\sin\theta \cos\phi- r\cos\theta \sin\phi= b\cos\phi -a\sin\phi$ From inspection we can see that this can readily be written as a matrix equation $\begin{pmatrix} c\\d \end{pmatrix}=\begin{pmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{pmatrix}\begin{pmatrix} a\\b \end{pmatrix}$ In other words, we can rotate r into r' by applying a rotation transformation R, $r'=Rr$, where R has the property $RR^{\dagger}=1$, leading to $\begin{pmatrix} a\\b \end{pmatrix}=\begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}\begin{pmatrix} c\\d \end{pmatrix}$ 3D Rotations In 3 dimensions, rotations are slightly more complex: we need to specify another dimension in which the object can be rotated. To do this we employ three angles $\alpha, \beta, \gamma$, commonly referred to as the Euler angles. $\alpha$ is rotation about the original z-axis, $\beta$ is rotation about y', and $\gamma$ is rotation about z'. How the angles correspond to rotations is shown below.
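Before moving to three dimensions, the 2D rotation matrix above can be checked numerically; the angle and vector below are arbitrary.

```python
import numpy as np

phi = 0.7   # arbitrary axis-rotation angle (radians)
R = np.array([[ np.cos(phi), np.sin(phi)],
              [-np.sin(phi), np.cos(phi)]])

a, b = 3.0, 4.0                  # components (a, b) in the original axes
c, d = R @ np.array([a, b])      # components (c, d) in the rotated axes

assert np.allclose(R @ R.T, np.eye(2))             # R R^dagger = 1
assert np.isclose(np.hypot(c, d), np.hypot(a, b))  # the length r is preserved
# The transpose (inverse) rotation takes (c, d) back to (a, b):
assert np.allclose(R.T @ np.array([c, d]), [a, b])
```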
Our vector, r, may now be described as $\vec{r}=a\hat{i} +b\hat{j} + c\hat{k}$ We can now write down the rotation matrix for each of the three Euler rotations. Rotation about the z axis by $\alpha$: $R_z(\alpha)=\begin{pmatrix} \cos\alpha & \sin\alpha & 0 \\ -\sin\alpha &\cos\alpha &0 \\ 0&0&1\end{pmatrix}$ Rotation about the y axis by $\beta$: $R_y(\beta)=\begin{pmatrix} \cos\beta & 0 & -\sin\beta \\ 0 &1 &0 \\ \sin\beta &0&\cos\beta\end{pmatrix}$ Rotation about the (new) z axis by $\gamma$: $R_z(\gamma)=\begin{pmatrix} \cos\gamma & \sin\gamma & 0 \\ -\sin\gamma &\cos\gamma &0 \\ 0&0&1\end{pmatrix}$ The rotation of the original axis system into the new axis system is then the product of the three: $R(\alpha,\beta,\gamma)=R_z(\gamma)R_y(\beta)R_z(\alpha)=\begin{pmatrix} \cos\gamma & \sin\gamma & 0 \\ -\sin\gamma &\cos\gamma &0 \\ 0&0&1\end{pmatrix} \begin{pmatrix} \cos\beta & 0 & -\sin\beta \\ 0 &1 &0 \\ \sin\beta &0&\cos\beta\end{pmatrix} \begin{pmatrix} \cos\alpha & \sin\alpha & 0 \\ -\sin\alpha &\cos\alpha &0 \\ 0&0&1\end{pmatrix}$ Angular Momentum Rotations J=1/2 In a similar manner we can rotate angular momentum states. Just as we developed a rotation operator R for Cartesian space, we can define a new rotation operator that acts on angular momentum states, using the Euler angles introduced for 3D rotations.
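The Cartesian z-y-z composition can be sketched numerically before moving to the quantum operators; the angles below are arbitrary.

```python
import numpy as np

def Rz(t):
    # Rotation about z by angle t (same convention as the 2D matrix above)
    return np.array([[ np.cos(t), np.sin(t), 0.0],
                     [-np.sin(t), np.cos(t), 0.0],
                     [ 0.0,       0.0,       1.0]])

def Ry(t):
    # Rotation about y by angle t
    return np.array([[ np.cos(t), 0.0, -np.sin(t)],
                     [ 0.0,       1.0,  0.0],
                     [ np.sin(t), 0.0,  np.cos(t)]])

alpha, beta, gamma = 0.3, 0.8, 1.1      # arbitrary Euler angles
R = Rz(gamma) @ Ry(beta) @ Rz(alpha)    # z-y-z Euler composition

assert np.allclose(R @ R.T, np.eye(3))    # orthogonal: R R^dagger = 1
assert np.isclose(np.linalg.det(R), 1.0)  # proper rotation (det = +1)
```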
$D(\alpha, \beta, \gamma)=e^{i\gamma J_z} e^{i\beta J_y} e^{i\alpha J_z}$ Let's examine what happens when D operates on $|J,m\rangle$: $D(\alpha, \beta, \gamma)|J,m\rangle=\sum_{J',m'} |J',m'\rangle\langle J',m'|D(\alpha, \beta, \gamma)|J,m\rangle$ The operator cannot mix different J values, reducing the above equation to $D(\alpha, \beta, \gamma)|J,m\rangle=\sum_{m'}\langle J,m'|D(\alpha, \beta, \gamma)|J,m\rangle |J,m'\rangle = \sum_{m'} D_{m',m}^{(J)}(\alpha, \beta, \gamma)|J,m'\rangle$ where $D_{m',m}^{(J)}(\alpha, \beta, \gamma)=\langle J,m'|D(\alpha, \beta, \gamma)|J,m\rangle$ which can be expanded as $D_{m',m}^{(J)}(\alpha, \beta, \gamma)=\sum_{n,n'} \langle J,m'|e^{i \gamma J_z}|J,n\rangle\langle J,n|e^{i \beta J_y}|J,n'\rangle\langle J,n'|e^{i \alpha J_z}|J,m\rangle$ It is then realized that $e^{i\alpha J_z}|J,m\rangle=[1+i\alpha J_z + \tfrac{(i\alpha J_z)^2}{2!}+\cdots]|J,m\rangle = [1+i\alpha m +\tfrac{(i \alpha m)^2}{2!}+\cdots]|J,m\rangle=e^{i \alpha m}|J,m\rangle$ Then $D_{m',m}^{(J)}(\alpha, \beta, \gamma)= e^{im'\gamma}\langle J,m'|e^{i\beta J_y}|J,m\rangle e^{i m \alpha}= e^{im'\gamma}d_{m',m}^{(J)} (\beta) e^{i m \alpha}$ where $d_{m',m}^{(J)} (\beta) =\langle J,m'|e^{i\beta J_y}|J,m\rangle$ We now must calculate $d_{m',m}^{(J)} (\beta)$ for an arbitrary J. Let's examine the case J = 1/2 for the rotation operator $d^{(1/2)}(\beta)=e^{i\beta J_y}$. Taking a Taylor expansion gives $e^{i\beta J_y}=1+i\beta J_y -\frac{1}{2!}\beta ^2 J_y ^2 -\frac{i}{3!} \beta ^3 J_y^3+\cdots$ Inserting the spin-1/2 matrix representation of $J_y$ and grouping the even and odd powers, $e^{i\beta J_y}=\begin{pmatrix} 1 &0 \\0&1\end{pmatrix} \left(1-\frac{(\beta/2)^2}{2!}+\cdots\right)+\begin{pmatrix} 0&1\\-1&0\end{pmatrix}\left(\frac{\beta}{2}-\frac{(\beta/2)^3}{3!}+\cdots\right)$ We then see that the two series are the Taylor expansions of cosine and sine, which can be rewritten as $d^{(1/2)}(\beta)= \begin{pmatrix} \cos(\beta/2)&\sin(\beta/2)\\-\sin(\beta/2) & \cos(\beta /2)\end{pmatrix}$ We have thus obtained the values for d: $d_{1/2,1/2}^{(1/2)}=\cos(\beta /2)$ $d_{1/2,-1/2}^{(1/2)}=\sin(\beta/ 2)$ J>1/2 Now we must calculate $d_{m',m}^{(J)} (\beta)$ for J > 1/2.
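The closed form just obtained for $d^{(1/2)}(\beta)$ can be checked against a direct matrix exponential of $i\beta J_y$ (computed here by eigendecomposition so only numpy is needed); the angle is arbitrary.

```python
import numpy as np

beta = 0.9   # arbitrary rotation angle (radians)
# Spin-1/2 matrix representation of Jy (in units of hbar)
Jy = 0.5 * np.array([[0.0, -1.0j],
                     [1.0j,  0.0]])

# Matrix exponential exp(i*beta*Jy) via eigendecomposition
w, V = np.linalg.eig(1j * beta * Jy)
d_half = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

# Compare against the closed form derived from the Taylor series
expected = np.array([[ np.cos(beta / 2), np.sin(beta / 2)],
                     [-np.sin(beta / 2), np.cos(beta / 2)]])
assert np.allclose(d_half, expected)
```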
To do this we must first uncouple $|J,m\rangle$ into its constituent momenta $j_1$ and $j_2$: $D(\alpha, \beta, \gamma)|J,m\rangle=\sum_{m'} D_{m',m}^{(J)}(\alpha, \beta, \gamma) |J,m'\rangle$ $=(-1)^{j_2-j_1-m} \sqrt{2J+1} \sum_{m_1,m_2} D(\alpha, \beta, \gamma) |j_1,m_1\rangle\, D(\alpha, \beta, \gamma) |j_2,m_2\rangle \begin{pmatrix} j_1 &j_2 & J \\ m_1 & m_2 & -m \end{pmatrix}$ $=(-1)^{j_2-j_1-m} \sqrt{2J+1} \sum_{m_1,m_2 , m'_1, m'_2} D^{(j_1)}_{m'_1 m_1}(\alpha, \beta, \gamma)\, D^{(j_2)}_{m'_2 m_2}(\alpha, \beta, \gamma) \begin{pmatrix} j_1 &j_2 & J \\ m_1 & m_2 & -m \end{pmatrix} | j_1,m'_1;j_2,m'_2\rangle$ Taking this result, left-multiplying by $\langle J,m'|$, and uncoupling $\langle J,m'|$ into the other j and m states results in the expression for $D^{(J)}_{m',m}(\alpha, \beta, \gamma)$: $D^{(J)}_{m',m}(\alpha, \beta, \gamma)=(-1)^{j_2-j_1-m} \sqrt{2J+1} \sum_{m_1,m_2 , m'_1, m'_2} D^{(j_1)}_{m'_1 m_1}(\alpha, \beta, \gamma)\, D^{(j_2)}_{m'_2 m_2}(\alpha, \beta, \gamma) \begin{pmatrix} j_1 &j_2 & J \\ m_1 & m_2 & -m \end{pmatrix} \langle J,m'| j_1,m'_1;j_2,m'_2\rangle$ $=(2J+1)\sum_{m_1,m_2} \begin{pmatrix} j_1 &j_2 & J \\ m_1 & m_2 & -m \end{pmatrix}\begin{pmatrix} j_1 &j_2 & J \\ m'_1 & m'_2 & -m' \end{pmatrix}D^{(j_1)}_{m'_1 m_1}(\alpha, \beta, \gamma)\, D^{(j_2)}_{m'_2 m_2}(\alpha, \beta, \gamma)$ From this result we can calculate $D^{(J)}_{m',m}(\alpha, \beta, \gamma)$ for any J value. This is important since the observables we wish to calculate in NMR are simply rotations of the angular momentum. Looking at the time-dependent Schrödinger equation, $\frac{d}{dt}|\psi (t)\rangle=-iH|\psi (t)\rangle$, with solution $|\psi (t)\rangle=e^{-iHt}|\psi (0)\rangle$ Let's now examine what we can do by rotating the angular momentum state |J,m> to |J,m'>. Looking at the z component of J, we know $\langle J,m'|J_z|J,m\rangle=m\delta_{m',m}$, which results in a diagonal matrix for $J_z$, the diagonal elements being the m quantum numbers. Then $J_z$ may be written as $J_z=\sum_m m|J,m\rangle\langle J,m|$, which is equal to its own Hermitian conjugate ($J_z$ is Hermitian). Now we have constructed the operator $J_z$ from the angular momentum states.
In principle we can create an operator from any combination of $|J',m'\rangle\langle J,m|$. Now let's see what happens when we rotate an operator in which the states are weighted by the angular momentum characteristics. We label the new operator $T^{(k)}_{Q}(J,J')$ by the following indices: $k=J+J'$, the total angular momentum; $Q=m'-m$, the difference in m values; and $J$ the initial and $J'$ the final angular momentum. $T^{(k)}_{Q}(J,J')=\sqrt{2k+1}\sum_{m,m'}(-1)^{J'-m'} \begin{pmatrix} J'&J&k\\m'&-m&-Q\end{pmatrix}|J',m'\rangle\langle J,m|$ To rotate this operator we must left-multiply by $D(\alpha, \beta, \gamma)$ and right-multiply by $D(\alpha, \beta, \gamma)^{\dagger}$. This corresponds to the rotation of the final and initial states: $D(\alpha, \beta, \gamma)T^{(k)}_{Q}(J,J')D(\alpha, \beta, \gamma)^{\dagger}=\sqrt{2k+1}\sum_{m,m',n,n'}(-1)^{J'-m'} \begin{pmatrix} J'&J&k\\ m'&-m&-Q\end{pmatrix} D^{(J')}_{n',m'}(\alpha, \beta, \gamma)\, D^{(J)\dagger}_{m,n}(\alpha, \beta, \gamma)\, |J',n'\rangle\langle J,n|$ where $D^{(j_1)}(\alpha, \beta, \gamma)\, D^{(j_2)}(\alpha, \beta, \gamma)^{\dagger}=\sum_{J,m,m'} (2J+1) \begin{pmatrix} j_1&j_2&J\\m'_1&m'_2&-m'\end{pmatrix} \begin{pmatrix} j_1&j_2&J\\m_1&m_2&-m\end{pmatrix}(-1)^{m'-m} D^{(J)}_{-m',m}(\alpha, \beta, \gamma)$ which simplifies the previous equation to $D(\alpha, \beta, \gamma)T^{(k)}_{Q}(J,J')D(\alpha, \beta, \gamma)^{\dagger}=\sqrt{2k+1}\sum_{n',n,q}(-1)^{J'-n'} \begin{pmatrix} J'&J&k\\n'&-n&-q\end{pmatrix} D_{qQ}^{(k)}(\alpha, \beta, \gamma)|J',n'\rangle\langle J,n|$ which is just $T^{(k)}_{q}(J,J')$ with Q replaced by q and m',m by n',n; that is, $D(\alpha, \beta, \gamma)T^{(k)}_{Q}(J,J')D(\alpha, \beta, \gamma)^{\dagger}=\sum_q D_{qQ}^{(k)}(\alpha, \beta, \gamma)T^{(k)}_{q}(J,J')$ This is a very important property of these operators: rotation of one operator produces a linear combination of operators of the same rank, weighted by $D_{qQ}^{(k)}(\alpha, \beta, \gamma)$. It is then straightforward to construct analytical solutions, as the $D_{m m'}^{(J)}(\alpha, \beta, \gamma)$ are known.
Nuclear Magnetic Resonance (NMR) is a nucleus-specific (Nuclear) spectroscopy that has far-reaching applications throughout the physical sciences and industry. NMR uses a large magnet (Magnetic) to probe the intrinsic spin properties of atomic nuclei. Like all spectroscopies, NMR uses a component of electromagnetic radiation (radio frequency waves) to promote transitions between nuclear energy levels (Resonance). Most chemists use NMR for structure determination of small molecules. Introduction In 1946, NMR was co-discovered by Purcell, Pound and Torrey of Harvard University and Bloch, Hansen and Packard of Stanford University. The discovery came about when it was noticed that magnetic nuclei, such as 1H and 31P (proton and phosphorus-31), were able to absorb radio frequency energy when placed in a magnetic field of a strength specific to the nucleus. Upon absorption, the nuclei begin to resonate, and different atoms within a molecule resonate at different frequencies. This observation allows a detailed analysis of the structure of a molecule. Since then, NMR has been applied to solids, liquids and gases, and to kinetic and structural studies, resulting in 6 Nobel prizes being awarded in the field of NMR. Spin and Magnetic Properties The nucleus consists of elementary particles called neutrons and protons, which possess an intrinsic property called spin. Like electrons, the spin of a nucleus can be described using the quantum numbers I for the spin and m for its projection in a magnetic field. Atomic nuclei with even numbers of both protons and neutrons have zero spin; all other nuclei have a non-zero spin. Furthermore, all nuclei with a non-zero spin have a magnetic moment, $\mu$, given by $\mu=\gamma I$ where $\gamma$ is the gyromagnetic ratio, a proportionality constant between the magnetic dipole moment and the angular momentum, specific to each nucleus (Table 1).
Table $1$: The gyromagnetic ratios for several common nuclei Nuclei Spin Gyromagnetic Ratio (MHz/T) Natural Abundance (%) 1H 1/2 42.576 99.9985 13C 1/2 10.705 1.07 31P 1/2 17.235 100 27Al 5/2 11.103 100 23Na 3/2 11.262 100 7Li 3/2 16.546 92.41 29Si 1/2 -8.465 4.68 17O 5/2 5.772 0.038 15N 1/2 -4.361 0.368 The magnetic moment of the nucleus causes it to behave like a tiny bar magnet. In the absence of an external magnetic field, each magnet is randomly oriented. During the NMR experiment the sample is placed in an external magnetic field, $B_0$, which forces the bar magnets to align with (low energy) or against (high energy) $B_0$. During the NMR experiment, a spin flip of the magnets occurs, requiring an exact quantum of energy. To understand this rather abstract concept it is useful to consider the NMR experiment in terms of the nuclear energy levels. Nuclear Energy Levels As mentioned above, an exact quantum of energy must be used to induce the spin flip or transition. For a nucleus of spin I, there are 2I+1 energy levels. For a spin-1/2 nucleus, there are only two energy levels: the low energy level occupied by the spins aligned with $B_0$ and the high energy level occupied by the spins aligned against $B_0$. Each energy level is given by $E=-m\hbar \gamma B_0$ where $m$ is the magnetic quantum number, in this case +/- 1/2. The energy levels for spins greater than 1/2, known as quadrupolar nuclei, are more complex and information regarding them can be found here. The energy difference between the energy levels is then $\Delta E=\hbar \gamma B_0$ where $\hbar$ is the reduced Planck constant. A schematic showing how the energy levels are arranged for a spin-1/2 nucleus is shown below. Note how the strength of the magnetic field plays a large role in the energy level difference. In the absence of an applied field the nuclear energy levels are degenerate; the splitting of the degenerate energy levels due to the presence of a magnetic field is known as Zeeman splitting.
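The Zeeman relations above can be made concrete with a small calculation. Note that the "gyromagnetic ratio" column in Table 1 is $\gamma/2\pi$ in MHz/T, so multiplying by $B_0$ gives the resonance frequency directly in MHz.

```python
# Larmor frequencies nu = (gamma/2pi) * B0, using gamma/2pi values from Table 1.
gamma_over_2pi = {"1H": 42.576, "13C": 10.705, "31P": 17.235}  # MHz/T

B0 = 11.74  # tesla -- the field of a typical "500 MHz" magnet
freqs = {nuc: g * B0 for nuc, g in gamma_over_2pi.items()}
print(freqs["1H"])    # ~499.8 MHz: 1H resonates near 500 MHz at this field
print(freqs["13C"])   # ~125.7 MHz at the same field

# The corresponding Zeeman splitting Delta E = h * nu is tiny:
h = 6.62607015e-34              # Planck constant, J s
dE = h * freqs["1H"] * 1e6      # ~3.3e-25 J per 1H spin
```

The smallness of this splitting relative to thermal energy is why NMR populations follow the Boltzmann distribution so closely, and why the technique is intrinsically insensitive.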
Energy Transitions (Spin Flip) In order for the NMR experiment to work, a spin flip between the energy levels must occur. The energy difference between the two states corresponds to the energy of the electromagnetic radiation that causes the nuclei to change energy levels. For most NMR spectrometers, $B_0$ is on the order of tesla (T) while $\gamma$ is on the order of $10^7$. Consequently, the electromagnetic radiation required is on the order of hundreds of MHz, even approaching GHz. The energy of a photon is $E=h\nu$, and thus the frequency necessary for absorption to occur is: $\nu=\dfrac{\gamma B_0}{2\pi}$ For the beginner, the NMR experiment measures the resonant frequency that causes a spin flip. For more advanced NMR users, the sections on NMR detection and the Larmor frequency should be consulted. Nuclear Shielding The power of NMR is based on the concept of nuclear shielding, which allows for structural assignments. Every atom is surrounded by electrons that orbit the nucleus. Charged particles moving in a loop create a magnetic field that is felt by the nucleus. Therefore the local electronic environment surrounding the nucleus slightly changes the magnetic field experienced by the nucleus, which in turn causes slight changes in the energy levels. This is known as shielding. Nuclei that experience different magnetic fields due to their local electronic interactions are known as inequivalent nuclei. The change in the energy levels requires a different frequency to excite the spin flip, which, as will be seen below, creates a new peak in the NMR spectrum. Shielding thus allows chemically inequivalent environments to be distinguished and the structure of a molecule to be determined: Fourier transforming the NMR signal yields a spectrum, shown below, that consists of a set of peaks in which each peak corresponds to a distinct chemical environment.
The area underneath each peak is directly proportional to the number of nuclei in that chemical environment. Additional details about the structure manifest themselves in the form of different NMR interactions, each altering the NMR spectrum in a distinct manner. The x-axis of an NMR spectrum is given in parts per million (ppm) and the relation to shielding is explained here. Relaxation Relaxation refers to the phenomenon of nuclei returning to their thermodynamically stable states after being excited to higher energy levels. The energy absorbed when a transition from a lower energy level to a higher energy level occurs is released when the opposite happens. This can be a fairly complex process based on the different timescales of relaxation. The two most common types of relaxation are spin-lattice relaxation (T1) and spin-spin relaxation (T2). A more complex treatment of relaxation is given elsewhere. To understand relaxation, the entire sample must be considered. By placing the nuclei in an external magnetic field, the nuclei create a bulk magnetization along the z-axis, and the spins of the nuclei are coherent. The NMR signal may be detected as long as the spins are coherent with one another. The NMR experiment moves the bulk magnetization from the z-axis to the x-y plane, where it is detected. • Spin-Lattice Relaxation ($T_1$): T1 is the time constant for recovery of the bulk magnetization along the z-axis from the x-y plane; after one T1 the magnetization is still 1/e (about 37%) short of its equilibrium value, i.e. about 63% recovered. The more efficient the relaxation process, the smaller the T1 value. In solids, since motions between molecules are limited, the T1 values are large. Spin-lattice relaxation measurements are usually carried out by pulse methods. • Spin-Spin Relaxation ($T_2$): T2 is the time it takes for the spins to lose coherence with one another. T2 can be shorter than or equal to T1.
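The T1 recovery behavior can be sketched directly from the exponential recovery law; the values of M0 and T1 below are arbitrary illustrative choices.

```python
import math

# Recovery of z-magnetization after saturation: Mz(t) = M0 * (1 - exp(-t/T1)).
M0, T1 = 1.0, 2.0   # arbitrary units and seconds, for illustration

def Mz(t):
    return M0 * (1.0 - math.exp(-t / T1))

# At t = T1 the magnetization is still 1/e (~37%) away from equilibrium,
# i.e. ~63% recovered; after ~5*T1 relaxation is essentially complete.
print(Mz(T1) / M0)       # ~0.632
print(Mz(5 * T1) / M0)   # ~0.993
```

This is why, in practice, one waits several T1 periods between scans to let the magnetization return to equilibrium.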
Applications The two major areas where NMR has proven to be of critical importance are medicine and chemistry, with new applications being developed daily. Nuclear magnetic resonance imaging, better known as magnetic resonance imaging (MRI), is an important medical diagnostic tool used to study the function and structure of the human body. It provides detailed images of any part of the body, especially soft tissue, in all possible planes, and has been used in the areas of cardiovascular, neurological, musculoskeletal and oncological imaging. Unlike other alternatives, such as computed tomography (CT), it does not use ionizing radiation and hence is very safe to administer. In many laboratories today, chemists use nuclear magnetic resonance to determine structures of important chemical and biological compounds. In NMR spectra, different peaks give information about different atoms in a molecule according to their specific chemical environments and the bonding between atoms. The most common isotopes used to detect NMR signals are 1H and 13C, but there are many others, such as 2H, 3He, 15N, 19F, etc., that are also in use. NMR has also proven to be very useful in other areas such as environmental testing, the petroleum industry, process control, Earth's-field NMR and magnetometers. Because the technique is non-destructive, expensive biological samples are preserved and can be used again if more trials need to be run. The petroleum industry uses NMR equipment to measure the porosity of different rocks and the permeability of different underground fluids. Magnetometers are used to measure the various magnetic fields that are relevant to one's study.
The 1H NMR spectrum of toluene shows that it has two peaks, because of the methyl and aromatic protons, recorded at 60 MHz and 1.41 T. Given this information, what would be the magnetic field at 400 MHz? 5. What is the difference between 13C and 1H NMR? Solutions 1. B0 = 14.1 T. 2. Using the equation from problem 1 and solving it for B0, we get a field strength of 11.74 T. 3. Look under relaxation. 4. Since we know that the NMR frequency is directly proportional to the magnetic field strength, we calculate the magnetic field at 400 MHz: B0 = (400 MHz/60 MHz) x 1.41 T = 9.40 T 5. Look under applications. Nuclear Magnetic Resonance Spectroscopy (Wenzel) Learning Objectives After completing this unit, a student will be able to: 1. Explain the origins of the two energy levels involved in NMR transitions. 2. Explain what happens to a hydrogen nucleus during the excitation process. 3. Describe the importance of the populations of the two energy states as it affects sensitivity and coupling constants. 4. Describe the two processes by which excited state nuclei relax back to the ground state. 5. Explain the advantages of having a larger applied magnetic field and provide the rationale for each of these advantages. 6. Describe the origin of electron shielding. 7. Explain the origin of nuclear coupling. 8. Predict the multiplet nature of the resonance of a hydrogen atom due to coupling. 9. Identify the parameters that influence the magnitude of a coupling constant. 10. Predict the number of resonances and their multiplet nature for compounds undergoing slow and fast exchange. 11. Explain the classical description of NMR spectroscopy and how it is consistent with the quantum mechanical description. 12. Explain the complete process by which it is possible to obtain a free induction decay. 13. Describe the normal pulse sequence used in Fourier transform NMR. 14. Describe methods or strategies that can be used to improve the sensitivity of NMR. 15.
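The relations behind Problems 1, 2 and 4 above can be checked numerically. This is a minimal sketch, assuming the standard 1H gyromagnetic ratio of 2.675×10⁸ rad s⁻¹ T⁻¹:

```python
import math

# Gyromagnetic ratio of 1H, in rad s^-1 T^-1 (standard tabulated value)
GAMMA_1H = 267.522e6

def field_for_frequency(nu_hz):
    """Field B0 giving a 1H Larmor frequency nu = gamma * B0 / (2*pi)."""
    return 2 * math.pi * nu_hz / GAMMA_1H

def scale_field(b_ref, nu_ref, nu_new):
    """NMR frequency is proportional to field strength, so fields scale linearly."""
    return b_ref * nu_new / nu_ref

print(round(field_for_frequency(600e6), 1))       # -> 14.1  (Problem 1)
print(round(field_for_frequency(500e6), 2))       # -> 11.74 (Problem 2)
print(round(scale_field(1.41, 60e6, 400e6), 2))   # -> 9.4   (Problem 4)
```

The first function reproduces the 14.1 T and 11.74 T answers; the second reproduces the 9.40 T answer from the toluene problem by pure proportionality.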
Explain why resonances with long relaxation times have diminished signal and describe how magnetic resonance imaging can be used to generate an image. The goal of this unit is to develop introductory concepts on nuclear magnetic resonance (NMR) spectroscopy that are most relevant for undergraduate chemistry majors. Development of the concepts of NMR is accomplished through an examination of the normal hydrogen (¹H) nucleus. The unit is focused on understanding what occurs in molecules and within the NMR spectrometer that causes ¹H NMR spectra to look the way they do. There is less of an emphasis on interpreting NMR spectra, although the concepts developed herein provide students with the understanding needed to begin interpreting NMR spectra. Components of both a quantum mechanical and classical description of NMR spectroscopy are developed.
Mössbauer spectroscopy is a versatile technique used to study nuclear structure through the absorption and re-emission of gamma rays, part of the electromagnetic spectrum. The technique uses a combination of the Mössbauer effect and Doppler shifts to probe the hyperfine transitions between the excited and ground states of the nucleus. Mössbauer spectroscopy requires the use of solids or crystals, which have a probability of absorbing the photon in a recoilless manner; many isotopes exhibit Mössbauer characteristics, but the most commonly studied isotope is 57Fe. Introduction Rudolf L. Mössbauer became a physics student at the Technical University in Munich at the age of 20. After passing his intermediate exams, Mössbauer began working on his thesis and doctorate in 1955, while working as an assistant lecturer at the Institute for Mathematics. In 1958, at the age of 28, Mössbauer graduated and also showed experimental evidence for recoilless resonant absorption in the nucleus, later to be called the Mössbauer effect. In 1961 Mössbauer was awarded the Nobel Prize in Physics and, at the urging of Richard Feynman, accepted the position of Professor of Physics at the California Institute of Technology. Mössbauer Effect The recoil energy associated with absorption or emission of a photon can be described by conservation of momentum, from which we find that the recoil energy depends inversely on the mass of the system. For a gas, the mass of a single nucleus is small compared to that of a solid. A solid or crystal absorbs the energy as phonons, quantized vibrational states of the solid, but there is a probability that no phonons are created and the whole lattice acts as the mass, resulting in recoilless emission of the gamma ray. The emitted radiation is then at the proper energy to excite the next ground-state nucleus. The probability of recoilless events increases with decreasing transition energy.
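To put a number on the recoil energy discussed above (using the relation $E_R = E_\gamma^2/2Mc^2$ derived from momentum conservation just below), this sketch evaluates it for the well-known 14.4 keV transition of 57Fe, along with the Doppler energy shift at a typical source velocity of 1 mm/s. The mass-energy and speed-of-light figures are standard constants:

```python
E_GAMMA = 14.4e3            # eV, the 14.4 keV gamma ray of 57Fe
MC2_FE57 = 57 * 931.494e6   # eV, approximate mass-energy of a 57Fe nucleus
C = 2.998e8                 # m/s, speed of light

# Recoil energy of a FREE nucleus: E_R = E_gamma^2 / (2 M c^2)
recoil = E_GAMMA**2 / (2 * MC2_FE57)
print(f"free-nucleus recoil: {recoil:.2e} eV")   # -> 1.95e-03 eV

# Doppler energy shift for a source moving at 1 mm/s: dE = (v/c) * E_gamma
shift = (1e-3 / C) * E_GAMMA
print(f"shift at 1 mm/s: {shift:.2e} eV")        # -> 4.80e-08 eV
```

The meV-scale free-nucleus recoil dwarfs the hyperfine splittings, which is why recoilless (lattice-bound) emission is essential; the tens-of-neV Doppler shift per mm/s is exactly the scale needed to sweep across hyperfine transitions.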
$P_R = P_{\gamma} \nonumber$ $P^2_R = P^2_{\gamma} \nonumber$ $2 M E_R = \dfrac{E^2_{\gamma}}{c^2} \nonumber$ $E_R = \dfrac{E^2_\gamma}{2M{c^2}} \nonumber$ Doppler Effect The Doppler shift describes the change in frequency due to a moving source and a moving observer. $f$ is the frequency measured at the observer, $v$ is the velocity of the wave (in our case the speed of light, $c$), $v_r$ is the velocity of the observer, $v_s$ is the velocity of the source (positive when heading away from the observer), and $f_0$ is the initial frequency. $f = {\left (\dfrac{v+v_r}{v+v_s}\right)} f_0 \nonumber$ $f = {\left (\dfrac{c}{c+v_s}\right)} f_0 \nonumber$ In the case where the source is moving toward a stationary observer, the perceived frequency is higher. In the opposite situation, where the source travels away from the observer, the frequencies recorded at the observer will be lower than those of the initial wave. The energy of a photon is the product of Planck's constant and the frequency of the electromagnetic radiation. Thus, as the frequency increases, the corresponding energy also increases; likewise, a decreasing frequency means a decreasing energy. $E = \dfrac{hc}{\lambda} = h\nu \nonumber$ The energy differences between hyperfine states are minimal (fractions of an eV), and the energy variation is achieved by moving the source toward and away from the sample in an oscillating manner, commonly at a velocity of a few mm/s. The transmittance is then plotted against the velocity of the source, and a peak is seen at the energy corresponding to the resonance energy. In the above spectrum the emission and absorption are both estimated by the Lorentzian distribution. Mössbauer Isotopes By far the most common isotope studied using Mössbauer spectroscopy is 57Fe, but many other isotopes have also displayed a Mössbauer spectrum. Two criteria for functionality are 1.
The excited state is of very low energy, resulting in a small change in energy between ground and excited state. This is because gamma rays at higher energies are not absorbed in a recoil-free manner, meaning resonance only occurs for gamma rays of low energy. 2. The resolution of Mössbauer spectroscopy depends upon the lifetime of the excited state: the longer the excited state lasts, the better the resolution. Both conditions are met by 57Fe, and it is thus used extensively in Mössbauer spectroscopy. In the figure to the right, the red-colored boxes of the periodic table of elements indicate all elements that have isotopes visible using the Mössbauer technique. Hyperfine Interactions Mössbauer spectroscopy allows the researcher to probe structural elements of the nucleus in several ways, termed the isomer shift, quadrupole interactions, and magnetic splitting. These are each explained in the following sections as individual graphs, but in practice Mössbauer spectra are likely to contain a combination of all effects. Isomer Shift An isomer shift occurs when non-identical atoms play the roles of source and absorber; thus the radius of the source, $R_s$, differs from that of the absorber, $R_a$, and likewise the electron density of each species is different. The Coulombic interaction affects the ground and excited states differently, leading to an energy difference that is not the same for the two species. This is best illustrated with the equations: $R_A \neq R_S \nonumber$ $\rho_A \neq \rho_S \nonumber$ $E_A \neq E_S \nonumber$ $\delta = E_A-E_S = \dfrac{2}{3}\pi Z{e^2}{(\rho_A - \rho_S)}(R^2_{es} - R^2_{gs}) \nonumber$ where $\delta$ represents the change in energy necessary to excite the absorber, which is seen as a shift from the Doppler speed 0 to V1. The isomer shift depends directly on the s-electrons and can be influenced by the shielding of the p, d, and f electrons.
From the measured $\delta$ shift, information is obtained about the valence state of the absorbing atom. The energy level diagram for the $\delta$ shift shows the change in source velocity due to the different sources used. The shift may be either positive or negative. Quadrupole Interaction The Hamiltonian for the quadrupole interaction using the ${}^{57}Fe$ nuclear excited state is given by $H_Q = \dfrac{eQV_{ZZ}}{12}[3I^2_Z-I(I+1) + \eta(I^2_X-I^2_y)] \nonumber$ where the nuclear excited states are split into two degenerate doublets in the absence of magnetic interactions. For the asymmetry parameter $\eta = 0$, the doublets are labeled with magnetic quantum numbers $m_{es} = \pm 3/2$ and $m_{es} = \pm 1/2$, where the $m_{es} = \pm 3/2$ doublet has the higher energy. The energy difference between the doublets is thus $\Delta{E_Q} = \dfrac{eQV_{zz}}{2}\sqrt{1+\dfrac{\eta^2}{3}} \nonumber$ The energy diagram and corresponding spectrum follow from this splitting. Magnetic Splitting The magnetic splitting seen in Mössbauer spectroscopy arises because the nuclear spin moment undergoes dipolar interactions with the magnetic field, $E(m_I) = -g_n{\beta_n}{B_{eff}}m_I \nonumber$ where $g_n$ is the nuclear g-factor and $\beta_n$ is the nuclear magneton. In the absence of quadrupole interactions, the Hamiltonian splits the states into equally spaced energy levels. The allowed gamma-stimulated transitions of nuclear excitation follow the magnetic dipole transition selection rules: $\Delta I = 1 \nonumber$ and $\Delta m_I = 0, \pm 1 \nonumber$ where $m_I$ is the magnetic quantum number and the direction of $B$ defines the nuclear quantization axis.
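The doublet splitting formula above can be evaluated directly; a minimal sketch, with the splitting expressed in units of $eQV_{zz}$:

```python
import math

def quadrupole_splitting(eQVzz, eta):
    """Energy gap between the m = +/-3/2 and m = +/-1/2 excited-state
    doublets: Delta E_Q = (e Q V_zz / 2) * sqrt(1 + eta^2 / 3)."""
    return (eQVzz / 2) * math.sqrt(1 + eta**2 / 3)

# For an axially symmetric field gradient (eta = 0) the splitting is
# exactly half of eQVzz; a nonzero asymmetry parameter enlarges it.
print(quadrupole_splitting(1.0, 0.0))            # -> 0.5
print(round(quadrupole_splitting(1.0, 1.0), 4))  # -> 0.5774
```

Since $0 \le \eta \le 1$, the asymmetry correction can increase the splitting by at most a factor of $\sqrt{4/3} \approx 1.155$.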
If we assume $g$ and $A$ are isotropic (direction independent), so that $g_x = g_y = g_z$, and $B$ is actually a combination of the applied and internal magnetic fields: $H = g\beta{S}\centerdot{B}+AS\centerdot{I} - g_n\beta_nB\centerdot{I} \nonumber$ The electronic Zeeman term is far larger than the nuclear Zeeman term, meaning the electronic term dominates the equation, so $S$ is approximated by $\langle S \rangle$ and $\langle S_z\rangle = m_s = \pm \dfrac{1}{2} \nonumber$ and $\langle S_x \rangle = \langle S_y \rangle \approx 0 \nonumber$ $H_n = A \langle S \rangle \centerdot{I} - g_n\beta_nB\centerdot{I} \nonumber$ Pulling out a factor of $-g_n\beta_n$ followed by $I$ leaves $H_n = -g_n\beta_n \left( -\dfrac{A \langle S \rangle}{g_n\beta_n} + B\right){I} \nonumber$ Substituting the internal magnetic field $B_{int} = -\dfrac{A \langle S \rangle }{g_n\beta_n} \nonumber$ results in a combined magnetic field term involving both the applied magnetic field and the internal magnetic field, $H_n = -g_n\beta_n(B_{int} + B)\centerdot{I} \nonumber$ which is simplified by using the effective magnetic field $B_{eff}$: $H_n = -g_n\beta_nB_{eff}\centerdot{I} \nonumber$ Problems 1. The intensity of the magnetically split $m_I = 0$ transition is related to $\sin^2(\theta)$, where $\theta$ is the angle between the incoming gamma ray and the effective magnetic field. When is the intensity of the transition at a maximum? 2. Why is it important for the sample to be in a solid or crystalline state? 3. What case will result in a $\delta$ shift of 0.00 mm/s? 4. Why is the Doppler effect important to Mössbauer spectroscopy? 5. Why are both the emission and absorption distributions the same? (Both are estimated with Lorentzian functions.)
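Problem 1 concerns the magnetically split transitions. The selection rule $\Delta m_I = 0, \pm 1$ from the Magnetic Splitting section can be checked by simple enumeration; for the $I = 3/2 \rightarrow I = 1/2$ case of 57Fe it yields the familiar six-line (sextet) pattern:

```python
def m_values(i):
    """Magnetic quantum numbers m_I = -I, -I+1, ..., +I for spin I."""
    n = int(round(2 * i)) + 1
    return [-i + k for k in range(n)]

def allowed_transitions(i_excited, i_ground):
    """Pairs (m_excited, m_ground) satisfying the magnetic dipole
    selection rule delta m_I in {0, +1, -1}."""
    return [(me, mg)
            for me in m_values(i_excited)
            for mg in m_values(i_ground)
            if abs(me - mg) <= 1]

lines = allowed_transitions(1.5, 0.5)  # 57Fe: I = 3/2 -> I = 1/2
print(len(lines))  # -> 6
```

Of the eight conceivable $m$-to-$m$ pairs, the two with $\Delta m_I = \pm 2$ are forbidden, leaving the six transitions seen in a magnetically split 57Fe spectrum.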
Photoacoustic spectroscopy (PAS) uses the acoustic waves produced by materials exposed to light to measure their concentration. PAS is unique in that it combines heat measurements with optical microscopy. Gases have been the ideal samples, but research has gradually been increasing on using PAS efficiently for solid and liquid samples as well. When measuring a sample, PAS takes measurements directly, by looking at the internal heat instead of the effects of the light on the surroundings. This makes PAS highly accurate and useful for sensitive detectors. Interest sparked after Alexander Graham Bell wrote about his findings when he discovered the acoustic effect in 1880. Bell accidentally stumbled onto this effect as he was experimenting with his invention, the photophone. He noticed that a clear acoustic sound formed whenever the sunlight hitting the sample was interrupted. Bell realized that the absorption of light by the material caused the sound wave, which is now known as the photoacoustic effect. Bell also experimented with the ultraviolet and infrared regions of the spectrum. However, the apparatus was not sophisticated enough to produce accurate results, and the development of PAS was put on hold. It was not until the introduction of more sophisticated equipment that the development of PAS started again. Today, most setups use light sources other than sunlight, and microphones instead of the ear, to measure the waves emitted from a material accurately. Theory Generally, when a material absorbs light, there are many paths the energy can take. Light is always conserved, as shown by the equation $1 = \alpha +T+R \label{1}$ where $\alpha$ is the absorbance, $T$ is the transmittance, and $R$ is the reflectance. Light that hits the sample must either be absorbed, transmit through the material, or reflect off of the material. PAS focuses on the light that is absorbed, as that is where heat is released.
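The conservation relation in Equation \ref{1} rearranges directly to $\alpha = 1 - T - R$; a minimal sketch, using the same transmittance and reflectance values as the aluminum exercise later in this section:

```python
def absorbance(transmittance, reflectance):
    """From conservation of light, 1 = alpha + T + R, so alpha = 1 - T - R."""
    alpha = 1.0 - transmittance - reflectance
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("T and R must leave a physical absorbance in [0, 1]")
    return alpha

# T = 0.12, R = 0.8196 for the aluminum sample
print(round(absorbance(0.12, 0.8196), 4))  # -> 0.0604
```

The range check simply guards against unphysical inputs where T + R exceeds 1.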
As light strikes the sample, the photons are absorbed and electrons are excited by the energy deposited. This energy is then released as heat, and as the heat expands, acoustic waves are formed. This process is explained below and shown in Figure $1$. As light is absorbed, electrons are excited either electronically or vibrationally. In electronic excitation, electrons jump to a higher energy level; as they drop back to the ground state, the extra energy is given off as heat. Collisional deactivation, another route to heat formation, involves the colliding of atoms: these collisions give off energy in the form of heat. In the case of electronic excitation, however, the energy can also be dissipated through chemical reactions or radiative emission, as seen in Figure 1. Chemical reactions with the surroundings consume energy to initiate those reactions. Radiative emission releases the energy as photons, rendering it useless for PAS, which requires heat. Both reduce the amount of heat formed, as energy is spent elsewhere. It is possible for chemical reactions to form heat, but only a portion of the absorbed energy goes toward heat. With vibrational excitation, on the other hand, chemical reactions and radiative emission have little effect: the lifetimes of the vibrations are long enough to prevent chemical reactions and radiative emission from interfering. The atoms therefore have as much time as needed to complete collisional deactivation, which effectively transfers the full amount of energy to heat. With the formation of heat, thermal expansion also occurs. The expansion creates localized pressure waves, which in turn can be measured as acoustic waves. However, as with the formation of heat, heat can also be lost to the surroundings. Heat diffusion lowers the temperature around the emitting source, which in turn weakens the pressure fields.
With acoustic waves sent after every pulse of light, a sensor can then measure the waves. By adjusting the wavelength of each pulse of light, the corresponding acoustic wave can be measured and plotted to form a spectrum of the material. With advances in technology, amplifiers, light sources, and sensors have significantly improved. Figure 3 shows a common setup of the inside of a photoacoustic spectrometer. Light sources typically use infrared lasers or wire filaments, such as tungsten, that produce high intensities of light. To send pulses of light to the sample, the light source is either switched off and on to produce the pulsing effect, or a rotating disc with openings controls the pulses of light going through. A mirror directs the pulses of light to a set of filters, which can be changed to alter the wavelength of the light hitting the sample. Once the light passes through the filter, it hits the contact window, where the sample is held. Two microphones placed inside pick up the acoustic waves, whose electrical signal is then measured. Different wavelengths are tested and a spectrum for the sample is created. Importance of Photoacoustic Spectrometry Differences in PAS Though PAS may seem similar to other infrared techniques such as Fourier transform infrared spectroscopy (FTIR), it has many unique aspects. PAS does not measure the effect relative to the background but directly from the sample, making it extremely accurate. Samples with multiple gases can be singled out and measured. Samples can also be in tiny amounts, as the sample cell is small; however, it can currently only measure samples as small as a few milliliters, which is still small compared to other techniques. Applications Because of the high sensitivity and accuracy of PAS, it is ideal for use in gas detectors. Gas levels in the atmosphere can be measured, providing details on any dangers from rising toxic gases.
It is also useful in determining the material of an unknown sample. Each material has its own unique spectrum, and by observing the acoustic waves produced, one can match the waves to specific profiles of materials. PAS is also used for high-resolution imaging by analyzing the topography of the sample. Using the topography and the profiles of electrical signals, one can create an image with shaded colors to indicate the different materials. The cost of creating these devices has decreased, and they are gradually being used more widely in gas detectors and in sample analysis in labs. Exercise $1$ Light hits a sample of aluminum. The data collected show that the sample had a transmittance of 0.12 and a reflectance of 0.8196. Assuming light is conserved, what is the absorbance of the material? Answer Using Equation \ref{1}, light must be conserved, and so $1-0.12-0.8196 = α \nonumber$ $α = 0.0604 \nonumber$ Exercise $2$ Does vibrational or electronic excitation produce more heat? Why is that? Answer Vibrational excitation produces more heat, because all the energy from the absorbed light transfers into heat. This is because the vibrational lifetimes of the atoms are long enough that the transfer of energy to heat is not interrupted by other processes. Exercise $3$ Why does PAS require pulses of light instead of a continuous steady source of light hitting the sample? Answer To obtain a signal, it must come in the form of a wave. A constant source of light prevents waves from forming, and the spectrum would only show up as a straight line. The point of the pulsing light is to let the material absorb the energy, convert it to heat, send out the acoustic waves, and rest before absorbing another amount of energy.
Photoelectron spectroscopy involves the measurement of the kinetic energy of photoelectrons to determine the binding energy, intensity and angular distributions of these electrons, and the use of this information to examine the electronic structure of molecules. • Applications of Photoelectron Spectroscopy Photoelectron spectroscopy (PES) is a technique used for determining the ionization potentials of molecules. Underneath the banner of PES are two separate techniques for quantitative and qualitative measurements: ultraviolet photoelectron spectroscopy (UPS) and X-ray photoelectron spectroscopy (XPS). XPS is also known under its former name of electron spectroscopy for chemical analysis (ESCA). UPS focuses on ionization of valence electrons, while XPS involves ionizing core electrons. • Photoelectron Spectroscopy Photoelectron spectroscopy involves the measurement of the kinetic energy of photoelectrons to determine the binding energy, intensity and angular distributions of these electrons, and the use of this information to examine the electronic structure of molecules. It differs from conventional methods of spectroscopy in that it detects electrons rather than photons to study the electronic structure of a material. Thumbnail: Photoelectric effect underlies photoelectron spectroscopy. (CC BY-NC; Laura Guerin via CK-12 Foundation) Photoelectron Spectroscopy Photoelectron spectroscopy (PES) is a technique used for determining the ionization potentials of molecules. Underneath the banner of PES are two separate techniques for quantitative and qualitative measurements: ultraviolet photoelectron spectroscopy (UPS) and X-ray photoelectron spectroscopy (XPS). XPS is also known under its former name of electron spectroscopy for chemical analysis (ESCA). UPS focuses on ionization of valence electrons, while XPS is able to go a step further and ionize core electrons and pry them away.
Photoelectron Instrumentation The main goal in either UPS or XPS is to gain information about the composition, electronic state, chemical state, binding energy, and more of the surface region of solids. The key point in PES is that a great deal of qualitative and quantitative information can be learned about the surface region of solids. Specifics about what can be studied using XPS or UPS will be discussed in detail below in separate sections for each technique, following a discussion on instrumentation for PES experiments. The focus here will be on how the instrumentation for PES is constructed and what types of systems are studied using XPS and UPS. The goal is to understand how to go about constructing or diagramming a PES instrument, how to choose an appropriate analyzer for a given system, and when to use either XPS or UPS to study a system. There are a few basics common to both techniques that must always be present in the instrumental setup. 1. A radiation source: The radiation sources used in PES are fixed-energy radiation sources. XPS uses X-rays, while UPS uses a gas discharge lamp. 2. An analyzer: PES analyzers are various types of electron energy analyzers. 3. A high vacuum environment: PES is rather picky when it comes to keeping the surface of the sample clean and keeping the rest of the environment free of interferences from things like gas molecules. The high vacuum is almost always an ultra-high vacuum (UHV) environment. Radiation sources While many components of instruments used in PES are common to both UPS and XPS, the radiation sources are one area of distinct differentiation. The radiation source for UPS is a gas discharge lamp, typically a He discharge lamp operating at 58.4 nm, which corresponds to a photon energy of 21.2 eV. XPS has a choice between a monochromatic beam of a few microns or an unfocused non-monochromatic beam of a couple centimeters. These beams originate from X-ray sources of either Mg or Al Kα
sources giving off photons of 1253.6 eV and 1486.6 eV, respectively. For a more versatile light source, synchrotron radiation sources are also used. Synchrotron radiation is especially useful in studying valence levels, as it provides continuous, polarized radiation with high energies of > 350 eV. The main thing to consider when choosing a radiation source is the kinetic energy involved. The source is what sets the kinetic energy of the photoelectrons, so there needs not only to be enough energy present to cause the ionizations, but there must also be an analyzer capable of measuring the kinetic energy of the released photoelectrons. In XPS experiments, electron guns can also be used in conjunction with X-rays to eject photoelectrons. There are a couple of advantages and disadvantages to doing this, however. With an electron gun, the electron beam is easily focused and the excitation of photoelectrons can be continuously varied. Unfortunately, the background radiation is increased significantly due to the scattering of the incident electrons. Also, a good portion of substances that are of any experimental interest are actually decomposed by heavy electron bombardment such as that coming from an electron gun. Analyzers There are two main classes of analyzers well suited for PES - kinetic energy analyzers and deflection or electrostatic analyzers. Kinetic energy analyzers have a resolving power of $E/\delta{E}$, which means the higher the kinetic energy of the photoelectrons, the lower the resolution of the spectra. Deflection analyzers are able to separate out photoelectrons through an electric field by forcing electrons to follow different paths according to their velocities, giving a resolving power, $E/\delta{E}$, that is greater than 1,000. Since the resolving power of both types of analyzer is $E/\delta{E}$, the resolution is directly dependent on the kinetic energy of the photoelectrons. The intensity of the spectra produced is also dependent on the kinetic energy.
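Because the resolving power is $E/\delta{E}$, the absolute energy resolution degrades linearly with photoelectron kinetic energy; a minimal numeric sketch, using the resolving power of 1,000 quoted above for deflection analyzers:

```python
def energy_resolution(kinetic_energy_ev, resolving_power):
    """For an analyzer with resolving power E/dE, the smallest resolvable
    energy difference dE grows linearly with the kinetic energy E."""
    return kinetic_energy_ev / resolving_power

# Fast photoelectrons are resolved far more coarsely than slow ones:
print(energy_resolution(1000.0, 1000))  # -> 1.0   (1 eV steps at 1 keV)
print(energy_resolution(10.0, 1000))    # -> 0.01  (0.01 eV steps at 10 eV)
```

This is exactly why decelerating photoelectrons to a fixed pass energy, as described below, improves resolution at the cost of sensitivity.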
The faster the electrons are moving, the lower the resolution and intensity are. In order to actually get well-resolved, useful data, other components must be introduced into the instrument. Adding a system of optics (lenses) to a PES instrument helps with this problem immensely. Electron optics are capable of decelerating the photoelectrons through retardation of the electric field. The energy the photoelectrons decelerate to is known as the "pass energy." This has the benefit of significantly raising the resolution; however, it does, unfortunately, lower the sensitivity. Optics are capable of accelerating the electrons as well. The design of any lens system greatly affects the photoelectron counts. These lenses are also capable of focusing on a small area of a particular sample. Specific Analyzers Within the broad picture of two main analyzer classes, there are a variety of specific analyzers in existence that are used in PES. The list below goes over several well-used analyzers, though this list is, by no means, exhaustive. The most common type of analyzer is a hemispherical analyzer, which will be explained in more depth under the spherical deflection analyzer topic. Plane Mirror Analyzer (PMA) PMAs, the simplest type of electric analyzer, are also known as parallel-plate mirror analyzers. These analyzers are condensers made from two parallel plates separated by a distance, d. Parabolic trajectories of electrons are obtained due to the constant potential difference, V, between the two plates. In order for transmission to occur, the potential must be $V = E_0 d/(e L_0)$, where $E_0$ is the kinetic energy of the electron in eV and $e$ is the charge of the electron. To obtain better focus, the electron entrance and exit angle can be shifted to 30 degrees, but this is not necessarily a good idea, as it sacrifices transmission instead. Cylindrical Mirror Analyzer (CMA) CMAs are advantageous over PMAs. They employ 2π geometry to overcome the low transmission of a PMA.
A CMA consists of two cylinders with a potential difference, V, between them. The entrance and exit slits are all contained on the inner cylinder. Here $V = 1.3 E_0 \ln(R_{out}/R_{in})$, where $L_0 = 6.1 R_{in}$ and $E_0$ is in volts. They are good for applications that require a high sensitivity with only a moderate resolution. Cylindrical Deflection Analyzer (CDA) CDAs consist of two cylinders spanning a 127-degree angle. It is for this reason that CDAs are sometimes called "127-degree analyzers." The potential difference in a CDA is $2V = E_0 (R_{in}/R_{out})$, where $E_0$ is the energy, in eV, of the incoming photoelectrons that are focused. These analyzers have high resolution; however, their transmission is low. Spherical Deflection Analyzer (SDA) SDAs are similar to CDAs, but they consist of two concentric hemispheres instead. In an SDA, the transmission of photoelectrons with initial energy $E_0$ occurs along a path of mean radius $R_0 = (R_{in} + R_{out})/2$. Since SDAs are the most common, prevalent type of PES analyzer, they will be discussed in more depth than any of the previous analyzers, as a thorough understanding of how they apply to PES is, theoretically, of greater importance. Here, the potential is different for the inner and the outer hemisphere: $V_{in}=E_0[3-2(R_0/R_{in})]$ and $V_{out}=E_0[3-2(R_0/R_{out})]$ The resolving power of these analyzers is proportional to the radius of the inner and outer hemispheres. These analyzers are also capable of running in two separate modes when coupled with an optical system - fixed analyzer transmission mode (FAT) and fixed retardation energy mode (FRR). In FAT mode, the lens either retards or accelerates the electrons so that all photoelectrons enter the analyzer with the same kinetic energy. For this to occur, the analyzer is also arranged so that only photoelectrons of a specific, fixed kinetic energy will pass through and reach the detector. In this case, the lens is scanned for different energies.
In FRR mode, the lens only retards the photoelectrons, and it does so in a uniform manner, causing all photoelectrons to be reduced in energy to a fixed value such as 15 eV, 30 eV, or whatever energy is desired. The hemispheres of the analyzer here have a potential difference between them that is varied so that photoelectrons of different kinetic energies can reach the detector. The more common of these two modes is FAT, because it provides a greater signal intensity at low electron kinetic energy and also makes quantification of the spectra simpler. These analyzers are a particularly good class of deflection analyzer. The slits in an SDA define the acceptable range of entrance and exit trajectories a photoelectron may have when entering or leaving the analyzer. The photoelectrons that do make it through the entrance slits will then only exit if they follow a specific, curved path down the middle of the two hemispheres. The path they follow has the "correct energy" for exit to occur, and is determined by the selection of $V_{in}$ and $V_{out}$. Photoelectrons of higher or lower kinetic energy than what is defined by the hemispheres will be lost through collisions with the walls. Detection & Spectra Detection relies on the ability of the instrument to measure energy and photoelectron output. One type of energy measured is the binding energy, which is calculated through the following equation: $K_e = h\nu - BE - \phi$ where: • $K_e$ = kinetic energy; this is measured • $h\nu$ = photon energy from the radiation source; this is controlled by the source • $\phi$ = work function of the spectrometer; this is found through calibration • BE = binding energy; this is the unknown of interest and can be calculated from the other three variables Another part of PES detection is the use of electron multipliers. These devices act as electron amplifiers, because they are coated with a material that produces secondary photoelectrons when struck by an electron.
Typically, they are able to produce two to three secondary electrons for every electron that strikes them. Since the signals in PES are low, this amplification, which can reach $10^7$ or higher when multipliers are run in series so that the secondary electrons from one multiplier strike the next, greatly improves the signal strength of these instruments. One type of spectrum in these experiments is recorded by varying the potential difference between the plates or hemispheres of the analyzer. The output is known as an electron kinetic energy spectrum and is obtained by measuring the photoelectron current at the detector as a function of the voltage applied to the hemispheres or plates. The voltage is then used in the calculation of kinetic energy. Further detail on the spectra produced in PES experiments and their analysis is planned for a future module on the interpretation of photoelectron spectroscopy. Limitations The main limitation of a PES instrument is its resolution. Resolution problems arise from four main areas: the dimensions of the analyzer; the widths of the entrance and exit slits; stray external electric or magnetic fields; and local charges inside the instrument itself, arising for example from contamination in the analyzer. Steps can be taken to improve the resolution, but some methods sacrifice other factors such as sensitivity. Obtaining high spatial resolution and high energy resolution always comes at the expense of signal intensity. One resolution-improving technique that sacrifices sensitivity is narrowing the entrance and/or exit slits. For example, in an SDA these slits define the range of trajectories photoelectrons may have when entering or exiting the analyzer. Decreasing the widths will certainly improve the resolution, but the smaller slit size will decrease the number of photoelectrons allowed into and out of the analyzer, therefore lowering the sensitivity.
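The quoted gain can be checked with a short calculation: at two to three secondary electrons per incident electron (the figures given above), only a modest number of multiplication stages is needed to reach an overall gain of $10^7$:

```python
import math

def stages_needed(target_gain, gain_per_stage):
    """Smallest number of stages n such that gain_per_stage**n >= target_gain."""
    return math.ceil(math.log(target_gain) / math.log(gain_per_stage))

print(stages_needed(1e7, 3))  # stages at 3 secondary electrons per stage
print(stages_needed(1e7, 2))  # stages at 2 secondary electrons per stage
```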
Another technique, discussed above given its relevance to analyzers, is the addition of electron optics to the instrument. A third method of improving resolution is specific to XPS: the addition of an x-ray monochromator to the system. These monochromators eliminate satellite radiation from the x-rays and are capable of narrowing the x-ray line width from ~1 eV to ~0.2 eV. The use of monochromatic x-rays also serves to simplify the spectrum. Ultraviolet Photoelectron Spectroscopy - UPS In early UPS, the sample was a gas or a vapor irradiated with a narrow beam of UV radiation. More modern UPS instruments are capable of studying solids as well. The photoelectrons produced are passed through a slit into a vacuum region, where they are deflected by magnetic or electrostatic fields to give an energy spectrum. UPS is sensitive to the very near surface region, up to around 10 nm in depth. There are two main areas UPS is used to study: 1. Electronic structure of solids 2. Adsorbed molecules on metals Specific examples of UPS studies include: 1. The measurement of molecular orbital energies that can be compared to theoretical values calculated from quantum chemistry 2. Determination and assignment of bonding, nonbonding, and/or antibonding molecular orbitals 3. The binding and orientation of adsorbed species on the surfaces of solids 4. Band structure mapping in k-space with angle-resolved techniques Spectral output Briefly, the spectrum produced from a UPS experiment has peaks that correspond to the ionization potentials of the molecule, which in turn correspond to the orbital energies. UPS can also give information on the vibrational energy levels of the ions formed. Limitations UPS is capable only of ionizing valence electrons, which limits the range and depth of UPS surface experiments. Conventional UPS also has relatively poor resolution.
Advantages Ultraviolet radiation has a very narrow line width, and a high flux of photons is available from simple discharge sources. Higher resolution UPS scans allow for the observation of fine structure due to vibrational levels of the molecular ion, which in turn allows molecular orbital assignment of specific peaks. X-Ray Photoelectron Spectroscopy - XPS A diagram of a typical XPS instrument was shown at the beginning in figure 1. XPS is extremely surface sensitive because the kinetic energy of the escaping photoelectrons limits the depth that can be probed. The samples studied are all solids of some type, ranging from metals to frozen liquids. When the sample is irradiated, the ejected electrons come from the inner shells of the atoms. There are several areas suited to measurement by XPS: 1. Elemental composition 2. Empirical formula determination 3. Chemical state 4. Electronic state 5. Binding energy 6. Layer thickness in the upper portion of surfaces Some specific examples of systems studied by XPS are: 1. Analysis of stains and residues on surfaces 2. Reactive frictional wear of solid-solid reactions 3. Silicon oxynitride thickness and dosage measurements 4. Depth profiling: In depth profiling, a sputter source is used. This removes successive layers from the surface of a sample and allows element depth profiles to be quantitated in the near-surface region, which is useful for determining the composition of thin films. 5. Angle dependence measurements: When the angle of measurement is changed, the depth of the information gathered can be varied by 1-10 nm. This is useful for determining the concentration of additives in the surface region. 6. Imaging of surfaces: Utilizing a special imaging mode, the distribution of elements in surface structures can be determined. This technique is useful for features up to about 3 µm.
Spectral output Briefly, the spectrum from an XPS experiment is a graph of emission intensity vs binding energy. This allows elements on the surface to be identified based on the unique binding energies each element has. The peak areas on these spectra can also be used to obtain the concentrations of the elements on the surface. Detailed information on the interpretation of XPS spectra is planned for a future module. Limitations Despite its many benefits, XPS is not without limitations. The smallest analytical area XPS can measure is ~10 µm. Samples for XPS must be compatible with the ultra-high-vacuum environment. Because XPS is a surface technique, there is a limited amount of organic information XPS can provide. XPS is limited to measurements of elements having atomic numbers of 3 or greater, making it unable to detect hydrogen or helium. XPS spectra also take a long time to obtain, although the use of a monochromator can reduce the time per experiment. Advantages XPS has a greater range of potential application than UPS since it can probe down to core electrons. XPS is good for identifying all but two elements, identifying the chemical state on surfaces, and quantitative analysis. XPS is capable of detecting the difference in chemical state between samples and can also differentiate between oxidation states of molecules. UPS vs. XPS One question that always arises when multiple techniques are available is which technique is best for the system or sample of interest. Here is a brief table of some experimental applications of PES and their suitability to XPS, UPS, or both.
Application | XPS | UPS
Materials on surfaces | X | X
Depth profiling | X |
Angle dependent studies | X | X
Binding energy | X | X
Valence band fine structure | | X
Elemental composition | X |
Empirical formulas | X |
Electron energy levels | X | X
Problems 1. What are the limitations involved with PES analyzers? 2.
Is it possible to obtain both high sensitivity and high resolution with XPS? Why or why not? 3. Name three methods for improving the signal output from a PES instrument. 4. Can you study a system using both UPS and XPS? What are the advantages to using both techniques? Are there any disadvantages? 5. Why is the SDA the most widely used analyzer for PES experiments?
Photoelectron spectroscopy involves measuring the kinetic energy of photoelectrons in order to determine the binding energies, intensities, and angular distributions of these electrons, and using this information to examine the electronic structure of molecules. It differs from conventional methods of spectroscopy in that it detects electrons rather than photons to study the electronic structure of a material. Introduction Photoelectron spectroscopy (PES) is the energy measurement of photoelectrons emitted from solids, gases, or liquids by the photoelectric effect. Depending on the source of ionization energy, PES is divided into Ultraviolet Photoelectron Spectroscopy (UPS) and X-ray Photoelectron Spectroscopy (XPS). The radiation source for UPS is a noble gas discharge lamp, usually a He discharge lamp. For XPS, also referred to as Electron Spectroscopy for Chemical Analysis (ESCA), the source is high-energy X-rays (1000-1500 eV). Depending on the source energy, PES can probe either valence or core electrons: the ultraviolet photons used in UPS (<41 eV) are only sufficient to eject electrons from valence orbitals, while the high-energy X-rays used in XPS can also eject electrons from core atomic orbitals (Figure 1). Further information about XPS and UPS, including a discussion of the methods of study for both techniques and a comparison between them, is given in the module on Photoelectron Spectroscopy: Application. Photoelectric Effect To understand the principles of photoelectron spectroscopy, the photoelectric effect must be considered. The photoelectric effect is the ejection of electrons from the surface of a solid by electromagnetic radiation. The ejected electrons are called photoelectrons.
Originally known as the Hertz effect, the photoelectric effect was first observed by Heinrich Hertz in 1887, when he noticed that sparks would jump more readily between two charged surfaces when they were illuminated with light. Hertz's observation would ultimately lead to Einstein's photoelectric law: the kinetic energy of an emitted photoelectron is given by $E_k = h\nu - E_I \label{1}$ where h is Planck's constant, ν is the frequency of the ionizing light, and $E_I$ is the ionization energy (synonymous with the electron binding energy) of the electron. Strictly, the term photoelectric effect refers only to solids. Since PES can be used for energy measurements of solids, liquids, and gases, the terms photoionization or photoemission better represent the principle of PES. Photoionization is the process in which a molecule (M) is ionized by a beam of photons, losing an electron: $M + h\nu \rightarrow M^{+}(E_{int}) + e^- \label{2}$ This process follows the three-step model, which breaks photoionization down into three independent steps: 1. The molecule absorbs the photon, and the photon's energy is transferred to the molecule's electrons, which become excited. 2. The excited electron travels to the surface of the molecule. As it travels, it may or may not collide with other particles; any excited electron that does collide with a particle loses energy. 3. The excited electron escapes from the surface into the vacuum, where it is detected. Photoionization can only occur if the photon energy is greater than the energy holding the electron to the molecule, i.e., the lowest ionization potential. Any energy in excess of that needed for ionization appears as kinetic energy.
By rearranging equation 1, the ionization energy is the difference between the photon energy ($h\nu$) and the photoelectron's kinetic energy ($E_k$). These two quantities are both known in a PES experiment: the photon energy is set by the source and the kinetic energy is measured by the PE spectrometer. Thus, by using PES it is possible to measure the energies of the ground and excited states of the ion formed by the loss of an electron from a neutral molecule, as given by the chemical equation above. Ionization Energy Ionization energy, also known as electron binding energy, determined by photoelectron spectroscopy provides some of the most detailed quantitative information about the electronic structure of organic and inorganic molecules. Ionization is defined by transitions from the ground state of a neutral molecule to the ion states (equation 2). There are two types of ionization energy: adiabatic and vertical. The adiabatic ionization energy of a molecule is the minimum energy needed to eject an electron from the neutral molecule; it is the difference between the energy of the vibrational ground state of the neutral molecule and that of the positive ion. The vertical ionization energy, by contrast, corresponds to a transition from the vibrational ground state of the neutral molecule to an excited vibrational state of the ion, and the vertical ionization is the most probable transition. The Franck-Condon principle explains the relative intensities of the vibrational bands for photoionization transitions. Koopmans' theorem states that the negative of the eigenvalue of an occupied orbital from a Hartree-Fock calculation is equal to the vertical ionization energy of the ion state formed by photoionization of the molecule. Because of Koopmans' theorem, ionization energies are directly related to the energies of molecular orbitals; however, the theorem has its limitations.
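As a numerical sketch of this rearrangement: the He I resonance line at 58.43 nm (about 21.2 eV) is a standard UPS source, while the measured kinetic energy below is a hypothetical illustrative value:

```python
# Ionization energy from a PES measurement: E_I = h*nu - E_k (Eq. 1 rearranged)
HC_EV_NM = 1239.84  # h*c in eV*nm, for converting a wavelength to photon energy

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV from wavelength in nm."""
    return HC_EV_NM / wavelength_nm

def ionization_energy_ev(photon_ev, kinetic_ev):
    """E_I = h*nu - E_k."""
    return photon_ev - kinetic_ev

hv = photon_energy_ev(58.43)          # He I line, ~21.2 eV
print(round(hv, 2))
print(round(ionization_energy_ev(hv, 5.6), 2))  # hypothetical E_k of 5.6 eV
```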
During the process of photoionization, the ejection of an electron results in the formation of a positive ion (M+). The energy required to cause the ejection of an electron is known as the ionization energy or electron binding energy. Overall, the ionization energy depends on the location of the electron relative to the nucleus of the molecule. As electrons are arranged in orbitals surrounding the atomic nucleus, the ionization energy is higher or lower depending on whether the electron is located in the core or the valence shell. Core electrons, being closer to the nucleus, require more energy to be ejected. Furthermore, each chemical element has a different number of protons in the nucleus, resulting in a unique set of ionization energies for every element. In photoelectron spectroscopy, the ionization energy is determined by subtracting the measured kinetic energy of the ejected electron from the energy of the incoming photon. Thus, it is possible to use PES to determine the chemical elements within an unknown sample based on the ionization energies observed in a PE spectrum. The location of the ejected electron factors greatly into which type of photoelectron spectroscopy is used. X-ray photoelectron spectroscopy (XPS) is used to eject electrons from the core or valence shell. The sample used in XPS is first placed in an ultra-high-vacuum chamber to prevent photons and emitted electrons from being absorbed by gases. The sample is then bombarded with x-rays, causing the ejection of electrons, whose energies are measured by their dispersal within an electric field. Due to the vacuum environment of the sample, XPS cannot be used for liquids. In addition, XPS provides information about oxidation states for elements present in the sample, as the ionization energies of core electrons are slightly higher when the atom is in an oxidized state.
UPS works in a similar fashion to XPS, but uses photons, produced by a noble gas discharge lamp, in the ultraviolet range of the spectrum. Originally, UPS was used only to determine the ionization energies of gaseous molecules; over the years, however, it has also been used to obtain information about the electronic structure of molecules. Splitting Various types of splitting occur in the photoelectron spectrum due to the removal of an electron from an orbital. The Russell-Saunders term symbol notation, ${}^{2S+1}L_{J}$, is used to describe the differences between the initial and final states for the spectral transitions. The first type, spin-orbit splitting, is purely an initial state effect, which occurs during photoionization if an electron is removed from a degenerate subshell. Spin-orbit splitting therefore never occurs for s orbitals, as it depends on an electron being removed from a degenerate subshell. In XPS, spin-orbit splitting appears as doublets for p, d, and f orbitals. The relative intensities of the doublet peaks depend on the J values in the Russell-Saunders terms: the component with the lower J value lies at higher binding energy, while the component with the higher J value, having the larger 2J+1 degeneracy, gives the more intense peak. Furthermore, due to nuclear shielding, the magnitude of spin-orbit splitting decreases for subshells farther from the nucleus. Another type of splitting is multiplet splitting, which arises when there is an interaction between the unpaired electron left behind by photoelectron ejection and a pre-existing unpaired electron. This can result in multiple final states being formed during the photoionization. For example, consider the three-electron atom lithium.
The ground state is $1s^{2}2s^{1}$; ${}^{2}S$, which upon photoionization of a 1s electron can yield two final states with different spin multiplicities: $Li^{+}(1s^{1}2s^{1};{}^{1}S)+e^{-}$ and $Li^{+}(1s^{1}2s^{1};{}^{3}S)+e^{-}$ The energy difference between these states is known as multiplet splitting, which results in a multi-peak envelope in the PE spectrum. Lastly, Jahn-Teller splitting occurs when the symmetry of a molecule is destroyed by photoionization. Photoelectron Instrumentation All photoelectron spectrometers must have three components. The first is an excitation source used to irradiate the sample so that it releases electrons. The second is an electron energy analyzer, which disperses the emitted photoelectrons according to their respective kinetic energies. The third is a detector. In addition, the spectrometer needs a high-vacuum environment, which prevents the electrons from being scattered by gas particles. These components are available in many different forms, which are discussed within the module on Photoelectron Spectroscopy: Application. A block diagram of a basic PE spectrometer is shown below: Figure 4: A block diagram of a PE spectrometer. An example of a photoelectron spectrum obtained by a PE spectrometer is shown in Figure 5. This plot shows the kinetic energy distribution of the emitted photoelectrons obtained by the electron energy analyzer, presented as a plot of the number of electrons detected versus their binding energy. Problems 1. Which radiation source is used to eject core electrons? 2. Describe how PES can be used to calculate the ionization energy of a molecule. 3. Describe the photoelectric effect. Answers 1. An X-ray radiation source. 2. PES uses a photon of known energy to ionize a molecule.
As the excess energy, which appears as the kinetic energy of the photoelectron, is measured by the photoelectron spectrometer, it is possible to calculate the ionization energy of a molecule by rearranging the following equation: $E_k = h\nu - E_I$ to solve for $E_I$, the ionization energy. 3. The photoelectric effect occurs when light strikes a metal surface, causing the ejection of electrons from the surface of the metal.
Rotational spectroscopy is concerned with the measurement of the energies of transitions between quantized rotational states of molecules in the gas phase. The spectra of polar molecules can be measured in absorption or emission by microwave spectroscopy or by far infrared spectroscopy. • Microwave Rotational Spectroscopy Microwave rotational spectroscopy uses microwave radiation to measure the energies of rotational transitions for molecules in the gas phase. It accomplishes this through the interaction of the electric dipole moment of the molecules with the electromagnetic field of the exciting microwave photon. • Rotational Spectroscopy of Diatomic Molecules The rotation of a diatomic molecule can be described by the rigid rotor model. To imagine this model think of a spinning dumbbell. The dumbbell has two masses set at a fixed distance from one another and spins around its center of mass (COM). This model can be further simplified using the concept of reduced mass which allows the problem to be treated as a single body system. • Rotation of Linear Molecules The rotational energy levels of a diatomic molecule in 3D space is given by the quantum mechanical solution to the rotating rigid rotor. • Rovibrational Spectroscopy In this section, we will learn how the rotational transitions of molecules can accompany the vibrational transitions. It is important to know how each peak correlates to the molecular processes of molecules. Rovibrational spectra can be analyzed to determine average bond length. Thumbnail: Rotation-Vibration Transitions Rotational Spectroscopy Microwave rotational spectroscopy uses microwave radiation to measure the energies of rotational transitions for molecules in the gas phase. It accomplishes this through the interaction of the electric dipole moment of the molecules with the electromagnetic field of the exciting microwave photon. 
Introduction To probe the pure rotational transitions of molecules, scientists use microwave rotational spectroscopy. This spectroscopy utilizes photons in the microwave range to cause transitions between the quantized rotational energy levels of a gas molecule. The sample must be in the gas phase because intermolecular interactions hinder rotations in the liquid and solid phases. For microwave spectroscopy, molecules can be broken down into 5 categories based on their shape and their moments of inertia around their 3 orthogonal rotational axes. These 5 categories are diatomic molecules, linear molecules, spherical tops, symmetric tops and asymmetric tops. Classical Mechanics For the rigid rotor, the Hamiltonian reduces to the kinetic energy, $H = T$, since $H = T + V$ where $T$ is kinetic energy and $V$ is potential energy, and the potential energy $V$ is 0 because there is no resistance to the rotation (similar to the particle in a box model). Since $H = T$, we can write: ${T = }\dfrac{1}{2}\sum{m_{i}v_{i}^2}$ However, we have to express $v_i$ in terms of the rotation. Since $\omega = \dfrac{v}{r}$, where $\omega$ is the angular velocity, we can write: $v_{i} = \omega \times r_{i}$ Thus the kinetic energy becomes: $T = \dfrac{1}{2}\sum{m_{i}\,v_{i}\cdot\left(\omega \times r_{i}\right)}$ Using the cyclic property of the scalar triple product, and noting that $\omega$ is the same for every particle in a rigid body, this can be rewritten as: $T = \dfrac{\omega}{2}\cdot\sum{m_{i}\left(r_{i} \times v_{i}\right)} = \dfrac{\omega}{2}\cdot\sum{l_{i}} = \dfrac{\omega L}{2}$ where $l_i$ is the angular momentum of the ith particle, and L is the angular momentum of the entire system. Also, we know from physics that $L = I\omega$ where I is the moment of inertia of the rigid body relative to the axis of rotation.
We can then rewrite the kinetic energy as $T = \omega\dfrac{{I}\omega}{2} = \dfrac{1}{2}{I}\omega^2$ Quantum Mechanics The rotational Hamiltonian is: $\hat{H} = \dfrac{\hat{J}^{2}}{2I}$ where $\hat{J}$ is the angular momentum operator, and the Schrödinger equation for the rigid rotor is: $\dfrac{\hat{J}^{2}}{2I}\psi = E\psi$ Solving it gives: $E_J = \dfrac{J(J+1)h^2}{8\pi^2I}$ where $J$ is the rotational quantum number. If we define the rotational constant $B = \dfrac {h}{8 \pi^2I}$ then substituting it into the energy expression gives: $E_{J} = J(J+1)Bh$ The transition energy between adjacent levels is therefore an even multiple of Bh: from $J = 0$ to $J = 1$, $\Delta{E_{0 \rightarrow 1}} = 2Bh$, and from $J = 1$ to $J = 2$, $\Delta{E}_{1 \rightarrow 2} = 4Bh$. Theory When a gas molecule is irradiated with microwave radiation, a photon can be absorbed through the interaction of the photon's electric field with the electrons in the molecule. For the microwave region, this energy absorption is in the range needed to cause transitions between rotational states of the molecule. However, only molecules with a permanent dipole moment that changes upon rotation can be investigated using microwave spectroscopy. This is because there must be a charge difference across the molecule for the oscillating electric field of the photon to impart a torque upon the molecule around an axis that is perpendicular to this dipole and passes through the molecule's center of mass. This interaction is expressed by the transition dipole moment for the transition between two rotational states: $\text{Probability of Transition}=\int \psi_{rot}(F)\hat\mu \psi_{rot}(I)d\tau$ where $\psi_{rot}(F)$ is the complex conjugate of the wave function for the final rotational state, $\psi_{rot}(I)$ is the wave function of the initial rotational state, and $\hat\mu$ is the dipole moment operator with Cartesian components $\mu_x$, $\mu_y$, $\mu_z$.
For this integral to be nonzero, the integrand must be an even function, because any odd function integrated over symmetric limits, such as from negative infinity to positive infinity, is always zero. In addition to the constraints imposed by the transition moment integral, transitions between rotational states are also limited by the nature of the photon itself. A photon carries one unit of angular momentum, so when it interacts with a molecule it can only impart one unit of angular momentum to the molecule. This leads to the selection rule that a transition can only occur between rotational energy levels that are one quantum rotation level (J) apart1. $\Delta\textrm{J}=\pm 1$ The transition moment integral and the selection rule for rotational transitions tell whether a transition from one rotational state to another is allowed. However, they do not take into account whether the state being transitioned from is actually populated, meaning that molecules are in that energy state. This leads to the concept of the Boltzmann distribution of states. The Boltzmann distribution is a statistical distribution of energy states for an ensemble of molecules based on the temperature of the sample2. $\dfrac{n_J}{n_0} = \dfrac{e^{(-E_{rot}(J)/RT)}}{\sum_{J=1}^{J=n} e^{(-E_{rot}(J)/RT)}}$ where: • $E_{rot}(J)$ is the molar energy of the Jth rotational energy state of the molecule, • R is the gas constant, • T is the temperature of the sample, • $n_J$ is the number of molecules in the Jth rotational level, and • $n_0$ is the total number of molecules in the sample. This distribution of energy states is the main contributing factor for the observed absorption intensity distributions seen in the microwave spectrum.
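The population distribution can be evaluated numerically. The sketch below uses the rotational constant of CO (about 1.93 cm⁻¹) as an illustrative value and also includes the (2J+1) degeneracy of each rotational level, which multiplies the Boltzmann factor when line intensities are predicted:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34      # Planck constant, J*s
C_CM = 2.99792458e10    # speed of light, cm/s

def rotational_populations(b_cm, temp_k, j_max=40):
    """Normalized populations (2J+1)*exp(-E_rot/kT) for J = 0..j_max,
    with E_rot = B*hc*J(J+1) and B given in cm^-1."""
    kt_cm = K_B * temp_k / (H * C_CM)   # kT expressed in cm^-1
    pops = [(2*j + 1) * math.exp(-b_cm * j * (j + 1) / kt_cm)
            for j in range(j_max + 1)]
    total = sum(pops)
    return [p / total for p in pops]

# Illustrative: CO (B ~ 1.93 cm^-1) at room temperature
pops = rotational_populations(1.93, 298.0)
most_populated_j = max(range(len(pops)), key=lambda j: pops[j])
print(most_populated_j)  # the J level with the largest population
```

The most populated level is not J = 0: the growing (2J+1) degeneracy initially outweighs the falling Boltzmann factor, which is why microwave absorption intensities rise and then fall across the spectrum.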
This distribution means that the absorption peak corresponding to the transition from the most populated energy state (based on the Boltzmann equation) will be the largest, with the peaks on either side steadily decreasing. Degrees of Freedom A molecule has a total of 3N degrees of freedom, where N equals the number of atoms in the molecule. These degrees of freedom can be broken down into 3 categories3. • Translational: These are the simplest of the degrees of freedom. They entail the movement of the entire molecule's center of mass. This movement can be completely described by three orthogonal vectors and thus contains 3 degrees of freedom. • Rotational: These are rotations around the center of mass of the molecule, and like the translational movement they can be completely described by three orthogonal vectors, so this category again contains only 3 degrees of freedom. However, in the case of a linear molecule only two degrees of freedom are present, because rotation about the molecular axis has a negligible inertia. • Vibrational: These are any other types of movement not assigned to rotational or translational movement, so there are 3N – 6 vibrational degrees of freedom for a nonlinear molecule and 3N – 5 for a linear molecule. These vibrations include bending, stretching, wagging and many other aptly named internal movements of a molecule, arising from the numerous combinations of stretches, contractions, and bends that can occur between the bonds of atoms in the molecule. Each of these degrees of freedom is able to store energy. However, in the case of rotational and vibrational degrees of freedom, energy can only be stored in discrete amounts, due to the quantized nature of the energy levels of a molecule as described by quantum mechanics.
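The counting rules above can be captured in a short helper (the molecules named in the comments are just examples):

```python
def degrees_of_freedom(n_atoms, linear):
    """Split the 3N degrees of freedom into translational,
    rotational, and vibrational counts."""
    translational = 3
    rotational = 2 if linear else 3
    vibrational = 3 * n_atoms - translational - rotational
    return translational, rotational, vibrational

# CO2 is linear (3 atoms): 3N - 5 = 4 vibrational modes
print(degrees_of_freedom(3, linear=True))
# H2O is bent (3 atoms): 3N - 6 = 3 vibrational modes
print(degrees_of_freedom(3, linear=False))
```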
In the case of rotations, the energy stored depends on the rotational inertia of the gas molecule along with the corresponding quantum number describing the energy level. Rotational Symmetries To analyze molecules for rotational spectroscopy, we can break molecules down into 5 categories based on their shapes and their moments of inertia around their 3 orthogonal rotational axes:4 1. Diatomic Molecules 2. Linear Molecules 3. Spherical Tops 4. Symmetrical Tops 5. Asymmetrical Tops Diatomic Molecules The rotations of a diatomic molecule can be modeled as a rigid rotor. This rigid rotor model has two masses attached to each other at a fixed distance. Its moment of inertia (I) is equal to the reduced mass of the rotor multiplied by the square of the fixed distance between the two masses: $\large I_e= \mu r_e^2$ $\large \mu=\dfrac{m_1 m_2} {m_1+m_2}$ Using quantum mechanical calculations it can be shown that the energy levels of the rigid rotor depend on its inertia and the rotational quantum number J2. $E(J) = B_e J(J+1)$ $B_e = \dfrac{h}{8 \pi^2 cI_e}$ However, this rigid rotor model fails to take into account that bonds do not act like rods of fixed length, but like springs, so that as the angular velocity of the molecule increases, so does the distance between the atoms. This leads to the nonrigid rotor model, in which a centrifugal distortion term ($D_e$) is added to the energy equation to account for this stretching during rotation: $E(J)(cm^{-1}) = B_e J(J+1) – D_e J^2(J+1)^2$ This means that for a diatomic molecule the transition energy between two rotational states equals $E=B_e[J'(J'+1)-J''(J''+1)]-D_e[J'^2(J'+1)^2-J''^2(J''+1)^2]\label{8}$ Where J' is the quantum number of the final rotational energy state and J'' is the quantum number of the initial rotational energy state.
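Plugging numbers into these expressions gives a feel for the magnitudes involved. The sketch below uses CO, whose equilibrium bond length is about 112.8 pm, as an illustrative example:

```python
import math

H = 6.62607015e-34       # Planck constant, J*s
C_CM = 2.99792458e10     # speed of light, cm/s
AMU = 1.66053907e-27     # atomic mass unit, kg

def rotational_constant_cm(m1_amu, m2_amu, r_m):
    """B_e = h / (8 pi^2 c I_e) in cm^-1, with I_e = mu * r_e^2."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU   # reduced mass, kg
    inertia = mu * r_m**2                              # moment of inertia, kg m^2
    return H / (8 * math.pi**2 * C_CM * inertia)

# CO: 12C = 12.000 amu, 16O = 15.995 amu, r_e ~ 1.128e-10 m
b_e = rotational_constant_cm(12.000, 15.995, 1.128e-10)
print(round(b_e, 2))       # ~1.93 cm^-1
print(round(2 * b_e, 2))   # rigid-rotor line spacing 2B_e, ~3.86 cm^-1
```

The computed value is close to the experimental rotational constant of CO (about 1.93 cm⁻¹), and the rigid-rotor prediction of lines spaced by 2B_e matches the even multiples of Bh derived earlier.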
Using the selection rule $\Delta{J}= \pm 1$, the spacing between adjacent peaks in the microwave absorption spectrum of a diatomic molecule is $E_R =(2B_e-4D_e)+(2B_e-12D_e){J}''-12D_e J''^2-4D_e J''^3$ Linear Molecules Linear molecules behave in the same way as diatomic molecules when it comes to rotations, so they can be modeled as non-rigid rotors just like diatomic molecules and have the same equation for their rotational energy levels. The only difference is that there are now more masses along the rotor, so the moment of inertia is the sum, over all the masses, of each mass multiplied by the square of its distance from the center of mass of the rotor2. $\large I_e=\sum_{j=1}^{n} m_j r_{ej}^2$ Where mj is the mass of the jth mass on the rotor and rej is the equilibrium distance between the jth mass and the center of mass of the rotor. Spherical Tops Spherical tops are molecules in which all three orthogonal rotations have equal inertia; they are highly symmetrical. Because such a molecule has no permanent dipole, spherical tops do not give a microwave rotational spectrum. Symmetrical Tops Symmetrical tops are molecules with two rotational axes that have the same inertia and one unique rotational axis with a different inertia. Symmetrical tops can be divided into two categories based on the relationship between the inertia of the unique axis and the inertia of the two axes with equivalent inertia. If the unique rotational axis has a greater inertia than the degenerate axes, the molecule is called an oblate symmetrical top. If the unique rotational axis has a lower inertia than the degenerate axes, the molecule is called a prolate symmetrical top. For a rough picture, think of oblate tops as frisbees and prolate tops as footballs. Figure $3$: Symmetric Tops: (Left) Geometrical example of an oblate top and (right) a prolate top.
Images used with permission from Wikipedia.com. In the case of linear molecules there is one degenerate rotational axis which in turn has a single rotational constant. With symmetrical tops there is now one unique axis and two degenerate axes, so an additional rotational constant is needed to describe the energy levels of a symmetrical top. In addition to the rotational constant, an additional quantum number must be introduced to describe the rotational energy levels of the symmetric top. These two additions give us the following rotational energy levels of a prolate and oblate symmetric top

$E_{(J,K)}(cm^{-1})=B_e J(J+1)+(A_e-B_e)K^2$

Where $A_e$ is the rotational constant of the unique axis, $B_e$ is the rotational constant of the two degenerate axes, $J$ is the total rotational angular momentum quantum number and $K$ is the quantum number that represents the portion of the total angular momentum that lies along the unique rotational axis. This leads to the property that $K$ is always equal to or less than $J$. Thus we get the two selection rules for symmetric tops

$\Delta J = 0, \pm1$ $\Delta K=0$ when $K\neq 0$

$\Delta J = \pm1$ $\Delta K=0$ when $K=0$

However, like the rigid rotor approximation for linear molecules, we must also take into account the elasticity of the bonds in symmetric tops. Therefore, in a similar manner to the nonrigid rotor, we add centrifugal distortion terms: one for each quantum number and one for the coupling between the two.

$E_{(J,K)}(cm^{-1})=B_e J(J+1)-D_{eJ} J^2(J+1)^2+(A_e-B_e)K^2-D_{eK} K^4-D_{eJK} J(J+1)K^2 \label{13}$

Asymmetrical Tops

Asymmetrical tops have three orthogonal rotational axes that all have different moments of inertia and most molecules fall into this category. Unlike linear molecules and symmetric tops, these types of molecules do not have a simplified energy equation to determine the energy levels of the rotations.
These types of molecules do not follow a specific pattern and usually have very complex microwave spectra. Additional Rotationally Sensitive Spectroscopies In addition to microwave spectroscopy, IR spectroscopy can also be used to probe rotational transitions in a molecule. However, in the case of IR spectroscopy the rotational transitions are coupled to the vibrational transitions of the molecule. One other spectroscopy that can probe the rotational transitions in a molecule is Raman spectroscopy, which uses UV-visible light scattering to determine energy levels in a molecule. However, a very high sensitivity detector must be used to analyze rotational energy levels of a molecule.
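Returning to the symmetric-top term-value expression given earlier, $E_{(J,K)} = B_e J(J+1) + (A_e - B_e)K^2$ with its centrifugal distortion corrections, the levels can be sketched as a small function. The constants below are arbitrary illustrative numbers (not for any real molecule), and the restriction $|K| \le J$ from the text is enforced explicitly:

```python
# Term values of a symmetric top with centrifugal distortion, following
# E(J,K) = Be*J(J+1) - DJ*J^2(J+1)^2 + (Ae - Be)*K^2 - DK*K^4 - DJK*J(J+1)*K^2.
# All constants are in cm^-1 and are arbitrary illustrative values.

def symmetric_top_level(J, K, Be, Ae, DJ=0.0, DK=0.0, DJK=0.0):
    """E(J, K) in cm^-1; the quantum number K is restricted to |K| <= J."""
    if abs(K) > J:
        raise ValueError("K cannot exceed J")
    return (Be * J * (J + 1) - DJ * J**2 * (J + 1)**2
            + (Ae - Be) * K**2 - DK * K**4 - DJK * J * (J + 1) * K**2)

# For a prolate top (Ae > Be) the K = 1 level lies above the K = 0 level:
E_10 = symmetric_top_level(1, 0, Be=0.5, Ae=5.0)  # 1.0 cm^-1
E_11 = symmetric_top_level(1, 1, Be=0.5, Ae=5.0)  # 5.5 cm^-1
```

For an oblate top ($A_e < B_e$) the same function gives $K \neq 0$ levels that lie below their $K = 0$ counterparts.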
The rotational energy levels of a diatomic molecule in 3D space are given by the quantum mechanical solution to the rotating rigid rotor: $E = J(J + 1) \dfrac {\hbar ^2}{2I} \label {5.8.30}$ where

• $J$ is a rotational quantum number ranging from $J=0$ to $J=\infty$.
• $I$ is the moment of inertia and is the rotational equivalent of mass in translation.

The moment of inertia of a molecule is very sensitive to the geometry of the molecule (see below). Values of rotational inertia for common shapes of objects. In general, any three-dimensional species (e.g., a molecule) will have three degrees of rotational energy since it can rotate about the $x$, $y$ and $z$ axes (i.e., the angular momentum vector can lie along each axis). The energy spacing for rotation about each axis is given by Equation $\ref{5.8.30}$ and is shown geometrically below. Energy spacing for a rigid rotor (in 3D) as a function of the $J$ quantum number. The lowest energy transition is the $J=0$ to $J=1$ transition and corresponds to $E_{J=0 \rightarrow J=1} = [1(1 + 1) - 0(0+1)] \dfrac {\hbar ^2}{2I} = \dfrac { \hbar ^2}{I} \label{EQ2}$

Most of the mass of the molecule is in the nuclei, so when calculating the moment of inertia $I$ we can ignore the electrons and just use the nuclei. But the size of the nuclei is around $10^{-5}$ times the bond length. This means the moment of inertia around an axis along the bond is going to be about $10^{10}$ times smaller than the moment of inertia around an axis normal to the bond, and therefore the energy level spacings will be around $10^{10}$ times bigger for rotation around the bond than normal to it. It is common to argue that linear molecules do not rotate about the axis of symmetry, which is often justified in terms of the symmetry of the molecule (see Group Theory). Therefore, out of three possible rotational degrees of freedom for a three-dimensional object, only two are applicable to linear molecules and the third rotation is often ignored.
This is formalized when counting how many vibrational degrees of freedom a molecule has:

• Non-linear molecules have 3N degrees of freedom in total: 3 are translational and 3 are rotational (all are allowed for non-linear molecules), so the remaining 3N-6 are vibrational.
• In contrast, linear molecules have 3 translational and only 2 rotational degrees of freedom, and to keep a total of 3N degrees of freedom, they have 3N-5 vibrational degrees.

This counting is justified because rotation around the axis along the bond of the molecule requires huge energies (Equation $\ref{EQ2}$) due to the much smaller moment of inertia about that axis. To excite these rotations, gamma ray photons are required, which is the topic of high energy physics. Moreover, the temperature at which energy levels above the ground level are important is extremely high (e.g., it can be higher than the temperature of dissociation of the molecule). Therefore, at temperatures of practical interest, rotation around the axis of the linear molecule is not important for thermodynamic properties.

Rotational Spectroscopy of Diatomic Molecules

The rotation of a diatomic molecule can be described by the rigid rotor model. To imagine this model think of a spinning dumbbell. The dumbbell has two masses set at a fixed distance from one another and spins around its center of mass (COM). This model can be further simplified using the concept of reduced mass, which allows the problem to be treated as a single-body system.

Introduction

Similar to most quantum mechanical systems, our model can be completely described by its wave function. Therefore, when we attempt to solve for the energy we are led to the Schrödinger Equation. In the context of the rigid rotor, where there is a natural center (rotation around the COM), the wave functions are best described in spherical coordinates. In addition to having pure rotational spectra, diatomic molecules have rotational spectra associated with their vibrational spectra.
The order of magnitude differs greatly between the two, with the rotational transitions having energies on the order of 1-10 cm-1 (microwave radiation) and the vibrational transitions having energies on the order of 100-3,000 cm-1 (infrared radiation). Rotational spectroscopy is therefore referred to as microwave spectroscopy.

Rigid Rotor Model

A diatomic molecule consists of two masses bound together. The distance between the masses, or the bond length (l), can be considered fixed because the level of vibration in the bond is small compared to the bond length. As the molecule rotates it does so around its COM (shown in Figure $1$ as the intersection of $R_1$ and $R_2$) with a rotational frequency of $\nu_{rot}$.

Reduced Mass

The system can be simplified using the concept of reduced mass, which allows it to be treated as one rotating body. The system can be entirely described by the fixed distance between the two masses instead of their individual radii of rotation.
Relationships between the radii of rotation and bond length are derived from the COM given by: $M_{1}R_{1}=M_{2}R_{2},$ where l is the sum of the two radii of rotation: $l=R_{1}+R_{2}.$ Through simple algebra both radii can be found in terms of their masses and bond length: $R_{1}=\dfrac{M_{2}}{M_{1}+M_{2}}l$ and $R_{2}=\dfrac{M_{1}}{M_{1}+M_{2}}l.$ The kinetic energy of the system, $T$, is the sum of the kinetic energy for each mass: $T=\dfrac{M_{1}v_{1}^2+M_{2}v_{2}^2}{2},$ where $v_{1}=2\pi{R_{1}}\nu_{rot}$ and $v_{2}=2\pi{R_{2}}\nu_{rot}.$ Using the angular velocity, $\omega=2\pi{\nu_{rot}}$, the kinetic energy can now be written as: $T=\dfrac{\left(M_{1}R_{1}^2+M_{2}R_{2}^2\right)\omega^2}{2}.$ With the moment of inertia, $I=M_{1}R_{1}^2+M_{2}R_{2}^2,$ the kinetic energy can be further simplified: $T=\dfrac{I\omega^2}{2}.$ The moment of inertia can be rewritten by plugging in for $R_1$ and $R_2$: $I=\dfrac{M_{1}M_{2}}{M_{1}+M_{2}}l^2,$ where $\dfrac{M_{1}M_{2}}{M_{1}+M_{2}}$ is the reduced mass, $\mu$. The moment of inertia and the system are now solely defined by a single mass, $\mu$, and a single length, $l$: $I=\mu{l^2}.$

Angular Momentum

Another important concept when dealing with rotating systems is the angular momentum, defined by: $L=I\omega$ Looking back at the kinetic energy: $T=\dfrac{I\omega^2}{2}=\dfrac{I^2\omega^2}{2I}=\dfrac{L^2}{2I}$ The angular momentum can now be described in terms of the moment of inertia and kinetic energy: $L^2=2IT$.
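The reduced mass and moment of inertia relations above can be evaluated with a quick numerical sketch. The masses and bond length below are rough values for ¹H³⁵Cl, assumed here purely for illustration:

```python
# Reduced mass mu = M1*M2/(M1 + M2) and moment of inertia I = mu*l^2
# for a diatomic.  The masses (amu) and bond length (m) are approximate
# literature values for 1H35Cl, used only as an illustrative assumption.

AMU = 1.66054e-27  # kg per atomic mass unit

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

mu_amu = reduced_mass(1.008, 34.969)   # ~0.98 amu
mu_kg = mu_amu * AMU                   # ~1.63e-27 kg
I = mu_kg * (1.27e-10)**2              # ~2.6e-47 kg m^2
```

Note how strongly the light hydrogen atom dominates the reduced mass: $\mu$ is close to the mass of H alone.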
Setting up the Schrödinger Equation

The wave functions for the rigid rotor model are found from solving the time-independent Schrödinger Equation: $\hat{H}\psi=E\psi \label{2.1}$ Where the Hamiltonian Operator is: $\hat{H}=\dfrac{-\hbar^2}{2\mu}\nabla^2+V(r) \label{2.2}$ where $\nabla^2$ is the Laplacian Operator and can be expressed in either Cartesian coordinates: $\nabla^2=\dfrac{\partial^2}{\partial{x^2}}+\dfrac{\partial^2}{\partial{y^2}}+\dfrac{\partial^2}{\partial{z^2}} \label{2.3}$ or in spherical coordinates: $\nabla^2=\dfrac{1}{r^2}\dfrac{\partial}{\partial{r}}\left(r^2\dfrac{\partial}{\partial{r}}\right)+\dfrac{1}{r^2\sin{\theta}}\dfrac{\partial}{\partial{\theta}}\left(\sin{\theta}\dfrac{\partial}{\partial{\theta}}\right)+\dfrac{1}{r^2\sin^2{\theta}}\dfrac{\partial^2}{\partial{\phi^2}} \label{2.4}$ At this point it is important to incorporate two assumptions:

• The distance between the two masses is fixed. This causes the terms in the Laplacian containing $\dfrac{\partial}{\partial{r}}$ to be zero.
• The orientation of the masses is completely described by $\theta$ and $\phi$, and in the absence of electric or magnetic fields the energy is independent of orientation. This causes the potential energy portion of the Hamiltonian to be zero.

The wave functions $\psi{\left(\theta,\phi\right)}$ are customarily represented by $Y\left(\theta,\phi\right)$ and are called spherical harmonics.
The Hamiltonian Operator can now be written: $\hat{H}=\hat{T}=\dfrac{-\hbar^2}{2\mu{l^2}}\left[\dfrac{1}{\sin{\theta}}\dfrac{\partial}{\partial{\theta}}\left(\sin{\theta}\dfrac{\partial}{\partial{\theta}}\right)+\dfrac{1}{\sin^2{\theta}}\dfrac{\partial^2}{\partial{\phi^2}}\right]\label{2.5}$ with the squared Angular Momentum Operator being defined as: $\hat{L}^2=2I\hat{T}$ $\hat{L}^2=-\hbar^2\left[\dfrac{1}{\sin{\theta}}\dfrac{\partial}{\partial{\theta}}\left(\sin{\theta}\dfrac{\partial}{\partial{\theta}}\right)+\dfrac{1}{\sin^2{\theta}}\dfrac{\partial^2}{\partial{\phi^2}}\right]$ The Schrödinger Equation is now expressed as: $\dfrac{-\hbar^2}{2I}\left[\dfrac{1}{\sin{\theta}}\dfrac{\partial}{\partial{\theta}}\left(\sin{\theta}\dfrac{\partial}{\partial{\theta}}\right)+\dfrac{1}{\sin^2{\theta}}\dfrac{\partial^2}{\partial{\phi^2}}\right]Y\left(\theta,\phi\right)=EY\left(\theta,\phi\right) \label{2.6}$

Solving the Schrödinger Equation

The Schrödinger Equation can be solved using separation of variables.

Step 1: Let $Y\left(\theta,\phi\right)=\Theta\left(\theta\right)\Phi\left(\phi\right)$, and substitute: $\beta=\dfrac{2IE}{\hbar^2}$.
Set the Schrödinger Equation equal to zero: $\dfrac{\sin{\theta}}{\Theta\left(\theta\right)}\dfrac{d}{d\theta}\left(\sin{\theta}\dfrac{d\Theta}{d\theta}\right)+\beta\sin^2\theta+\dfrac{1}{\Phi\left(\phi\right)}\dfrac{d^2\Phi}{d\phi^2}=0$

Step 2: Because the terms containing $\Theta\left(\theta\right)$ and the terms containing $\Phi\left(\phi\right)$ depend on different variables, each group must equal the same constant in order for the equation to hold for all values: $\dfrac{\sin{\theta}}{\Theta\left(\theta\right)}\dfrac{d}{d\theta}\left(\sin{\theta}\dfrac{d\Theta}{d\theta}\right)+\beta\sin^2\theta=m^2$ $\dfrac{1}{\Phi\left(\phi\right)}\dfrac{d^2\Phi}{d\phi^2}=-m^2$

Step 3: Solving for $\Phi$ is fairly simple and yields: $\Phi\left(\phi\right)=\dfrac{1}{\sqrt{2\pi}}e^{im\phi}$ where $m=0,\pm{1},\pm{2},...$ Solving for $\Theta$ is considerably more complicated but gives the quantized result: $\beta=J(J+1)$ where $J$ is the rotational level with $J=0, 1, 2,...$

Step 4: The energy is quantized by expressing it in terms of $\beta$: $E=\dfrac{\hbar^2\beta}{2I}$

Step 5: Using the rotational constant, $B=\dfrac{\hbar^2}{2I}$, the energy is further simplified: $E=BJ(J+1)$

Energy of Rotational Transitions

When a molecule is irradiated with photons of light it may absorb the radiation and undergo an energy transition. The energy of the transition must be equivalent to the energy of the photon of light absorbed, given by $E=h\nu$. For a diatomic molecule the energy difference between rotational levels (J to J+1) is given by: $E_{J+1}-E_{J}=B(J+1)(J+2)-BJ(J+1)=2B(J+1)$ with J=0, 1, 2,... Because the difference of energy between rotational levels is in the microwave region (1-10 cm-1) rotational spectroscopy is commonly called microwave spectroscopy. In spectroscopy it is customary to represent energy in wave numbers (cm-1); in this notation B is written as $\tilde{B}$. To convert from units of energy to wave numbers, simply divide by $hc$, where c is the speed of light in cm/s (c=2.998e10 cm/s).
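The transition spacing $E_{J+1}-E_J = 2B(J+1)$ can be illustrated in a few lines. The value of $\tilde{B}$ below is an arbitrary example in wavenumbers, not tied to any molecule in the text:

```python
# Rigid-rotor line positions: E(J+1) - E(J) = 2B(J+1), so adjacent
# absorption lines are separated by a constant 2B.  B is in cm^-1 and
# the value below is an arbitrary illustrative number.

def transition_wavenumber(J, B):
    """Wavenumber (cm^-1) of the J -> J+1 absorption line."""
    return 2 * B * (J + 1)

B = 1.9  # cm^-1
lines = [transition_wavenumber(J, B) for J in range(4)]
gaps = {round(b - a, 10) for a, b in zip(lines, lines[1:])}
# For the rigid rotor every gap equals 2B = 3.8 cm^-1.
```

Collapsing the gaps into a set shows the defining feature of a rigid-rotor spectrum: a single constant spacing of $2\tilde{B}$ between neighboring lines.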
In wave numbers, $\tilde{B}=\dfrac{h}{8\pi^2{cI}}$. Figure $2$ predicts the rotational spectrum of a diatomic molecule to have several peaks spaced by $2 \tilde{B}$. This contrasts with vibrational spectra, which have only one fundamental peak for each vibrational mode. From the rotational spectrum of a diatomic molecule the bond length can be determined. Because $\tilde{B}$ is a function of $I$ and therefore a function of $l$ (bond length), $l$ can be readily solved for: $l=\sqrt{\dfrac{h}{8\pi^2{c}\tilde{B}\mu}}.$ Selection rules only permit transitions between consecutive rotational levels, $\Delta{J}=\pm{1}$, and require the molecule to contain a permanent dipole moment. Due to the dipole requirement, molecules such as HF and HCl have pure rotational spectra while molecules such as H2 and N2 are rotationally inactive.

Centrifugal Distortion

As molecules are excited to higher rotational energies they spin at a faster rate. The faster rate of spin increases the centrifugal force pushing outward on the molecules, resulting in a longer average bond length. Looking back, B and l are inversely related, so the addition of centrifugal distortion at higher rotational levels decreases the spacing between rotational levels. The correction for the centrifugal distortion may be found through perturbation theory: $E_{J}=\tilde{B}J(J+1)-\tilde{D}J^2(J+1)^2.$

Rotation-Vibration Transitions

Rotational transitions are on the order of 1-10 cm-1, while vibrational transitions are on the order of 1000 cm-1. The difference of magnitude between the energy transitions allows rotational levels to be superimposed within vibrational levels. Combining the energy of the rotational levels, $\tilde{E}_{J}=\tilde{B}J(J+1)$, with the vibrational levels, $\tilde{E}_{v}=\tilde{w}\left(v+1/2\right)$, yields the total energy of the respective rotation-vibration levels: $\tilde{E}_{v,J}=\tilde{w} \left(v+1/2\right)+\tilde{B}J(J+1)$ Following the selection rule, $\Delta{J}=\pm{1}$, Figure 3.
shows all of the allowed transitions for the first three rotational states, where J'' is the initial state and J' is the final state. When the $\Delta{J}=+{1}$ transitions are considered (blue transitions) the initial energy is given by: $\tilde{E}_{0,J}=\tilde{w}(1/2)+\tilde{B}J(J+1)$ and the final energy is given by: $\tilde{E}_{1,J+1}=\tilde{w}(3/2)+\tilde{B}(J+1)(J+2)$. The energy of the transition, $\Delta{\tilde{\nu}}=\tilde{E}_{1,J+1}-\tilde{E}_{0,J}$, is therefore: $\Delta{\tilde{\nu}}=\tilde{w}+2\tilde{B}(J+1)$ where J''=0, 1, 2,... When the $\Delta{J}=-{1}$ transitions are considered (red transitions) the initial energy is given by: $\tilde{E}_{0,J}=\tilde{w}\left(1/2\right)+\tilde{B}J(J+1)$ and the final energy is given by: $\tilde{E}_{1,J-1}=\tilde{w}\left(3/2\right)+\tilde{B}(J-1)(J).$ The energy of the transition is therefore: $\Delta{\tilde{\nu}}=\tilde{w}-2\tilde{B}(J)$ where J''=1, 2, 3,... The difference in energy between the J+1 transitions and J-1 transitions causes splitting of vibrational spectra into two branches. The J-1 transitions, shown by the red lines in Figure $3$, are lower in energy than the pure vibrational transition and form the P-branch. The J+1 transitions, shown by the blue lines in Figure 3, are higher in energy than the pure vibrational transition and form the R-branch. Notice that because the $\Delta{J}=0$ transition is forbidden there is no spectral line associated with the pure vibrational transition. Therefore there is a gap between the P-branch and R-branch, where the Q-branch would otherwise lie. In the high resolution HCl rotation-vibration spectrum the splitting of the P-branch and R-branch is clearly visible. Due to the small spacing between rotational levels, high resolution spectrophotometers are required to distinguish the rotational transitions.

Rotation-Vibration Interactions

Recall the Rigid-Rotor assumption that the bond length between two atoms in a diatomic molecule is fixed.
However, the anharmonicity correction for the harmonic oscillator predicts the gaps between energy levels to decrease and the equilibrium bond length to increase as higher vibrational levels are accessed. Due to the relationship between the rotational constant and bond length: $\tilde{B}=\dfrac{h}{8\pi^2{c}\mu{l^2}}$ the rotational constant is dependent on the vibrational level: $\tilde{B}_{v}=\tilde{B}-\tilde{\alpha}\left(v+\dfrac{1}{2}\right)$ Where $\tilde{\alpha}$ is the vibration-rotation coupling constant and $v$ is the vibrational level. As a consequence the spacing between rotational levels decreases at higher vibrational levels and unequal spacing between rotational levels in rotation-vibration spectra occurs. Including the rotation-vibration interaction, the spectra can be predicted.

For the R-branch $\tilde{E}_{1,J+1}-\tilde{E}_{0,J}$ $\tilde{\nu}=\left[\tilde{w}\left(\dfrac{3}{2}\right)+\tilde{B}_{1}\left(J+1\right)\left(J+2\right)\right]-\left[\tilde{w}\left(\dfrac{1}{2}\right)+\tilde{B}_{0}J\left(J+1\right)\right]$ $\tilde{\nu}=\tilde{w}+\left(\tilde{B}_{1}-\tilde{B}_{0}\right)J^2+\left(3\tilde{B}_{1}-\tilde{B}_{0}\right)J+2\tilde{B}_{1}$ where J=0, 1, 2,...

For the P-branch $\tilde{E}_{1,J-1}-\tilde{E}_{0,J}$ $\tilde{\nu}=\left[\tilde{w}\left(\dfrac{3}{2}\right)+\tilde{B}_{1}\left(J-1\right)J\right]-\left[\tilde{w}\left(\dfrac{1}{2}\right)+\tilde{B}_{0}J\left(J+1\right)\right]$ $\tilde{\nu}=\tilde{w}+\left(\tilde{B}_{1}-\tilde{B}_{0}\right)J^2-\left(\tilde{B}_{1}+\tilde{B}_{0}\right)J$ where J=1, 2, 3,...

Because $\tilde{B}_{1}<\tilde{B}_{0}$, as J increases:
• Spacing in the R-branch decreases.
• Spacing in the P-branch increases.

Problems

1. What is the potential energy of the Rigid-Rotor?
2. Derive the Schrödinger Equation for the Rigid-Rotor.
3. Researchers have been interested in knowing what Godzilla uses as the fuel source for his fire breathing. A recent breakthrough was made and some residue containing Godzilla's non-combusted fuel was recovered.
Studies on the residue showed that the fuel, Compound G, is a diatomic molecule and has a reduced mass of 1.615x10-27 kg. In addition, a microwave spectrum of Compound G was obtained and revealed equally spaced lines separated by 4.33 cm-1. Using the Rigid-Rotor model determine the bond length of Compound G.
4. How would deuterium substitution affect the pure rotational spectrum of HCl?
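A numerical sketch of problem 3 follows. The only assumption beyond the rigid-rotor model is that the equal line spacing equals $2\tilde{B}$; physical constants are CODATA values:

```python
# Problem 3 sketch: microwave lines of "Compound G" are spaced by
# 2B = 4.33 cm^-1 and mu = 1.615e-27 kg.  Solve I = h/(8 pi^2 c B),
# then l = sqrt(I/mu), under the rigid-rotor model.

import math

h = 6.62607015e-34    # J s
c_cm = 2.99792458e10  # speed of light, cm/s

B = 4.33 / 2                         # rotational constant, cm^-1
mu = 1.615e-27                       # reduced mass, kg
I = h / (8 * math.pi**2 * c_cm * B)  # moment of inertia, kg m^2
l = math.sqrt(I / mu)                # bond length in metres (~2.8 Angstrom)
```

The same three-line pattern (spacing → $\tilde{B}$ → $I$ → $l$) applies to any rigid-rotor microwave spectrum.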
In this section, we will learn how the rotational transitions of molecules can accompany their vibrational transitions. It is important to know how each peak correlates to the molecular processes of molecules. Rovibrational spectra can be analyzed to determine the average bond length.

Introduction

Each of the normal modes of vibration of heteronuclear diatomic molecules in the gas phase also contains closely-spaced (1-10 cm-1 difference) energy states attributable to rotational transitions that accompany the vibrational transitions. A molecule's rotation can be affected by its vibrational transition because there is a change in bond length, so these rotational transitions are expected to occur. Since vibrational energy states are on the order of 1000 cm-1, the rotational energy states can be superimposed upon the vibrational energy states.

Selection Rules

Rotational and vibrational transitions (modeled by the rigid rotor and harmonic oscillator, respectively) help us identify how molecules interact with each other and determine their bond lengths, as mentioned in the previous section. In order to characterize each transition, we have to consider quantities like the wavenumber, force constant, and quantum numbers. There are rotational energy levels associated with all vibrational levels, so vibrational transitions can couple with rotational transitions to give rovibrational spectra. We treat the molecule's vibrations as those of a harmonic oscillator (ignoring anharmonicity).
The energy of a vibration is quantized in discrete levels and given by $E_v=h\nu \left(v+\dfrac{1}{2} \right) \nonumber$ Where v is the vibrational quantum number and can have integer values 0, 1, 2..., and $\nu$ is the frequency of the vibration given by: $\nu=\dfrac{1}{2\pi} \sqrt{ \dfrac{k}{\mu}} \nonumber$ Where k is the force constant and $\mu$ is the reduced mass of a diatomic molecule with atom masses m1 and m2, given by $\mu=\dfrac{{m}_1{m}_2}{{m}_1+{m}_2}\nonumber$ We treat the molecule's rotations as those of a rigid rotor (ignoring centrifugal distortion). The energy of a rotation is also quantized in discrete levels given by $E_r=\dfrac{h^2}{8\pi^2I} J(J+1)\nonumber$ In which $I$ is the moment of inertia, given by ${I}=\mu{r}^2\nonumber$ where $\mu$ is the reduced mass from above and r is the equilibrium bond length. Experimentally, frequencies or wavenumbers are measured rather than energies, and dividing by h or hc gives more commonly seen term symbols, F(J) using the rotational quantum number J and the rotational constant B in either frequency $F(J)=\dfrac{E_r}{h}=\dfrac{h}{8\pi^2I} J(J+1)=BJ(J+1)\nonumber$ or wavenumbers $F(J)=\dfrac{E_r}{hc}=\dfrac{h}{8\pi^2cI} J(J+1)=BJ(J+1)\nonumber$ It is important to note in which units one is working since the rotational constant is always represented as B, whether in frequency or wavenumbers. Vibrational Transition Selection Rules At room temperature, typically only the lowest energy vibrational state v= 0 is populated, so typically v0 = 0 and ∆v = +1. The full selection rule is technically that ∆v = ±1, however here we assume energy can only go upwards because of the lack of population in the upper vibrational states. Rotational Transition Selection Rules At room temperature, states with J≠0 can be populated since they represent the fine structure of vibrational states and have smaller energy differences than successive vibrational levels. 
Additionally, ∆J = ±1 since a photon contains one quantum of angular momentum and we abide by the principle of conservation of angular momentum. This is the selection rule for rotational transitions. The transition ∆J = 0 (i.e. J'' = 0 and J' = 0), where v0 = 0 and ∆v = +1, is forbidden and the pure vibrational transition is not observed in most cases. The rotational selection rule gives rise to an R-branch (when ∆J = +1) and a P-branch (when ∆J = -1). Each line of the branch is labeled R(J) or P(J), where J represents the value of the lower state.

R-branch

When $∆J = +1$, i.e. the rotational quantum number in the excited state is one more than the rotational quantum number in the ground state – R branch (in French, riche or rich). To find the energy of a line of the R-branch: \begin{align*} \Delta{E} &=h\nu_0 +hB \left [J^\prime (J^\prime{+1})-J(J+1) \right] \[4pt] &=h\nu_0 +hB \left[(J+1)(J+2)-J(J+1)\right] \[4pt] &=h\nu_0 +2hB(J+1) \end{align*}

P-branch

When $∆J = -1$, i.e. the rotational quantum number in the excited state is one less than the rotational quantum number in the ground state – P branch (in French, pauvre or poor). To find the energy of a line of the P-branch: \begin{align*} \Delta{E} &=h\nu_0 +hB \left [J^\prime(J^\prime+1)-J(J+1) \right] \[4pt] &=h\nu_0 +hB \left [J(J-1)-J(J+1) \right] \[4pt] &=h\nu_0 -2hBJ \end{align*}

Q-branch

When $∆J = 0$, i.e. the rotational quantum number in the ground state is the same as the rotational quantum number in the excited state – Q branch (simply, the letter between P and R). To find the energy of a line of the Q-branch: \begin{align*} \Delta{E} &= h\nu_0 +hB[J^\prime(J^\prime+1)-J(J+1)] \[4pt] &=h\nu_0 \end{align*}

The Q-branch can be observed in polyatomic molecules and diatomic molecules with electronic angular momentum in the ground electronic state, e.g. nitric oxide, NO. Most diatomics, such as O2, have no electronic orbital angular momentum in their ground electronic state and yield no Q-branch.
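The branch expressions above generate the familiar line pattern directly. In this minimal sketch, $\nu_0$ and $B$ are arbitrary illustrative wavenumbers, not values for a real molecule:

```python
# R-branch lines at nu0 + 2B(J+1) for J = 0, 1, 2, ... and P-branch lines
# at nu0 - 2BJ for J = 1, 2, 3, ... (rigid rotor + harmonic oscillator).
# nu0 and B are arbitrary illustrative values in cm^-1.

def r_line(J, nu0, B):
    return nu0 + 2 * B * (J + 1)

def p_line(J, nu0, B):
    return nu0 - 2 * B * J

nu0, B = 2900.0, 10.0
r_branch = [r_line(J, nu0, B) for J in range(3)]     # [2920.0, 2940.0, 2960.0]
p_branch = [p_line(J, nu0, B) for J in range(1, 4)]  # [2880.0, 2860.0, 2840.0]
# The 4B gap between P(1) and R(0) is where the forbidden Q-branch would lie.
```

Within each branch the lines are $2B$ apart, while the P(1)-to-R(0) gap is $4B$, twice as wide, which is how the band origin is located in practice.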
As seen in Figure 1, the lines of the P-branch (represented by purple arrows) and R-branch (represented by red arrows) are separated by a specific multiple of B, namely 2B, thus the bond length can be deduced without the need for pure rotational spectroscopy.

Energy

The total nuclear energy of the combined rotation-vibration terms, $S(v, J)$, can be written as the sum of the vibrational energy and the rotational energy $S(v,J)=G(v)+F(J) \nonumber$ where $G(v)$ represents the energy of the harmonic oscillator, ignoring anharmonic components, and $F(J)$ represents the energy of a rigid rotor, ignoring centrifugal distortion. From this, we can derive $S(v,J)=\nu_0 \left(v+\dfrac{1}{2}\right)+BJ(J+1)\nonumber$ The relative intensity of the P- and R-branch lines depends on the thermal distribution of molecules among the rotational states; more specifically, they depend on the population of the lower $J$ state. If we represent the population of the Jth upper level as NJ and the population of the lower state as N0, we can find the population of the upper state relative to the lower state using the Boltzmann distribution: $\dfrac{N_J}{N_0}={(2J+1)e}^{-E_r/kT}\nonumber$ (2J+1) gives the degeneracy of the Jth upper level arising from the allowed values of $M_J$ (+J to –J). As J increases, the degeneracy factor increases and the exponential factor decreases, so the relative population rises to a maximum at a certain level, Jmax, and then falls toward zero at high J. Thus, setting $\dfrac{d}{dJ} \left( \dfrac{N_J}{N_0} \right)=0\nonumber$ and differentiating, we obtain $J_{max}=\left(\dfrac{kT}{2hcB}\right)^\frac{1}{2}-\dfrac{1}{2}\nonumber$ This is the reason that rovibrational spectral line intensities increase to a maximum as J increases, then decrease to zero as J continues to increase, as seen in Figure 2 and Figure 3. From this relationship, we can also deduce that in heavier molecules, B will decrease because the moment of inertia will increase, and the decrease in the exponential factor is less pronounced.
This results in the population distribution shifting to higher values of J. Similarly, as temperature increases, the population distribution will shift towards higher values of J.

Ideal Spectrum

The spectrum we expect, based on the conditions described above, consists of lines equidistant in energy from one another, separated by a value of 2B. The relative intensity of the lines is a function of the rotational populations of the ground states, i.e. the intensity is proportional to the number of molecules that have made the transition. The overall intensity of the lines depends on the vibrational transition dipole moment. Between P(1) and R(0) lies the zero gap, where the first lines of both the P- and R-branch are separated by 4B, assuming that the rotational constant B is equal for both energy levels. The zero gap is also where we would expect the Q-branch, depicted as the dotted line, if it is allowed.

Real Spectra

We find that real spectra do not exactly fit the expectations from above. As energy increases, the R-branch lines become increasingly similar in energy (i.e., the lines move closer together) and as energy decreases, the P-branch lines become increasingly dissimilar in energy (i.e. the lines move farther apart). This is attributable to two phenomena: rotational-vibrational coupling and centrifugal distortion.

Rotational-Vibrational Coupling

As a diatomic molecule vibrates, its bond length changes. Since the moment of inertia is dependent on the bond length, it too changes and, in turn, changes the rotational constant B. We assumed above that B of R(0) and B of P(1) were equal, however they differ because of this phenomenon and B is given by $B_v= B_e-\alpha_e \left(v+\dfrac{1}{2}\right)\nonumber$ Where ${B}_{e}$ is the rotational constant for a rigid rotor and $\alpha_{e}$ is the rotational-vibrational coupling constant.
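The vibrational dependence of B described above can be made concrete with a short sketch. The values $B_e \approx 10.59$ cm⁻¹ and $\alpha_e \approx 0.31$ cm⁻¹ are rough literature values for H³⁵Cl, assumed here for illustration and not given in this text:

```python
# Vibrational dependence of the rotational constant,
# B_v = Be - alpha_e*(v + 1/2).  Be and alpha_e (cm^-1) are approximate
# literature values for 1H35Cl, used only as an illustrative assumption.

Be, alpha_e = 10.59, 0.31  # cm^-1

def B_v(v):
    """Effective rotational constant in vibrational level v, cm^-1."""
    return Be - alpha_e * (v + 0.5)

B0, B1 = B_v(0), B_v(1)
# Because B1 < B0, R-branch lines converge and P-branch lines spread apart.
```

Note that the difference $B_0 - B_1$ recovers $\alpha_e$ itself, which is exactly what the method of combination differences extracts from a measured band.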
The information in the band can be used to determine B0 and B1 of the two different energy states as well as the rotational-vibrational coupling constant, which can be found by the method of combination differences.

Combination Differences

Combination differences involves finding the values of B0 and B1, and from them the rotational-vibrational coupling constant, by taking the difference between two transitions that share a common state. To determine B1, we pair transitions sharing a common lower state; here, R(1) and P(1). Both transitions begin from the lower state J = 1, so the difference in energy between the lines depends only on B1. $\Delta E_R-\Delta E_P = E(\nu=1, J' =J+1) - E(\nu=1,J' =J-1)\nonumber$ Inserting this information into the equation from above, we obtain $=\tilde{\nu} [R(J)]-\tilde{\nu} [P(J)]\nonumber$ $=\omega_0+B_1 (J+1)(J+2)-B_0 J(J+1) - \omega_0 -B_1(J-1)J + B_0 J(J+1)\nonumber$ $={4B}_1 \left(J+\dfrac{1}{2} \right)\nonumber$ If we plot $\Delta E_R-\Delta E_P$ against $J+ \dfrac{1}{2}$, we obtain a straight line with slope 4B1. Similarly, we can determine B0 by finding wavenumber differences in transitions sharing a common upper state; here, R(0) and P(2). Both transitions terminate at J=1, so the differences will only depend on B0. \begin{align*} &=\tilde {\nu} [R(J-1)]- \tilde{\nu} [P(J+1)] \[4pt] &=\omega_0+B_1 J(J+1)-B_0 J(J-1)- \omega_0-B_1J(J+1)+B_0 (J+1)(J+2) \[4pt] &={4B}_0{(J+}\dfrac{1}{2}{)} \end{align*} As before, if we plot $\Delta{E}_{R}-\Delta{E}_{P}\nonumber$ vs. ${(J+}\dfrac{1}{2}{)}\nonumber$, we obtain a straight line with slope 4B0. Following from this, we can obtain the rotational-vibrational coupling constant: $\alpha_e={B}_0-{B}_1\nonumber$

Centrifugal Distortion

Similarly to rotational-vibrational coupling, centrifugal distortion is related to the changing bond length of a molecule.
A real molecule does not behave as a rigid rotor that has a rigid rod for a chemical bond, but rather acts as if it has a spring for a chemical bond. As the rotational velocity of a molecule increases, its bond length increases and its moment of inertia increases. As the moment of inertia increases, the rotational constant B decreases. ${F(J)=BJ(J+1)-DJ}^2{(J+1)}^2\nonumber$ Where $D$ is the centrifugal distortion constant and is related to the vibration wavenumber, $\omega$: $D=\dfrac{4B^3}{\omega^2}\nonumber$ When the above factors are accounted for, the actual energy of a rovibrational state is $S(v,J)=\nu_0\left(v+\dfrac{1}{2}\right)+B_e J (J+1)- \alpha_e \left(v+\dfrac{1}{2}\right) J(J+1)-D_e[J(J+1)]^2\nonumber$

Problems

Find the reduced mass of D35Cl in kg, if the mass of D-2 is 2.014 amu and the mass of Cl-35 is 34.968 amu.

Answer

$\dfrac{2.014\, amu \times 34.968\, amu}{2.014\, amu + 34.968\, amu}$ gives 1.904 amu. To convert to kg, multiply by 1.66 x 10-27 kg/amu. Answer: 3.16 x 10-27 kg

Using information found in problem 1, calculate the rotational constant B (in wavenumbers) of D35Cl given that the average bond length is 1.2745 Å.

Answer

We know that in wavenumbers, $B=\dfrac{h}{8\pi^2cI}$. First, we must solve for the moment of inertia, I, using ${I}=\mu{r}^2=(3.16\times 10^{-27}\, kg)(1.2745 \times 10^{-10}\, m)^2\nonumber$ = 5.14 x 10-47 kg•m2 = I. We can now substitute into the original formula to solve for B. h is Planck's constant, c is the speed of light in m/s and I = 5.14 x 10-47 kg•m2. This will give us the answer in m-1, then we can convert to cm-1. Answer: 5.45 cm-1.

Using the rigid rotor approximation, estimate the bond length in a 12C16O molecule if the energy difference between J=1 and J=3 were to equal 14,234 cm-1.

Answer

We use the same formula as above and expand the moment of inertia in order to solve for the average bond length.
$B=\dfrac{h}{8\pi^2 c\mu r^2}\nonumber$ We can deduce the rotational constant B since we know the distance between two energy states and the relationship $F(J)=BJ(J+1)\nonumber$ The distance between J=1 and J=3 is 10B, so using the fact that 10B = 14,234 cm-1, we find B = 1423.4 cm-1. We convert this to m-1 so that it will match up with the units of the speed of light (m/s) and obtain B = 142340 m-1. Using the reduced mass formula, we find that µ = 1.138 x 10-26 kg. Solving $r=\sqrt{\dfrac{h}{8\pi^2 c\mu B}}$ then gives Answer: r = 4.2 x 10-12 m = 0.042 Å (unphysically short, because the energy gap in this exercise is far larger than a real rotational spacing).
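A sketch of the Problem 3 calculation; note that the contrived 14,234 cm-1 gap leads to a bond length of only about 4.2 × 10-12 m:

```python
import math

h, c, amu = 6.626e-34, 2.998e8, 1.661e-27      # SI constants (4 figures)

# The J=1 -> J=3 spacing is F(3) - F(1) = 12B - 2B = 10B
B = (14234.0 / 10) * 100                       # rotational constant, m^-1

# Reduced mass of (12)C(16)O
mu = (12.000 * 15.995) / (12.000 + 15.995) * amu   # kg

# Rearranging B = h / (8 pi^2 c mu r^2) for the bond length
r = math.sqrt(h / (8 * math.pi**2 * c * mu * B))   # m
print(r)   # ~4.2e-12 m
```

The same rearrangement with a realistic B of a few cm-1 returns bond lengths near 1 Å, which is a quick sanity check on the formula.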
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Rotational_Spectroscopy/Rovibrational_Spectroscopy.txt
Infrared spectroscopy is the analysis of infrared light interacting with a molecule, which can be measured in three ways: absorption, emission and reflection. The main use of this technique is in organic and inorganic chemistry, where it is used by chemists to determine functional groups in molecules. IR spectroscopy measures the vibrations of atoms, and on this basis it is possible to determine the functional groups present. Generally, stronger bonds and light atoms will vibrate at a high stretching frequency (wavenumber). • How an FTIR Spectrometer Operates FTIR spectrometers (Fourier Transform Infrared Spectrometer) are widely used in organic synthesis, polymer science, petrochemical engineering, pharmaceutical industry and food analysis. In addition, since FTIR spectrometers can be hyphenated to chromatography, the mechanism of chemical reactions and the detection of unstable substances can be investigated with such instruments. • Identifying the Presence of Particular Groups This page explains how to use an infra-red spectrum to identify the presence of a few simple bonds in organic compounds. • Infrared: Application Infrared spectroscopy, an analytical technique that takes advantage of the vibrational transitions of a molecule, has been of great significance to scientific researchers in many fields such as protein characterization, nanoscale semiconductor analysis and space exploration. • Infrared: Interpretation Infrared spectroscopy is the study of the interaction of infrared light with matter. The fundamental measurement obtained in infrared spectroscopy is an infrared spectrum, which is a plot of measured infrared intensity versus wavelength (or frequency) of light. • Infrared Spectroscopy Infrared (IR) spectroscopy is one of the most common and widely used spectroscopic techniques employed mainly by inorganic and organic chemists due to its usefulness in determining structures of compounds and identifying them.
Chemical compounds have different chemical properties due to the presence of different functional groups. • Interpreting Infrared Spectra This chapter will focus on infrared (IR) spectroscopy. The wavelengths found in infrared radiation are a little longer than those found in visible light. IR spectroscopy is useful for finding out what kinds of bonds are present in a molecule, and knowing what kinds of bonds are present is a good start towards knowing what the structure could be. • IR Spectroscopy Background • The Fingerprint Region The fingerprint region, the region to the right-hand side of the diagram (from about 1500 to 500 cm-1), usually contains a very complicated series of absorptions. These are mainly due to all manner of bending vibrations within the molecule. Infrared Spectroscopy FTIR spectrometers (Fourier Transform Infrared Spectrometer) are widely used in organic synthesis, polymer science, petrochemical engineering, pharmaceutical industry and food analysis. In addition, since FTIR spectrometers can be hyphenated to chromatography, the mechanism of chemical reactions and the detection of unstable substances can be investigated with such instruments. Introduction The infrared region spans 12800 ~ 10 cm-1 and can be divided into the near-infrared region (12800 ~ 4000 cm-1), mid-infrared region (4000 ~ 200 cm-1) and far-infrared region (200 ~ 10 cm-1). The discovery of infrared light can be dated back to the 19th century. Since then, scientists have established various ways to utilize infrared light. Infrared absorption spectroscopy is the method which scientists use to determine the structures of molecules from the molecules' characteristic absorption of infrared radiation. An infrared spectrum is a molecular vibrational spectrum. When exposed to infrared radiation, sample molecules selectively absorb radiation of specific wavelengths, which changes the dipole moment of the sample molecules.
Consequently, the molecules are excited from the vibrational ground state to an excited vibrational state. The frequency of the absorption peak is determined by the vibrational energy gap. The number of absorption peaks is related to the number of vibrational degrees of freedom of the molecule. The intensity of absorption peaks is related to the change of dipole moment and the probability of transition between the energy levels. Therefore, by analyzing the infrared spectrum, one can readily obtain abundant structural information about a molecule. Most molecules are infrared active except for several homonuclear diatomic molecules such as O2, N2 and Cl2, because the dipole moment of these molecules does not change during vibration and rotation. What makes infrared absorption spectroscopy even more useful is the fact that it is capable of analyzing all gas, liquid and solid samples. The commonly used region for infrared absorption spectroscopy is 4000 ~ 400 cm-1 because the absorption radiation of most organic compounds and inorganic ions is within this region. FTIR spectrometers are the third generation of infrared spectrometers. FTIR spectrometers have several prominent advantages: (1) The signal-to-noise ratio of the spectrum is significantly higher than that of the previous generation of infrared spectrometers. (2) The accuracy of wavenumber is high. The error is within the range of ± 0.01 cm-1. (3) The scan time of all frequencies is short (approximately 1 s). (4) The resolution is extremely high (0.1 ~ 0.005 cm-1). (5) The scan range is wide (1000 ~ 10 cm-1). (6) The interference from stray light is reduced. Due to these advantages, FTIR spectrometers have replaced dispersive IR spectrometers. Development of IR Spectrometers Up to and including FTIR, there have been three generations of IR spectrometers. 1. The first generation IR spectrometer was invented in the late 1950s. It utilizes a prism optical splitting system. The prisms are made of NaCl.
The requirements on the sample's water content and particle size are extremely strict. Furthermore, the scan range is narrow and the repeatability is fairly poor. As a result, the first generation IR spectrometer is no longer in use. 2. The second generation IR spectrometer was introduced to the world in the 1960s. It utilizes gratings as the monochromator. The performance of the second generation IR spectrometer is much better than that of IR spectrometers with a prism monochromator, but there are still several prominent weaknesses, such as low sensitivity, low scan speed and poor wavelength accuracy, which rendered it out of date after the invention of the third generation IR spectrometer. 3. The invention of the third generation IR spectrometer, the Fourier transform infrared spectrometer, marked the replacement of the monochromator by the interferometer. With this replacement, IR spectrometers became exceptionally powerful, and various applications of the IR spectrometer have since been realized. Dispersive IR Spectrometers To understand the power and usefulness of the FTIR spectrometer, it is essential to have some background information about the dispersive IR spectrometer. The basic components of a dispersive IR spectrometer include a radiation source, monochromator, and detector. The common IR radiation sources are inert solids that are heated electrically to promote thermal emission of radiation in the infrared region of the electromagnetic spectrum. The monochromator is a device used to disperse or separate a broad spectrum of IR radiation into individual narrow IR frequencies. Generally, dispersive spectrometers have a double-beam design with two equivalent beams from the same source passing through the sample and reference chambers as independent beams. These reference and sample beams are alternately focused on the detector by making use of an optical chopper, such as a sector mirror.
One beam will proceed, traveling through the sample, while the other beam will pass through a reference species for analytical comparison of transmitted photon wavefront information. After the incident radiation travels through the sample species, the emitted wavefront of radiation is dispersed by a monochromator (gratings and slits) into its component frequencies. A combination of prisms or gratings with variable-slit mechanisms, mirrors, and filters comprises the dispersive system. Narrower slits give better resolution by distinguishing more closely spaced frequencies of radiation, while wider slits allow more light to reach the detector and provide better system sensitivity. The emitted wavefront beam (analog spectral output) hits the detector and generates an electrical signal as a response. Detectors are devices that convert the analog spectral output into an electrical signal. These electrical signals are further processed by the computer using mathematical algorithms to arrive at the final spectrum. The detectors used in IR spectrometers can be classified as either photon/quantum detectors or thermal detectors. It is the absorption of IR radiation by the sample, producing a change of IR radiation intensity, which gets detected as an off-null signal (i.e., different from the reference signal). This change is translated into the recorder response through the actions of synchronous motors. Each frequency that passes through the sample is measured individually by the detector, which consequently slows the process of scanning the entire IR region. A block diagram of a classic dispersive IR spectrometer is shown in Figure $1$. FTIR Spectrometers The Components of FTIR Spectrometers A common FTIR spectrometer consists of a source, interferometer, sample compartment, detector, amplifier, A/D convertor, and a computer. The source generates radiation which passes through the interferometer and the sample and reaches the detector.
Then the signal is amplified and converted to a digital signal by the amplifier and analog-to-digital converter, respectively. Eventually, the signal is transferred to a computer in which the Fourier transform is carried out. Figure $2$ is a block diagram of an FTIR spectrometer. The major difference between an FTIR spectrometer and a dispersive IR spectrometer is the Michelson interferometer. Michelson Interferometer The Michelson interferometer, which is the core of FTIR spectrometers, is used to split one beam of light into two so that the paths of the two beams are different. The Michelson interferometer then recombines the two beams and conducts them into the detector, where the difference in intensity of these two beams is measured as a function of the difference in their paths. Figure $3$ is a schematic of the Michelson interferometer. A typical Michelson interferometer consists of two perpendicular mirrors and a beamsplitter. One of the mirrors is stationary and the other is movable. The beamsplitter is designed to transmit half of the light and reflect half of the light. Subsequently, the transmitted light and the reflected light strike the stationary mirror and the movable mirror, respectively. When reflected back by the mirrors, the two beams of light recombine with each other at the beamsplitter. If the distances travelled by the two beams are the same, which means the two mirrors are the same distance from the beamsplitter, the situation is defined as zero path difference (ZPD). But if the movable mirror moves away from the beamsplitter, the light beam which strikes the movable mirror will travel a longer distance than the light beam which strikes the stationary mirror. The distance which the movable mirror is away from the ZPD is defined as the mirror displacement and is represented by ∆. It is obvious that the extra distance travelled by the light which strikes the movable mirror is 2∆.
The extra distance is defined as the optical path difference (OPD) and is represented by $\delta$. Therefore, $\delta =2\Delta \label{1}$ It is well established that when the OPD is an integer multiple of the wavelength, constructive interference occurs because crests overlap with crests, troughs with troughs. As a result, a maximum intensity signal is observed by the detector. This situation can be described by the following equation: $\delta =n\lambda \label{2}$ with n = 0,1,2,3... In contrast, when the OPD is a half-integer multiple of the wavelength, destructive interference occurs because crests overlap with troughs. Consequently, a minimum intensity signal is observed by the detector. This situation can be described by the following equation: $\delta =(n+\dfrac{1}{2})\lambda \label{3}$ with n = 0,1,2,3... These two situations are the two extremes. If the OPD is neither an n-fold nor an (n+1/2)-fold multiple of the wavelength, the interference lies between constructive and destructive, so the intensity of the signal lies between the maximum and the minimum. Since the mirror moves back and forth, the intensity of the signal increases and decreases, which gives rise to a cosine wave. The plot is defined as an interferogram. When detecting the radiation of a broad band source rather than a single-wavelength source, a peak at ZPD is found in the interferogram. Away from ZPD, the signal decays quickly. Figure $4$(a) shows an interferogram of a broad band source. Fourier Transform of Interferogram to Spectrum The interferogram is a function of time and the values outputted by this function of time are said to make up the time domain. The time domain is Fourier transformed to get a frequency domain, which is deconvolved to produce a spectrum. Figure $4$ shows the fast Fourier transform from an interferogram of polychromatic light to its spectrum.
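The interference behaviour described by Equations 1–3 can be sketched numerically. Assuming an idealized detector signal proportional to 1 + cos(2πkδ) for each wavenumber k, a monochromatic source gives a pure cosine in the OPD, while summing many wavenumbers reproduces the sharp burst at ZPD described above:

```python
import numpy as np

# OPD scan in cm, symmetric about ZPD (delta = 0)
delta = np.linspace(-0.05, 0.05, 4001)

# Monochromatic source at k = 1000 cm^-1: a pure cosine interferogram
mono = 1 + np.cos(2 * np.pi * 1000.0 * delta)

# Broad-band source: sum many wavenumbers; the cosines only all line up
# at ZPD, producing the centre burst, and cancel elsewhere
ks = np.linspace(800.0, 1200.0, 200)
broad = (1 + np.cos(2 * np.pi * np.outer(ks, delta))).sum(axis=0)

print(np.argmax(broad) == len(delta) // 2)  # maximum falls at ZPD
```

The wavenumber range chosen here is arbitrary; any sufficiently broad band gives the same qualitative picture of a centre burst with rapidly decaying wings.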
The Fourier Transform The first person to recognize that a spectrum and its interferogram are related via a Fourier transform was Lord Rayleigh, who made the discovery in 1892. But the first to successfully convert an interferogram into its spectrum was Fellgett, who made the accomplishment more than half a century later. The fast Fourier transform method, on which the modern FTIR spectrometer is based, was introduced to the world by Cooley and Tukey in 1965. It has been applied widely to analytical methods such as infrared spectrometry, nuclear magnetic resonance and mass spectrometry due to several prominent advantages which are listed in Table $1$. Table $1$. Advantages of Fourier Transform over Continuous-Wave Spectrometry Fourier transform, named after the French mathematician and physicist Jean Baptiste Joseph Fourier, is a mathematical method to transform a function into a new function. The following equation is a common form of the Fourier transform with unitary normalization constants: $F(\omega )=\dfrac{1}{\sqrt{2\pi }}\int_{-\infty}^{\infty}f(t)e^{-i\omega t}dt \label{4}$ in which t is time and i is the square root of -1. The following equation is another form of the Fourier transform (the cosine transform) which applies to real, even functions: $F(\nu )=\dfrac{1}{\sqrt{2\pi }}\int_{-\infty}^{\infty}f(t)\cos(2\pi \nu t)dt \label{5}$ The following equation shows how f(t) is related to F(v) via a Fourier transform: $f\left ( t \right )=\dfrac{1}{\sqrt{2\pi }}\int_{-\infty}^{\infty}F(\nu )\cos(2\pi \nu t)d\nu \label{6}$ An Alternative Explanation of the Fourier Transform in FTIR Spectrometers The mathematical description of the Fourier transform can be tedious and confusing. An alternative explanation of the Fourier transform in FTIR spectrometers is provided here, before we jump into the math description, to give you a rough impression which may help you understand it. The interferogram obtained is a plot of the intensity of the signal versus OPD.
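To illustrate the transform pair, the toy script below builds an interferogram from two assumed wavenumber components and recovers them with NumPy's fast Fourier transform (the sampling step and component positions are arbitrary choices, not data from this page):

```python
import numpy as np

# Sample the interferogram at 4000 OPD points, step 1e-4 cm
n, step = 4000, 1e-4
delta = np.arange(n) * step                    # OPD, cm

# Assumed source: components at 600 cm^-1 (strong) and 900 cm^-1 (weak)
igram = np.cos(2*np.pi*600*delta) + 0.5*np.cos(2*np.pi*900*delta)

# FFT of the interferogram gives the spectrum on a wavenumber axis
spectrum = np.abs(np.fft.rfft(igram))
k_axis = np.fft.rfftfreq(n, d=step)            # cm^-1

peaks = k_axis[np.argsort(spectrum)[-2:]]      # two strongest bins
print(sorted(peaks.round(1).tolist()))         # [600.0, 900.0]
```

The spectral resolution here is 1/(n·step) = 2.5 cm-1, which mirrors the real instrument, where the maximum mirror travel sets the resolution.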
A Fourier transform can be viewed as the inversion of the independent variable of a function. Thus, the Fourier transform of the interferogram can be viewed as the inversion of OPD. The unit of OPD is the centimeter, so the inversion of OPD has a unit of inverse centimeters, cm-1. Inverse centimeters are also known as wavenumbers. After the Fourier transform, a plot of intensity of signal versus wavenumber is produced. Such a plot is an IR spectrum. Although this explanation is easy to understand, it is not perfectly rigorous. Simplified Math Description of the Fourier Transform in FTIR The wave functions of the reflected and transmitted beams may be represented by the general form of: $E_{1}=rtc E_{m}\times \cos(\nu t-2\pi kx) \label{7}$ and $E_{2}=rtc E_{m}\times \cos[\nu t-2\pi k(x+\Delta d)] \label{8}$ where • $\Delta{d}$ is the path difference, • $r$ is the reflectance (amplitude) of the beam splitter, • $t$ is the transmittance, and • $c$ is the polarization constant. The resultant wave function of their superposition at the detector is represented as: $E=E_{1}+E_{2}=2(r\times t\times c\times E_{m})\times \cos(\nu t-2\pi kx-\pi k\Delta d)\cos(\pi k\Delta d) \label{9}$ where $E_m$, ν, and k are the amplitude, frequency and wave number of the IR radiation source. The intensity ($I$) detected is the time average of $E^2$ and is written as $I=4r^{2}t^{2}c^{2}E_{m}^{2}\cos^{2}(\nu t-2\pi kx-\pi k\Delta d)\cos^{2}(\pi k\Delta d) \label{10}$ Since the time average of the first cosine-squared term is just ½, then $I=2I(k)\cos^{2}(\pi k\Delta d) \label{11}$ and $I(\Delta d)=I(k)[1+\cos(2\pi k\Delta d)]\label{12}$ where $I(k)$ is a constant that depends only upon $k$ and $I(∆d)$ is the interferogram.
From $I(∆d)$ we can get $I(k)$ using a Fourier transform as follows: $I(\Delta d)-I(\infty)=\int_{0}^{k_{m}}I(k)\cos(2\pi k\Delta d)dk \label{13}$ Letting $k_m \rightarrow \infty$, we can write $I(k)=\int_{0}^{\infty}[I(\Delta d)-I(\infty)]\cos(2\pi k\Delta d)d\Delta d \label{14}$ The physically measured information recorded at the detector produces an interferogram, which provides information about a response change over time within the mirror scan distance. Therefore, the interferogram obtained at the detector is a time domain spectrum. This procedure involves sampling each position, which can take a long time if the signal is small and the number of frequencies being sampled is large. In terms of ordinary frequency, $\nu$, the Fourier transform of this is given by (angular frequency $\omega= 2\pi \nu$): $F(\nu )=\int_{-\infty}^{\infty}f(t)e^{-i2\pi \nu t}dt \label{15}$ The inverse Fourier transform is given by: $f(t)=\int_{-\infty}^{\infty}F(\nu )e^{+i2\pi \nu t}d\nu \label{16}$ The interferogram is transformed into an IR absorption spectrum (Figure $5$) that is commonly recognizable, with absorption intensity or % transmittance plotted against the wavelength or wavenumber. The ratio of radiant power transmitted by the sample (I) relative to the radiant power of incident light on the sample (I0) gives the transmittance, T. Absorbance (A) is the logarithm to the base 10 of the reciprocal of the transmittance (T): $A=log_{10}\dfrac{1}{T}=-log_{10}T=-log_{10}\dfrac{I}{I_{0}} \label{17}$ Hands-on Operation of an FTIR Spectrometer Step 1: The first step is sample preparation. The standard method to prepare a solid sample for an FTIR spectrometer is to use KBr. About 2 mg of sample and 200 mg of KBr are dried and ground. The particle size should be uniform and less than two micrometers. Then, the mixture is pressed to form transparent pellets which can be measured directly. A liquid with a high boiling point or a viscous solution can be placed between two NaCl plates.
Then the sample is fixed in the cell by screws and measured. A volatile liquid sample is dissolved in CS2 or CCl4 to form a 10% solution. Then the solution is injected into a liquid cell for measurement. A gas sample needs to be measured in a gas cell with two KBr windows, one on each side. The gas cell should first be evacuated. Then the sample can be introduced into the gas cell for measurement. Step 2: The second step is getting a background spectrum by collecting an interferogram and its subsequent conversion to frequency data by inverse Fourier transform. We obtain the background spectrum because the solvent in which we place our sample will have traces of dissolved gases as well as solvent molecules that contribute information that is not from our sample. The background spectrum will contain information about the species of gases and solvent molecules, which may then be subtracted away from our sample spectrum in order to gain information about just the sample. Figure $6$ shows an example of an FTIR background spectrum. The background spectrum also takes into account several other factors related to the instrument performance, which includes information about the source, interferometer, detector, and the contribution of ambient water (note the two irregular groups of lines at about 3600 cm–1 and about 1600 cm–1 in Figure $6$) and carbon dioxide (note the doublet at 2360 cm–1 and sharp spike at 667 cm–1 in Figure $6$) present in the optical bench. Step 3: Next, we collect a single-beam spectrum of the sample, which will contain absorption bands from the sample as well as the background (gaseous or solvent). Step 4: The ratio between the single-beam sample spectrum and the single-beam background spectrum gives the spectrum of the sample (Figure $7$). Step 5: Data analysis is done by assigning the observed absorption frequency bands in the sample spectrum to appropriate normal modes of vibrations in the molecules.
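Step 4 can be sketched in a few lines: ratio the single-beam sample spectrum against the background to get transmittance, then convert to absorbance with Equation 17 (the intensity values here are toy numbers, not real data):

```python
import numpy as np

# Toy single-beam intensities at four wavenumber points
background = np.array([100.0, 100.0, 100.0, 100.0])   # no sample
sample     = np.array([ 95.0,  50.0,  10.0, 100.0])   # with sample

T = sample / background      # transmittance, I / I0
A = -np.log10(T)             # absorbance, A = -log10(T)

print(A.round(2))            # large A where little light is transmitted
```

The third point, where only 10% of the light survives, comes out at A = 1.0, while the fully transmitting point gives A = 0, matching the logarithmic relation in Equation 17.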
Portable FTIR Spectrometers Despite the power of traditional FTIR spectrometers, they are not suitable for real-time monitoring or field use, so various portable FTIR spectrometers have been developed. Below are two examples. Ahonen et al. developed a portable, real-time FTIR spectrometer as a gas analyzer for industrial hygiene use. The instrument consists of an operational keyboard, a control panel, signal and control processing electronics, an interferometer, a heatable sample cell and a detector. All the components were packed into a cart. To minimize the size of the instrument, the resolution of the FTIR spectrometer was sacrificed, but it is good enough for industrial hygiene use; the analyzer and adsorption-tube measurements agree to within about 1 mg/m3. Korb et al. developed a portable FTIR spectrometer which weighs only about 12.5 kg, so that it can be held by hand. Moreover, the instrument is battery-powered, which significantly enhances its mobility. Besides, the instrument can function well within the temperature range of 0 to 45 °C and the humidity range of 0 to 100%. Additionally, this instrument resists vibration; it works well in an operating helicopter. Consequently, this instrument is excellent for the analysis of radiation from the surface and atmosphere of the Earth. The instrument is also very stable. After a three-year operation, it did not lose optical alignment. The reduction in size was achieved by a creative design of the optical system and accessory components. Two KBr prisms were used to constitute the interferometer cavity. Optical coatings replaced the mirrors and beam splitter in the interferometer. The optical path was shortened with a much more compact packaging of components. A small, low energy consuming interferometer drive was designed. It is also mass balanced to resist vibration. The common He-Ne tube was replaced by a smaller laser diode.
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Vibrational_Spectroscopy/Infrared_Spectroscopy/How_an_FTIR_Spectrometer_Operates.txt
This page describes what an infra-red spectrum is and how it arises from bond vibrations within organic molecules. How an infra-red spectrum is produced You probably know that visible light is made up of a continuous range of different electromagnetic frequencies - each frequency can be seen as a different color. Infra-red radiation also consists of a continuous range of frequencies - it so happens that our eyes can't detect them. If you shine a range of infra-red frequencies one at a time through a sample of an organic compound, you find that some frequencies get absorbed by the compound. A detector on the other side of the compound would show that some frequencies pass through the compound with almost no loss, but other frequencies are strongly absorbed. How much of a particular frequency gets through the compound is measured as percentage transmittance. A percentage transmittance of 100 would mean that all of that frequency passed straight through the compound without any being absorbed. In practice, that never happens - there is always some small loss, giving a transmittance of perhaps 95% as the best you can achieve. A transmittance of only 5% would mean that nearly all of that particular frequency is absorbed by the compound. A very high absorption of this sort tells you important things about the bonds in the compound. What an infra-red spectrum looks like A graph is produced showing how the percentage transmittance varies with the frequency of the infra-red radiation. Notice that an unusual measure of frequency is used on the horizontal axis. Wavenumber is defined like this: wavenumber (in cm-1) = 1 / wavelength (in cm). Similarly, don't worry about the change of scale half-way across the horizontal axis. You will find infra-red spectra where the scale is consistent all the way across, infra-red spectra where the scale changes at around 2000 cm-1, and very occasionally where the scale changes again at around 1000 cm-1.
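The wavenumber definition above amounts to a one-line conversion. As a sketch (the example wavelengths are simply the conventional limits of the mid-infrared):

```python
# Wavenumber (cm^-1) is the reciprocal of the wavelength expressed in cm.
def wavenumber_cm1(wavelength_um):
    """Convert a wavelength in micrometres to a wavenumber in cm^-1."""
    return 1.0 / (wavelength_um * 1e-4)   # 1 micrometre = 1e-4 cm

print(wavenumber_cm1(2.5))    # 4000 cm^-1
print(wavenumber_cm1(25.0))   # 400 cm^-1
```

This also shows why the wavenumber axis runs "backwards" relative to wavelength: shorter wavelengths sit at larger wavenumbers on the left of the spectrum.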
As you will see when we look at how to interpret infra-red spectra, this does not cause any problems - you simply need to be careful reading the horizontal scale. What causes some frequencies to be absorbed? Each frequency of light (including infra-red) has a certain energy. If a particular frequency is being absorbed as it passes through the compound being investigated, it must mean that its energy is being transferred to the compound. Energies in infra-red radiation correspond to the energies involved in bond vibrations. Bond stretching In covalent bonds, atoms aren't joined by rigid links - the two atoms are held together because both nuclei are attracted to the same pair of electrons. The two nuclei can vibrate backwards and forwards - towards and away from each other - around an average position. The diagram shows the stretching that happens in a carbon-oxygen single bond. There will, of course, be other atoms attached to both the carbon and the oxygen. For example, it could be the carbon-oxygen bond in methanol, CH3OH. The energy involved in this vibration depends on things like the length of the bond and the mass of the atoms at either end. That means that each different bond will vibrate in a different way, involving different amounts of energy. Bonds are vibrating all the time, but if you shine exactly the right amount of energy on a bond, you can kick it into a higher state of vibration. The amount of energy it needs to do this will vary from bond to bond, and so each different bond will absorb a different frequency (and hence energy) of infra-red radiation. Bond bending As well as stretching, bonds can also bend. The diagram shows the bending of the bonds in a water molecule. The effect of this, of course, is that the bond angle between the two hydrogen-oxygen bonds fluctuates slightly around its average value. Imagine a lab model of a water molecule where the atoms are joined together with springs. 
These bending vibrations are what you would see if you shook the model gently. Again, bonds will be vibrating like this all the time and, again, if you shine exactly the right amount of energy on the bond, you can kick it into a higher state of vibration. Since the energies involved with the bending will be different for each kind of bond, each different bond will absorb a different frequency of infra-red radiation in order to make this jump from one state to a higher one. Tying all this together Look again at the infra-red spectrum of propan-1-ol, CH3CH2CH2OH: In the diagram, three sample absorptions are picked out to show you the bond vibrations which produced them. Notice that bond stretching and bending produce different troughs in the spectrum. Contributor Jim Clark (Chemguide.co.uk)
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Vibrational_Spectroscopy/Infrared_Spectroscopy/IR_Spectroscopy_Background.txt
This page explains how to use an infra-red spectrum to identify the presence of a few simple bonds in organic compounds. The infrared spectrum for a simple carboxylic acid: Ethanoic acid Ethanoic acid has the structure: You will see that it contains the following bonds: • carbon-oxygen double, C=O • carbon-oxygen single, C-O • oxygen-hydrogen, O-H • carbon-hydrogen, C-H • carbon-carbon single, C-C The carbon-carbon bond has absorptions which occur over a wide range of wavenumbers in the fingerprint region - that makes it very difficult to pick out on an infra-red spectrum. The carbon-oxygen single bond also has an absorption in the fingerprint region, varying between 1000 and 1300 cm-1 depending on the molecule it is in. You have to be very wary about picking out a particular trough as being due to a C-O bond. The other bonds in ethanoic acid have easily recognized absorptions outside the fingerprint region. • The C-H bond (where the hydrogen is attached to a carbon which is singly-bonded to everything else) absorbs somewhere in the range from 2853 - 2962 cm-1. Because that bond is present in most organic compounds, that's not terribly useful! What it means is that you can ignore a trough just under 3000 cm-1, because that is probably just due to C-H bonds. • The carbon-oxygen double bond, C=O, is one of the really useful absorptions, found in the range 1680 - 1750 cm-1. Its position varies slightly depending on what sort of compound it is in. • The other really useful bond is the O-H bond. This absorbs differently depending on its environment. It is easily recognised in an acid because it produces a very broad trough in the range 2500 - 3300 cm-1. The infrared spectrum for ethanoic acid looks like this: The possible absorption due to the C-O single bond is queried because it lies in the fingerprint region. You couldn't be sure that this trough wasn't caused by something else.
The infrared spectrum for an alcohol: Ethanol The O-H bond in an alcohol absorbs at a higher wavenumber than it does in an acid - somewhere between 3230 - 3550 cm-1. In fact this absorption would be at a higher number still if the alcohol isn't hydrogen bonded - for example, in the gas state. All the infra-red spectra on this page are from liquids - so that possibility will never apply. Notice the absorption due to the C-H bonds just under 3000 cm-1, and also the troughs between 1000 and 1100 cm-1 - one of which will be due to the C-O bond. The infrared spectrum for an ester: Ethyl ethanoate This time the O-H absorption is missing completely. Don't confuse it with the C-H trough fractionally less than 3000 cm-1. The presence of the C=O double bond is seen at about 1740 cm-1. The C-O single bond is the absorption at about 1240 cm-1. Whether or not you could pick that out would depend on the detail given by the table of data which you get in your exam, because C-O single bonds vary anywhere between 1000 and 1300 cm-1 depending on what sort of compound they are in. Some tables of data fine it down, so that they will tell you that an absorption from 1230 - 1250 is the C-O bond in an ethanoate. The infrared spectrum for a ketone: Propanone You will find that this is very similar to the infra-red spectrum for ethyl ethanoate, an ester. Again, there is no trough due to the O-H bond, and again there is a marked absorption at about 1700 cm-1 due to the C=O. Confusingly, there are also absorptions which look as if they might be due to C-O single bonds - which, of course, aren't present in propanone. This reinforces the care you have to take in trying to identify any absorptions in the fingerprint region. Aldehydes will have similar infra-red spectra to ketones. 
The infrared spectrum for a hydroxy-acid: 2-hydroxypropanoic acid (lactic acid)

This is interesting because it contains two different sorts of O-H bond - the one in the acid and the simple "alcohol" type in the chain attached to the -COOH group. The O-H bond in the acid group absorbs between 2500 and 3300 cm-1, the one in the chain between 3230 and 3550 cm-1. Taken together, that gives this immense trough covering the whole range from 2500 to 3550 cm-1. Lost in that trough as well will be absorptions due to the C-H bonds. Notice also the presence of the strong C=O absorption at about 1730 cm-1.

The infrared spectrum for a primary amine: 1-aminobutane

Primary amines contain the -NH2 group, and so have N-H bonds. These absorb somewhere between 3100 and 3500 cm-1. That double trough (typical of primary amines) can be seen clearly on the spectrum to the left of the C-H absorptions.
Infrared spectroscopy, an analytical technique that takes advantage of the vibrational transitions of a molecule, has been of great significance to scientific researchers in many fields such as protein characterization, nanoscale semiconductor analysis and space exploration.

Introduction

Infrared spectroscopy is the study of the interaction of infrared light with matter, which can be used to identify unknown materials, examine the quality of a sample or determine the amount of components in a mixture. Infrared light refers to electromagnetic radiation with wavenumbers ranging from 13000 – 10 cm-1 (corresponding wavelengths from 0.78 – 1000 μm). The infrared region is further divided into three subregions: near-infrared (13000 – 4000 cm-1 or 0.78 – 2.5 μm), mid-infrared (4000 – 400 cm-1 or 2.5 – 25 μm) and far-infrared (400 – 10 cm-1 or 25 – 1000 μm). The most commonly used is the mid-infrared region, since molecules can absorb radiation in this region to induce vibrational excitation of functional groups. Recently, applications of near-infrared spectroscopy have also been developed. By passing infrared light through a sample and measuring the absorption or transmittance of light at each frequency, an infrared spectrum is obtained, with peaks corresponding to the frequencies of absorbed radiation. Since all functional groups have characteristic vibrational frequencies, information regarding molecular structure can be gained from the spectrum. Infrared spectroscopy is capable of analyzing samples in almost any phase (liquid, solid, or gas), and can be used alone or in combination with other instruments following different sampling procedures. Besides fundamental vibrational modes, other features such as overtone and combination bands, Fermi resonance, coupling and vibration-rotation bands also appear in the spectrum.
Due to the high information content of its spectrum, infrared spectroscopy has been a very common and useful tool for structure elucidation and substance identification.

Instrumentation

The most commonly used instruments in infrared spectroscopy are the dispersive infrared spectrometer and the Fourier transform infrared spectrometer.

Dispersive infrared spectrometer

A dispersive infrared spectrometer is mainly composed of a radiation source, a monochromator and a detector. For the mid-infrared region, the Globar (silicon carbide), the Nernst glower (oxides of zirconium, yttrium and erbium) and metallic helices (chromium-nickel alloy or tungsten) are frequently used as radiation sources. Tungsten-halogen lamps and metallic conductors coated with ceramic are utilized as sources for the near-infrared region. A mercury high-pressure lamp is suitable for the far-infrared region. The monochromator, in conjunction with slits, mirrors and filters, separates the wavelengths of light emitted. The dispersive elements within the monochromator are prisms or gratings. Gratings have gradually replaced prisms due to their comparatively low cost and good quality. As shown in Figure 1, radiation passes through both a sample and a reference path. The beams are then directed to a diffraction grating (splitter), which disperses the light into its component frequencies and directs each wavelength through a slit to the detector. The detector produces an electrical signal and results in a recorder response. Figure 1. Schematic illustration of a dispersive infrared spectrometer. Figure from Wikipedia. Two types of detectors are employed in dispersive infrared spectrometers, namely thermal detectors and photon detectors. Thermal detectors include thermocouples, thermistors, and pneumatic devices, which measure the heating effect generated by infrared radiation. Photon detectors are semiconductor-based. Radiation is able to promote electrons in photon detectors from the valence band to the conduction band, generating a small current.
Photon detectors have a faster response and higher sensitivity than thermal detectors but are more susceptible to thermal noise.

Fourier transform infrared spectrometer

The dispersive infrared spectrometer has many limitations because it examines component frequencies individually, resulting in slow speed and low sensitivity. The Fourier transform infrared (FTIR) spectrometer is preferred over the dispersive spectrometer, since it is capable of handling all frequencies simultaneously with high throughput, reducing the time required for analysis. The radiation sources used in dispersive infrared spectrometers can also be used in FTIR spectrometers. In contrast with the monochromator in a dispersive spectrometer, the FTIR spectrometer shown in Figure 2 employs an interferometer. The beamsplitter within the interferometer splits the incoming infrared beam into two beams, one of which is reflected by a fixed mirror, while the other is reflected by a moving mirror perpendicular to the fixed one. The path length of one beam is fixed, while that of the other changes as the mirror moves, generating an optical path difference between the two beams. After meeting back at the beamsplitter, the two beams recombine, interfere with each other, and yield an interferogram: the interference signal as a function of optical path difference. It is converted to a spectrum of absorbance or transmittance versus wavenumber or frequency by a Fourier transform. Figure 2. Schematic representation of a Fourier transform infrared spectrometer. Figure from Wikipedia. Detectors used in FTIR spectrometers are mainly pyroelectric and photoconductive detectors. The former are constructed of crystalline materials (such as deuterated triglycine sulfate) whose electric polarization depends on temperature. A change in temperature leads to a change in the charge distribution of the detector, and an electric signal is produced.
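The interferogram-to-spectrum step can be illustrated with a toy calculation: an idealized source containing two monochromatic lines produces a sum-of-cosines interferogram, and a discrete Fourier transform recovers the two lines. This is only a sketch under simplifying assumptions (the line positions are arbitrary choices, and real FTIR processing also involves apodization, phase correction and zero-filling, none of which are shown).

```python
import numpy as np

# Two assumed monochromatic lines (cm-1) in an idealized source
lines = [1000.0, 2000.0]

n = 4096
dx = 1.0 / (2 * 4000.0)        # path-difference step (cm); Nyquist limit 4000 cm-1
x = np.arange(n) * dx          # optical path difference for each sample
interferogram = sum(np.cos(2 * np.pi * v * x) for v in lines)

# Fourier-transform the interferogram back into a spectrum
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber_axis = np.fft.rfftfreq(n, d=dx)   # axis in cm-1

# The two strongest recovered peaks sit at the input line positions
top_two = sorted(wavenumber_axis[np.argsort(spectrum)[-2:]])
print(top_two)
```

The sampling step `dx` fixes the highest recoverable wavenumber (here 4000 cm-1), while the total path difference `n * dx` fixes the resolution - the same trade-offs that govern a real moving-mirror scan.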
The latter (such as mercury cadmium telluride) provide better sensitivity and faster response than pyroelectric detectors over a broad spectral range. However, liquid nitrogen is needed for cooling of photoconductive detectors.

Application

Since different molecules, with their different combinations of atoms, produce unique spectra, infrared spectroscopy can be used to qualitatively identify substances. In addition, the intensity of the peaks in the spectrum is proportional to the amount of substance present, enabling its application to quantitative analysis.

Qualitative analysis

For qualitative identification purposes, the spectrum is commonly presented as transmittance versus wavenumber. Functional groups have characteristic fundamental vibrations which give rise to absorption in certain frequency ranges of the spectrum (Figure 3). Figure 3. Infrared spectrum of 1-hexanol. Each band in a spectrum can be attributed to a stretching or bending mode of a bond. Almost all the fundamental vibrations appear in the mid-infrared region. For instance, the 4000 – 2500 cm-1 region can usually be assigned to stretching modes of O-H, N-H or C-H. Triple-bond stretching modes appear in the region of 2500 – 2000 cm-1. C=C and C=O stretching bands fall in the 2000 – 1500 cm-1 region. Hence, characterization of functional groups in substances according to the frequencies and intensities of absorption peaks is feasible, and structures of molecules can be proposed. This method is applicable to organic molecules, inorganic molecules, polymers, etc. A detailed frequency list of functional groups is shown in Figure 4. Note that several functional groups may absorb in the same frequency range, and a functional group may have multiple characteristic absorption peaks, especially in the 1500 – 650 cm-1 region, which is called the fingerprint region. Figure 4. Characteristic infrared absorption frequencies. Bands in the near-infrared region are overtones or combination bands.
They are weak in intensity and overlapped, making them not as useful for qualitative analysis as those in the mid-infrared region. However, absorptions in this region are helpful in obtaining information related to the vibrations of molecules containing heavy atoms, molecular skeleton vibrations, molecular torsions and crystal lattice vibrations. Besides structural elucidation, another qualitative application of infrared spectroscopy is the identification of a compound against a reference infrared spectrum. If all the peaks of the unknown match those of the reference, the compound can be identified. Reference spectra are available online at databases such as the NIST Chemistry WebBook.

Quantitative analysis

Absorbance is used for quantitative analysis due to its linear dependence on concentration. Given by the Beer-Lambert law, absorbance is directly proportional to the concentration and pathlength of the sample: $A=\epsilon c l$ where A is absorbance, ε the molar extinction coefficient or molar absorptivity which is characteristic for a specific substance, c the concentration and l the pathlength (or thickness) of the sample. The conversion from transmittance to absorbance is given by $A=-\log T$ where T is transmittance. For quantitative analysis of liquid samples, usually an isolated peak with high molar absorptivity that appears in the spectrum of the compound is chosen. A calibration curve of absorbance at the chosen frequency against concentration of the compound is acquired by measuring the absorbance of a series of standard solutions of known concentration. These data are then graphed to get a linear plot, from which the concentration of the unknown can be calculated after measuring its absorbance at the same frequency. The number of functional groups can also be calculated in this way, since the molar absorptivity of a band is proportional to the number of functional groups that are present in the compound.
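The calibration-curve procedure just described can be sketched numerically. The standard concentrations and absorbances below are invented for illustration; the fitted slope plays the role of εl in the Beer-Lambert law, and the transmittance-to-absorbance conversion uses A = -log10(T) as above.

```python
import numpy as np

# Hypothetical standards: concentration (mol/L) and measured absorbance
# at the chosen analytical peak. All values invented for illustration.
conc = np.array([0.010, 0.020, 0.030, 0.040])
absorbance = np.array([0.152, 0.298, 0.455, 0.601])

# Linear fit: A = (epsilon * l) * c + intercept (intercept should be ~0)
slope, intercept = np.polyfit(conc, absorbance, 1)

# If the instrument reports transmittance, convert first: A = -log10(T)
a_unknown = -np.log10(0.4198)
# Read the unknown's concentration off the calibration line
c_unknown = (a_unknown - intercept) / slope
print(round(float(c_unknown), 4))
```

A near-zero intercept is a quick sanity check that the standards obey Beer's law over the chosen range.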
For solid samples, an internal standard with a constant known amount is added to the unknown sample and the standards. Then similar procedures to those for liquid samples are carried out, except that the calibration curve is a graph of the ratio of the absorbance of the analyte to that of the internal standard versus the concentration of the analyte. A multi-component analysis of a mixture is also feasible, since different components have different values of molar absorptivity at the same frequency. However, infrared spectroscopy may be more susceptible to deviations from Beer's law than UV-Vis spectroscopy because of its narrow bands, complex spectra, weak incident beam, low transducer sensitivity and solvent absorption.

Reference
1. Settle, F. A. Handbook of Instrumental Techniques for Analytical Chemistry; Prentice Hall PTR: Upper Saddle River, NJ, 1997.
2. Heigl, J. J.; Bell, M.; White, J. U. Anal. Chem. 1947, 19, 293.
3. Baker, A. W. J. Phys. Chem. 1957, 61, 450.
4. Kamariotis, A.; Boyarkin, O. V.; Mercier, S. R.; Beck, R. D.; Bush, M. F.; Williams, E. R.; Rizzo, T. R. J. Am. Chem. Soc. 2006, 128, 905.
5. Stuart, B. Infrared Spectroscopy: Fundamentals and Applications; J. Wiley: Chichester, Eng.; Hoboken, NJ, 2004.
6. Günzler, H.; Heise, H. M. IR Spectroscopy: An Introduction; Wiley-VCH: Weinheim, 2002.
7. Wartewig, S. IR and Raman Spectroscopy: Fundamental Processing; Wiley-VCH: Weinheim, 2003.

Contributor
• Xixuan Li (UC Davis)
Infrared spectroscopy is the study of the interaction of infrared light with matter. The fundamental measurement obtained in infrared spectroscopy is an infrared spectrum, which is a plot of measured infrared intensity versus wavelength (or frequency) of light.

Introduction

In infrared spectroscopy, units called wavenumbers are normally used to denote different types of light. The frequency, wavelength, and wavenumber are related to each other via the following equation:

$\tilde{\nu} = \dfrac{1}{\lambda} = \dfrac{\nu}{c} \tag{1}$

This equation shows that light waves may be described by their frequency, wavelength or wavenumber. Here we typically refer to light waves by their wavenumber; however, it is sometimes more convenient to refer to a light wave's frequency or wavelength. The wavenumbers of several different types of light are shown in Table 1. Table 1. The electromagnetic spectrum showing the wavenumbers of several different types of light. When a molecule absorbs infrared radiation, its chemical bonds vibrate. The bonds can stretch, contract, and bend. This is why infrared spectroscopy is a type of vibrational spectroscopy. Fortunately, the complex vibrational motion of a molecule can be broken down into a number of constituent vibrations called normal modes. For example, when a guitar string is plucked, the string vibrates at its normal mode frequency. Molecules, like guitar strings, vibrate at specific frequencies, so different molecules vibrate at different frequencies because their structures are different. This is why molecules can be distinguished using infrared spectroscopy. The first necessary condition for a molecule to absorb infrared light is that the molecule must have a vibration during which the change in dipole moment with respect to distance is non-zero. This condition can be summarized in equation form as follows:

$\left(\dfrac{\partial \mu}{\partial x}\right) \neq 0 \tag{2}$

Vibrations that satisfy this equation are said to be infrared active.
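Equation (1) is easy to check numerically. The snippet below converts the conventional mid-infrared wavelength limits (2.5 – 25 μm) into wavenumbers and a wavenumber into a frequency; the function names are illustrative.

```python
C_CM_PER_S = 2.99792458e10     # speed of light in cm/s

def wavenumber_from_wavelength_um(lam_um):
    """Wavenumber (cm-1) from wavelength in micrometers: 1/lambda, 1 cm = 1e4 um."""
    return 1e4 / lam_um

def frequency_hz(wavenumber_cm):
    """Frequency (s-1) from wavenumber: nu = c * wavenumber."""
    return wavenumber_cm * C_CM_PER_S

print(wavenumber_from_wavelength_um(2.5))    # 4000.0 cm-1
print(wavenumber_from_wavelength_um(25.0))   # 400.0 cm-1
```

So the whole mid-infrared band, 4000 – 400 cm-1, corresponds to frequencies of roughly 1.2e14 down to 1.2e13 Hz.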
The H-Cl stretch of hydrogen chloride and the asymmetric stretch of CO2 are examples of infrared active vibrations. Infrared active vibrations cause the bands seen in an infrared spectrum. The second necessary condition for infrared absorbance is that the energy of the light impinging on a molecule must equal a vibrational energy level difference within the molecule. This condition can be summarized in equation form as follows:

$E_{\text{light}} = h\nu = \Delta E_{\text{vib}} \tag{3}$

If the energy of a photon does not meet the criterion in this equation, it will be transmitted by the sample; if the photon energy satisfies this equation, that photon will be absorbed by the molecule. (See Infrared: Theory for more detail.) As with any other analytical technique, infrared spectroscopy works well on some samples and poorly on others. It is important to know the strengths and weaknesses of infrared spectroscopy so it can be used in the proper way. Some advantages and disadvantages of infrared spectroscopy are listed in Table 2.

Advantages:
• Solids, liquids, gases, semi-solids, powders and polymers can all be analyzed
• The peak positions, intensities, widths, and shapes all provide useful information
• Fast and easy technique
• Sensitive technique (micrograms of material can be detected routinely)
• Inexpensive

Disadvantages:
• Atoms or monatomic ions do not have infrared spectra
• Homonuclear diatomic molecules do not possess infrared spectra
• Complex mixtures and aqueous solutions are difficult to analyze using infrared spectroscopy

Table 2. The Advantages and Disadvantages of Infrared Spectroscopy

Origin of Peak Positions, Intensities, and Widths

Peak Positions

Equation (4) gives the wavenumber of light that a molecule will absorb, and hence the frequency of vibration of the normal mode excited by that light:

$\tilde{\nu} = \dfrac{1}{2\pi c}\sqrt{\dfrac{k}{\mu}} \tag{4}$

The only two variables in equation (4) are the chemical bond's force constant and reduced mass. Here, the reduced mass refers to (M1M2)/(M1+M2), where M1 and M2 are the masses of the two atoms, respectively.
These two molecular properties determine the wavenumber at which a molecule will absorb infrared light. No two chemical substances in the universe have the same force constants and atomic masses, which is why the infrared spectrum of each chemical substance is unique. To understand the effects of atomic mass and force constant on the positions of infrared bands, Tables 3 and 4 are shown as examples.

Table 3. An Example of a Mass Effect
Bond | C-H Stretch in cm-1
C-1H | ~3000
C-2D | ~2120

The reduced masses of C-1H and C-2D are different, but their force constants are the same. By simply doubling the mass of the hydrogen atom, the carbon-hydrogen stretching vibration is reduced by over 800 cm-1.

Table 4. An Example of an Electronic Effect
Bond | C-H Stretch in cm-1
C-H | ~3000
H-C=O | ~2750

When a hydrogen is attached to a carbon bearing a C=O bond, the C-H stretch band position decreases to ~2750 cm-1. These two C-H bonds have the same reduced mass but different force constants. The oxygen in the second molecule pulls electron density away from the C-H bond, weakening it and reducing the C-H force constant. This causes the C-H stretching vibration to shift down by ~250 cm-1.

The Origin of Peak Intensities

The different vibrations of the different functional groups in the molecule give rise to bands of differing intensity. This is because $\frac{\partial \mu}{\partial x}$ is different for each of these vibrations. For example, the most intense bands in the spectrum of octane shown in Figure 3 are at 2971 and 2863 cm-1 and are due to stretching of the C-H bonds. One of the weaker bands in the spectrum of octane is at 726 cm-1, and it is due to the long-chain methyl rock of the carbon-carbon bonds in octane. The change in dipole moment with respect to distance for the C-H stretch is greater than that for the C-C rock vibration, which is why the C-H stretching band is more intense than the C-C rock band.
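The mass effect in Table 3 follows directly from equation (4). In the sketch below the C-H force constant is assumed to be ~500 N/m (a typical textbook value, not taken from this page); holding the force constant fixed and swapping 1H for 2D drops the predicted harmonic stretch by roughly the 800 cm-1 shown in the table.

```python
import math

C_CM_PER_S = 2.99792458e10     # speed of light, cm/s
AMU_TO_KG = 1.66053906660e-27
K_CH = 500.0                   # assumed C-H force constant, N/m

def harmonic_stretch_cm1(m1_amu, m2_amu, k=K_CH):
    """Harmonic-oscillator stretch wavenumber: (1 / 2*pi*c) * sqrt(k / mu)."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU_TO_KG  # reduced mass in kg
    return math.sqrt(k / mu) / (2 * math.pi * C_CM_PER_S)

ch = harmonic_stretch_cm1(12.0, 1.008)   # C-1H: ~3000 cm-1
cd = harmonic_stretch_cm1(12.0, 2.014)   # C-2D: ~2200 cm-1
print(round(ch), round(cd), round(ch - cd))
```

The simple harmonic model reproduces the ~3000 cm-1 C-H stretch and a drop of roughly 800 cm-1 on deuteration; the residual difference from the ~2120 cm-1 in Table 3 reflects anharmonicity and the roughness of the assumed force constant.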
Another factor that determines peak intensity in infrared spectra is the concentration of molecules in the sample. The equation that relates concentration to absorbance is Beer's law:

$A = \epsilon c l \tag{5}$

The absorptivity is the proportionality constant between concentration and absorbance, and is dependent on $\left(\frac{\partial \mu}{\partial x}\right)^2$. The absorptivity is an absolute measure of infrared absorbance intensity for a specific molecule at a specific wavenumber. For a pure sample, concentration is at its maximum, and the peak intensities are true representations of the values of $\frac{\partial \mu}{\partial x}$ for the different vibrations. However, in a mixture, two peaks may have different intensities because the molecules are present in different concentrations.

The Origins of Peak Widths

In general, the width of infrared bands for solid and liquid samples is determined by the number of chemical environments, which is related to the strength of intermolecular interactions such as hydrogen bonding. Figure 1 shows hydrogen bonding in water molecules; these water molecules are in different chemical environments. Because the number and strength of hydrogen bonds differs with chemical environment, the force constant varies, and so does the wavenumber at which these molecules absorb infrared light. Figure 1. Hydrogen bonding in water molecules. In any sample where hydrogen bonding occurs, the number and strength of intermolecular interactions varies greatly within the sample, causing the bands in these samples to be particularly broad. This is illustrated in the spectra of ethanol (Fig. 7) and hexanoic acid (Fig. 11). When intermolecular interactions are weak, the number of chemical environments is small, and narrow infrared bands are observed.

The Origin of Group Frequencies

An important observation made by early researchers is that many functional groups absorb infrared radiation at about the same wavenumber, regardless of the structure of the rest of the molecule.
For example, C-H stretching vibrations usually appear between 3200 and 2800 cm-1 and carbonyl (C=O) stretching vibrations usually appear between 1800 and 1600 cm-1. This makes these bands diagnostic markers for the presence of a functional group in a sample. These types of infrared bands are called group frequencies because they tell us about the presence or absence of specific functional groups in a sample. Figure 2. Group frequency and fingerprint regions of the mid-infrared spectrum. The region of the infrared spectrum from 1200 to 700 cm-1 is called the fingerprint region. This region is notable for the large number of infrared bands that are found there. Many different vibrations, including C-O, C-C and C-N single bond stretches, C-H bending vibrations, and some bands due to benzene rings are found in this region. The fingerprint region is often the most complex and confusing region to interpret, and is usually the last section of a spectrum to be interpreted. However, the utility of the fingerprint region is that its many bands provide a fingerprint for a molecule.

Spectral Interpretation by Application of Group Frequencies

Organic Compounds

One of the most common applications of infrared spectroscopy is the identification of organic compounds. The major classes of organic molecules are shown in this category, and collections of spectral information regarding organic molecules are linked at the bottom of the page.

Hydrocarbons

Hydrocarbon compounds contain only C-H and C-C bonds, but there is plenty of information to be obtained from the infrared spectra arising from C-H stretching and C-H bending. In alkanes, which have very few bands, each band in the spectrum can be assigned: • Figure 3 shows the IR spectrum of octane. Since most organic compounds have these features, these C-H vibrations are usually not noted when interpreting a routine IR spectrum.
Note that the change in dipole moment with respect to distance for the C-H stretch is greater than that for the others shown, which is why the C-H stretch band is the most intense. In alkenes, each band in the spectrum can be assigned: • Figure 4 shows the IR spectrum of 1-octene. As with alkanes, these bands are not specific and are generally not noted because they are present in almost all organic molecules. In alkynes, each band in the spectrum can be assigned: • The spectrum of 1-hexyne, a terminal alkyne, is shown below. In aromatic compounds, each band in the spectrum can be assigned: • Note that this is at slightly higher frequency than the –C–H stretch in alkanes. This is a very useful tool for interpreting IR spectra: only alkenes and aromatics show a C–H stretch slightly higher than 3000 cm-1. Figure 6 shows the spectrum of toluene. Figure 6. Infrared Spectrum of Toluene.

Functional Groups Containing the C-O Bond

Alcohols have IR absorptions associated with both the O-H and the C-O stretching vibrations. • Figure 7 shows the spectrum of ethanol. Note the very broad, strong band of the O–H stretch. The carbonyl stretching vibration band C=O of saturated aliphatic ketones appears: • If a compound is suspected to be an aldehyde, a peak always appears around 2720 cm-1, often as a shoulder-type peak just to the right of the alkyl C–H stretches. • Figure 9 shows the spectrum of butyraldehyde. The carbonyl stretch C=O of esters appears: • Figure 10 shows the spectrum of ethyl benzoate. The carbonyl stretch C=O of a carboxylic acid appears as an intense band from 1760-1690 cm-1. The exact position of this broad band depends on whether the carboxylic acid is saturated or unsaturated, dimerized, or has internal hydrogen bonding. • Figure 11 shows the spectrum of hexanoic acid.

Organic Compounds Containing Halogens

Alkyl halides are compounds that have a C–X bond, where X is a halogen: bromine, chlorine, fluorine, or iodine.
• The spectrum of 1-chloro-2-methylpropane is shown below. For more infrared spectra, a free spectral database of organic molecules is introduced below. Also, the infrared spectroscopy correlation table is linked at the bottom of the page to find other assigned IR peaks.

Inorganic Compounds

Generally, the infrared bands for inorganic materials are broader, fewer in number and appear at lower wavenumbers than those observed for organic materials. If an inorganic compound forms covalent bonds within an ion, it can produce a characteristic infrared spectrum. Main infrared bands of some common inorganic ions: • Diatomic molecules produce one vibration along the chemical bond. Monatomic ligands, where metals coordinate with atoms such as halogens, H, N or O, produce characteristic bands. These bands are summarized below. Characteristic infrared bands of diatomic inorganic molecules: M (metal), X (halogen). The normal modes of vibration of linear and bent triatomic molecules are illustrated, and some common linear and bent triatomic molecules are shown below. Note that some molecules show two bands for ν1 because of Fermi resonance; linear CO2, for example, shows its ν1 band as a Fermi doublet at 1388/1286 cm-1, with ν2 at 667 cm-1 and ν3 at 2349 cm-1.

Characteristic infrared bands (cm-1) of bent triatomic molecules:
Molecule | ν1 | ν2 | ν3
H2O | 3675 | 1595 | 3756
O3 | 1135 | 716 | 1089
SnCl2 | 354 | 120 | 334

Identification

There are a few general rules that can be used when using a mid-infrared spectrum for the determination of a molecular structure. The following is a suggested strategy for spectrum interpretation:2 Infrared spectroscopy is used to analyze a wide variety of samples, but it cannot solve every chemical analysis problem. When used in conjunction with other methods such as mass spectrometry, nuclear magnetic resonance, and elemental analysis, infrared spectroscopy usually makes possible the positive identification of a sample.
Outside Links
• Spectral Database for Organic Compounds SDBS: http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology)
• Infrared Spectroscopy Correlation Table: en.Wikipedia.org/wiki/Infrared_spectroscopy_correlation_table
• FDM Reference Spectra Databases: http://www.fdmspectra.com/index.html
• Other Useful Web Pages:
• www.cem.msu.edu/~reusch/Virtu...d/infrared.htm
• Fermi resonance: en.Wikipedia.org/wiki/Fermi_resonance
Infrared (IR) spectroscopy is one of the most common and widely used spectroscopic techniques, employed mainly by inorganic and organic chemists due to its usefulness in determining the structures of compounds and in identifying them. Chemical compounds have different chemical properties due to the presence of different functional groups.

Introduction

Infrared (IR) spectroscopy is one of the most common and widely used spectroscopic techniques. Each absorbing group absorbs within a characteristic wavelength region of the infrared. The absorption peaks within this region are usually sharper than absorption peaks in the ultraviolet and visible regions. In this way, IR spectroscopy can be very sensitive for determining functional groups within a sample, since different functional groups absorb different characteristic frequencies of IR radiation. Also, each molecule has a characteristic spectrum often referred to as its fingerprint, so a molecule can be identified by comparing its absorption peaks to a data bank of spectra. IR spectroscopy is very useful in the identification and structural analysis of a variety of substances, including both organic and inorganic compounds. It can also be used for both qualitative and quantitative analysis of complex mixtures of similar compounds. The use of infrared spectroscopy began in the 1950s with Wilbur Kaye, who designed a machine that tested the near-infrared spectrum and provided the theory to describe the results. Karl Norris started using IR spectroscopy in the analytical world in the 1960s, and as a result IR spectroscopy became an accepted technique. There have been many advances in the field of IR spectroscopy, the most notable being the application of Fourier transforms to this technique, creating an IR method with higher resolution and less noise.
The method became accepted in the field in the late 1960s.4

Absorption Spectroscopy

There are three main processes by which a molecule can absorb radiation, and each of these routes involves an increase of energy that is proportional to the light absorbed. The first route occurs when absorption of radiation leads to a higher rotational energy level in a rotational transition. The second route is a vibrational transition, which occurs on absorption of quantized energy and leads to an increased vibrational energy level. The third route involves electrons of molecules being raised to a higher electronic energy: the electronic transition. It is important to note that the energy is quantized, and absorption of radiation causes a molecule to move to a higher internal energy level. This is achieved by the alternating electric field of the radiation interacting with the molecule and causing a change in the movement of the molecule. There are many possible energy levels for the various types of transitions. The energy levels can be ranked in the following order: electronic > vibrational > rotational; each of these transitions differs by an order of magnitude. Rotational transitions occur at lower energies (longer wavelengths), and this energy is insufficient to cause vibrational and electronic transitions; vibrational (near-infrared) and electronic transitions (ultraviolet region of the electromagnetic spectrum) require higher energies. The energy of IR radiation is weaker than that of visible and ultraviolet radiation, and so the type of transition produced is different. Absorption of IR radiation is typical of molecular species that have a small energy difference between the rotational and vibrational states. A criterion for IR absorption is a net change in dipole moment in a molecule as it vibrates or rotates.
Using the molecule HBr as an example, the charge distribution between hydrogen and bromine is not even, since bromine is more electronegative than hydrogen and has a higher electron density. $HBr$ thus has a large dipole moment and is polar. The dipole moment is determined by the magnitude of the charge difference and the distance between the two centers of charge. As the molecule vibrates, there is a fluctuation in its dipole moment; this causes a field that interacts with the electric field associated with radiation. If there is a match in frequency between the radiation and the natural vibration of the molecule, absorption occurs, and this alters the amplitude of the molecular vibration. This also occurs when the rotation of asymmetric molecules around their centers results in a dipole moment change, which permits interaction with the radiation field. Molecules such as O2, N2 and Br2 do not have a changing dipole moment (in amplitude or orientation) when they undergo rotational and vibrational motions; as a result, they cannot absorb IR radiation.

Diatomic Molecular Vibration

The absorption of IR radiation by a molecule can be likened to two atoms attached to each other by a massless spring. Considering simple diatomic molecules, only one vibration is possible. The Hooke's law force for an ideal spring is \begin{align} F &= -kx \label{1} \\[4pt] &= -\dfrac{dV(x)}{dx} \label{2} \end{align} which in one-dimensional space gives the potential $V(r) = \dfrac{1}{2} k(r-r_{eq})^2 \label{3}$ One thing that the Morse potential and the harmonic oscillator have in common is their behavior at small displacements ($x=r-r_{eq}$) from the equilibrium.
Solving the Schrödinger equation for the harmonic oscillator potential gives the energy levels $E_v = \left(v+\dfrac{1}{2}\right)h\nu_e \label{4}$ with $v=0,1,2,3,\ldots$ and $\nu_e = \dfrac{1}{2\pi} \sqrt{\dfrac{k}{\mu}} \label{5}$ When calculating the energy of a diatomic molecule, factors such as anharmonicity must be considered: the anharmonic potential follows the harmonic curve at low potential energies but deviates from it at higher energies. The energy spacing in the harmonic oscillator is uniform, but in the anharmonic oscillator it is not; the anharmonic oscillator is a deviation from the harmonic oscillator. Other terms, such as centrifugal stretching and the interaction between vibration and rotation, also have to be taken into account. The energy can be expressed mathematically as $E_v = \underset{\text{harmonic oscillator}}{\left(v+\dfrac{1}{2}\right)h\nu_e} - \underset{\text{anharmonicity}}{\left(v+\dfrac{1}{2}\right)^2 X_e h\nu_e} + \underset{\text{rigid rotor}}{B_e J (J+1)} - \underset{\text{centrifugal stretching}}{D_e J^2 (J+1)^2} - \underset{\text{rovibrational coupling}}{\alpha_e \left(v+\dfrac{1}{2}\right) J(J+1)} \label{6}$ The first and third terms represent the harmonic-oscillator and rigid-rotor behavior of a diatomic molecule such as HCl. The second term represents anharmonicity, the fourth term represents centrifugal stretching, and the fifth term represents the interaction between the vibration and rotation of the molecule. Polyatomic Molecular Vibration The bonds of a molecule undergo various types of vibrations and rotations, so the atoms are not stationary but fluctuate continuously. Vibrational motions are defined by stretching and bending modes. These motions are easily described for diatomic or triatomic molecules, but not for large molecules, which experience several vibrational motions and interactions among them.
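As a numerical illustration (not part of the original text), Equation \ref{6} can be evaluated for a specific molecule. The spectroscopic constants below are approximate literature values for HCl and are assumptions quoted only roughly:

```python
def term_value(v, J, we=2990.9, wexe=52.8, Be=10.59, De=5.3e-4, ae=0.307):
    """Rovibrational energy (in cm^-1) of level (v, J) per Eq. (6):
    harmonic oscillator - anharmonicity + rigid rotor
    - centrifugal stretching - rovibrational coupling."""
    return (we * (v + 0.5)
            - wexe * (v + 0.5) ** 2
            + Be * J * (J + 1)
            - De * J ** 2 * (J + 1) ** 2
            - ae * (v + 0.5) * J * (J + 1))

# Band origin of the fundamental (v = 0 -> 1, J = 0):
print(round(term_value(1, 0) - term_value(0, 0), 1))  # close to the observed HCl stretch near 2886 cm^-1
```

Note how the anharmonicity term pulls the fundamental below the harmonic value of 2990.9 cm-1.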
When there is a continuous change in the interatomic distance along the axis of the bond between two atoms, the process is known as a stretching vibration. A change in the angle between two bonds is known as a bending vibration. Four bending vibrations exist: wagging, twisting, rocking, and scissoring. A CH2 group is used below to illustrate stretching and bending vibrations. Symmetric Stretch Asymmetric Stretch Twisting Wagging Scissoring Rocking Figure 3: Types of Vibrational Modes. To ensure that no center-of-mass motion occurs, the center atom (yellow ball) will also move. Figure from Wikipedia As stated earlier, molecular vibrations consist of stretching and bending modes. A molecule consisting of N atoms has a total of 3N degrees of freedom, corresponding to the Cartesian coordinates of each atom in the molecule. In a non-linear molecule, 3 of these degrees of freedom are rotational and 3 are translational, and the remainder are fundamental vibrations. In a linear molecule, there are 3 translational degrees of freedom but only 2 rotational ones, because all of the atoms lie on a single straight line and rotation about the bond axis leaves the molecule unchanged. Mathematically, the normal modes for linear and non-linear molecules can be expressed as Linear molecules: (3N - 5) degrees of freedom Non-linear molecules: (3N - 6) degrees of freedom Example 1: Vibrations of Water Diagram of Stretching and Bending Modes for H2O. Solution H2O is a non-linear (bent) molecule. O is more electronegative than H and carries a partial negative charge, while each H carries a partial positive charge. The number of vibrational degrees of freedom for H2O is 3(3) - 6 = 9 - 6 = 3, corresponding to the following stretching and bending vibrations. The vibrational modes are illustrated below: Example 2: Vibrations of $CO_2$ Diagram of Stretching and Bending Modes for CO2.
Solution CO2 is a linear molecule, so the (3N - 5) formula applies: it has 3(3) - 5 = 4 modes of vibration. CO2 has 2 stretching modes, symmetric and asymmetric. The CO2 symmetric stretch is not IR active because there is no change in dipole moment: the bond dipole moments point in opposite directions and cancel each other. In the asymmetric stretch, one O atom moves toward the C atom while the other moves away, generating a net change in dipole moment, so this mode absorbs IR radiation at 2350 cm-1. The other IR absorption occurs at 666 cm-1. CO2 has $D_{\infty h}$ symmetry. CO2 has a total of four stretching and bending modes but only two absorptions are seen: two of its bending modes are degenerate, and the symmetric stretching mode causes no dipole moment change because the polar directions cancel each other. The vibrational modes are illustrated below: The Deduction of Frequency Newton's second law states that $F = ma\label{7}$ where m is the mass and a is the acceleration. Acceleration is the second derivative of distance with respect to time, so "a" can be written as $a = \dfrac{d^2 y}{d t^2} \label{8}$ Substituting this into Equation \ref{1} gives $m \dfrac{d^2 y}{d t^2}= - k y \label{9}$ A solution of this second-order differential equation, relating the displacement of the mass to time, is $y = A\cos 2 \pi \nu_m t \label{10}$ where $\nu_m$ is the natural vibrational frequency and A is the maximum amplitude of the motion. Differentiating twice gives $\dfrac{d^2 y}{d t^2} = - 4 \pi^2 \nu_m^2 A \cos 2 \pi \nu_m t \label{11}$ Substituting the two equations above into Newton's second law for a harmonic oscillator, $m\left (-4\pi^{2}\nu_{m}^{2} A \textrm{cos }2\pi\nu_{m}t \right ) = -k \left ( A\textrm{cos }2\pi\nu_{m}t \right ) \label{12}$ Canceling the common factor $A\textrm{cos }2\pi\nu_{m}t$ from both sides gives $4m\pi^{2}\nu_{m}^{2} = k$, from which we obtain the natural frequency of the oscillation.
$\nu_m = \dfrac{1}{2\pi} \sqrt{\dfrac{k}{m}} \label{13}$ Here $\nu_m$, the natural frequency of the mechanical oscillator, depends on the force constant of the spring and the mass of the attached body, but is independent of the energy imparted to the system. When two masses are involved in the system, the mass used in the above equation becomes the reduced mass $\mu = \dfrac{m_1 m_2}{m_1+m_2} \label{14}$ The vibrational frequency can then be rewritten as $\nu_m = \dfrac{1}{2\pi} \sqrt{\dfrac{k}{\mu}} \label{15}$ The Deduction of Wave Number Using the harmonic oscillator and the wave equations of quantum mechanics, the energy can be written as $E = \left(v+\dfrac{1}{2}\right) \dfrac{h}{2\pi} \sqrt{\dfrac{k}{\mu}} \label{16}$ where h is Planck's constant and v is the vibrational quantum number, with values 0, 1, 2, 3, .... This is equivalent to $E = \left(v+\dfrac{1}{2}\right)h\nu_m \label{17}$ where $\nu_m$ is the vibrational frequency. Transitions between vibrational energy levels can be brought about by absorption of radiation, provided the energy of the radiation exactly matches the difference in energy between the vibrational quantum states and provided the vibration causes a change in dipole moment.
This can be expressed as ${\triangle E} = h\nu_m = \dfrac{h}{2\pi} \sqrt{\dfrac{k}{\mu}} \label{18}$ At room temperature, the majority of molecules are in the ground state v = 0; from the equation above, $E_o = \dfrac{1}{2}h\nu_m \label{19}$ Following the selection rule, when a molecule absorbs energy it is promoted to the first excited state: $E_1 = \dfrac{3}{2} h\nu_m \label{20}$ $\left(\dfrac{3}{2} h\nu_m - \dfrac{1}{2} h\nu_m \right) = h\nu_m \label{21}$ The frequency of radiation $\nu$ that will bring about this change is identical to the classical vibrational frequency of the bond $\nu_m$, and it can be expressed as $E_{radiation} = h\nu = {\triangle E} = h\nu_m = \dfrac{h}{2\pi} \sqrt{\dfrac{k}{\mu}} \label{22}$ The above equation can be modified so that the radiation is expressed in wave numbers: $\widetilde{\nu} = \dfrac{1}{2\pi c} \sqrt{\dfrac{k}{\mu}} \label{23}$ where • $c$ is the velocity of light (cm s-1) and • $\widetilde{\nu}$ is the wave number of an absorption maximum (cm-1) Theory of IR Molecular vibrational frequencies lie in the IR region of the electromagnetic spectrum, and they can be measured using the IR technique. In IR spectroscopy, polychromatic light (light having different frequencies) is passed through a sample and the intensity of the transmitted light is measured at each frequency. When molecules absorb IR radiation, transitions occur from a ground vibrational state to an excited vibrational state (Figure 1). For a molecule to be IR active there must be a change in dipole moment as a result of the vibration that occurs when IR radiation is absorbed. The dipole moment is a vector quantity and depends on the orientation of the molecule and the photon electric vector. The dipole moment changes as the bond expands and contracts. When all molecules are aligned, as in a crystal, and the photon electric vector points along a molecular axis such as z, absorption occurs only for vibrations that displace the dipole along z; vibrations polarized entirely along x or y would be absent.
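Equation \ref{23} can be evaluated directly. The sketch below assumes a force constant of about 481 N/m for HCl, an illustrative value that is not given in the text:

```python
import math

C_CM = 2.998e10     # speed of light, cm/s
AMU = 1.6605e-27    # kg per atomic mass unit

def wavenumber(k, m1, m2):
    """Absorption wavenumber (cm^-1) from Eq. (23), with atomic masses in amu
    and the force constant k in N/m; uses the reduced mass of Eq. (14)."""
    mu = m1 * m2 / (m1 + m2) * AMU
    return math.sqrt(k / mu) / (2 * math.pi * C_CM)

print(round(wavenumber(481, 1.008, 34.97)))  # close to the observed HCl stretch near 2886 cm^-1
```

Note that the light hydrogen atom dominates the reduced mass, which is why H-X stretches appear at such high wavenumbers.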
The dipole moment in a heteronuclear diatomic molecule arises from the uneven distribution of electron density between the atoms: one atom is more electronegative than the other and carries a net negative charge. The dipole moment can be expressed mathematically as $\mu = er \label{24}$ Relating the dipole moment to the intensity of IR absorption, we have $I_{IR} \propto \left(\dfrac{d\mu}{dQ}\right)^2 \label{25}$ where $\mu$ is the dipole moment and $Q$ is the vibrational coordinate. The transition moment integral, which gives information about the probability of a transition occurring, can be written for IR as $\langle \psi_i | \hat{M}| \psi_f \rangle \label{26}$ where $i$ and $f$ represent the initial and final states and $\psi$ is the wave function. Relating this to IR intensity, we have $I_{IR} \propto \langle \psi_i | \hat{M}| \psi_f \rangle \label{27}$ where $\hat{M}$ is the dipole moment operator with Cartesian components $\hat {M_x}$, $\hat {M_y}$, $\hat{M_z}$. In order for a transition to be allowed by the dipole selection rules, at least one of these integrals must be non-zero. Region of IR The IR region of the electromagnetic spectrum spans wavelengths from about 0.78 to 1000 µm; the portion most used for analysis runs from about 2.5 to 15 µm. Conventionally the IR region is subdivided into three regions: near IR, mid IR, and far IR. Most analytical IR work uses the mid-IR region. The table below indicates the IR spectral regions:

Region | Wavelength (µm) | Wavenumber (cm-1) | Frequency (Hz)
Near | 0.78 - 2.5 | 12800 - 4000 | 3.8 x 10^14 - 1.2 x 10^14
Middle | 2.5 - 50 | 4000 - 200 | 1.2 x 10^14 - 6.0 x 10^12
Far | 50 - 1000 | 200 - 10 | 6.0 x 10^12 - 3.0 x 10^11
Most used | 2.5 - 15 | 4000 - 670 | 1.2 x 10^14 - 2.0 x 10^13

IR spectroscopy deals with the interaction between a molecule and radiation from the electromagnetic region ranging over 4000 - 40 cm-1. The cm-1 unit is the wave number scale, defined as 1/wavelength in cm.
A linear wavenumber scale is often used because of its direct relationship with both frequency and energy; the frequency of the absorbed radiation matches the molecular vibrational frequency responsible for the absorption process. The relationship is $\bar{v}(cm^{-1}) = \dfrac{1}{\lambda(\mu m)} \times 10^4 (\dfrac{\mu m}{cm}) = \dfrac{v(Hz)}{c(cm/s)} \label{28}$ • Near InfraRed Spectroscopy: Absorption bands in the near infrared (NIR) region (750 - 2500 nm) are weak because they arise from vibrational overtones and combination bands. Combination bands occur when two molecular vibrations are excited simultaneously. The intensity of overtone bands decreases by roughly an order of magnitude with each successive overtone. An overtone absorption results when a molecule is excited from the ground vibrational state to a higher vibrational state with vibrational quantum number v greater than or equal to 2. The first overtone results from v = 0 to v = 2; the second overtone occurs when v = 0 transitions to v = 3. Transitions arising from near-IR absorption are weak and hence referred to as forbidden transitions, but they are relevant when non-destructive measurements are required, such as for a solid sample. Although NIR absorptions are weak, NIR spectra have a high signal-to-noise ratio owing to intense radiation sources, and because NIR can penetrate undiluted samples and use longer path lengths, it is very useful for rapid measurement of more representative samples. • Far InfraRed Spectroscopy: The far IR region is particularly useful for inorganic studies because stretching and bending vibrations of bonds between metal atoms and ligands occur there. The frequencies at which these vibrations are observed are usually lower than 650 cm-1. Pure rotational absorption of gases is observed in the far IR region when a permanent dipole moment is present. Examples include H2O, O3, and HCl.
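The unit conversions in Equation \ref{28} are easy to check in code; a small sketch (the function names are ours):

```python
C_CM = 2.998e10   # speed of light in cm/s

def um_to_wavenumber(lam_um):
    """Eq. (28): wavenumber (cm^-1) from wavelength in micrometers."""
    return 1e4 / lam_um

def wavenumber_to_hz(nu_bar):
    """Frequency (Hz) = c (cm/s) * wavenumber (cm^-1)."""
    return C_CM * nu_bar

print(um_to_wavenumber(2.5))     # 4000.0 cm^-1, the short-wavelength edge of the mid-IR
print(wavenumber_to_hz(4000.0))  # about 1.2e14 Hz
```

These two helpers reproduce the region boundaries tabulated above.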
IR Analysis Qualitative Analysis IR spectroscopy is a great method for identification of compounds, especially for identification of functional groups, so we can use group frequencies for structural analysis. Group frequencies are vibrations that are associated with certain functional groups. It is possible to identify a functional group of a molecule by comparing its vibrational frequency in an IR spectrum to a stored IR data bank. Here we take the IR spectrum of formaldehyde as an example. Formaldehyde has a C=O functional group and C-H bonds. The values obtained from the following graph can be compared to those in reference data banks stored for formaldehyde. A molecule with a C=O stretch has an IR band usually found near 1700 cm-1, and one around 1400 cm-1 for the CH2 bend. It is important to note that these values depend on the other functional groups present in the molecule. The strong band near 1700 cm-1 indicates a large dipole moment change. It is easier to bend a bond than to stretch it; hence stretching vibrations occur at higher frequencies and require higher energies than bending modes. The fingerprint region runs from 1400 to 650 cm-1. Each molecule has its own characteristic pattern there, and it is often cumbersome to attach specific assignments to this region. Quantitative Analysis Infrared spectroscopy can also be applied in the field of quantitative analysis, although it is sometimes not as accurate as other analytical methods, such as gas chromatography and liquid chromatography. The main theory of IR quantification is Beer's law, or the Beer-Lambert law, which is written as $A= \log \left ( \dfrac{I_0}{I} \right ) =\epsilon lc \label{29}$ where A is the absorbance of the sample, I is the intensity of transmitted light, I0 is the intensity of incident light, l is the path length, $\epsilon$ is the molar absorptivity of the substance, and c is the concentration of the substance.
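A quick numerical sketch of Equation \ref{29}; the intensities, molar absorptivity, and path length below are hypothetical values chosen purely for illustration:

```python
import math

def absorbance(I0, I):
    """A = log10(I0 / I), the left-hand side of Eq. (29)."""
    return math.log10(I0 / I)

def concentration(A, epsilon, path_cm):
    """Solve Beer's law for c: c = A / (epsilon * l)."""
    return A / (epsilon * path_cm)

# Hypothetical measurement: 50% transmittance, epsilon = 200 L mol^-1 cm^-1, l = 0.1 cm
A = absorbance(100.0, 50.0)          # about 0.301
c = concentration(A, 200.0, 0.1)     # about 0.015 mol/L
print(round(A, 3), round(c, 4))
```

In practice, a calibration curve of A against known concentrations at a fixed analytical wavelength is more robust than a single-point calculation like this one.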
From Beer's law we can relate the absorbance to the concentration of the sample, since each analyte has a particular molar absorptivity at a particular wavelength. Therefore, we can use IR spectroscopy together with Beer's law to find the concentration of a substance or the components of a mixture. This is how IR quantification operates. Selection Rules of IR Vibrational transitions are governed by selection rules: 1. An interaction must occur between the oscillating field of the electromagnetic radiation and the vibrating molecule for a transition to occur. This can be expressed mathematically as $\left(\dfrac{d\mu}{dr}\right)_{r_{eq}} \not= 0 \label{30}$ 2. The allowed transitions are $\triangle v = +1$ and $\triangle J = \pm 1 \label{31}$ This holds for a harmonic oscillator because the vibrational levels are equally spaced, which accounts for the single peak observed for any given molecular vibration. For gases, J changes by +1 for the R branch and -1 for the P branch; $\triangle J = 0$ is a forbidden transition, and hence a Q branch for a diatomic will not be present. For an anharmonic oscillator the selection rule is not strictly followed, and the change in energy between successive levels becomes smaller. Weaker transitions called overtones can then occur, with $\triangle v = +2$ (the first overtone) as well as $\triangle v = +3$ (the second overtone). The frequencies of the first and second overtones are roughly two and three times the fundamental frequency and provide information about the potential surface. 3. For a diatomic, since $\mu$ is known, measurement of $\nu_e$ provides a value for k, the force constant: $k = \left(\dfrac{d^2 V(r)}{dr^2}\right)_{r_{eq}} \label{32}$ where k is the force constant and indicates the strength of a bond.
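As an illustrative sketch (the numbers are assumed, not from the text): inverting Equation \ref{15} gives k = μ(2πν)², so an observed HCl stretch near 2886 cm-1 implies a force constant of roughly 480 N/m. Because k is essentially unchanged on isotopic substitution, the same k also predicts the DCl stretch:

```python
import math

C_CM = 2.998e10    # speed of light, cm/s
AMU = 1.6605e-27   # kg per atomic mass unit

def reduced_mass(m1, m2):
    """Reduced mass (kg) from atomic masses in amu, Eq. (14)."""
    return m1 * m2 / (m1 + m2) * AMU

def force_constant(nu_bar, m1, m2):
    """k (N/m) from an observed wavenumber (cm^-1): k = mu * (2*pi*c*nu)^2."""
    omega = 2 * math.pi * C_CM * nu_bar      # angular frequency, rad/s
    return reduced_mass(m1, m2) * omega**2

k_hcl = force_constant(2886, 1.008, 34.97)
print(round(k_hcl))                          # roughly 481 N/m

# Same k, heavier isotope: predict the DCl fundamental.
nu_dcl = math.sqrt(k_hcl / reduced_mass(2.014, 34.97)) / (2 * math.pi * C_CM)
print(round(nu_dcl))                         # near 2070 cm^-1 (observed about 2091; the gap reflects anharmonicity)
```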
Influence Factors of IR • Isotope Effects: The effect on k when an atom is replaced by one of its isotopes is negligible, but the substitution does affect $\nu$ because the reduced mass changes; the reduced mass affects both the rotational and the vibrational behavior. • Solvent Effects: The polarity of the solvent influences the spectra of organic compounds through interactions between solvent and compound, which are called solvent effects. If we place a compound containing n, pi, and pi* orbitals in a polar solvent, the solvent stabilizes these three orbitals to different extents: the stabilization is largest for the n orbital, next largest for the pi* orbital, and smallest for the pi orbital. The n→pi* transition therefore shifts to the blue, meaning it moves to shorter wavelengths and higher energies, because the polar solvent makes the energy difference between the n and pi* orbitals bigger. The pi→pi* transition shifts to the red, meaning it moves to longer wavelengths and lower energies, because the polar solvent makes the energy difference between the pi and pi* orbitals smaller. Advantages of IR • High Scan Speed: Infrared spectroscopy can collect information over the whole frequency range simultaneously, within one second. Therefore, IR can be used to analyze a substance that is not very stable, finishing the scan before it starts to decompose. • High Resolution: The resolution of a general prism spectrometer is only about 3 cm-1, but the resolution of an infrared spectrometer is much higher. For example, the resolution of a grating infrared spectrometer can be 0.2 cm-1, and the resolution of an FT infrared spectrometer can be 0.1-0.005 cm-1. • High Sensitivity: With the Fourier transform, the infrared spectrometer does not need a slit or monochromator.
In this way, the loss of energy in the analysis process is decreased, so the energy that reaches the detector is large enough that even very small amounts of analyte can be detected; modern infrared spectroscopy can detect samples as small as 1-10 micrograms. • Wide Range of Application: Infrared spectroscopy can be used to analyze almost all organic compounds and some inorganic compounds. It has a wide range of application in both qualitative and quantitative analysis. Also, the sample for infrared spectroscopy has no phase constraints: it can be gas, liquid, or solid, which greatly enlarges the range of possible analytes. • Large Amount of Information: Infrared spectra give a great deal of structural information about an analyte, such as the type of compound, its functional groups, its stereochemistry, and the number and position of substituent groups. Drawing on the information in both the functional-group region and the fingerprint region, infrared spectroscopy has become a great method to identify different kinds of compounds. • Non-Destructive: Infrared spectroscopy is non-destructive to the sample. Disadvantages of IR • Sample Constraint: Infrared spectroscopy is not applicable to samples that contain water, since this solvent absorbs IR light strongly. • Spectrum Complication: The IR spectrum is very complicated, and its interpretation depends on a lot of experience. Sometimes we cannot definitively determine the structure of a compound based on one IR spectrum alone; other spectroscopic methods, such as mass spectrometry (MS) and nuclear magnetic resonance (NMR), are needed to further pin down the structure. • Quantification: Infrared spectroscopy works well for the qualitative analysis of a large variety of samples, but quantitative analysis may be limited under certain conditions, such as very high and very low concentrations.
Symmetry & IR Spectroscopy One of the most important applications of IR spectroscopy is structural assignment of a molecule based on the relationship between the molecule and its observed IR absorption bands. Every molecule corresponds to a particular symmetry point group. We can predict which point group a molecule belongs to if we know its IR vibrational bands; vice versa, we can work out the IR-active bands from the spectrum of the molecule if we know its symmetry. These are two main applications of group theory. We'll take the following problem as an example to illustrate how this works. Question How do you distinguish whether the structure of the transition metal complex M(CO)2L4 is cis or trans by inspection of the CO stretching region of the IR spectrum? Answer For cis-M(CO)2L4, the symmetry point group of the molecule is C2v.

C2v | E | C2 | ${\sigma}$(xz) | ${\sigma}$(yz)
${\Gamma}$CO | 2 | 0 | 2 | 0

${\Gamma}$CO = A1 + B1 Since A1 has a basis on the z axis and B1 has a basis on the x axis, two IR vibrational bands are observed in the spectrum. For trans-M(CO)2L4, the symmetry point group of the molecule is D4h.

D4h | E | 2C4 | C2 | 2C2' | 2C2'' | i | 2S4 | ${\sigma_h}$ | 2${\sigma_v}$ | 2${\sigma_d}$
${\Gamma}$CO | 2 | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 2 | 2

${\Gamma}$CO = A1g + A2u Since only A2u has a basis on the z axis, only one IR vibrational band is observed in the spectrum. Therefore, from what has been discussed above, we can distinguish these two structures based on the number of CO stretching bands. Problems The frequency of C=O stretching is higher than that of C=C stretching, and the intensity of C=O stretching is stronger than that of C=C stretching. Explain why. Contributors and Attributions • Richard Osibanjo, Rachael Curtis, Zijuan Lai
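The reduction worked out in the cis case above can be verified numerically with the standard reduction formula, $n_i = \frac{1}{h}\sum_R g(R)\,\chi(R)\,\chi_i(R)$. A minimal sketch (the C2v character table is hard-coded; this is an illustration, not part of the original problem):

```python
# Reduce the CO-stretch representation of cis-M(CO)2L4 in C2v using
# n_i = (1/h) * sum over classes of g(R) * chi(R) * chi_i(R).
ORDER = [1, 1, 1, 1]          # g(R) for the classes E, C2, sigma(xz), sigma(yz)
CHAR_TABLE = {                # C2v character table
    "A1": [1, 1, 1, 1],
    "A2": [1, 1, -1, -1],
    "B1": [1, -1, 1, -1],
    "B2": [1, -1, -1, 1],
}
gamma_co = [2, 0, 2, 0]       # reducible representation of the two CO stretches

h = sum(ORDER)                # order of the group
counts = {
    irrep: sum(g * chi * chi_i
               for g, chi, chi_i in zip(ORDER, gamma_co, chars)) // h
    for irrep, chars in CHAR_TABLE.items()
}
print(counts)  # {'A1': 1, 'A2': 0, 'B1': 1, 'B2': 0}, i.e. Gamma_CO = A1 + B1
```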
This chapter will focus on infrared (IR) spectroscopy. The wavelengths found in infrared radiation are a little longer than those found in visible light. IR spectroscopy is useful for finding out what kinds of bonds are present in a molecule, and knowing what kinds of bonds are present is a good start towards knowing what the structure could be. Interpreting Infrared Spectra The IR spectra of nitrogen-containing compounds can be messier than the ones you have seen so far. N-H bends and C-N stretches tend to be broader and weaker than peaks involving oxygen atoms. However, some peaks in nitrogen compounds are useful. The problems in this section will guide you through some of these features. IR11. Appendix: IR Table Table of Common IR Absorptions. Note: strong, medium, weak refers to the height of the peak (in the y-axis direction). Note: spectra taken by the ATR method (used at CSB/SJU) have weaker peaks between 4000-2500 cm-1 compared to reference spectra taken by transmittance methods (typical on SDBS and other sites).

Approximate Frequency (cm-1) | Description | Bond Vibration | Notes
3500-3200 | broad, round | O-H | much broader, lower frequency (3200-2500) if next to C=O
3400-3300 | weak, triangular | N-H | stronger if next to C=O
3300 | medium-strong | ≡C-H (sp C-H) |
3100-3000 | weak-medium | =C-H (sp2 C-H) | can get bigger if lots of bonds present
3000-2900 | weak-medium | -C-H (sp3 C-H) | can get bigger if lots of bonds present
2800 and 2700 | medium | C-H in O=C-H | two peaks; "alligator jaws"
2250 | medium | C≡N |
2250-2100 | weak-medium | C≡C | stronger if near electronegative atoms
1800-1600 | strong | C=O | lower frequency (1650-1550) if attached to O or N; middle frequency if attached to C, H; higher frequency (1800) if attached to Cl
1650-1450 | weak-medium | C=C | lower frequency (1600-1450) if conjugated (i.e. C=C-C=C); often several if benzene present
1450 | weak-medium | H-C-H bend |
1300-1000 | medium-strong | C-O | higher frequency (1200-1300) if conjugated (i.e. O=C-O or C=C-O)
1250-1000 | medium | C-N |
1000-650 | strong | C=C-H bend | often several if benzene present

IR2. Hydrocarbon Spectra All organic and biological compounds contain carbon and hydrogen, usually with various other elements as well. Hydrocarbons are compounds containing only carbon and hydrogen, but no other types of atoms. Since all organic compounds contain carbon and hydrogen, looking at hydrocarbon spectra will tell us what peaks are due to the basic C&H part of these molecules. It is sometimes useful to think of the C&H part of a molecule as the basic skeleton or scaffolding used to construct the molecule. The other atoms often form more interesting and active features, like the doors, windows and lights on a building. The simplest hydrocarbons contain only single bonds between their carbons, and no double or triple bonds. These hydrocarbons are variously referred to as saturated hydrocarbons, paraffins or alkanes. Examples of alkanes include hexane and nonane. (You can take a look at the Glossary to see what these names tell you about the structure.) Look at the IR spectrum of hexane. You should see: • Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 14 July 2008) If you look at an IR spectrum of any other alkane, you will also see peaks at about 2900 and 1500 cm-1. The IR spectra of many organic compounds will show these peaks because the compound may contain paraffinic parts in addition to parts with other elements in them.
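The correlation table above lends itself to a simple lookup by peak position. A toy sketch (only a few rows are encoded, and the ranges are approximate):

```python
# Candidate assignments for a peak, based on a few rows of the correlation table.
CORRELATIONS = [
    ((3200, 3500), "O-H stretch (broad, round)"),
    ((3000, 3100), "=C-H stretch (sp2 C-H)"),
    ((2900, 3000), "C-H stretch (sp3 C-H)"),
    ((1600, 1800), "C=O stretch (strong)"),
    ((1450, 1650), "C=C stretch (weak-medium)"),
    ((1000, 1300), "C-O stretch (medium-strong)"),
]

def assign(peak_cm):
    """Return all table rows whose frequency range contains the peak position."""
    return [label for (lo, hi), label in CORRELATIONS if lo <= peak_cm <= hi]

print(assign(1715))  # ['C=O stretch (strong)'] -- a typical ketone carbonyl
print(assign(2950))  # ['C-H stretch (sp3 C-H)']
```

Real assignment also weighs peak shape and intensity, which a position-only lookup like this ignores.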
Alkanes show two sets of peaks in the IR spectrum. Alkanes also contain two kinds of bonds: C-C bonds and C-H bonds. However, the two sets of peaks do not simply correspond to the two kinds of bonds. The reasons are explained through bond polarity and molecular vibrations. Bond polarity can play a role in IR spectroscopy. • Molecular vibrations play a major role in IR spectroscopy. • The factors that govern which bonds (and which vibrations) show up at which frequencies are easily handled by computational chemistry software. In fact, prediction of absorption frequencies in IR spectra can be done using 17th-century classical mechanics, specifically Hooke's Law (devised to explain the vibrational frequencies of springs). Computation is not the focus of this chapter, but it may help you keep track of what kinds of vibrations absorb at what frequencies. Hooke's Law states: • IR light is absorbed if it is in resonance with a vibrating bond; that means the light's frequency is the same as the frequency of the bond vibration, or else an exact multiple of it (2x, 3x, 4x...). It's a little like pushing a child on a swing: unless you are pushing at the same frequency that the swing is swinging, you will not be able to transfer your energy to the swing. Hooke's Law in IR spectroscopy means: • Remember, there are two factors here (bond strength and atomic mass), so you won't be able to make predictions knowing only one factor. Some strong bonds may not absorb at high frequency because they are between heavy atoms. The information is presented mostly to help you organize what bonds absorb at what general frequencies after you have learned about them. The reasons why C-H bending vibrations are at lower frequency than C-H stretching vibrations are also related to Hooke's Law. An H-C-H bending vibration involves three atoms, not just two, so the mass involved is greater than in a C-H stretch. That means a lower frequency.
Also, it turns out that the "stiffness" of a bond angle (analogous to the strength of a spring) is less than the "stiffness" of a bond length; the angle has a little more latitude to change than does the length. Both factors lead to a lower bending frequency. IR4. Carbon Carbon Multiple Bonds Unsaturated hydrocarbons contain only carbon and hydrogen, but also have some multiple bonds between carbons. One type of unsaturated hydrocarbon is an olefin, also known as an alkene. Alkenes contain double bonds between carbons. One example of an alkene is 1-heptene. It looks similar to hexane, except for the double bond from the first carbon to the second. Look at the IR spectrum of 1-heptene. You should see: • Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 14 July 2008) So far, these peaks are the same as the ones seen for hexane. We can assign them as the C-H stretching and bending frequencies, respectively. Looking further, you will also see: • The peak at 3100 cm-1 hardly seems different from the C-H stretch seen before. It is also a C-H stretch, but from a different type of carbon. This stretch involves the sp2 or trigonal planar carbon of the double bond, whereas the peak at 2900 cm-1 involves an sp3 or tetrahedral carbon. The peak at 1650 cm-1 can be identified via computational methods as arising from a carbon-carbon double bond stretch. It is a weak peak because this bond is not very polar. Sometimes it is obscured by other, larger peaks.
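The two Hooke's-law factors, force constant and mass, can be compared numerically. The force constants below are typical assumed single-bond values for illustration, not measured ones:

```python
import math

C_CM = 2.998e10    # speed of light, cm/s
AMU = 1.6605e-27   # kg per atomic mass unit

def stretch_wavenumber(k, m1, m2):
    """Hooke's-law estimate of a stretching wavenumber (cm^-1)
    for two masses (amu) joined by a spring of force constant k (N/m)."""
    mu = m1 * m2 / (m1 + m2) * AMU
    return math.sqrt(k / mu) / (2 * math.pi * C_CM)

# Similar force constants, very different reduced masses:
print(round(stretch_wavenumber(500, 12.0, 1.0)))   # C-H: light hydrogen -> roughly 3000 cm^-1
print(round(stretch_wavenumber(450, 12.0, 12.0)))  # C-C: two heavy atoms -> roughly 1100 cm^-1
```

With nearly equal "spring strengths", the mass difference alone pushes the C-H stretch to almost three times the frequency of the C-C stretch.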
These bonds are pretty polar, so they show up strongly in IR spectroscopy. IR spectroscopy is therefore a good way to determine what heteroatom-containing functional groups are present in a molecule. Compounds Containing C-O Single Bonds Oxygen forms two bonds. An oxygen atom could be found in between two carbons, as in dibutyl ether, or between a carbon and a hydrogen as in 1-butanol. Dibutyl ether is an example of an ether and 1-butanol is an example of an alcohol. If you look at an IR spectrum of dibutyl ether, you will see: • Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 14 July 2008) If you look at an IR spectrum of 1-butanol, you will see: • Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 14 July 2008) Peak shapes are sometimes very useful in recognizing what kind of bond is present. The rounded shape of most O-H stretching modes occurs because of hydrogen bonding between different hydroxy groups. Because protons are shared to varying extent with neighboring oxygens, the covalent O-H bonds in a sample of alcohol all vibrate at slightly different frequencies and show up at slightly different positions in the IR spectrum. Instead of seeing one sharp peak, you see a whole lot of them all smeared out into one broad blob. Since C-H bonds don't hydrogen bond very well, you don't see that phenomenon in an ether, and an O-H peak is very easy to distinguish in the IR spectrum. Problem IR.7. Even though there are only two C-O bonds in dibutyl ether, the C-O stretching mode is even stronger than the peak at 2900 cm-1 arising from 10 different C-H bonds. Explain why. Problem IR.8. The IR spectrum of methyl phenyl ether (aka anisole) has strong peaks at 1050 and 1250 cm-1. 1. 
Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 14 July 2008) IR6. Carbon Oxygen Double Bonds The largest class of oxygen-containing molecules is carbonyl compounds, which contain C=O bonds. A C=O stretch is normally easy to find in an IR spectrum, because it is very strong and shows up in a part of the spectrum that is not cluttered with other peaks. Examples of carbonyl compounds include 2-octanone, a ketone, and butanal, an aldehyde. In an aldehyde, the carbonyl is at the end of a chain, with a hydrogen attached to the carbonyl carbon. If you look at the IR spectrum of 2-octanone: • Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 14 July 2008) Even though there is just one C=O bond, the carbonyl stretch is often the strongest peak in the spectrum. That makes carbonyl compounds easy to identify by IR spectroscopy. If you look at the IR spectrum of butanal: • Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 14 July 2008) The aldehyde C-H bond absorbs at two frequencies because it can vibrate in phase with the C=O bond (a symmetric stretch) and out of phase with the C=O bond (an asymmetric stretch), and these vibrations are of different energies. The probabilities of the symmetric stretch and the asymmetric stretch are about equal, so the two peaks are always about the same size. This unusual C-H peak can often be used to distinguish between an aldehyde and a ketone.
Sometimes more complicated heteroatomic functional groups, containing bonds to more than one heteroatom, have slightly different spectra. Carboxylic acids feature a hydroxyl group bonded to a carbonyl. Hexanoic acid, a carboxylic acid with a six-carbon chain, is one example.

If you look at the IR spectrum of hexanoic acid:

• Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 14 July 2008)

At first, the O-H peak appears to be absent, and the C-H stretch appears to be very broad. In fact, the wide peak between 3000 and 2600 cm-1 is really the usual C-H stretch with a broad O-H stretch superimposed on it. The low frequency of this O-H vibration is related to the partial dissociation of protons due to strong hydrogen bonding.

IR9. Misleading Peaks

There are some practical problems that can make IR interpretation in real life more difficult. Being aware of these problems may make you double-check your suspicions:

• In addition, there are complications that you may run into based on the instrument or technique used to obtain the spectrum.

What Does an IR Spectrum Look Like?

A spectrum is a graph in which the amount of light absorbed is plotted on the y-axis and frequency is plotted on the x-axis. An example is shown below. You can run your finger along the graph and see whether any light of a particular frequency is absorbed; if so, you will see a "peak" at that frequency. If not, you will see "the baseline" at that frequency.

Figure IR1. IR spectrum of benzene. The x-axis labels are, from right to left, 500, 1000, 1500, 2000, 3000 and 4000 cm-1. Source: SDBSWeb : http://riodb01.ibase.aist.go.jp/sdbs/ (National Institute of Advanced Industrial Science and Technology of Japan, 14 July 2008)

In IR spectra (spectra = plural of spectrum):

• As you run your finger from left to right across an IR spectrum, you can see whether or not light is absorbed at particular frequencies.
When the curve dips down, less light is transmitted, which means light is absorbed; the dip in the graph is called a peak. Different bonds absorb different frequencies of light, so the peaks tell you what kinds of bonds are present.

The Fingerprint Region

This page explains what the fingerprint region of an infra-red spectrum is, and how it can be used to identify an organic molecule.

What is the fingerprint region?

This is a typical infra-red spectrum:

Each trough is caused because energy is being absorbed from that particular frequency of infra-red radiation to excite bonds in the molecule to a higher state of vibration - either stretching or bending. Some of the troughs are easily used to identify particular bonds in a molecule. For example, the big trough at the left-hand side of the spectrum is used to identify the presence of an oxygen-hydrogen bond in an -OH group.

The region to the right-hand side of the diagram (from about 1500 to 500 cm-1) usually contains a very complicated series of absorptions, mainly due to all manner of bending vibrations within the molecule. This is called the fingerprint region. It is much more difficult to pick out individual bonds in this region than in the "cleaner" region at higher wavenumbers. The importance of the fingerprint region is that each different compound produces a different pattern of troughs in this part of the spectrum.

Using the fingerprint region

Compare the infra-red spectra of propan-1-ol and propan-2-ol. Both compounds contain exactly the same bonds, and both have very similar troughs in the area around 3000 cm-1 - but compare them in the fingerprint region between 1500 and 500 cm-1. The pattern in the fingerprint region is completely different and could therefore be used to identify the compound. To positively identify an unknown compound, use its infra-red spectrum to identify what sort of compound it is by looking for specific bond absorptions.
That might tell you, for example, that you had an alcohol because it contained an -OH group. You would then compare the fingerprint region of its infra-red spectrum with known spectra measured under exactly the same conditions to find out which alcohol (or whatever) you had.
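The matching strategy just described, comparing the fingerprint region of an unknown against reference spectra, can be sketched as a correlation calculation. Everything in this sketch is an illustrative assumption: the helper names, the use of Pearson correlation as the similarity measure, and the 500-1500 cm-1 window taken from the discussion above.

```python
import math

# Illustrative sketch of fingerprint matching: convert two transmittance
# spectra to absorbance (A = -log10(T)), then compute the Pearson
# correlation over the 500-1500 cm-1 fingerprint region only.
# The window, the helper names, and the similarity measure are assumptions.

def absorbance(percent_T):
    # A = -log10(%T / 100)
    return [-math.log10(t / 100.0) for t in percent_T]

def pearson(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)

def fingerprint_match(wavenumbers, spec_a, spec_b, lo=500.0, hi=1500.0):
    # Compare only the fingerprint region, where the pattern is
    # compound-specific even when the bonds present are identical.
    idx = [i for i, w in enumerate(wavenumbers) if lo <= w <= hi]
    a = absorbance([spec_a[i] for i in idx])
    b = absorbance([spec_b[i] for i in idx])
    return pearson(a, b)
```

Two spectra of the same compound, measured under the same conditions, give a correlation near 1; spectra of propan-1-ol versus propan-2-ol would give a markedly lower value even though the bonds present are identical.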
Raman spectroscopy is a chemical instrumentation technique that exploits molecular vibrations.

• Raman: Application. If one can extract all of the vibrational information corresponding to a molecule, its molecular structure can be determined. In the field of spectroscopy, two main techniques are applied to detect molecular vibrational motions: infrared (IR) spectroscopy and Raman spectroscopy. Raman spectroscopy has unique properties which have made it very widely used.

• Raman: Theory. The phenomenon of Raman scattering of light was first postulated by Smekal in 1923 and first observed experimentally in 1928 by Raman and Krishnan. Raman scattering is most easily seen as the change in frequency for a small percentage of the intensity in a monochromatic beam as the result of coupling between the incident radiation and vibrational energy levels of molecules. A vibrational mode will be Raman active only when it changes the polarizability of the molecule.

• Resonant vs. Nonresonant Raman Spectroscopy. In this section readers will be introduced to the theory behind resonance and non-resonance Raman spectroscopy. Each technique has its share of advantages and challenges, and each of these aspects will be explored.

Raman Spectroscopy

If one can extract all of the vibrational information corresponding to a molecule, its molecular structure can be determined. In the field of spectroscopy, two main techniques are applied to detect molecular vibrational motions: infrared (IR) spectroscopy and Raman spectroscopy. Raman spectroscopy has unique properties which have made it widely used in inorganic, organic, and biological systems [1] and in materials science [2], [3], etc.

Introduction

Generally speaking, vibrational and rotational motions are unique for every molecule. This uniqueness is analogous to a person's fingerprints, hence the term molecular fingerprint.
Studying the nature of molecular vibration and rotation is particularly important in structure identification and molecular dynamics. Two of the most important techniques for studying vibration/rotation information are IR spectroscopy and Raman spectroscopy. IR is an absorption spectroscopy which measures the transmitted light. Coupled with other techniques, such as Fourier transformation, IR has been highly successful in both organic and inorganic chemistry. Unlike IR, Raman spectroscopy measures the scattered light (Figure 2). There are three types of scattered light: Rayleigh scattering, Stokes scattering, and anti-Stokes scattering. Rayleigh scattering is elastic scattering, in which there is no energy exchange between the incident light and the molecule. Stokes scattering happens when the molecule absorbs energy from the incident light, while anti-Stokes scattering happens when the molecule gives up energy to the incident light. Thus, Stokes scattering results in a red shift, while anti-Stokes scattering results in a blue shift (Figure 1). Stokes and anti-Stokes scattering together are called Raman scattering, and they provide the vibration/rotation information. The intensity of Rayleigh scattering is around 10⁷ times that of Stokes scattering. [4] According to the Boltzmann distribution, anti-Stokes scattering is weaker than Stokes scattering. Thus, the main difficulty of Raman spectroscopy is to detect the Raman scattering by filtering out the strong Rayleigh scattering. In order to reduce the intensity of the Rayleigh scattering, multiple monochromators are applied to selectively transmit the needed wavelength range. An alternative is to use Rayleigh filters; there are many types, and one common way to filter out the Rayleigh light is by interference. Because of the weakness of Raman scattering, the resolving power of a Raman spectrometer must be much higher than that of an IR spectrometer: a resolution of 10⁵ is needed in Raman, while 10³ is sufficient in IR.
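The Boltzmann argument for the weakness of anti-Stokes scattering can be made quantitative. The sketch below estimates the anti-Stokes/Stokes intensity ratio from the population of the first excited vibrational level; the fourth-power scattering prefactor is a standard textbook addition not spelled out in this text, so treat the whole expression as an approximation, and note that the 514.5 nm excitation line is an assumed example.

```python
import math

# Rough estimate of the anti-Stokes/Stokes intensity ratio from the
# Boltzmann population of the first excited vibrational level. The
# (nu0 +/- nuM)^4 prefactor is a standard approximation added here;
# it is not part of the discussion above.

H = 6.626e-34      # Planck constant, J s
C = 2.998e10       # speed of light in cm/s, so wavenumbers stay in cm-1
KB = 1.381e-23     # Boltzmann constant, J/K

def anti_stokes_to_stokes(nu0_cm, nuM_cm, T=298.0):
    # exp(-h c nuM / kB T): fraction of molecules starting in v = 1
    boltzmann = math.exp(-H * C * nuM_cm / (KB * T))
    prefactor = ((nu0_cm + nuM_cm) / (nu0_cm - nuM_cm)) ** 4
    return prefactor * boltzmann

# 514.5 nm Ar-ion laser (about 19436 cm-1), 1000 cm-1 Raman shift:
ratio = anti_stokes_to_stokes(19436.0, 1000.0)
```

For a 1000 cm-1 mode at room temperature the ratio is on the order of a percent, and it falls off rapidly as the wavenumber shift grows, exactly the trend stated in the text.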
[5] In order to achieve high resolving power, prisms, grating spectrometers, or interferometers are applied in Raman instruments. Despite the limitations above, Raman spectroscopy has some advantages over IR spectroscopy:

1. Raman spectroscopy can be used with aqueous solutions (whereas water absorbs infrared light strongly and interferes with the IR spectrum).
2. Because of the different selection rules, vibrations inactive in IR spectroscopy may be seen in Raman spectroscopy. This helps to complement IR spectroscopy.
3. Raman spectroscopy does not destroy the sample. In IR spectroscopy, samples need to be dispersed in a transparent matrix, for example by grinding the sample in solid KBr; no such preparation is needed in Raman.
4. Glass vials can be used in Raman spectroscopy (though only in the visible region; in the UV region glass is not applicable because it, too, absorbs strongly).
5. Raman spectroscopy requires relatively little time, so measurements can be made very quickly.

Having weighed the advantages and disadvantages of the Raman technique, we can consider its applications in inorganic, organic, and biological systems, materials science, etc.

Applications

Raman spectroscopy application in inorganic systems

X-ray diffraction (XRD) has been developed into a standard method of determining the structure of solids in inorganic systems. Compared to XRD, it is usually necessary to obtain other information (NMR, electron diffraction, or UV-Visible) besides vibrational information from IR/Raman in order to elucidate the structure. Nevertheless, vibrational spectroscopy still plays an important role in inorganic systems. For example, some small reactive molecules only exist in the gas phase, and XRD can only be applied to the solid state. Also, XRD cannot distinguish between the following bonds: –CN vs. –NC, –OCN vs. –NCO, –CNO vs. –ONC, –SCN vs. –NCS.
[7] Furthermore, IR and Raman are fast and simple analytical methods, and are commonly used for the first approximation analysis of an unknown compound. Raman spectroscopy has considerable advantages over IR in inorganic systems for two reasons. First, since the laser beam used in Raman and the Raman-scattered light are both in the visible region, glass (Pyrex) tubes can be used; glass absorbs infrared radiation and cannot be used in IR. However, some glass tubes, which contain rare earth salts, give rise to fluorescence or spikes, so glass tubes must still be used with care. Secondly, since water is a very weak Raman scatterer but gives a very broad signal in IR, aqueous solutions can be analyzed directly by Raman.

Raman spectroscopy and IR have different selection rules: Raman detects the change in polarizability of a molecule, while IR detects the change in its dipole moment. The principles of both can be found at the Chemwiki Infrared Theory and Raman Theory pages. Thus, some vibrational modes that are active in Raman may not be active in IR, and vice versa; as a result, both Raman and IR spectra are used in structure studies. As an example, in the study of xenon tetrafluoride, there are 3 strong bands in IR, while the solid-state Raman spectrum shows 2 strong bands and 2 weaker bands. This information indicates that xenon tetrafluoride is a planar molecule with D4h symmetry. [8] Another example is the application of Raman spectroscopy to homonuclear diatomic molecules. Homonuclear diatomic molecules are all IR inactive; fortunately, the vibrational modes of all homonuclear diatomic molecules are Raman active.

Raman spectroscopy application in organic systems

Unlike inorganic compounds, organic compounds involve fewer elements, mainly carbon, hydrogen, and oxygen, and only certain functional groups are expected in an organic spectrum. Thus, Raman and IR spectroscopy are widely used in organic systems.
Characteristic vibrations of many organic compounds, both in Raman and in IR, have been widely studied and are summarized in the literature. [5] Qualitative analysis of organic compounds can be done based on such tables of characteristic vibrations.

Table 1: Characteristic frequencies of some organic functional groups in Raman and IR

Vibration    Region (cm-1)    Raman intensity       IR intensity
ν(O-H)       3650~3000        weak                  strong
ν(N-H)       3500~3300        medium                medium
ν(C=O)       1820~1680        strong~weak           very strong
ν(C=C)       1900~1500        very strong~medium    0~weak

"Raman is similar to IR in that they have regions that are useful for functional group detection and fingerprint regions that permit the identification of specific compounds." [1] From the different selection rules of Raman and IR we obtain the mutual exclusion rule [5], which says that for a molecule with a center of symmetry, no mode can be both IR and Raman active. So, if we find a strong band which is both IR and Raman active, the molecule does not have a center of symmetry.

Non-classical Raman spectroscopy

Although classical Raman spectroscopy has been successfully applied in chemistry, the technique has some major limitations [5]:

1. The probability for a photon to undergo Raman scattering is much lower than that of Rayleigh scattering, which gives the Raman technique low sensitivity. Thus, for low-concentration samples, other techniques must be chosen.
2. For samples which very easily generate fluorescence, the fluorescence signal may totally obscure the Raman signal, so the competition between Raman scattering and fluorescence must be considered.
3. In some point groups, such as C6, D6, D6h, C4h, and D2h, there are vibrational modes that are neither Raman nor IR active.
4. The resolution of classical Raman spectroscopy is limited by the resolution of the monochromator.
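Table 1 and the mutual exclusion rule lend themselves to a small lookup sketch. The ranges and intensity labels below are taken from the table above; the data structure and function names are invented for illustration.

```python
# Table 1 as a small lookup, plus the mutual-exclusion reasoning from the
# text: a band that is active in BOTH IR and Raman rules out a center of
# symmetry. Intensity labels are the qualitative ones from the table.

GROUP_FREQUENCIES = {
    "O-H": {"range": (3000, 3650), "raman": "weak",               "ir": "strong"},
    "N-H": {"range": (3300, 3500), "raman": "medium",             "ir": "medium"},
    "C=O": {"range": (1680, 1820), "raman": "strong~weak",        "ir": "very strong"},
    "C=C": {"range": (1500, 1900), "raman": "very strong~medium", "ir": "0~weak"},
}

def candidate_groups(wavenumber):
    """Functional groups whose characteristic region covers this band."""
    return [g for g, d in GROUP_FREQUENCIES.items()
            if d["range"][0] <= wavenumber <= d["range"][1]]

def excludes_center_of_symmetry(ir_active, raman_active):
    """Mutual exclusion rule: a centrosymmetric molecule has no mode that
    is both IR and Raman active, so one such band disproves the center."""
    return ir_active and raman_active
```

As the overlapping O-H/N-H and C=O/C=C regions show, a single band position rarely pins down a group by itself; the Raman and IR intensity columns are what break the tie.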
In order to overcome these limitations, special techniques are used to modify classical Raman spectroscopy. These non-classical Raman techniques include resonance Raman spectroscopy, surface enhanced Raman spectroscopy, and nonlinear coherent Raman techniques such as hyper Raman spectroscopy.

Resonance Raman Scattering (RRS)

The resonance effect is observed when the photon energy of the exciting laser beam is equal to the energy of an allowed electronic transition. Since only the allowed transitions are affected (in terms of group theory, these are the totally symmetric vibrational ones), only a few Raman bands are enhanced (by a factor of 10⁶). As a result, RRS increases the sensitivity of classical Raman spectroscopy, which makes the detection of dilute solutions possible (concentrations as low as 10⁻³ M). RRS is extensively used for biological molecules because of its ability to selectively probe the local environment; as an example, resonance Raman labels are used to study the biologically active sites on a bound ligand. RRS can also be used to study electronic excited states. For example, the excitation profile, i.e. the Raman intensity as a function of the incident laser frequency, reveals the interaction between the electronic states and the vibrational modes. It can also be used to measure the atomic displacement between the ground state and the excited state.

Surface Enhanced Raman Scattering (SERS)

In 1974, Fleischmann discovered that pyridine adsorbed onto silver electrodes showed enhanced Raman signals. This phenomenon is now called surface enhanced Raman scattering (SERS). Although the mechanism of SERS is not yet fully understood, it is believed to result from an enhancement either of the transition polarizability, α, or of the electric field, E, by interaction with the rough metallic support. Unlike RRS, SERS enhances every band in the Raman spectrum and has a high sensitivity.
Due to the high enhancement (by a factor of 10¹⁰~10¹¹), SERS yields a rich spectrum and is an ideal tool for trace analysis and for in situ study of interfacial processes. It is also a better tool for studying highly dilute solutions; a concentration of 4×10⁻¹² M was reported by Kneipp using SERS. [5]

Nonlinear Raman spectroscopy

In a nonlinear process, the output is not linearly proportional to its input. This happens when the perturbation becomes large enough that the response no longer follows the perturbation's magnitude. Nonlinear Raman spectroscopy includes hyper Raman spectroscopy, coherent anti-Stokes Raman spectroscopy, coherent Stokes Raman spectroscopy, stimulated Raman gain, and inverse Raman spectroscopy. Nonlinear Raman spectroscopy is more sensitive than classical Raman spectroscopy and can effectively reduce or remove the influence of fluorescence. The following paragraph will focus on the most useful nonlinear Raman technique, coherent anti-Stokes Raman spectroscopy (CARS):

References

1. Skoog, Holler and Nieman, Principles of Instrumental Analysis, 5th edition.
2. Kazuo Nakamoto, Infrared and Raman Spectra of Inorganic and Coordination Compounds, 5th edition.
3. Daniel C. Harris et al., Symmetry and Spectroscopy: An Introduction to Vibrational and Electronic Spectroscopy.
4. P. Bisson, G. Parodi, D. Rigos, J.E. Whitten, The Chemical Educator, 2006, Vol. 11, No. 2.
5. B. Schrader, Infrared and Raman Spectroscopy, VCH, 1995, ISBN 3-527-26446-9.
6. S.A. Borman, Analytical Chemistry, 1982, Vol. 54, No. 9, 1021A-1026A.
7. K. Nakamoto, Infrared Spectra of Inorganic and Coordination Compounds, 3rd edition, Wiley-Interscience, John Wiley & Sons, New York, 1978.
8. H.H. Claassen, C.L. Chernick, J.G. Malm, J. Am. Chem. Soc., 1963, 85, 1927.

Problems

1. What are the advantages and disadvantages of Raman spectroscopy compared with IR spectroscopy?
2.
Please briefly explain the mutual exclusion rule in Raman and IR spectroscopy.
The phenomenon of Raman scattering of light was first postulated by Smekal in 1923 and first observed experimentally in 1928 by Raman and Krishnan. Raman scattering is most easily seen as the change in frequency for a small percentage of the intensity in a monochromatic beam as the result of coupling between the incident radiation and vibrational energy levels of molecules. A vibrational mode will be Raman active only when it changes the polarizability of the molecule.

Introduction

When monochromatic radiation with a wavenumber $\tilde{\nu}_{0}$ is incident on a system, most of it is transmitted without change, but, in addition, some scattering of the radiation occurs. If the frequency content of the scattered radiation is analyzed, there will be observed not only the wavenumber $\tilde{\nu}_{0}$ associated with the incident radiation but also, in general, pairs of new wavenumbers of the type $\tilde{\nu} ^{'}=\tilde{\nu} _{0}\pm\tilde{\nu} _{M}$. In molecular systems, the wavenumbers $\tilde{\nu}_{M}$ are found to lie principally in the ranges associated with transitions between rotational, vibrational, and electronic levels. Such scattering of radiation with change of wavenumber is called Raman scattering, after the Indian scientist C. V. Raman who, with K. S. Krishnan, first observed this phenomenon in liquids in 1928. The effect had been predicted on theoretical grounds in 1923 by A. Smekal. Due to its very low scattering efficiency, Raman spectroscopy did not become popular until powerful laser systems became available after the 1960s. Now, Raman spectroscopy has become one of the most popular approaches to studying the vibrational structures of molecules, together with infrared spectroscopy. The origin of the modified frequencies found in Raman scattering is explained in terms of energy transfer between the scattering system and the incident radiation.
When a system interacts with radiation of wavenumber $\tilde{\nu}_{0}$ and makes an upward transition from a lower energy level E1 to an upper energy level E2, it must acquire the necessary energy, ΔE = E2 - E1, from the incident radiation. The energy ΔE is expressed in terms of a wavenumber $\tilde{\nu}_{M}$ associated with the two levels involved, where

$\Delta E=hc\tilde{\nu} _{M}$

This energy requirement is regarded as being provided by the absorption of one photon of the incident radiation of energy $hc\tilde{\nu}_{0}$ and the simultaneous emission of a photon of smaller energy $hc\left (\tilde{\nu} _{0}-\tilde{\nu} _{M} \right )$, so that scattering of radiation of lower wavenumber, $\tilde{\nu} _{0}-\tilde{\nu} _{M}$, occurs. Alternatively, the interaction of the radiation with the system may cause a downward transition from a higher energy level E2 to a lower energy level E1, in which case it makes available the energy

$E_2-E_1=hc\tilde{\nu}_M$

Again one photon of the incident radiation of energy $hc\tilde{\nu}_{0}$ is absorbed, with the simultaneous emission of a photon of higher energy $hc\left (\tilde{\nu} _{0}+\tilde{\nu} _{M} \right )$, so that scattering of radiation of higher wavenumber, $\tilde{\nu} _{0}+\tilde{\nu} _{M}$, occurs. In the case of Rayleigh scattering, although there is no resultant change in the energy state of the system, the system still participates directly in the scattering act, causing one photon of incident radiation $hc\tilde{\nu}_{0}$ to be absorbed and a photon of the same energy to be emitted simultaneously, so that scattering of radiation of unchanged wavenumber, $\tilde{\nu}_{0}$, occurs. It is clear that, as far as wavenumber is concerned, a Raman band is to be characterized not by its absolute wavenumber, $\tilde{\nu} ^{'}=\tilde{\nu} _{0}\pm\tilde{\nu} _{M}$, but by the magnitude of its wavenumber shift $\tilde{\nu}_{M}$ from the incident wavenumber. Such wavenumber shifts are often referred to as Raman wavenumbers.
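The wavenumber bookkeeping above reduces to simple arithmetic. The sketch below computes the absolute Stokes and anti-Stokes wavenumbers for a given shift; the 532 nm excitation wavelength and 1000 cm-1 shift are assumed example values, and nu0 = 1e7 / lambda converts nanometers to cm-1.

```python
# Simple arithmetic behind the relations nu' = nu0 -/+ nuM quoted above.
# The excitation wavelength (532 nm) and the 1000 cm-1 shift are assumed
# example values; nu0 = 1e7 / lambda_nm converts nm to cm-1.

def stokes_anti_stokes(lambda0_nm, shift_cm):
    nu0 = 1.0e7 / lambda0_nm         # incident wavenumber, cm-1
    stokes = nu0 - shift_cm          # lower wavenumber (Stokes, red-shifted)
    anti_stokes = nu0 + shift_cm     # higher wavenumber (anti-Stokes, blue-shifted)
    return stokes, anti_stokes

# 532 nm laser, 1000 cm-1 vibrational mode:
s, a = stokes_anti_stokes(532.0, 1000.0)
```

Whatever the excitation wavelength, the Stokes and anti-Stokes bands sit symmetrically about the Rayleigh line, which is why the band is characterized by the shift $\tilde{\nu}_{M}$ rather than by its absolute wavenumber.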
Where it is necessary to distinguish Stokes and anti-Stokes Raman scattering we shall define $\Delta\tilde{\nu}$ to be positive for Stokes scattering and negative for anti-Stokes scattering, that is $\Delta \tilde{\nu} =\tilde{\nu}_{0} -\tilde{\nu}^{'}$ (see Fig. 1). The intensity of anti-Stokes relative to Stokes Raman scattering decreases rapidly with increasing wavenumber shift. This is because anti-Stokes Raman scattering starts from a higher energy state, whose population falls off rapidly with energy according to the Boltzmann distribution.

Classical theory

According to the classical theory of electromagnetic radiation, electric and magnetic fields oscillating at a given frequency emit electromagnetic radiation of the same frequency, so electromagnetic radiation theory can be used to explain light-scattering phenomena. For a majority of systems, only the induced electric dipole moment μ need be taken into consideration. This dipole moment, induced by the electric field E, can be expressed by the power series

$\mathbf{\mu} =\mathbf{\mu}^{\left (1 \right )}+\mathbf{\mu}^{\left (2 \right )}+\mathbf{\mu}^{\left (3 \right )}+\cdots$

where

$\mathbf{\mu}^{\left (1 \right )}=\mathbf{\alpha} \cdot \mathit{\mathbf{E}}$

$\mathbf{\mu}^{\left (2 \right )}=\frac{1}{2}\mathbf{\beta} \cdot \mathit{\mathbf{E}}\mathit{\mathbf{E}}$

$\mathbf{\mu}^{\left (3 \right )}=\frac{1}{6}\mathbf{\gamma } \cdot\mathit{\mathbf{E}}\mathit{\mathbf{E}}\mathit{\mathbf{E}}$

α is termed the polarizability tensor. It is a second-rank tensor with all components in units of C V⁻¹ m². Typical orders of magnitude for the components of α, β, and γ are: α, 10⁻⁴⁰ C V⁻¹ m²; β, 10⁻⁵⁰ C V⁻² m³; and γ, 10⁻⁶¹ C V⁻³ m⁴. Given these values, the contributions of μ(2) and μ(3) are quite small unless the electric field is very high. Since Rayleigh and Raman scattering are observed quite readily at very much lower electric field intensities, one may expect to explain Rayleigh and Raman scattering in terms of μ(1) only.
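The claim that μ(2) and μ(3) are negligible can be checked directly with the quoted component sizes. In the sketch below, the field strength of 10⁴ V/m is an assumed value, meant to represent an ordinary optical field rather than a tightly focused laser.

```python
# Order-of-magnitude check of the statement above, using the quoted
# typical component sizes: alpha ~ 1e-40 C V^-1 m^2, beta ~ 1e-50 C V^-2 m^3,
# gamma ~ 1e-61 C V^-3 m^4. The field strength (1e4 V/m) is an assumption.

def dipole_terms(E, alpha=1e-40, beta=1e-50, gamma=1e-61):
    mu1 = alpha * E                      # linear term, mu(1)
    mu2 = 0.5 * beta * E ** 2            # quadratic term, mu(2)
    mu3 = (1.0 / 6.0) * gamma * E ** 3   # cubic term, mu(3)
    return mu1, mu2, mu3

m1, m2, m3 = dipole_terms(1.0e4)
```

At this field strength the quadratic term is already many orders of magnitude below the linear one, which is why the discussion that follows keeps only μ(1); only at laser intensities high enough to close that gap do the nonlinear terms (hyper Raman and related effects) become observable.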
We shall now consider the interaction of a molecular system with a harmonically oscillating electric field of frequency ω0. To simplify the explanation, we shall ignore rotation and consider only the vibrational part. It is to be expected that the polarizability will be a function of the nuclear coordinates. The variation of the components of the polarizability tensor with the vibrational coordinates is expressed as a Taylor series

$\alpha_{ij} =\left (\alpha_{ij} \right )_{0} +\sum_{k} \left (\frac{\partial \alpha_{ij} }{\partial Q_{k}} \right )_{0}Q_{k}+\frac{1}{2}\sum_{k,l}\left (\frac{\partial^2 \alpha_{ij}}{\partial Q_{k}\partial Q_{l}} \right )_{0}Q_{k}Q_{l}+\cdots$

where (αij)0 is the value of αij at the equilibrium configuration, and Qk, Ql are normal coordinates of vibration at frequencies ωk, ωl. We make the harmonic approximation and neglect the terms which involve powers of Q higher than the first. Fixing our attention on one normal mode, Qk, we get

$\alpha_{ij} =\left (\alpha_{ij} \right )_{0} +\left (\frac{\partial \alpha_{ij} }{\partial Q_{k}} \right )_{0}Q_{k}$

For a harmonic vibration,

$Q_{k}=Q_{k0} \cos \left ( \omega_k t + \delta _k \right )$

so the α tensor resulting from the k-th vibration is

$\mathbf{\alpha}_{k}=\mathbf{\alpha }_{0}+\left (\frac{\partial \mathbf{\alpha}_{k}}{\partial Q_{k}} \right )_{0}Q_{k0}\cos\left (\omega_{k} t+\delta _{k} \right )$

Now, under the influence of electromagnetic radiation at frequency ω0, the induced electric dipole moment μ(1) is

$\mathbf{\mu} ^{\left (1 \right )}=\mathbf{\alpha} _{k}\cdot \mathbf{E}_{0}\cos\omega_{0} t=\mathbf{\alpha} _{0}\cdot \mathbf{E}_{0}\cos\omega_{0} t+\left (\frac{\partial \mathbf{\alpha} _{k}}{\partial Q_{k}} \right )_{0}\cdot \mathbf{E}_{0}Q_{k0}\cos\omega_{0} t\cos\left (\omega_{k} t+\delta _{k} \right )$

$=\mathbf{\alpha}_{0}\cdot \mathbf{E}_{0}\cos\omega_{0} t+\dfrac{1}{2}\left (\dfrac{\partial \mathbf{\alpha}_{k}}{\partial Q_{k}} \right )_{0}\cdot \mathbf{E}_{0}Q_{k0}\cos\left (\left (\omega_{0}+\omega_{k}\right ) t+\delta_{k} \right )+\dfrac{1}{2}\left (\dfrac{\partial \mathbf{\alpha}_{k}}{\partial Q_{k}} \right )_{0}\cdot \mathbf{E}_0 Q_{k0}\cos\left (\left (\omega_{0}-\omega_{k}\right ) t-\delta_{k} \right )$

We see that the linear induced dipole moment μ(1) has three components with different frequencies:

$\mathbf{\alpha} _{0}\cdot \mathbf{E}_{0}\cos\omega_{0} t$, which gives rise to radiation at ω0 and accounts for the Rayleigh scattering;

$\frac{1}{2}\left(\frac{\partial \mathbf{\alpha}_{k}}{\partial Q_{k}}\right)_{0}\cdot\mathbf{E}_{0}Q_{k0}\cos\left(\left(\omega_{0}+\omega_{k}\right)t+\delta_{k}\right)$, which gives rise to radiation at ω0+ωk and accounts for the anti-Stokes Raman scattering; and

$\frac{1}{2}\left(\frac{\partial \mathbf{\alpha}_{k}}{\partial Q_{k}}\right)_{0}\cdot\mathbf{E}_{0}Q_{k0}\cos\left(\left(\omega_{0}-\omega_{k}\right)t-\delta_{k}\right)$, which gives rise to radiation at ω0-ωk and accounts for the Stokes Raman scattering.

From these mathematical manipulations there emerges a useful qualitative picture of the mechanisms of Rayleigh and Raman scattering in terms of classical radiation theory. Rayleigh scattering comes from the dipole oscillating at ω0 induced in the molecule by the electric field of the incident radiation at frequency ω0. Raman scattering arises from the dipole moment oscillating at ω0±ωk, produced by the modulation of the dipole oscillating at ω0 by the molecular vibration at frequency ωk. In other words, the frequencies observed in Raman scattering are beat frequencies between the radiation frequency ω0 and the molecular vibrational frequency ωk.

Quantum mechanical treatment

According to the quantum theory, radiation is emitted or absorbed as a result of a system making a downward or upward transition between two discrete energy levels.
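The trigonometric step that splits the modulated dipole into its three frequency components is the product-to-sum identity. The sketch below spot-checks it numerically; the frequencies and phases are random test values, not physical parameters.

```python
import math
import random

# Numeric spot-check of the product-to-sum step used in the derivation:
# cos(w0 t) cos(wk t + d) = 1/2 cos((w0+wk)t + d) + 1/2 cos((w0-wk)t - d),
# which is what splits the modulated dipole into its Rayleigh, anti-Stokes,
# and Stokes components.

def beat_identity_error(w0, wk, d, t):
    lhs = math.cos(w0 * t) * math.cos(wk * t + d)
    rhs = 0.5 * math.cos((w0 + wk) * t + d) + 0.5 * math.cos((w0 - wk) * t - d)
    return abs(lhs - rhs)

random.seed(0)
max_err = max(beat_identity_error(random.uniform(1, 10), random.uniform(1, 10),
                                  random.uniform(0, 2 * math.pi),
                                  random.uniform(0, 2 * math.pi))
              for _ in range(1000))
```

The residual is at the level of floating-point rounding for every tested combination, confirming that the only frequencies present in the modulated dipole are ω0 and the two sidebands ω0±ωk.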
A quantum theory of spectroscopic processes should, therefore, treat the radiation and molecule together as a complete system, and explore how energy is transferred between the radiation and the molecule as a result of their interaction. A transition between energy levels of the molecular system takes place with the emission or absorption of radiation, provided a transition moment associated with the initial and final molecular states is non-zero. The transition moment is defined as

$\mathbf{M}_{fi}=\left \langle \Psi _{f}\mid \mathbf{\mu} \mid \Psi _{i}\right \rangle$

in the Dirac bracket notation, where Ψi and Ψf are the wave functions of the initial and final states, respectively, and μ is the dipole moment operator. As discussed in the classical part, the linear induced dipole moment can be expressed as

$\mathbf{\mu}^{\left (1 \right )}=\mathbf{\alpha} \cdot \mathit{\mathbf{E}}$

Therefore, in the quantum mechanical treatment, if a transition from an initial state to a final state is induced by incident radiation at frequency ω0, the transition moment is given by

$\mathbf{\mu} _{fi}^{\left (1 \right )}=\left \langle \Psi _{f}\mid \mathbf{\alpha } \mid \Psi _{i}\right \rangle\cdot \mathbf{E}$

We now examine in more detail the nature of a typical matrix element of the polarizability tensor, such as [αxy]fi, for Raman scattering. Just as in the classical theory, we ignore the rotational wave function and consider the vibrational part only:

$\left [\alpha _{xy} \right ]_{fi}=\left \langle \Phi _{f}\mid \alpha_{xy} \mid \Phi _{i}\right \rangle$

where Φ is the vibrational wave function.
In classical theory,

$\alpha_{xy} =\left (\alpha_{xy} \right )_{0} +\sum_{k} \left (\frac{\partial \alpha_{xy} }{\partial Q_{k}} \right )_{0}Q_{k}+\frac{1}{2}\sum_{k,l}\left (\frac{\partial^2 \alpha_{xy}}{\partial Q_{k}\partial Q_{l}} \right )_{0}Q_{k}Q_{l}+\cdots$

Introducing the quantum part, we obtain (considering only up to the first-order term)

$\left [\alpha _{xy} \right ]_{fi}=\left (\alpha _{xy} \right )_{0}\left \langle \Phi _{f}\mid \Phi _{i}\right \rangle+\sum_{k}\left (\frac{\partial \alpha _{xy}}{\partial Q_{k}} \right )_{0}\left \langle \Phi _{f}\mid Q_{k}\mid \Phi _{i}\right \rangle$

In the harmonic oscillator model, the total vibrational wave function is the product of the harmonic oscillator wave functions for each of the normal modes of vibration. Thus,

$\Phi _{i}=\prod_{k}\Phi _{v_{k}^{i}}\left (Q_{k} \right )$

where $\Phi _{v_{k}^{i}}\left (Q_{k} \right )$ is the harmonic oscillator wave function associated with the normal coordinate Qk, which has vibrational quantum number vki in the initial state. For harmonic oscillator functions, we have

$\left \langle \Phi_{v_{k}^{f}}\left (Q_{k} \right )\mid \Phi_{v_{k}^{i}}\left (Q_{k} \right )\right \rangle=\begin{cases} 0 & \text{for}\ v_{k}^{f}\neq v_{k}^{i}\\ 1 & \text{for}\ v_{k}^{f}= v_{k}^{i} \end{cases}$

and

$\left \langle \Phi _{v_{k}^{f}}\left (Q_{k} \right )\mid Q_{k}\mid \Phi _{v_{k}^{i}}\left (Q_{k} \right )\right \rangle=\begin{cases} 0 & \text{for}\ v_{k}^{f}= v_{k}^{i}\\ \left (v_{k}^{i}+1 \right )^{1/2}b_{v_{k}} & \text{for}\ v_{k}^{f}= v_{k}^{i}+1 \\ \left (v_{k}^{i} \right )^{1/2}b_{v_{k}} & \text{for}\ v_{k}^{f}= v_{k}^{i}-1 \end{cases}$

where

$b_{v_{k}}=\sqrt{\frac{h}{8\pi^{2}\nu_{k}}}$

Selection rules

We are now able to find the conditions which must be satisfied for the transition moment to be non-zero. We consider first the zero-order term, which accounts for the Rayleigh scattering.
This term is non-zero only if $v_{k}^{f}=v_{k}^{i}$ for every mode, which means none of the vibrational quantum numbers change during the transition from initial state i to final state f. Thus, for Rayleigh scattering, the quantum mechanical treatment and the classical theory give the same result. We then turn to the first-order term

$\sum_{k}\left (\frac{\partial \alpha _{xy}}{\partial Q_{k}} \right )_{0}\left \langle \Phi _{f}\mid Q_{k}\mid \Phi _{i}\right \rangle$

which accounts for the Raman scattering. The k-th summand is zero unless every factor in the product is non-zero, and to achieve this the following conditions must be satisfied: for all modes except the k-th, the vibrational quantum numbers must be unchanged during the transition, i.e. $v_{j}^{f}=v_{j}^{i}$ where j≠k; and, for the k-th mode, the vibrational quantum number must change by one unit, i.e. $v_{k}^{f}=v_{k}^{i}\pm 1$. The transition moment is associated with Stokes Raman scattering for Δvk=1, and with anti-Stokes Raman scattering for Δvk=-1. These conditions are a result of the properties of harmonic oscillator wave functions. It follows from these arguments that, in the harmonic approximation, only vibrational fundamentals, i.e. transitions in which only one vibrational quantum number changes by one unit, can be observed in Raman scattering. However, the Δvk=±1 restriction is a necessary but not a sufficient condition for the occurrence of Raman scattering at the k-th vibrational mode. The vibrational mode must also be Raman active, i.e. at least one of the elements of the derived polarizability tensor must be non-zero. It can be rigorously established by group theory that the elements of the derived polarizability will be non-zero only if they have the same symmetry as the second-order functions x2, y2, z2, xy, yz, xz. In other words, the irreducible representation of a given vibrational mode must have a basis in x2, y2, z2, xy, yz or xz.
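The harmonic oscillator matrix elements invoked above can be checked numerically. The sketch below works in the dimensionless normal coordinate q (so the b_vk prefactor is absorbed into the coordinate, and in these units ⟨1|q|0⟩ = (1/2)^(1/2)); the grid limits and spacing are assumptions chosen for accuracy.

```python
import math

# Numerical spot-check of the harmonic oscillator matrix elements above:
# <v_f|q|v_i> vanishes unless v_f = v_i +/- 1. The dimensionless normal
# coordinate q is used, so the b_vk prefactor is absorbed; in these units
# <1|q|0> = (1/2)**0.5.

def hermite(v, q):
    # Physicists' Hermite polynomials via H_{n+1} = 2q H_n - 2n H_{n-1}.
    h_prev, h = 1.0, 2.0 * q
    if v == 0:
        return h_prev
    for n in range(1, v):
        h_prev, h = h, 2.0 * q * h - 2.0 * n * h_prev
    return h

def psi(v, q):
    # Normalized harmonic oscillator eigenfunction in dimensionless q.
    norm = 1.0 / math.sqrt(2.0 ** v * math.factorial(v) * math.sqrt(math.pi))
    return norm * hermite(v, q) * math.exp(-q * q / 2.0)

def q_matrix_element(v_f, v_i, q_max=10.0, n=2000):
    # Simple grid quadrature of <v_f| q |v_i> over [-q_max, q_max].
    dq = 2.0 * q_max / n
    return sum(psi(v_f, -q_max + i * dq) * (-q_max + i * dq)
               * psi(v_i, -q_max + i * dq) * dq
               for i in range(n + 1))
```

The quadrature reproduces the selection rule: the element is essentially zero for Δv = 0 or Δv = 2, and matches the analytic value for Δv = 1.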
Finally, we can establish a much better basis for determining selection rules for vibrational transitions in the Raman effect if we consider the properties of the vibrational transition polarizability components rather than the derived polarizability tensor components. For fundamental vibrational transitions, where in the initial state all vibrational quantum numbers are zero and in the final state only the k-th vibrational quantum number has changed to unity, $\left [\alpha _{xy} \right ]_{fi}=\left \langle \Phi _{1}\mid \alpha_{xy} \mid \Phi _{0}\right \rangle$ According to group theory, this integral is non-zero only if αxy and Φ1(Qk) belong to the same symmetry species, which implies that, under each symmetry operation of the molecule in question, αxy and Φ1(Qk) transform in the same way. This constitutes a general selection rule for the Raman activity of a fundamental transition. In its most general form, covering all types of transitions, the selection rule is as follows: a transition between two states, Ψi and Ψf, is Raman forbidden unless at least one of the triple products of the type ΨfαxyΨi belongs to a representation whose structure contains the totally symmetric species.
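The triple-product rule can be made concrete with a character table. A minimal sketch for the C2v point group (chosen here purely as an illustration; the characters are the standard ones): a mode of symmetry B1 is Raman allowed through αxz, which also transforms as B1, because their product contains the totally symmetric species A1.

```python
import numpy as np

# C2v character table over the classes (E, C2, sigma_v(xz), sigma_v'(yz)); group order h = 4
chars = {"A1": [1, 1, 1, 1], "A2": [1, 1, -1, -1],
         "B1": [1, -1, 1, -1], "B2": [1, -1, -1, 1]}

def times_totally_symmetric(*irreps):
    """Number of times A1 appears in the direct product of the given irreps."""
    prod = np.prod([chars[r] for r in irreps], axis=0)
    return int(np.dot(prod, chars["A1"])) // 4

print(times_totally_symmetric("B1", "B1"))  # B1 x B1 contains A1: Raman allowed via alpha_xz
print(times_totally_symmetric("B1", "A2"))  # B1 x A2 = B2: forbidden through this component
```

In C2v every irreducible representation spans one of the quadratic functions, which is why all vibrations of a C2v molecule such as water are Raman active.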
Raman spectroscopy is a chemical instrumentation technique that exploits molecular vibrations. It does not require large sample sizes and is non-destructive to samples. It is capable of qualitative analysis of samples, and the intensity of the spectral bands produced assists in quantitative analysis as well. Raman spectroscopy is even being used in areas outside of the physical sciences (e.g. archeology and art preservation) due to the characteristics mentioned above. Introduction Raman spectroscopy is based on scattering of radiation (Raman scattering), a phenomenon discovered in 1928 by physicist Sir C. V. Raman. The field of Raman spectroscopy was greatly enhanced by the advent of laser technology during the 1960s.1 Resonance Raman also helped to advance the field. This technique is more selective than non-resonance Raman spectroscopy. It works by exciting the analyte with incident radiation corresponding to its electronic absorption bands.2 This causes an augmentation of the emission by up to a factor of 10^6 in comparison to non-resonance Raman.2,3 In this section readers will be introduced to the theory behind resonance and non-resonance Raman spectroscopy. Each technique has its share of advantages and challenges, and each of these aspects will be explored. Theory Raman scattering is the basis of the two Raman techniques. For a vibration to Raman scatter, the molecule's polarizability must change during that vibration.2 Generally, the more electrons a molecule has, the greater its polarizability. Polarizability ($\alpha$) is a measure of an applied electric field's (E) ability to generate a dipole moment (µ) in the molecule.5 In other words, it is an alteration of a molecule's electron cloud. Mathematically, this can be expressed by the following equation: $\mu = \alpha E \label{1}$ To help provide a better visualization of how Raman spectroscopy works, a generic diagram can be seen in Figure 2.
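The relation µ = αE above also captures the classical origin of Raman scattering: if a vibration modulates α at frequency ν_vib while the field oscillates at ν0, the induced dipole acquires sidebands at ν0 ± ν_vib. A sketch with arbitrary illustrative frequencies (all numeric values hypothetical):

```python
import numpy as np

nu0, nu_vib = 100.0, 10.0                 # laser and vibrational frequencies (arbitrary units)
t = np.arange(4096) / 4096.0              # one time unit, 4096 samples
E = np.cos(2 * np.pi * nu0 * t)           # incident field
alpha = 1.0 + 0.2 * np.cos(2 * np.pi * nu_vib * t)  # polarizability modulated by the vibration
mu = alpha * E                            # induced dipole, mu = alpha * E

spectrum = np.abs(np.fft.rfft(mu))
peaks = np.flatnonzero(spectrum > 50)
print(peaks)  # Stokes at nu0 - nu_vib, Rayleigh at nu0, anti-Stokes at nu0 + nu_vib
```

The three peaks (bins 90, 100, 110 here) are the Stokes, Rayleigh, and anti-Stokes components discussed in this section.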
A sample is irradiated with monochromatic laser light, which is then scattered by the sample. The scattered light passes through a filter to remove any stray light that may have also been scattered by the sample.2 The filtered light is then dispersed by the diffraction grating and collected on the detector. This set-up works for both the non-resonance and resonance Raman techniques. Non-resonance Raman scattering occurs when the radiation interacts with a molecule, resulting in polarization of the molecule's electrons.4 The increase in energy from the radiation excites the electrons to an unstable virtual state; therefore, the interaction is almost immediately discontinued and the radiation is emitted (scattered) at a slightly different energy than the incident radiation.4 Resonance Raman scattering occurs in a similar fashion. However, the incident radiation is at a frequency near the frequency of an electronic transition of the molecule of interest. This provides enough energy to excite the electrons to a higher electronic state. Figure 1 provides a visual depiction of what non-resonance and resonance Raman scattering look like in terms of energy levels. Advantages of Non-Resonance and Resonance Raman Instrumental techniques each have certain strengths that make them better suited for some jobs as opposed to others. Non-resonance Raman is a good example of this notion. It is considered well suited for analyzing water-containing samples due to water's low polarizability. Non-resonance and resonance Raman each have the capability to analyze samples in the gaseous, liquid, or solid state. Their non-destructive nature makes them great candidates for the analysis of delicate materials. Archeologists and art historians even find resonance Raman spectroscopy useful for studying and authenticating artifacts and artwork.4 Monochromatic light in the ultraviolet or near-infrared regions is generally used for both resonance and non-resonance Raman spectroscopy.
A tunable laser is preferred for resonance Raman and can be an advantage: only one laser is necessary to do analyses of multiple samples, each of which requires a different excitation wavelength.4 This allows the user to switch out samples without having to switch out the lasers as well; it becomes a matter of just changing the setting on the tunable laser. If the laboratory is not equipped with a tunable laser, any laser that is available can be used to achieve the enhancement of the Raman signal. The only stipulation is that the available laser must have a frequency as near as possible to one of the analyte's electronic transitions.2 Therefore, researchers conducting resonance Raman spectroscopy without a tunable laser are at the mercy of whatever laser they do have in the laboratory. Resonance Raman spectroscopy has greater sensitivity than its non-resonance counterpart. It is capable of analyzing samples with concentrations as low as 10^-8 M; non-resonance Raman can analyze samples with concentrations no lower than about 0.1 M. Resonance Raman spectroscopy produces a spectrum with relatively few lines, because the technique only augments Raman signals affiliated with chromophores in the analyte.2,4 This makes the technique particularly useful for the analysis of larger molecules like biomolecules. Fluorescence Disadvantage Fluorescence is a problem for both Raman techniques, particularly when using sources in the visible range.2 Non-resonance Raman signals are generally weak and can be easily overwhelmed by fluorescence signals.6 In addition, fluorescence has a longer excited-state lifetime compared to Raman scattering, causing an inability to detect Raman signals.2,6 Even when the analyte is not a fluorescent molecule, the signal could be a result of the sample matrix content (i.e. solvent or contaminants).
Resonance Raman is particularly at risk of inducing fluorescence because it uses sources at frequencies near that of a molecule's electronic transition. The radiation is more likely to be absorbed, making fluorescence a possible mechanism for the electrons' return to the ground state. Thus, highly fluorescent molecules should be avoided when using Raman spectroscopy, especially resonance Raman. Figure 3 is a general illustration of how a fluorescence signal can overwhelm Raman signals. Figure 3: Two generic Raman spectra overlaid. The blue Raman spectrum represents one obtained via an excitation source in the visible range. The black Raman spectrum represents one obtained via an excitation source in the near-infrared range. The black Raman signals are free of fluorescence interference. There are techniques that spectroscopists use to avoid fluorescence interference. For instance, background subtraction can be done. Another option is to use near-infrared radiation to excite the sample as a means to overcome fluorescence.6 A more elaborate method was used by Matousek et al., who took advantage of the differences in excited-state lifetimes for Raman scattering and fluorescence. It required implementing shifted excitation Raman difference spectroscopy (SERDS) in conjunction with a device known as a Kerr gate to successfully obtain a resonance Raman spectrum of the rhodamine 6G dye.6 SERDS is a technique that uses two excitation wavelengths to produce two Raman spectra.
The excitation wavelengths differ by a value that corresponds to the bandwidth of the Raman signal.6 The two spectra are subtracted from each other and the difference spectrum is reconstructed by means of mathematical processing.6 A Kerr gate can be used to remove fluorescence from a Raman signal based on their different lifetimes.6 The device consists of a pair of crossed polarizers, a Kerr medium, and an additional laser to provide a gating pulse.7 Now consider a sample that has been irradiated, resulting in fluorescence and Raman scattering. The fluorescence and Raman scatter pass through the first crossed polarizer and then through the Kerr medium (Matousek et al. used carbon disulfide as the Kerr medium). The Kerr gate is referred to as being open when a laser pulse (the gating pulse) strikes the Kerr medium as the fluorescence and Raman scattered light pass through.7 Furthermore, the gate remains open for a length of time corresponding to the lifetime of the Raman scatter.7 The interaction of the gating pulse with the Kerr medium makes the medium transiently anisotropic, altering the polarization of the light transmitted beyond it.7 The light then goes from linear polarization to elliptical polarization. However, Raman scattered light can be selectively switched back to linear polarization by selecting the appropriate propagation length for Kerr medium transmission, or by altering the degree of anisotropy.7 The Raman scattered light is then allowed to pass through the second crossed polarizer and on to the spectrometer. Any fluorescence that passes through the Kerr medium is prevented from entering the spectrometer due to its inability to transmit through the second crossed polarizer on account of its new elliptical polarization. Figure 4 should assist with the visualization of this process. Conclusions This module was meant to provide an introduction to the similarities and differences between non-resonance and resonance Raman spectroscopy.
Notice that they each have their own advantages, making both of them powerful analytical techniques. Non-resonance Raman is advantageous in situations where IR spectroscopy struggles (e.g. aqueous samples), while resonance Raman appears to have the upper hand when compared to its non-resonance counterpart. The important thing is to choose the technique that is most appropriate for the work to be done. Problems 1. What does Dr. Nivens' quote mean and what can be done to avoid the problem? 2. Archeology and art preservation were mentioned as fields in which Raman spectroscopy is useful due to the following characteristics: non-destructive to the sample, qualitative analysis, and quantitative analysis. Name an additional area/field that could benefit from Raman due to these characteristics and why. 3. Does non-resonance Raman or resonance Raman have a better limit of detection? 4. What is polarizability? 5. You are working on a project that is trying to determine if lycopene is absorbed better via supplements or diet. You only have a Raman spectrometer available to conduct your analysis, so you decide to determine the amount absorbed in the body by subtracting the amount excreted in urine from the total intake. In general, would it be better to detect lycopene in your biological sample using non-resonance or resonance Raman spectroscopy, and why? Solutions 1. "Fluorescence is the enemy of Raman" refers to the fact that fluorescence induced during Raman spectroscopy will inhibit detection of Raman signals. One way to avoid such a problem is to use an excitation source that is in the ultraviolet or near-infrared range. 2. Answers will vary, but here is an example: Forensic science is a field that could benefit from the indicated characteristics of Raman spectroscopy. The non-destructive nature will ensure that precious evidence is not damaged or destroyed during analysis. The evidence can then be preserved for additional testing.
The qualitative and quantitative factors could provide information to investigators as to what is present in their evidence (i.e. bodily fluids, drugs, accelerants, etc.). 3. Resonance Raman is capable of analyte detection at concentrations as low as 10^-8 M. The limit of detection is much higher for non-resonance Raman. 4. Polarizability measures the capability of an applied electric field to induce a dipole moment in a molecule. 5. The sample will most likely contain a multitude of things because it is biological. Resonance Raman would be the better choice because lycopene contains chromophores, which are targeted for excitation in that technique. Thus the spectrum will not be cluttered by all the other contents of the biological sample.
• Combination Bands, Overtones and Fermi Resonances Combination bands, overtones, and Fermi resonances are used to help explain and assign peaks in vibrational spectra that do not correspond with known fundamental vibrations. Combination bands and overtones generally have lower intensities than the fundamentals, and Fermi resonance causes a split and shift in intensity of peaks with similar energies and identical symmetries. Hot bands will also be briefly addressed. • Introduction to Vibrations IR spectroscopy, which has become so useful in the identification, estimation, and structure determination of compounds, draws its strength from being able to identify the various vibrational modes of a molecule. A complete description of these vibrational normal modes, their properties and their relationship with the molecular structure is the subject of this article. • Isotope Effects in Vibrational Spectroscopy This page provides an overview of how an isotope can affect the frequencies of the vibrational modes of a molecule. Isotopic substitution is a useful technique due to the fact that the normal modes of an isotopically substituted molecule are different than the normal modes of an unsubstituted molecule, leading to different corresponding vibrational frequencies for the substituted atoms. • Mode Analysis • Normal Modes Normal modes are used to describe the different vibrational motions in molecules. Each mode can be characterized by a different type of motion and each mode has a certain symmetry associated with it. Group theory is a useful tool in order to determine what symmetries the normal modes contain and predict if these modes are IR and/or Raman active. Consequently, IR and Raman spectroscopy are often used to obtain vibrational spectra. • Number of Vibrational Modes in a Molecule All atoms in a molecule are constantly in motion while the entire molecule experiences constant translational and rotational motion. A diatomic molecule contains only a single vibrational motion.
Polyatomic molecules have more than one type of vibration, known as normal modes. • Symmetry Adapted Linear Combinations The construction of linear combinations of the basis of atomic movements allows the vibrations belonging to irreducible representations to be investigated. The wavefunctions of these symmetry equivalent orbitals are referred to as Symmetry Adapted Linear Combinations, or SALCs. Vibrational Modes Combination bands, overtones, and Fermi resonances are used to help explain and assign peaks in vibrational spectra that do not correspond with known fundamental vibrations. Combination bands and overtones generally have lower intensities than the fundamentals, and Fermi resonance causes a split and shift in intensity of peaks with similar energies and identical symmetries. Hot bands will also be briefly addressed. Introduction Fundamental vibrational frequencies of a molecule correspond to transitions from v=0 to v=1. A non-linear molecule has 3N-6 vibrations (where N is the number of atoms). The same holds true for linear molecules, except that the formula 3N-5 is used, because a linear molecule has one fewer rotational degree of freedom. (For a more detailed explanation see: Normal Modes). Figure 1 shows a diagram for a vibrating diatomic molecule. The levels denoted by vibrational quantum numbers v represent the potential energy of the harmonic (quadratic) oscillator. The transition $0 \rightarrow 1$ is the fundamental, transitions $0 \rightarrow n$ (n>1) are called overtones, and transitions $1 \rightarrow n$ (n>1) are called hot transitions (hot bands). Symmetry Requirements The symmetry requirement of a vibrational transition is given by the transition moment integral, $\mu =\int \psi ^{\ast } \boldsymbol{\mu} \psi d\tau \neq 0$ where $\boldsymbol{\mu }=\textbf{i}\mu _{x}+\textbf{j}\mu _{y}+\textbf{k}\mu _{z}$ This integral can be separated into its x, y, and z components.
Because the ground state contains the totally symmetric representation, the coordinate x, y, or z and ψ* must belong to the same representation so that the direct product contains the totally symmetric representation. Harmonic Oscillator Breakdown The harmonic oscillator approximation is convenient to use for diatomic molecules, with quantized vibrational energy levels given by the following equation: $E_{v}(cm^{-1}) = \left (v + \frac{1}{2} \right) \omega_{e}$ A more accurate description of the vibrational energies is given by the anharmonic oscillator (also called the Morse potential) with energy $E_{v}(cm^{-1}) = \omega_{e} \left (v + \frac{1}{2} \right) - \omega_{e}x_{e} \left (v + \frac{1}{2} \right)^2 + \omega_{e}y_{e} \left (v + \frac{1}{2} \right)^3 +...$ where ωe is the vibrational frequency at the equilibrium internuclear separation re and ωe >> ωexe >> ωeye. This accounts for the fact that, as the potential deviates from the perfectly parabolic shape, the levels converge with increasing quantum number. It is because of this anharmonicity that overtones can occur. While it may seem that the harmonic oscillator and the anharmonic oscillator are closely related, this is in fact not the case. The differences in the wavefunctions lead to a breakdown of the selection rules; specifically, the Δv=±1 selection rule cannot be strictly applied, and higher-order terms must be accounted for in the energy calculations. There is only a small correction from the ground state to the first excited state for the anharmonic case, but it becomes much larger for more highly excited states, which are populated as the temperature increases. The deviation from the harmonic oscillator to the anharmonic oscillator is handled by expanding the energy function with additional terms and treating these terms with perturbation theory. This yields the correct vibrational energies and also relaxes the selection rules.
Transitions with Δv=±1 are still most predominant; however, weaker overtones with Δv=±2, ±3,… can occur. It should be noted that a Δv=2 transition does not occur at twice the frequency of the fundamental transition, but at a lower frequency. Overtone transitions are not always observed, especially in larger molecules, because the transitions become weaker with increasing Δv. Overtones Overtones occur when a vibrational mode is excited from $v=0$ to $v=2$ (the first overtone) or from $v=0$ to $v=3$ (the second overtone). The fundamental transitions, $\Delta v=\pm 1$, are the most commonly occurring, and the probability of overtones rapidly decreases as the number of quanta ($\Delta v=\pm n$) increases. Based on the harmonic oscillator approximation, the energy of an overtone transition would be n times the energy of the fundamental transition, but the anharmonic oscillator calculations show that the overtones lie below a multiple of the fundamental frequency. This is demonstrated with the vibrations of the diatomic $\ce{HCl}$ in the gas phase:

Table 1: HCl vibrational spectrum.
Transition | Term | obs [cm-1] | Harmonic [cm-1] | Anharmonic [cm-1]
$0 \rightarrow 1$ | fundamental | 2,885.9 | 2,885.9 | 2,885.3
$0 \rightarrow 2$ | first overtone | 5,668.0 | 5,771.8 | 5,665.0
$0 \rightarrow 3$ | second overtone | 8,347.0 | 8,657.7 | 8,339.0
$0 \rightarrow 4$ | third overtone | 10,923.1 | 11,543.6 | 10,907.4
$0 \rightarrow 5$ | fourth overtone | 13,396.5 | 14,429.5 | 13,370

We can see from Table 1 that the anharmonic frequencies correspond much better with the observed frequencies, especially as the vibrational level increases.
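The harmonic and anharmonic columns of Table 1 can be reproduced from the HCl constants quoted in the problem set at the end of this page (harmonic spacing 2885.9 cm-1; ωe = 2990.9 cm-1, ωexe = 52.82 cm-1):

```python
WE, WEXE = 2990.9, 52.82      # HCl anharmonic constants, cm^-1
HARMONIC_SPACING = 2885.9     # cm^-1

def harmonic(v):
    """Harmonic 0 -> v transition wavenumber (levels equally spaced)."""
    return HARMONIC_SPACING * v

def anharmonic(v):
    """Anharmonic 0 -> v transition wavenumber: we*v - wexe*v*(v+1)."""
    return WE * v - WEXE * v * (v + 1)

for v in range(1, 6):
    print(v, round(harmonic(v), 1), round(anharmonic(v), 1))
```

Each anharmonic value lies below the corresponding harmonic multiple, reproducing the convergence of levels described above.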
Special case If one of the symmetries is doubly degenerate in the excited state, a recursion formula is required to determine the symmetry of the vth wave function, given by $\chi _{v}(R) = \frac{1}{2} \left [\chi (R) \chi_{v-1}(R) + \chi (R^{v}) \right]$ where χv(R) is the character under the operation R for the vth energy level; χ(R) is the character under R for the degenerate irreducible representation; χv-1(R) is the character of the (v-1)th energy level; and χ(Rv) is the character of the operation Rv. This is demonstrated for the D3h point group below. Combination Bands Combination bands are observed when two or more fundamental vibrations are excited simultaneously. One reason a combination band might occur is that a fundamental vibration is forbidden by symmetry. This is comparable to vibronic coupling in electronic transitions, in which a fundamental mode can be excited and allowed as a "doubly excited state." Combination implies addition of two frequencies, but it is also possible to have a difference band, where the frequencies are subtracted. To determine whether two states can be excited simultaneously, the transition moment integral must be evaluated with the appropriate excited-state wavefunction. For example, in the transition $\psi ^{_{1}}\left( 0 \right )\psi ^{_{2}}\left ( 0 \right )\psi ^{_{3}}\left ( 0 \right )\rightarrow \psi ^{_{1}}\left ( 2 \right )\psi ^{_{2}}\left ( 0 \right )\psi ^{_{3}}\left ( 1 \right )$ the symmetry of the excited state will be the direct product of the irreducible representations for ψ1(2) and ψ3(1). For example, in the point group C4v, v1 has symmetry e and v3 has symmetry a2.
By performing the calculations listed above, it is determined that ψ1(2) has (a1 + b1 + b2) symmetry: $\Gamma [\psi_{es}] = \Gamma [\psi _{1}(2)] \otimes \Gamma [\psi _{3}(1)] = (a_{1} + b_{1} + b_{2})\times a_{2} = a_{2} + b_{2} + b_{1}$ A practical use for understanding overtones and combination bands applies to organic solvents used in spectroscopy. Most organic liquids have strong overtone and combination bands in the mid-infrared region; therefore, acetone, DMSO, or acetonitrile should only be used in very narrow spectral regions. Solvents such as CCl4, CS2 and CDCl3 can be used above 1200 cm-1. Hot Bands Hot bands are observed when an already excited vibration is further excited. For example, a v1 to v1' transition corresponds to a hot band in the IR spectrum. These transitions are temperature dependent, with lower signal intensity at lower temperature and higher signal intensity at higher temperature. This is because at room temperature only the ground state is highly populated (kT ~ 200 cm-1), based on the Boltzmann distribution. The Maxwell-Boltzmann distribution law states that if molecules in thermal equilibrium occupy two states of energy εj and εi, the relative populations of molecules occupying these states will be $\large \dfrac{n_{j}}{n_{i}}=\dfrac{e^{-\varepsilon _{j}/kT}}{e^{-\varepsilon _{i}/kT}}=e^{-\Delta \varepsilon /kT}$ where k is the Boltzmann constant and T is the temperature in kelvin. In the harmonic oscillator model, hot bands are not easily distinguished from fundamental transitions because the energy levels are equally spaced. Because the spacing between energy levels in the anharmonic oscillator decreases with increasing vibrational level, hot bands occur at lower frequencies than the fundamentals. Also, the transition moment integrals are slightly different, since the initial state is not the totally symmetric v=0 level.
$\psi ^{_{1}} \left( 0 \right )\psi ^{_{2}}\left ( 0 \right )\psi ^{_{3}}\left ( 1 \right )\rightarrow \psi ^{_{1}}\left ( 0 \right )\psi ^{_{2}}\left ( 0 \right )\psi ^{_{3}}\left ( 2 \right )$ Fermi Resonances Fermi resonance results in the splitting of two vibrational bands that have nearly the same energy and the same symmetry, in both IR and Raman spectroscopy. The two bands are usually a fundamental vibration and either an overtone or a combination band. The wavefunctions of the two resonant vibrations mix through the anharmonic terms neglected in the harmonic oscillator approximation, and the result is a shift in frequency and a change in intensity in the spectrum. As a result, two strong bands are observed in the spectrum, instead of the expected strong and weak bands. It is not possible to determine the contribution from each vibration because of the resulting mixed wavefunction. If the symmetry requirements are fulfilled and the energies of the two states are similar, mixing occurs, and the resulting modes can be described by a linear combination of the two interacting modes. The effect of this interaction is to increase the splitting between the energy levels: the splitting will be larger if the original energy difference is small and the coupling energy is large. The mixing of the two states also equalizes the intensities of the vibrations, which allows a weak overtone or combination band to gain significant intensity from the fundamental with which it is in Fermi resonance. Because the vibrations have nearly the same frequency, the interaction will be affected if one mode undergoes a frequency shift from deuteration or a solvent effect while the other does not. The molecule most studied for this type of resonance (the one Fermi himself used to explain the phenomenon) is carbon dioxide, CO2. The three fundamental vibrations are v1= 1337 cm-1, v2=667 cm-1, v3=2349 cm-1.
The fundamental v1 and the first overtone of v2, 2v2, have symmetries σg+ and (σg+ + δg+), respectively, and nearly coincident frequencies: 1337 cm-1 (v1) and 2(667) = 1334 cm-1 (2v2). According to group theory calculations, CO2 belongs to the point group D∞h and should have only one Raman active mode (the symmetric stretching vibration) and two IR active modes (the asymmetric stretching and bending vibrations). CS2 is an analog to this system. Another typical example of Fermi resonance is found in the vibrational spectra of aldehydes, where the C-H stretch of the CHO group interacts with the second harmonic level, 2δ(CHO), derived from the fundamental frequency of the deformation vibration of the CHO group (2 × 1400 cm-1). The result is a Fermi doublet with branches around 2830 and 2730 cm-1. It is important for Fermi resonance that the vibrations connected with the two interacting levels be localized in the same part of the molecule. When bands have non-negligible widths, Fermi resonance perturbation of localized levels cannot be applied. This broadening can be the result of a number of things, such as intermolecular interactions, shortened excited-state lifetimes, or interaction of vibrational modes with phonons. In place of perturbation theory, the distribution of interacting vibrational states can be approximated as a collection of discrete levels, and the influence from each level can be calculated. It is useful to understand Fermi resonance because it helps assign and identify peaks within vibrational spectra (i.e. IR and Raman) that may not otherwise be accounted for; however, it should not be invoked lightly when assigning spectra. It is easy to jump to the conclusion that an unidentifiable band is the result of Fermi resonance, but this explanation may not fully account for the inconsistency, and further characterization may be required for the system being investigated.
It is important to assign spectra before doing the normal mode (coordinate) calculations, because doing these calculations beforehand often leads to incorrect assignments of the peaks in the spectra. Problems Q1. Given ν1 = 1151 cm-1, ν2 = 519 cm-1, ν3 = 1361 cm-1 for SO2, and the fact that there are 4 overtones and/or combination bands, predict the vibrational spectrum. A1.
v [cm-1] | Assignment
519 | v2
606 | v1 - v2
1151 | v1
1361 | v3
1871 | v2 + v3
2305 | 2v1
2499 | v1 + v3
Q2. What are the two main effects of Fermi resonance? A2. An overtone band can gain intensity from a nearby fundamental frequency with similar symmetry, and the energy levels of the two bands are shifted away from one another. Q3. Explain the difference between a combination band and an overtone. A3. An overtone is the result of Δv>1 from the ground state. A combination band is the result of two fundamental frequencies being excited simultaneously such that the excitation is allowed by symmetry. The overtone is not subject to this symmetry requirement. Q4. Why are hot bands temperature dependent? A4. For a hot band to occur, a state other than the ground state must already be populated, and typical vibrational spacings are well above the thermal energy kT (~200 cm-1 at room temperature). The more heat that is put into the system, the more likely a hot band is to occur, and the stronger the signal it will produce. Q5. Show the calculations for the values in Table 1 for both the harmonic and anharmonic oscillators. A5. Equations to use: ṽ = 2885.90v for harmonic, and ṽ = 2990.9v − 52.82v(v+1) for anharmonic. For v=3: ṽH = 2885.90(3) = 8657.7; ṽAH = 2990.9(3) − 52.82(3)(3+1) = 8339.0
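The Boltzmann factor invoked in Q4/A4 is easy to evaluate. A sketch using the Boltzmann constant in wavenumber units (k ≈ 0.695 cm-1 K-1) and the SO2 bending frequency from Q1:

```python
import math

K_CM = 0.695  # Boltzmann constant in cm^-1 per kelvin

def population_ratio(delta_e_cm, temp_k=298.0):
    """Boltzmann ratio n_excited / n_ground for an energy gap in cm^-1."""
    return math.exp(-delta_e_cm / (K_CM * temp_k))

# kT at room temperature is ~207 cm^-1, so the 519 cm^-1 bending level
# holds only a small fraction of the molecules
print(round(population_ratio(519.0), 3))
# heating the sample raises the excited-state population, strengthening hot bands
print(round(population_ratio(519.0, temp_k=600.0), 3))
```

The ratio is roughly 0.08 at room temperature and grows with temperature, which is exactly why hot bands are temperature dependent.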
IR spectroscopy, which has become so useful in the identification, estimation, and structure determination of compounds, draws its strength from being able to identify the various vibrational modes of a molecule. A complete description of these vibrational normal modes, their properties and their relationship with the molecular structure is the subject of this article. Introduction We are familiar with resolving a translational vector into its three components along the x-, y-, and z- axes. Similarly, a rotational motion can also be resolved into its components, and the same is true for vibrational motion. The complex vibration that a molecule is making is really a superposition of a number of much simpler basic vibrations called "normal modes". Before we take up any further description of normal modes it is necessary to discuss degrees of freedom. Degrees of Freedom The degree of freedom is the number of variables required to describe the motion of a particle completely. For an atom moving in 3-dimensional space, three coordinates are adequate, so its degree of freedom is three. Its motion is purely translational. If we have a molecule made of N atoms (or ions), the degree of freedom becomes 3N, because each atom has 3 degrees of freedom. Furthermore, since these atoms are bonded together, not all motions are translational; some become rotational, some others vibrational. For non-linear molecules, all rotational motions can be described in terms of rotations around 3 axes, so the rotational degree of freedom is 3 and the remaining 3N-6 degrees of freedom constitute vibrational motion. For a linear molecule, however, rotation around its own axis leaves the molecule unchanged, so it does not count as a rotation. There are thus only 2 rotational degrees of freedom for a linear molecule, leaving 3N-5 degrees of freedom for vibration. Vibrational modes 1. A normal mode is a molecular vibration where some or all atoms vibrate together with the same frequency in a defined manner. 2.
Normal modes are basic vibrations in terms of which any other vibration can be derived by superposing suitable modes in the required proportion. 3. On the other hand, no normal mode is expressible in terms of any other normal mode. Each one is pure and has no component of any other normal mode (i.e., they are orthogonal to each other). Mathematically, the integral is $\int\psi_A\psi_B \;dR= 0$ (integration is done over the entire space). 4. The required number of normal modes is equal to the vibrational degrees of freedom available, so the number of modes for a nonlinear molecule is $3N-6$ and that for a linear molecule is $3N-5$. 5. Each mode has a definite frequency of vibration. Sometimes 2 or 3 modes may have the same frequency, but that does not change the fact that they are distinct modes; such modes are called degenerate. 6. Sometimes some modes are not IR active, but they exist all the same. We shall return to the problem of IR activity and selection rules later.

The number of vibrational normal modes can be determined for any molecule from the formula given above. For a diatomic molecule, N = 2, so the number of modes is $3\times 2-5 = 1$. For a triatomic linear molecule (CO2), it is $3 \times 3-5 = 4$, and for a triatomic nonlinear molecule (H2O), it is $3 \times 3-6 = 3$, and so on.

Example 1: Water

1. The Symmetric Stretch (Example shown is an H2O molecule at 3685 cm-1) 2. The Asymmetric Stretch (Example shown is an H2O molecule at 3506 cm-1) 3. Bend (Example shown is an H2O molecule at 1885 cm-1)

A linear molecule will have another bend in a different plane that is degenerate, i.e. has the same energy. This accounts for the extra vibrational mode.

Example 3: The Methylene Group

It is important to note that there are many different kinds of bends, but due to the limits of a 2-dimensional surface it is not possible to show the other ones.
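The 3N-5 / 3N-6 counting rule above is easy to script. A minimal sketch in Python (the function name is mine, not from the text):

```python
# Counting vibrational normal modes from the degrees-of-freedom rule:
# 3N total, minus 3 translations, minus 3 rotations (2 for linear molecules).
def n_vibrational_modes(n_atoms, linear=False):
    """Vibrational degrees of freedom for a molecule of n_atoms atoms."""
    return 3 * n_atoms - (5 if linear else 6)

print(n_vibrational_modes(2, linear=True))   # diatomic -> 1
print(n_vibrational_modes(3, linear=True))   # CO2 -> 4
print(n_vibrational_modes(3))                # H2O -> 3
```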
The frequency of these vibrations depends on the interatomic binding energy, which determines the force needed to stretch or compress a bond. We discuss this problem in the next section. The determination of the nature of the relative displacement of each atom with respect to the others is more complicated and beyond the scope of this article. However, such motion can be seen in some common molecules as shown below.

Energetics

For studying the energetics of molecular vibration we take the simplest example, a diatomic heteronuclear molecule AB. (Homonuclear molecules are not IR active, so they are not a good example to select.) Let the respective masses of atoms A and B be $m_A$ and $m_B$. The reduced mass $\mu_{AB}$ is given by:

$\mu_{AB}=\dfrac{m_A\, m_B}{m_A+m_B}$

The equilibrium internuclear distance is denoted by $r_{eq}$. However, as a result of molecular vibrations, the internuclear distance is continuously changing; let this distance be called $r(t)$. Let $x(t)=r(t)-r_{eq}$. When $x$ is non-zero, a restoring force $F$ exists which tries to bring the molecule back to $x=0$, that is, equilibrium. For small displacements this force can be taken to be proportional to $x$:

$F=-kx$

where $k$ is the force constant. The negative sign arises from the fact that the force acts in the direction opposite to $x$. This is indeed a case of simple harmonic motion, where the following well-known relations hold:

$x(t)= A \sin \left( 2\pi \nu t \right)$ where $\nu=\dfrac{1}{2\pi} \sqrt{\dfrac{k}{\mu_{AB}}}$

The potential energy is given by $V=\frac{1}{2}kx^2$. The total energy $E$ (kinetic + potential) is obtained by solving the Schrödinger equation:

$-\dfrac{h^2}{8\pi^2\mu_{AB}} \dfrac{d^2\psi}{dx^2}+\dfrac{1}{2} kx^2\psi = E\psi$

A set of wavefunctions $\psi_n$ and the corresponding eigenvalues $E_n$ are obtained:

$E_n=\left(n+\tfrac{1}{2}\right)h\nu$

where $n$ is a non-negative integer (0, 1, 2, etc.).
The energy is quantized, the levels are equally spaced, the lowest energy is $(1/2)h\nu$, and the spacing between adjacent levels is $h\nu$.

Interaction with Electromagnetic Radiation

As shown above, the energy difference between adjacent vibrational energy levels is $h\nu_{vibration}$. On the other hand, the photon energy is $h\nu_{photon}$. Energy conservation requires that the first condition for photon absorption be $h\nu_{vibration} = h\nu_{photon}$, or $\nu_{vibration} = \nu_{photon}$. Such photons are in the IR region of the electromagnetic spectrum. In addition, two more conditions must be met.

1. For absorption of electromagnetic radiation, the dipole moment of the molecule must change with the change in internuclear separation resulting from the vibration (i.e., $d\mu/dr \neq 0$). 2. The probability of a transition from one state to another is large if one of the states is odd and the other even. This is possible if $n_{final} - n_{initial} = +1$ (for absorption). At room temperature, modes are predominantly in the energy state n = 0, so this transition is from n = 0 to n = 1, and $\Delta{E} = h\nu$.

Applications

Spectroscopy in the IR region can determine the frequency and intensity of absorption. These frequencies are generally characteristic of specific bonds such as C-C, C=C, C≡C, C-O, C=O, etc., so IR absorption data are very useful in structure determination. The intensity depends on the concentration of the responsible species, so it is useful for quantitative estimation and for identification.

Questions

1. Find the number of vibrational modes for the following molecules: $NH_3$, $C_6H_6$, $C_{10}H_8$, $CH_4$, $C_2H_2$ (linear). 2. State which of the following vibrations are IR active: $N_2$, $CO$, $CO_2$ (stretching), $HCl$. 3. Calculate the vibrational frequency of $CO$ given the following data: mass of C = 12.01 amu, mass of O = 16 amu, the force constant $k = 1.86 \times 10^3\; kg\cdot s^{-2}$. 4.
Calculate the vibrational energy in joules per mole of the normal mode in question 3, in its ground state of $n=0$. 5. Assuming the force constant to be the same for $H_2O$ and $D_2O$: a normal mode for $H_2O$ is at $3650\; cm^{-1}$. Do you expect the corresponding $D_2O$ wavenumber to be higher or lower?

Answers

1.) NH3: 6; C6H6: 30; C10H8: 48; CH4: 9; C2H2: 7

2.) N2: IR inactive; CO: active; CO2 (symmetric stretch): inactive; HCl: active

3.) $\mu_{AB} = \dfrac{m_A m_B}{m_A+m_B} = 1.1395\times 10^{-26}\; kg$, so $\nu = \dfrac{1}{2\pi}\sqrt{\dfrac{k}{\mu_{AB}}}$, which corresponds to a wavenumber of about 2143.3 cm-1

4.) Energy of the mode for n = 0: $E_0 = \frac{1}{2}h\nu = 2.13\times 10^{-20}\; J$; energy per mole = $2.13\times 10^{-20} \times 6.022\times 10^{23} = 12.8\; kJ/mol$

5.) $\nu$ for D2O will be lower because $\nu$ is proportional to $1/\sqrt{\mu}$, where $\mu$ is the reduced mass.
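The arithmetic in answers 3 and 4 can be checked numerically. A short sketch (the constant values are standard approximations added by me, not from the text):

```python
import math

# Check of answers 3 and 4: vibrational wavenumber and zero-point energy of CO
# from the force constant k = 1.86e3 kg s^-2.
AMU = 1.66054e-27     # kg per atomic mass unit
H = 6.62607e-34       # Planck constant, J s
C_CM = 2.99792458e10  # speed of light, cm/s
NA = 6.02214e23       # Avogadro's number, mol^-1

m_C, m_O = 12.01 * AMU, 16.00 * AMU
mu = m_C * m_O / (m_C + m_O)                 # reduced mass, ~1.139e-26 kg
nu = math.sqrt(1.86e3 / mu) / (2 * math.pi)  # frequency, Hz
wavenumber = nu / C_CM                       # ~2.14e3 cm^-1
E0 = 0.5 * H * nu                            # zero-point energy, ~2.13e-20 J
E0_per_mole = E0 * NA / 1000                 # ~12.8 kJ/mol
```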
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Vibrational_Spectroscopy/Vibrational_Modes/Introduction_to_Vibrations.txt
This page provides an overview of how an isotope can affect the frequencies of the vibrational modes of a molecule. Isotopic substitution is a useful technique because the normal modes of an isotopically substituted molecule differ from those of the unsubstituted molecule, leading to different vibrational frequencies for the substituted atoms. Vibrational spectroscopy is done in the infrared region of the electromagnetic spectrum, which ranges from around 10-6 to 10-3 meters. IR and Raman spectroscopy observe the vibrations of molecules, displaying the normal and local modes of the molecule in the spectra. Isotopes are atoms that share the same number of protons but differ in the number of neutrons contained in the nucleus, thus giving these atoms different mass numbers. The specific mass of each atom will affect the reduced mass of the overall molecule, therefore changing the vibrational frequencies of that molecule.

Diatomics

A diatomic molecule, as seen in Figure $1$, contains two atoms, which can either be composed of the same or different elements. It is easier to focus on these types of molecules when analyzing and calculating vibrational frequencies because they are simpler systems than polyatomic molecules. Whether or not the diatomic consists of the same or different elements, a diatomic molecule will have only one vibrational frequency. This singular normal mode is because of the diatomic's linear symmetry, so the only vibration possible occurs along the bond connecting the two atoms.

Normal Modes

Normal modes describe the possible movements/vibrations of each of the atoms in a system. There are many different types of vibrations that molecules can undergo, like stretching, bending, wagging, rocking, and twisting, and these types can either be out of plane, asymmetric, symmetric, or degenerate.
Molecules have 3n possible movements due to their 3-dimensionality, where n is equal to the number of atoms in the molecule. Three movements are subtracted from the total because they describe the displacement of the center of mass, which keeps the distances and angles between the atoms constant. Another 3 movements are subtracted from the total because they are the rotations about the 3 principal axes. This means that for nonlinear molecules, there are 3n - 6 normal modes possible. Linear molecules, however, will have 3n - 5 normal modes because rotation about the internuclear axis is not counted, meaning there is one less possible rotation for the molecule. This explains why diatomic molecules only have 1 vibrational frequency: 3(2) - 5 = 1.

Molecular vibrations are often thought of as masses attached by a spring (Figure $2$), and Hooke's law can be applied: $F=-kx$ where

• $F$ is the resulting force,
• $x$ is the displacement of the mass from equilibrium $(x = r - r_{eq})$, and
• $k$ is the force constant, defined as $k=\left (\dfrac{\partial^2V(r)}{\partial r^2} \right)_{r_{eq}}$

in which $V(r)=\dfrac{1}{2}k(r-r_{eq})^2$, which comes from applying Hooke's law to the harmonic oscillator.

Since the diatomic molecule is thought of as two masses (m1 and m2) on a spring, they will have a reduced mass, µ, so their vibrations can be mathematically analyzed:

$\mu=\dfrac{m_{1}m_{2}}{m_{1}+m_{2}}$

When an atom in a molecule is changed to an isotope, the mass number will be changed, so $µ$ will be affected, but $k$ will not (mostly). This change in reduced mass will affect the vibrational modes of the molecule, which will affect the vibrational spectrum. The vibrational frequency, $\nu_{e}$, depends on both k and µ and is given by

$\nu_e=\dfrac{1}{2\pi}\sqrt{\dfrac{k}{\mu}}$

The vibrational energy levels determined by this frequency correspond to the peaks which can be observed in IR and Raman spectra.
For centrosymmetric molecules, IR spectra observe the asymmetric stretches of the molecule, while Raman spectra observe the symmetric stretches (the rule of mutual exclusion).

Effects on Experimental Results

When an atom is replaced by an isotope of larger mass, µ increases, leading to a smaller $\nu_{e}$ and a downshift (smaller wavenumber) in the spectrum of the molecule. Taking the diatomic molecule HCl, if the hydrogen is replaced by its isotope deuterium, µ is approximately doubled and therefore $\nu_{e}$ will be decreased by a factor of about $\sqrt{2}$. Deuterium substitution leads to an isotopic ratio of 1.35-1.41 for the frequencies corresponding to the hydrogen/deuterium vibrations. There will also be a decrease by $\sqrt{2}$ in the band width and integrated band width for the vibrational spectra of the substituted molecule.

Isotopic substitution will affect the entire molecule (to a certain extent), so it is not only the vibrational modes for the substituted atom that will change, but rather the vibrational modes of all the atoms of the molecule. The change in frequency for the atoms not directly involved in the substitution will not be as large, but a downshift can still occur. When polyaniline (Figure $4$) is fully deuterated, the vibrational peaks downshift slightly. The following data were summarized from Quillard et al.

Type of vibration | Nondeuterated (frequency, cm-1) | Deuterated (frequency, cm-1)
C-C stretch | 1626 | 1599
C-C stretch | 1581 | 1560
C-H bend, benzenoid ring | 1192 | 876
C-H bend, quinoid ring | 1166 | 856
N-H bend | 1515 | 1085

Changing hydrogen to deuterium leads to the largest effect in a vibrational spectrum since the mass is doubled. Other isotopic substitutions will also lead to a shift in the vibrational energy levels, but because the mass change is not as significant, µ will not change by much, leading to a smaller change in $\nu_{e}$. This smaller change in vibrational frequency is seen in the sulfur substitution of sulfur hexafluoride (Figure $5$), from 32S to 34S. The frequencies as reported by Kolomiitsova et al.
are shown below.

Vibration assignment | 32SF6 (frequency, cm-1) | 34SF6 (frequency, cm-1)
$\nu_{3}$ | 939.3 | 922.2
$\nu_{4}$ | 613.0 | 610.3

These two examples show the consistency of downshifted vibrational frequencies for atoms substituted with an isotope of higher mass.

Applications

Substituting atoms with isotopes has been shown to be very useful in determining the normal mode vibrations of organic molecules. When analyzing the spectrum of a molecule, isotopic substitution can help determine which vibrational modes specific atoms contribute to. Those normal modes can be assigned to the peaks observed in the spectrum of the molecule. There are specific CH3 rocks and torsions, as well as CH bends, that can be identified in the spectrum upon deuterium substitution. Other torsion bands from hydroxyl and amine groups can also be assigned when hydrogen is replaced with deuterium. Experimental data have also shown that using deuterium substitution can help with symmetry assignments and the identification of metal hydrides.

Isotopic substitution can also be used to determine the force constants of the molecule. Calculations can be done using the frequencies of the normal modes in determining these values, based on both calculated frequencies and experimental frequencies. Researchers have also attempted to attribute peak shape changes and splits in peaks of vibrational spectra to naturally occurring isotopes in molecules. It has been shown, however, that the shape of a peak is not related to the size of the atom, so substitution to an atom of larger mass will not affect the peak shape in the molecule's spectrum. As previously stated, isotopic substitution of atoms of higher mass will not have a significant enough effect on the shifts in frequencies for the corresponding vibrations, so analyzing the frequency shifts of smaller mass isotopes, like deuterium and 13C, is necessary.
As depicted in the rough representation of the vibrational spectra of the molecule tetrachlorinated dibenzodioxin (TCDD), the 13C substituted TCDD spectrum is slightly downshifted compared to the unsubstituted TCDD spectrum. Although the shifts and split peaks do occur in the spectra of isotopically substituted molecules, not all observed peaks can be attributed to the isotope. This is because the intensities of the peaks shown are not large enough to relate to the natural abundance of the 13C isotope, and not all peaks can be accounted for by the substitution. Problems 1. How many normal modes would be found in CO2? What are the different types of vibrational modes for this molecule? 2. If the diatomic molecule HCl, with 1H and 35Cl were substituted with 37Cl, what change occurs to the reduced mass? 3. For a nondeuterated hydrofluoric acid diatomic, HF, the vibrational frequency of this molecule is found at 845 cm-1. If the hydrogen atom of this molecule was substituted with deuterium, where would you expect to now find the vibrational frequency? Answers 1. 4; symmetric stretch, asymmetric stretch, and two degenerate bends 2. With 37Cl the reduced mass increases from 0.97222 to 0.97368 3. DF would have a calculated band at 597.5 cm-1
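Problems 2 and 3 can be checked numerically; a quick sketch (function and variable names are mine). Note that the answer to problem 3 in the text uses the √2 approximation for the H→D mass ratio; with exact reduced masses the HF→DF frequency ratio is closer to 1.38, which would put the DF band near 612 cm-1 rather than 597.5 cm-1:

```python
# Reduced-mass bookkeeping for isotopic substitution (masses in amu).
def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

mu_h35cl = reduced_mass(1.0, 35.0)   # ~0.97222 amu
mu_h37cl = reduced_mass(1.0, 37.0)   # ~0.97368 amu

# Frequency scales as 1/sqrt(mu): nu_new = nu_old * sqrt(mu_old / mu_new).
mu_hf = reduced_mass(1.0, 19.0)
mu_df = reduced_mass(2.0, 19.0)
ratio = (mu_df / mu_hf) ** 0.5       # ~1.38, close to the sqrt(2) ~ 1.41 approximation
nu_df = 845.0 / ratio                # ~612 cm^-1 with exact reduced masses
```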
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Vibrational_Spectroscopy/Vibrational_Modes/Isotope_effects_in_Vibrational_Spectroscopy.txt
ΓTotal = ΓStretch + ΓBend + ΓTranslation + ΓRotation and ΓVibration = ΓStretch + ΓBend

For easier understanding, each step is immediately followed by an example using the D3h point group.

Finding Γ Total

First, add up the x, y and z rows of the character table. If x and y appear together in parentheses on the far right, e.g. (x, y), count that row only once; otherwise count the row once for each coordinate it contains (keep the columns separate). The result is called Γx,y,z. Next, apply each symmetry operation to the molecule and count the atoms that do not move (UMA). Finally, multiply Γx,y,z by UMA, class by class, to obtain Γtotal.

D3h | E | 2C3 | 3C2 | σh | 2S3 | 3σv | IR | Raman
A1' | 1 | 1 | 1 | 1 | 1 | 1 | | x2+y2, z2
A2' | 1 | 1 | -1 | 1 | 1 | -1 | Rz |
E' | 2 | -1 | 0 | 2 | -1 | 0 | (x,y) | (xy, x2-y2)
A1" | 1 | 1 | 1 | -1 | -1 | -1 | |
A2" | 1 | 1 | -1 | -1 | -1 | 1 | z |
E" | 2 | -1 | 0 | -2 | 1 | 0 | (Rx,Ry) | (xz, yz)
Γx,y,z | 2+1=3 | -1+1=0 | 0-1=-1 | 2-1=1 | -1-1=-2 | 0+1=1 | |
UMA (unmoved atoms) | 6 | 3 | 2 | 4 | 1 | 4 | |
Γtotal | (3)(6)=18 | (0)(3)=0 | (-1)(2)=-2 | (1)(4)=4 | (-2)(1)=-2 | (1)(4)=4 | |

To find the irreducible representation of Γtotal, multiply each entry of Γtotal by the number of operations in the class (the coefficient in front of the symbol) and by the corresponding character in the table. Add up each row and divide by the order of the group, which for D3h is 12.

D3h | E | 2C3 | 3C2 | σh | 2S3 | 3σv | sum/12
A1' | (1x1x18)=18 | (2x1x0)=0 | (3x1x-2)=-6 | (1x1x4)=4 | (2x1x-2)=-4 | (3x1x4)=12 | (24/12)=2
A2' | (1x1x18)=18 | (2x1x0)=0 | (3x-1x-2)=6 | (1x1x4)=4 | (2x1x-2)=-4 | (3x-1x4)=-12 | (12/12)=1
E' | (1x2x18)=36 | (2x-1x0)=0 | (3x0x-2)=0 | (1x2x4)=8 | (2x-1x-2)=4 | (3x0x4)=0 | (48/12)=4
A1" | (1x1x18)=18 | (2x1x0)=0 | (3x1x-2)=-6 | (1x-1x4)=-4 | (2x-1x-2)=4 | (3x-1x4)=-12 | (0/12)=0
A2" | (1x1x18)=18 | (2x1x0)=0 | (3x-1x-2)=6 | (1x-1x4)=-4 | (2x-1x-2)=4 | (3x1x4)=12 | (36/12)=3
E" | (1x2x18)=36 | (2x-1x0)=0 | (3x0x-2)=0 | (1x-2x4)=-8 | (2x1x-2)=-4 | (3x0x4)=0 | (24/12)=2

Γtotal = 2A1' + A2' + 4E' + 3A2" + 2E"

Γ Translation

To find the irreducible form of Γtrans, look at the right-hand columns of the character table and take the irreducible representations in whose rows x, y and z appear.
For instance, for D3h, Γtrans = E' + A2".

Γ Rotation

One can find Γrot the same way as Γtrans; instead of looking at x, y, z, look at Rx, Ry and Rz. For D3h, Γrot = A2' + E".

Γ Vibration

To find ΓVibration, take Γtot - Γtrans - Γrot = Γvibration.

D3h example:
Γtot: 2A1' + A2' + 4E' + 3A2" + 2E"
Γtrans: - E' - A2"
Γrot: - A2' - E"
Γvibration: 2A1' + 3E' + 2A2" + E"

Number of Vibrationally Active IR Bands

Only modes that transform as x, y or z are IR active, which for D3h means E' and A2". Next, add up the coefficients in front of these irreducible representations in Γvibration; that is the number of IR active bands. For this problem there are 3E' + 2A2": five bands, three of which (the E' modes) are twofold degenerate.

Number of Vibrationally Active Raman Bands

Only modes that transform as the quadratic functions x2+y2, z2, xy, xz, yz or x2-y2 are Raman active, which for D3h means A1', E' and E". Adding up the coefficients in Γvibration gives 2A1' + 3E' + E": six bands, four of which (the E' and E" modes) are twofold degenerate.

Finding ΓStretch

Using the molecule's point group, perform each symmetry operation and count the number of unmoved bonds; Γstretch = Γσ = Γrad. Next, multiply the unmoved bonds by the number of operations in each class and by the characters in the table. Then add up the rows and divide by the order of the point group.
D3h | E | 2C3 | 3C2 | σh | 2S3 | 3σv | IR | Raman
A1' | 1 | 1 | 1 | 1 | 1 | 1 | | x2+y2, z2
A2' | 1 | 1 | -1 | 1 | 1 | -1 | Rz |
E' | 2 | -1 | 0 | 2 | -1 | 0 | (x,y) | (xy, x2-y2)
A1" | 1 | 1 | 1 | -1 | -1 | -1 | |
A2" | 1 | 1 | -1 | -1 | -1 | 1 | z |
E" | 2 | -1 | 0 | -2 | 1 | 0 | (Rx,Ry) | (xz, yz)
Γx,y,z | 2+1=3 | -1+1=0 | 0-1=-1 | 2-1=1 | -1-1=-2 | 0+1=1 | |
UMB (unmoved bonds) | 5 | 2 | 1 | 3 | 0 | 3 | |

D3h | E | 2C3 | 3C2 | σh | 2S3 | 3σv | sum/12
A1' | (1x1x5)=5 | (2x1x2)=4 | (3x1x1)=3 | (1x1x3)=3 | (2x1x0)=0 | (3x1x3)=9 | (24/12)=2
A2' | (1x1x5)=5 | (2x1x2)=4 | (3x-1x1)=-3 | (1x1x3)=3 | (2x1x0)=0 | (3x-1x3)=-9 | (0/12)=0
E' | (1x2x5)=10 | (2x-1x2)=-4 | (3x0x1)=0 | (1x2x3)=6 | (2x-1x0)=0 | (3x0x3)=0 | (12/12)=1
A1" | (1x1x5)=5 | (2x1x2)=4 | (3x1x1)=3 | (1x-1x3)=-3 | (2x-1x0)=0 | (3x-1x3)=-9 | (0/12)=0
A2" | (1x1x5)=5 | (2x1x2)=4 | (3x-1x1)=-3 | (1x-1x3)=-3 | (2x-1x0)=0 | (3x1x3)=9 | (12/12)=1
E" | (1x2x5)=10 | (2x-1x2)=-4 | (3x0x1)=0 | (1x-2x3)=-6 | (2x1x0)=0 | (3x0x3)=0 | (0/12)=0

Γstretch = Γσ = Γrad = 2A1' + 1E' + 1A2"

Γπ = Γtan =
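The tabulated reduction above is mechanical enough to script. A sketch of the reduction formula n(Γi) = (1/h) Σ g · χi · χ(Γ) for D3h (the data structures and names are mine; the characters follow the table above):

```python
# Reduction formula for D3h (order h = 12).
# Class sizes for E, 2C3, 3C2, sigma_h, 2S3, 3sigma_v:
CLASS_SIZES = [1, 2, 3, 1, 2, 3]
IRREPS = {
    "A1'": [1, 1, 1, 1, 1, 1],
    "A2'": [1, 1, -1, 1, 1, -1],
    "E'":  [2, -1, 0, 2, -1, 0],
    'A1"': [1, 1, 1, -1, -1, -1],
    'A2"': [1, 1, -1, -1, -1, 1],
    'E"':  [2, -1, 0, -2, 1, 0],
}

def reduce_rep(gamma, h=12):
    """n_i = (1/h) * sum over classes of (class size * chi_irrep * chi_gamma)."""
    return {name: sum(g * ci * cg for g, ci, cg in zip(CLASS_SIZES, chi, gamma)) // h
            for name, chi in IRREPS.items()}

gamma_total = [18, 0, -2, 4, -2, 4]
print(reduce_rep(gamma_total))    # 2A1' + A2' + 4E' + 3A2" + 2E"

gamma_stretch = [5, 2, 1, 3, 0, 3]
print(reduce_rep(gamma_stretch))  # 2A1' + E' + A2"
```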
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Vibrational_Spectroscopy/Vibrational_Modes/Mode_Analysis.txt
Normal modes are used to describe the different vibrational motions in molecules. Each mode can be characterized by a different type of motion and each mode has a certain symmetry associated with it. Group theory is a useful tool for determining what symmetries the normal modes contain and for predicting whether these modes are IR and/or Raman active. Consequently, IR and Raman spectroscopy are often used to measure vibrational spectra.

Overview of Normal Modes

In general, a normal mode is an independent motion of atoms in a molecule that occurs without exciting any of the other modes. Normal modes, as implied by their name, are orthogonal to each other. In order to discuss the quantum-mechanical equations that govern molecular vibrations it is convenient to convert Cartesian coordinates into so-called normal coordinates. Vibrations in polyatomic molecules are represented by these normal coordinates. An important fact about normal coordinates is that each of them belongs to an irreducible representation of the point group of the molecule under investigation. Vibrational wavefunctions associated with vibrational energy levels share this property as well. The normal coordinates and the vibrational wavefunctions can be categorized further according to the point group they belong to. From the character table, predictions can be made for which symmetries can exist, and the irreducible representation offers insight into the IR and/or Raman activity of the molecule in question.

Degrees of Freedom

3N, where N represents the number of nuclei present in the molecule, is the total number of coordinates needed to describe the location of a molecule in 3D space. 3N is most often referred to as the total number of degrees of freedom of the molecule being investigated.
The total number of degrees of freedom can be divided into:

• 3 coordinates to describe the translational motion of the center of mass; these are called the translational degrees of freedom
• 3 coordinates to describe the rotational motion in non-linear molecules; for linear molecules only 2 coordinates are required; these are called the rotational degrees of freedom
• the remaining coordinates are used to describe vibrational motion; a non-linear molecule has 3N - 6 vibrational degrees of freedom whereas a linear molecule has 3N - 5.

Table 1: Overview of degrees of freedom
Molecule type | Total | Translational | Rotational | Vibrational
Nonlinear | 3N | 3 | 3 | 3N - 6
Linear | 3N | 3 | 2 | 3N - 5

Example 1: Ethane vs. Carbon Dioxide

Ethane, $C_2H_6$, has eight atoms ($N=8$) and is a nonlinear molecule, so of the $3N=24$ degrees of freedom, three are translational and three are rotational. The remaining 18 degrees of freedom are internal (vibrational). This is consistent with: $3N -6 =3(8)-6=18$

Carbon dioxide, $CO_2$, has three atoms ($N=3$) and is a linear molecule, so of the $3N=9$ degrees of freedom, three are translational and two are rotational. The remaining 4 degrees of freedom are vibrational. This is consistent with: $3N - 5 = 3(3)-5 = 4$

Mathematical Introduction to Normal Modes

If there is no external field present, the energy of a molecule does not depend on the position of its center of mass (its translational degrees of freedom) nor on its orientation in space (its rotational degrees of freedom). The potential energy of the molecule therefore depends only on its $3N-6$ vibrational degrees of freedom ($3N-5$ for linear molecules).
The difference in potential energy is given by: \begin{align} \Delta V &= V(q_1,q_2,q_3,...,q_n) - V(0,0,0,...,0) \tag{1} \\[4pt] &= \dfrac{1}{2} \sum_{i=1}^{N_{vib}} \sum_{j=1}^{N_{vib}} \left(\dfrac{\partial^2 V}{\partial q_i\partial q_j} \right) q_iq_j \tag{2} \\[4pt] &= \dfrac{1}{2}\sum_{i=1}^{N_{vib}} \sum_{j=1}^{N_{vib}} f_{ij} q_iq_j \tag{3} \end{align} where

• $q_i$ represents a displacement from equilibrium and
• $N_{vib}$ the number of vibrational degrees of freedom.

For simplicity, the anharmonic terms are neglected in this equation (consequently there are no higher-order terms present). A theorem of classical mechanics states that the cross terms can be eliminated from the above equation (the details of the theorem are very complex and will not be discussed in detail). By using matrix algebra a new set of coordinates {Qj} can be found such that

$\Delta{V} = \dfrac{1}{2} \sum_{j=1}^{N_{vib}}{F_jQ_j^2} \tag{4}$

Note that there are no cross terms in this new expression. These new coordinates are called normal coordinates or normal modes. With these new normal coordinates in hand, the Hamiltonian operator for vibrations can be written as follows:

$\hat{H}_{vib} = -\sum_{j=1}^{N_{vib}} \dfrac{\hbar^2}{2\mu_j} \dfrac{d^2}{dQ_j^2} + \dfrac{1}{2} \sum_{j=1}^{N_{vib}}F_jQ_j^2 \tag{5}$

The total wavefunction is a product of the individual wavefunctions and the energy is the sum of independent energies.
This leads to:

$\hat{H}_{vib} = \sum_{j=1}^{N_{vib}} \hat{H}_{vib,j} = \sum_{j=1}^{N_{vib}} \left( \dfrac{-\hbar^2}{2 \mu_j}\dfrac{d^2}{dQ_j^2} + \dfrac{1}{2} F_jQ_j^2 \right) \tag{6}$

and the wavefunction is then

$\psi_{vib}(Q_1,Q_2, Q_3, ..., Q_{N_{vib}}) = \psi_{vib,1}(Q_1) \psi_{vib,2}(Q_2) \psi_{vib,3}(Q_3) \cdots \psi_{vib,N_{vib}}(Q_{N_{vib}}) \tag{7}$

and the total vibrational energy of the molecule is

$E_{vib} = \sum_{j=1}^{N_{vib}} h\nu_j \left (v_j + \dfrac{1}{2}\right) \tag{8}$

where $v_j= 0,1,2,3...$

The consequence of the result stated in the above equations is that each vibrational mode can be treated in the harmonic oscillator approximation. There are $N_{vib}$ harmonic oscillators corresponding to the total number of vibrational modes present in the molecule. In the ground vibrational state each mode contributes an energy of (1/2)hνj; this ground state energy is referred to as the zero point energy. A vibrational transition in a molecule is induced when it absorbs a quantum of energy according to $E = h\nu$. For a given mode, the first excited state ($v_j = 1$) has energy (3/2)hν, the next level (5/2)hν, etc., so adjacent levels are separated by hν. The harmonic oscillator is a good approximation, but it does not take into account that the molecule, once it has absorbed enough energy to break the vibrating bond, does dissociate. A better approximation is the Morse potential, which takes anharmonicity into account. The Morse potential also accounts for bond dissociation as well as energy levels getting closer together at higher energies.

Pictorial description of normal coordinates using CO

The normal coordinate q is used to follow the path of a normal mode of vibration. As shown in Figure 2, the displacement of the C atom, denoted by Δro(C), and the displacement of the O atom, denoted by Δro(O), occur at the same frequency. The displacement of atoms is measured from the equilibrium distance in the ground vibrational state, ro.
Description of vibrations

• ν = stretching: a change in bond length; note that the number of stretching modes is equal to the number of bonds in the molecule
• δ = bending: a change in bond angle
• ρr = rocking: a change in angle between a group of atoms
• ρw = wagging: a change in angle between the plane of a group of atoms
• ρt = twisting: a change in angle between the planes of two groups of atoms
• π = out of plane

In direct correlation with symmetry, the subscripts s (symmetric), as (asymmetric) and d (degenerate) are used to further describe the different modes. A normal mode corresponding to an asymmetric stretch can be well described by a harmonic oscillator: as one bond lengthens, the other bond shortens. A normal mode corresponding to a symmetric stretch is better described by a Morse potential well: as the bond length increases, the potential energy increases and levels off as the bond length gets further away from equilibrium.

The use of Symmetry and Group Theory

Symmetry of normal modes

It is important to realize that every normal mode has a certain type of symmetry associated with it. Identifying the point group of the molecule is therefore an important step. With this in mind it is not surprising that every normal mode forms a basis set for an irreducible representation of the point group the molecule belongs to. For a molecule such as water, having a structure of XY2, three normal coordinates can be determined. The two stretching modes are equivalent in symmetry and energy. The figure below shows the three normal modes for the water molecule:

Figure 3: Three normal modes of water

By convention, with nonlinear molecules, the symmetric stretch is denoted ν1, the bending motion ν2, and the asymmetric stretch ν3. With linear molecules, the bending motion is ν2 whereas the asymmetric stretch is ν3. The water molecule has C2v symmetry and its symmetry elements are E, C2, σ(xz) and σ(yz).
In order to determine the symmetries of the three vibrations and how they each transform, symmetry operations will be performed. As an example, performing the C2 operation on the two normal modes ν2 and ν3 gives the following transformation: Once all the symmetry operations have been performed in a systematic manner for each mode, a symmetry can be assigned to each normal mode using the character table for C2v:

Table 2: Character table assignments for the C2v point group
C2v | E | C2 | σ(xz) | σ(yz) |
ν1 | 1 | 1 | 1 | 1 | = a1
ν2 | 1 | 1 | 1 | 1 | = a1
ν3 | 1 | -1 | -1 | 1 | = b2

Water has three normal modes that can be grouped together as the reducible representation $Γ_{vib}= 2a_1 + b_2.$

Determination of normal modes becomes quite complex as the number of atoms in the molecule increases. Nowadays, computer programs that simulate molecular vibrations can be used to perform these calculations. The example of [PtCl4]2- shows the increasing complexity. The molecule has five atoms and therefore 15 degrees of freedom, 9 of which are vibrational degrees of freedom. The nine normal modes are exemplified below along with the irreducible representation each normal mode belongs to (D4h point group). a1g, b1g and eu are stretching vibrations whereas b2g, a2u, b2u and eu are bending vibrations.

Determining if normal modes are IR and/or Raman active

Transition Moment Integral

A transition from $\ce{v -> v'}$ is IR active if the transition moment integral contains the totally symmetric irreducible representation of the point group the molecule belongs to. The transition moment integral is derived from the one-dimensional harmonic oscillator. Using the definition of the dipole moment, the integral is:

$M\left(v \rightarrow v^{\prime}\right)=\int_{-\infty}^{\infty} \psi^{*}\left(v^{\prime}\right) \mu \psi(v) d x$

If μ, the dipole moment, were a constant and therefore independent of the vibration, it could be taken outside the integral.
Since the wavefunctions ψ(v) and ψ(v') are mutually orthogonal, the integral would then equal zero and the transition would not be allowed. In order for the integral to be nonzero, μ must change during a vibration. This selection rule explains why homonuclear diatomic molecules do not produce an IR spectrum: there is no change in dipole moment, resulting in a transition moment integral of zero and a transition that is forbidden.

For a transition to be Raman active the same rules apply. The transition moment integral must contain the totally symmetric irreducible representation of the point group. The integral contains the polarizability tensor (usually represented by a square matrix):

$M\left(v \rightarrow v^{\prime}\right)=\int_{-\infty}^{\infty} \psi^{*}\left(v^{\prime}\right) \alpha \psi(v) d x$

$α$ must change during the vibration in order for the transition to be allowed and show Raman scattering.

Character Tables

For a molecule to be IR active the dipole moment has to change during the vibration. For a molecule to be Raman active the polarizability of the molecule has to change during the vibration. The reducible representation Γvib can also be found by determining the reducible representation of the 3N degrees of freedom of H2O, Γtot. By applying group theory it is straightforward to find Γx,y,z as well as UMA (the number of unmoved atoms). Again, using water as an example with C2v symmetry where 3N = 9, Γtot can be determined:

C2v | E | C2 | σ(xz) | σ(yz)
Γx,y,z | 3 | -1 | 1 | 1
UMA | 3 | 1 | 1 | 3
Γtot | 9 | -1 | 1 | 3 | = 3a1 + a2 + 2b1 + 3b2

Note that Γtot contains nine degrees of freedom, consistent with 3N = 9. Γtot contains Γtranslational, Γrotational as well as Γvibrational. Γtrans can be obtained by finding the irreducible representations corresponding to x, y and z on the right side of the character table, and Γrot by finding the ones corresponding to Rx, Ry and Rz. Γvib is then obtained by Γtot - Γtrans - Γrot.
Γvib (H2O) = (3a1 + a2 + 2b1 + 3b2) - (a1 + b1 + b2) - (a2 + b1 + b2) = 2a1 + b2

In order to determine which modes are IR active, a simple check of the irreducible representations that correspond to x, y and z against the reducible representation Γvib is necessary. If they contain the same irreducible representation, the mode is IR active. For H2O, z transforms as a1, x as b1 and y as b2. The modes a1 and b2 are IR active since Γvib contains 2a1 + b2. In order to determine which modes are Raman active, the irreducible representations that correspond to z2, x2-y2, xy, xz and yz are used and again cross-checked with Γvib. For H2O, z2 and x2-y2 transform as a1, xy as a2, xz as b1 and yz as b2. The modes a1 and b2 are also Raman active, since Γvib contains both of them. The IR spectrum of H2O does indeed have three bands, as predicted by Group Theory: the symmetric stretch occurs at 3657 cm-1, the asymmetric stretch at 3756 cm-1, and the bending motion at 1595 cm-1. In order to determine which normal modes are stretching vibrations and which are bending vibrations, a stretching analysis can be performed; the stretching vibrations can then be subtracted from the total vibrations in order to obtain the bending vibrations. A double-headed arrow is drawn between the atoms as depicted below. A determination of how the arrows transform under each symmetry operation in C2v symmetry yields the following results:

           E    C2   σ(xz)  σ(yz)
Γstretch   2     0     0      2    = a1 + b2

Γbend = Γvib - Γstretch = 2a1 + b2 - a1 - b2 = a1

H2O has two stretching vibrations as well as one bending vibration. This concept can be expanded to more complex molecules such as [PtCl4]2-. Four double-headed arrows can be drawn between the atoms of the molecule to determine how these transform in D4h symmetry. Once the irreducible representation for Γstretch has been worked out, Γbend can be determined by Γbend = Γvib - Γstretch.
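The IR/Raman cross-check described above amounts to a table lookup: a mode is IR active if its irrep carries one of the linear basis functions (x, y, z) and Raman active if it carries a quadratic one. A minimal sketch for C2v (the dictionary names are my own):

```python
# Basis functions attached to each C2v irrep, as read from the character table
LINEAR = {"a1": ["z"], "a2": [], "b1": ["x"], "b2": ["y"]}
QUADRATIC = {"a1": ["z2", "x2-y2"], "a2": ["xy"], "b1": ["xz"], "b2": ["yz"]}

gamma_vib = ["a1", "a1", "b2"]   # 2a1 + b2 for water

# A mode is active if its irrep row lists a basis function of the right kind
ir_active = [m for m in gamma_vib if LINEAR[m]]
raman_active = [m for m in gamma_vib if QUADRATIC[m]]
print(len(ir_active), len(raman_active))   # 3 3 -> all three modes appear in both spectra
```

As the text states, all three normal modes of water are both IR and Raman active.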
Fundamental transitions, overtones and hot bands

The transition from v=0 (ground state) -> v=1 (first excited state) is called the fundamental transition. This transition has the greatest intensity. The transition from v=0 -> v=2 is referred to as the first overtone, v=0 -> v=3 as the second overtone, etc. Overtones occur when a mode is excited above the v = 1 level. Within the harmonic oscillator approximation, the first overtone is predicted to be twice as energetic as the fundamental transition. Most molecules are in their ground vibrational state (v=0) at room temperature, so most transitions originate from the v=0 state. Some molecules, however, have a significant population of the v=1 state at room temperature, and transitions from this thermally excited state are called hot bands. Combination bands can occur if more than one vibration is excited by the absorption of a photon; the overall energy of a combination band is the sum of the individual transitions.

Problems

1. Chlorophyll a is a green pigment that is found in plants. Its molecular formula is C55H77O5N4Mg. How many degrees of freedom does this molecule possess? How many vibrational degrees of freedom does it have?
2. CCl4 was commonly used as an organic solvent until its severe carcinogenic properties were discovered. How many vibrational modes does CCl4 have? Are they IR and/or Raman active?
3. The same vibrational modes in H2O are IR and Raman active. WF6- has IR active modes that are not Raman active and vice versa. Explain why this is the case.
4. How many IR peaks do you expect from SO3? Estimate where these peaks are positioned in an IR spectrum.
5. Calculate the symmetries of the normal coordinates of planar BF3.

Answers to Problems

1. Chlorophyll a has 426 degrees of freedom and 420 vibrational modes.
2. The point group is Td, Γvib = a1 + e + 2t2; a1 and e are Raman active, t2 is both IR and Raman active.
3.
For molecules that possess a center of inversion i, modes cannot be simultaneously IR and Raman active.
4. The point group is D3h; one would expect three IR active peaks: the asymmetric stretch highest (1391 cm-1) and two bending modes (both around 500 cm-1). The symmetric stretch is IR inactive.
5. Γ3N = A1' + A2' + 3E' + 2A2" + E" and Γvib = A1' + 2E' + A2"
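The harmonic-oscillator prediction quoted above — the first overtone costing twice the energy of the fundamental — follows directly from the evenly spaced energy ladder E_v = (v + 1/2)hν. A one-line check (function name is my own):

```python
def energy(v):
    """Harmonic-oscillator level E_v in units of h*nu."""
    return v + 0.5

fundamental = energy(1) - energy(0)      # v=0 -> v=1
first_overtone = energy(2) - energy(0)   # v=0 -> v=2
second_overtone = energy(3) - energy(0)  # v=0 -> v=3
print(first_overtone / fundamental)      # 2.0
```

In a real (anharmonic) molecule the spacing contracts at higher v, so overtones fall slightly below these harmonic multiples.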
The Heisenberg uncertainty principle argues that all atoms in a molecule are constantly in motion (otherwise we would know position and momentum accurately). Molecules exhibit three general types of motion: translations (external), rotations (internal) and vibrations (internal). A diatomic molecule contains only a single vibrational motion, while polyatomic molecules exhibit more complex vibrations, known as normal modes.

Molecular Vibrations

A molecule has translational and rotational motion as a whole, while each atom has its own motion. The vibrational modes can be IR or Raman active. For a mode to be observed in the IR spectrum, the dipole moment must change during the vibration. Homonuclear diatomic molecules such as $\ce{O2}$ or $\ce{N2}$ are observed in Raman spectra but not in IR spectra: they have only a single vibration, which produces no change in dipole moment. However, unsymmetric diatomic molecules (i.e. $\ce{CN}$) do absorb in the IR spectrum. Polyatomic molecules undergo more complex vibrations that can be summed or resolved into normal modes of vibration. The normal modes of vibration for polyatomic molecules are: asymmetric stretching, symmetric stretching, wagging, twisting, scissoring, and rocking. Symmetric Stretching Asymmetric Stretching Wagging Twisting Scissoring Rocking Figure $1$: Six types of Vibrational Modes. Images used with permission (Public Domain; Tiago Becerra Paolini).

Calculate Number of Vibrational Modes

Degree of freedom is the number of variables required to describe the motion of a particle completely. For an atom moving in 3-dimensional space, three coordinates are adequate, so its degree of freedom is three; its motion is purely translational. If we have a molecule made of N atoms (or ions), the degrees of freedom become 3N, because each atom has 3 degrees of freedom.
Furthermore, since these atoms are bonded together, not all motions are translational; some become rotational and some vibrational. For non-linear molecules, all rotational motions can be described in terms of rotations around 3 axes, so the rotational degree of freedom is 3 and the remaining 3N-6 degrees of freedom constitute vibrational motion. For a linear molecule, however, rotation around its own axis is not a true rotation because it leaves the molecule unchanged. So there are only 2 rotational degrees of freedom for any linear molecule, leaving 3N-5 degrees of freedom for vibration. The number of vibrational modes for linear molecules can be calculated using the formula: $3N-5 \label{1}$ The number for nonlinear molecules can be calculated using the formula: $3N-6 \label{2}$ $N$ is equal to the number of atoms within the molecule of interest. The following procedure should be followed when trying to calculate the number of vibrational modes:
1. Determine if the molecule is linear or nonlinear (i.e. draw out the molecule using VSEPR). If linear, use Equation \ref{1}. If nonlinear, use Equation \ref{2}.
2. Count how many atoms are in your molecule. This is your $N$ value.
3. Plug in your $N$ value and solve.

Example $1$: Carbon dioxide How many vibrational modes are there in the linear $\ce{CO_2}$ molecule? Answer There are a total of $3$ atoms in this molecule. It is a linear molecule so we use Equation \ref{1}. There are $3(3)-5 = 4 \nonumber$ vibrational modes in $\ce{CO_2}$.
Followup ($\ce{SO_2}$) Would $\ce{CO_2}$ and $\ce{SO_2}$ have a different number of vibrational degrees of freedom? Following the procedure above, it is clear that $\ce{CO_2}$ is a linear molecule while $\ce{SO_2}$ is nonlinear. $\ce{SO_2}$ contains a lone pair which causes the molecule to be bent in shape, whereas $\ce{CO_2}$ has no lone pairs. It is key to have an understanding of how the molecule is shaped. Therefore, $\ce{CO_2}$ has 4 vibrational modes and $\ce{SO_2}$ has 3. Example $2$: Methane How many vibrational modes are there in the tetrahedral $\ce{CH_4}$ molecule? Answer In this molecule, there are a total of 5 atoms. It is a nonlinear molecule so we use Equation \ref{2}. There are $3(5)-6 = 9\nonumber$ vibrational modes in $\ce{CH_4}$. Example $3$: Buckyballs How many vibrational modes are there in the nonlinear $\ce{C_{60}}$ molecule? Answer In this molecule, there are a total of 60 carbon atoms. It is a nonlinear molecule so we use Equation \ref{2}. There are $3(60)-6 = 174\nonumber$ vibrational modes in $\ce{C_{60}}$.
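The three-step procedure above collapses to a one-line function (the function name is my own); it reproduces the counts from all of the worked examples:

```python
def vibrational_modes(n_atoms, linear):
    """3N-5 vibrational modes for a linear molecule, 3N-6 otherwise."""
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(3, linear=True))    # CO2 -> 4
print(vibrational_modes(3, linear=False))   # SO2 -> 3
print(vibrational_modes(5, linear=False))   # CH4 -> 9
print(vibrational_modes(60, linear=False))  # C60 -> 174
```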
The construction of linear combinations of a basis of atomic movements allows the vibrations belonging to irreducible representations to be investigated. These symmetry-equivalent combinations are referred to as Symmetry Adapted Linear Combinations, or SALCs. SALCs are linear combinations of basis sets composed of the stretching vectors of the molecule. The SALCs of a molecule can help determine binding schemes and symmetries. The procedure used to determine the SALCs of a molecule is also used to determine the LCAO of a molecule. The LCAO, Linear Combination of Atomic Orbitals, uses a basis set of atomic orbitals instead of stretching vectors. The LCAO of a molecule provides a detailed description of the molecular orbitals, including the number of nodes and relative energy levels. Symmetry adapted linear combinations are sums over all the basis functions: $\phi_{i} =\displaystyle\sum_{j} c_{ij} b_{j} \label{1}$ $ϕ_i$ is the ith SALC function, $b_j$ is the jth basis function, and $c_{ij}$ is a coefficient which controls how much of $b_j$ appears in $ϕ_i$. The SALCs of a molecule may be constructed in two ways. The first method uses a basis set composed of the irreducible representations of the stretching modes of the molecule. The second method applies a projection operator to each stretching vector; the projection operator yields the coefficients consistent with each irreducible representation.1 When determining the irreducible representations of the stretching modes, the reducible representations for all the vibrational modes must first be determined. Basis vectors are assigned characters and are treated as individual objects.

A Background

In order to understand and construct SALCs, a background in group theory is required. The identification of the point group of the molecule is essential for understanding how the application of operations affects the molecule.
This allows for the determination of the nature of the stretching modes. As a review, let's first determine the stretching modes of water together. Water has the point group C2v. Table 1 is the character table for the C2v point group.

Table $1$: C2v Character Table

      E    C2   σv(xz)  σv'(yz)
A1    1     1     1       1      z        x2, y2, z2
A2    1     1    -1      -1      Rz       xy
B1    1    -1     1      -1      x, Ry    xz
B2    1    -1    -1       1      y, Rx    yz

The first step in determining the stretching modes of a molecule is to add the characters contained in the x, y, and z rows to obtain the total reducible representation of the xyz coordinates, ΓXYZ. ΓXYZ can also be found by applying the symmetry operations to the three vectors (x, y, and z) of the coordinate system of the molecule. The next step involves the investigation of the atoms that remain unchanged when an operation is applied, ΓUMA; this step refers to the unmoved atoms (UMA). Multiplying ΓXYZ and ΓUMA gives the reducible representation for the molecule, referred to as ΓTOTAL. ΓTOTAL is the reducible representation for all the modes of the molecule (vibrational, rotational, and translational) and can also be determined by applying the symmetry operations to each coordinate vector (x, y, and z) on each atom.

Table $2$: C2v Reducible Representation for H2O

         E    C2   σv(xz)  σv'(yz)
ΓXYZ     3    -1     1       1
ΓUMA     3     1     1       3
ΓTOTAL   9    -1     1       3

ΓTOTAL is then reduced to later give the stretching modes that are unique to the molecule. First, the reduction formula is applied to decompose the reducible representation: $a_{i}=\dfrac{1}{h}\displaystyle\sum_{R}(X^{R}X_{i}^{R}C^{R}) \label{2}$ Here, $a_i$ is the number of times the irreducible representation appears in the initial reducible representation. The order of the point group is represented by h; R is an operation of the group; XR is the character of the operation R in the reducible representation; XiR is the character of the operation R in the irreducible representation, and CR is the number of members in the class to which R belongs.
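The table arithmetic behind ΓTOTAL is simply an element-wise product of the two rows, operation by operation. A minimal sketch (variable names are my own):

```python
# Characters of Gamma_XYZ and Gamma_UMA over the C2v operations
# (E, C2, sigma_v(xz), sigma_v'(yz)), as listed in Table 2.
gamma_xyz = (3, -1, 1, 1)
gamma_uma = (3, 1, 1, 3)

# Gamma_TOTAL(R) = Gamma_XYZ(R) * Gamma_UMA(R) for each operation R
gamma_total = tuple(x * u for x, u in zip(gamma_xyz, gamma_uma))
print(gamma_total)   # (9, -1, 1, 3)
```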
Applying this formula and subtracting the representations obtained from the basis functions x, y, z, Rx, Ry, and Rz (for the translations and rotations of the molecule) gives the irreducible representations that correspond to the vibrational states of the molecule: ΓVibration = 2a1 + b2. A simple check can be performed to confirm that the right number of modes was obtained. For linear molecules, 3N-5 gives the correct number of normal modes; for any other shape, otherwise known as non-linear molecules, the formula is 3N-6. N represents the number of atoms in the molecule. Let's double check the above water example, where N = 3: $3N-6 = 3(3)-6 = 3 \label{3}$ Water should have three vibrational modes. When the irreducible representations were obtained, it was seen that water has two a1 modes and a b2 mode, for a total of three. When double checking that you have the correct number of normal modes for other molecules, remember that the irreducible representation E is doubly degenerate and counts as two normal modes, T is triply degenerate and counts for three normal modes, etc.

Constructing SALCs

Method 1

There are multiple ways of constructing the SALCs of a molecule. The first method uses the known symmetries of the stretching modes of the molecule. To investigate this method, the construction of the SALCs of water is examined. Water has three vibrational modes, 2a1 + b2. Two of these vibrations are stretching modes: one is symmetric with the symmetry A1, and the other is antisymmetric with the symmetry B2. While looking for the SALCs of a molecule, one uses vectors represented by bj as the basis set. The vectors represent the stretching motions of the molecule. The SALCs of water can be composed by creating linear combinations of the stretching vectors: $\phi(A_{1})=b_{1}+b_{2}$ and $\phi(B_{2})=b_{1}-b_{2} \label{4}$

Normalization

The final step in constructing the SALCs of water is to normalize the expressions.
To normalize the SALC, multiply the entire expression by the normalization constant, which is the inverse of the square root of the sum of the squares of the coefficients within the expression. $\phi_{i}=N\displaystyle\sum_{j}c_{ij}b_{j} \label{5}$ $N=\dfrac{1}{\sqrt{\displaystyle\sum_{j=1}^{n}c_{ij}^{2}}} \label{6}$ $\phi(A_{1})=\dfrac{1}{\sqrt{2}}(b_{1}+b_{2})$ and $\phi(B_{2})=\dfrac{1}{\sqrt{2}}(b_{1}-b_{2})$ Normalizing the SALCs ensures that the magnitude of the SALC is unity, and therefore the dot product of any SALC with itself will equal one.

Method 2

The other method for constructing SALCs is the projection operator method. The SALC of a molecule can be constructed in the same manner as the LCAO, Linear Combination of Atomic Orbitals; however, the basis set differs. While looking for the SALCs of a molecule, one uses vectors represented by bj; while looking for the LCAO of a molecule, one uses atomic orbitals as the basis set. The vectors represent the possible vibrations of the molecule. While constructing SALCs, the basis vectors can be treated as individual vectors.

Example $1$: Water

Let's take a look at how to construct the SALCs for water. The first step in constructing the SALCs is to label all vectors in the basis set. Below are the bond vectors of water that will be used as the basis set for the SALCs of the molecule. Next, the basis vector, v, is transformed by Tj, the jth symmetry operation of the molecule's point group. As each vector of the basis set is transformed, record the vector that takes its place. Water is a member of the point group C2v; the symmetry elements of the C2v point group are E, C2, σv, σv'. The ith SALC function, ϕi, is shown below using the vector v = b1. Once the transformations have been determined, the SALC can be constructed by taking the sum of the products of each character of a representation within the point group and the corresponding transformation.
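Equations \ref{5} and \ref{6} translate directly into a short normalization routine; a minimal sketch (the helper name is my own):

```python
import math

def normalize(coeffs):
    """Scale a coefficient list by N = 1/sqrt(sum of squared coefficients)."""
    n = 1.0 / math.sqrt(sum(c * c for c in coeffs))
    return [n * c for c in coeffs]

phi_a1 = normalize([1, 1])    # phi(A1) = (b1 + b2)/sqrt(2)
phi_b2 = normalize([1, -1])   # phi(B2) = (b1 - b2)/sqrt(2)

# Unit magnitude: the dot product of a normalized SALC with itself is one
print(round(sum(c * c for c in phi_a1), 10))   # 1.0
```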
The SALC functions are the collective transformations of the basis sets, represented by ϕi, where Xi(j) is the character of the ith irreducible representation for the jth symmetry operation. $\phi_{i}=\displaystyle\sum_{j}X_{i}(j)T_{j}\nu \label{7}$ Table $3$: Projection Operator method for C2v The final step in constructing the SALCs of water is to normalize the expressions. Table $4$: Normalized SALCs of H2O There are two SALCs for the water molecule, ϕ1(A1) and ϕ1(B2). This demonstrates that water has two stretching modes: one is a totally symmetric stretch with the symmetry A1, and the other is an antisymmetric stretch with the symmetry B2.

Interpreting SALCs

Both methods of construction result in the same SALCs. Only irreducible representations corresponding to the symmetries of the stretching modes of the molecule will produce a SALC that is non-zero. Method 1 only utilized the known symmetries of the vibrational modes. In Method 2, all irreducible representations of the point group were used, but the representations that were not vibrational modes resulted in SALCs equal to zero. Therefore, with the SALCs of a molecule given, all the symmetries of the stretching modes are identified. This allows for a clearer understanding of the spectroscopy of the molecule. Even though vibrational modes can be observed in both infrared and Raman spectroscopy, the SALCs of a molecule cannot identify the magnitude or frequency of a peak in the spectra. The normalized SALCs can, however, help to determine the relative magnitudes of the stretching vectors. The magnitude can be determined by the equation below. $a \cdot b= |a||b|\cos \theta \label{8}$ The resulting A1 and B2 symmetries for the above water example are each active in both Raman and IR spectroscopies, according to the C2v character table. If the vibrational mode allows for a change in the dipole moment, the mode can be observed through infrared spectroscopy.
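Equation \ref{7} can be sketched for the two O-H stretch vectors of water. Here b1 and b2 label the basis vectors (not the B1/B2 irreps); which operations keep a vector in place versus swap the pair is an assumption about water's orientation (σv' taken as the molecular plane), and the names are my own:

```python
# How each C2v operation permutes (b1, b2): the tuple gives (image of b1,
# image of b2) as indices. E and sigma_v' fix both vectors; C2 and sigma_v
# swap them (assumed geometry with the molecule lying in the sigma_v' plane).
TRANSFORM = {"E": (0, 1), "C2": (1, 0), "sv": (1, 0), "sv'": (0, 1)}
CHARS = {  # C2v character table rows over (E, C2, sigma_v, sigma_v')
    "A1": (1, 1, 1, 1), "A2": (1, 1, -1, -1),
    "B1": (1, -1, 1, -1), "B2": (1, -1, -1, 1),
}

def project(irrep):
    """Un-normalized coefficients of (b1, b2) in the SALC projected from b1."""
    coeffs = [0, 0]
    for chi, op in zip(CHARS[irrep], ("E", "C2", "sv", "sv'")):
        coeffs[TRANSFORM[op][0]] += chi   # add character to the image of b1
    return coeffs

print(project("A1"))   # [2, 2]  -> b1 + b2 after normalization
print(project("B2"))   # [2, -2] -> b1 - b2
print(project("A2"))   # [0, 0]  -> vanishes: not a stretching mode
```

Only A1 and B2 survive the projection, matching the two stretching SALCs found by Method 1.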
If the vibrational mode allows for a change in the polarizability of the molecule, the mode can be observed through Raman spectroscopy. Both stretching and bending modes are seen in the spectra; however, only stretching modes are expressed in the SALCs.

Example $2$: Difluorobenzene

The SALCs of a molecule can also provide insight into the geometry of a molecule. For example, SALCs can aid in determining the differences between para-difluorobenzene and ortho-difluorobenzene. The SALCs for these two molecules are given below. For para-difluorobenzene: $\phi (A_{g})= \dfrac{1}{2} (b_{1} + b_{2} + b_{3} +b_{4})$ $\phi (B_{1g})= \dfrac{1}{2} (b_{1} - b_{2} + b_{3} -b_{4})$ $\phi (B_{2u})= \dfrac{1}{2} (b_{1} - b_{2} - b_{3} +b_{4})$ $\phi (B_{3u})= \dfrac{1}{2} (b_{1} + b_{2} - b_{3} -b_{4})$ For ortho-difluorobenzene: $\phi _{1} (A_{1})= \dfrac{1}{2} (b_{1} + b_{2} + b_{3} +b_{4})$ $\phi _{2} (A_{1})= \dfrac{1}{2} (b_{1} - b_{2} + b_{3} -b_{4})$ $\phi _{1} (B_{1})= \dfrac{1}{2} (b_{1} - b_{2} - b_{3} +b_{4})$ $\phi _{2} (B_{1})= \dfrac{1}{2} (b_{1} + b_{2} - b_{3} -b_{4})$ From the SALCs, it is seen that the four stretching modes of para-difluorobenzene span four different symmetries, whereas those of ortho-difluorobenzene span only two. Therefore, it is no surprise that the vibrational spectrum of para-difluorobenzene shows more peaks than that of ortho-difluorobenzene.

Applications

The SALCs of a molecule can be used to understand the stretching modes and binding schemes of a molecule. More information can also be obtained by applying the projection operator used for SALCs to the atomic orbitals of the molecule. This results in the determination of the linear combination of atomic orbitals (LCAO), which gives information on the molecular orbitals of the molecule. The molecular orbitals (MOs) of a molecule are often constructed as LCAOs. Each MO is a solution to the Schrödinger equation and is an eigenfunction of the Hamiltonian operator.
The LCAOs can be determined in the same manner as the SALCs of a molecule, with the use of a projection operator. The difference is that the basis set is no longer stretching vectors, but instead the atomic orbitals of the molecule. Hydrogen only has s orbitals, but oxygen has s and p orbitals, where the px, py, and pz all transform differently and therefore must be treated separately. Once the LCAOs of the molecule have been determined, the expressions can be interpreted as images of the orbitals bonding. If two orbitals have the same sign in the expression, the electrons in the orbitals are in phase with each other and are bonding. If two orbitals have opposite signs in the expression, the electrons in the orbitals are out of phase with each other and are antibonding. The image below shows the atomic orbitals' phases (or signs) as red or blue lobes. Any separation between two antibonding atomic orbitals is a planar node. As the number of nodes increases, so does the level of antibonding. This allows the LCAO to place the molecular orbitals in order of increasing energy, which can be used in constructing the molecular orbital (MO) diagram of the molecule. The irreducible representation used to construct the LCAO is used to describe the MOs. The LCAOs for water are shown below with red dotted lines showing the nodes. Notice the nodes for the px orbitals are in a different plane than the s orbitals of the hydrogens, so these are degenerate and nonbonding. Information from the LCAO of water can also be used to analyze and anticipate the adsorption of water onto various surfaces. Evarestov and Bandura used this technique to identify the water adsorption on Y-doped BaZrO3 and TiO2 (rutile), respectively.2,3 Applying a combination of Methods 1 and 2, the SALCs for CBr2H2 can be determined. The point group of this molecule is C2v, making the procedure similar to the determination of the SALCs for water.
However, the central carbon contains more than one type of attached atom; therefore, the stretching analysis must be performed in pieces. First, the C-H stretches are examined, followed by the C-Br stretches:

Table 5: Irreducible Representations for C-H and C-Br stretches in CBr2H2.

        E    C2   σv   σv'    Irreducible Representation
ΓC-H    2    0    0    2      ΓC-H = A1 + B2
ΓC-Br   2    0    2    0      ΓC-Br = A1 + B1

Applying the projection operator method to the C-Br and C-H stretches individually, the SALCs are obtained in the same fashion as before.

Table 6: SALCs for the C-Br stretches of CBr2H2.

ΓC-Br       E    C2    σv    σv'    SUM
A1 Tj(b1)   b1   b2    b1    b2     2(b1 + b2)
B1 Tj(b1)   b1   -b2   b1    -b2    2(b1 - b2)

Table 7: SALCs for the C-H stretches of CBr2H2.

ΓC-H        E    C2    σv    σv'    SUM
A1 Tj(a1)   a1   a2    a2    a1     2(a1 + a2)
B2 Tj(a1)   a1   -a2   -a2   a1     2(a1 - a2)

The results are normalized and the following SALCs are obtained for the C2v molecule CBr2H2: $\phi CBr(A_{1})=\frac{1}{\sqrt{2}}(b_{1}+b_{2})$ $\phi CBr(B_{1}) =\frac{1}{\sqrt{2}} (b_{1}-b_{2})$ $\phi CH(A_{1}) =\frac{1}{\sqrt{2}} (a_{1}+a_{2})$ $\phi CH(B_{2}) =\frac{1}{\sqrt{2}} (a_{1}-a_{2})$

To obtain the SALCs for PtCl4, the same general method is applied. However, even though the point group of the molecule is D4h, the cyclic subgroup C4 may be used (a simplified character table that is often sufficient for highly symmetric molecules). Some manipulation is required in order to use this cyclic subgroup, as discussed below. Below is the C4 cyclic character table.

Table $8$: C4 cyclic character table.

     E    C41   C42   C43
A    1     1     1     1
B    1    -1     1    -1
E1   1     i    -1    -i
E2   1    -i    -1     i

Notice there are two rows for E, each singly degenerate. To solve for the characters of E, one must take the sum and difference of the two rows.
Then, a reduction can be applied to obtain the simplest possible characters by dividing each row by a common factor (removing the common factor is not strictly necessary, but it simplifies the problem and removes any imaginary terms):

Sum = [(1+1), (i-i), (-1-1), (-i+i)] = (2, 0, -2, 0) ÷ 2 → E1: (1, 0, -1, 0)
Difference = [(1-1), (i+i), (-1+1), (-i-i)] = (0, 2i, 0, -2i) ÷ 2i → E2: (0, 1, 0, -1)

Using the above cyclic group and the newly obtained characters for E, the projection operator can be applied using Method 2 for the construction of SALCs.

Table $9$: SALCs for PtCl4 using Method 2.

C4          E    C41   C42   C43   SUM
A Tj(b1)    b1   b2    b3    b4    b1 + b2 + b3 + b4
B Tj(b1)    b1   -b2   b3    -b4   b1 - b2 + b3 - b4
E1 Tj(b1)   b1   0     -b3   0     b1 - b3
E2 Tj(b1)   0    b2    0     -b4   b2 - b4

Normalizing the sums as mentioned in Method 1, the following SALCs are obtained for the D4h molecule PtCl4: $\phi (A) =\frac{1}{2} (b_{1}+b_{2}+b_{3}+b_{4})$ $\phi (B)=\frac{1}{2} (b_{1}-b_{2}+b_{3}-b_{4})$ $\phi (E^{1})=\frac{1}{\sqrt{2}} (b_{1}-b_{3})$ $\phi (E^{2})=\frac{1}{\sqrt{2}} (b_{2}-b_{4})$

Applying a combination of Methods 1 and 2, the SALCs for the P-H stretches of PF2H3 can be determined. The point group of this molecule is Cs. The central phosphorus contains more than one type of attached hydrogen; therefore, the stretching analysis must be performed in pieces. First, the P-HA stretches are examined, followed by the P-HB stretch:

Table $10$: Irreducible Representations for P-Ha and P-Hb stretches in PF2H3.

        E    σh    Irreducible Representation
ΓP-Ha   2    0     ΓP-Ha = A' + A"
ΓP-Hb   1    1     ΓP-Hb = A'

Applying the projection operator method to the P-HA and P-HB stretches individually, the SALCs are obtained in the same fashion as before.

Table $11$: SALCs for PF2H3 using Method 2.

ΓP-HB       E    σh    SUM
A' Tj(b1)   b1   b2    b1 + b2
A" Tj(b1)   b1   -b2   b1 - b2
A' Tj(a1)   a1   a1    2a1

Table $12$: SALCs for PF2H3 using Method 2.
ΓP-HA       E    C2    σv    σv'    SUM
A1 Tj(a1)   a1   a2    a2    a1     2(a1 + a2)
B2 Tj(a1)   a1   -a2   -a2   a1     2(a1 - a2)

The results are normalized and the following SALCs are obtained for the Cs molecule PF2H3:

$\phi_{1} (A')=\frac{1}{\sqrt{3}} (b_{1} +b_{2} +a_{1})$
$\phi_{2} (A')=\frac{1}{\sqrt{3}} (b_{1} +b_{2} -a_{1})$
$\phi (A'')=\frac{1}{\sqrt{2}} (b_{1} -b_{2})$

Problems

1. Construct the SALCs for C-H stretches of ortho-difluorobenzene.
2. Construct the SALCs for ammonia.
3. Draw the nodes for the MOs of BeH2 (determined by use of LCAOs) and rank the MOs in order of increasing energy.

Solutions to Practice Problems

1. $A_1 T_j(b_1)= \frac{1}{\sqrt{2}} (b_{1} +b_{2})$, $B_2 T_j(b_1)= \frac{1}{\sqrt{2}} (b_{1} -b_{2})$, $A_1 T_j(b_3)= \frac{1}{\sqrt{2}} (b_{3} +b_{4})$, $B_2 T_j(b_3)= \frac{1}{\sqrt{2}} (b_{3} -b_{4})$. Then add and subtract (for in-phase and out-of-phase combinations) the individual linear combinations found by the projection operator to give the SALCs: $\phi_1(A_1) = \frac{1}{2} (b_{1} +b_{2} +b_{3} +b_{4})$, $\phi_2(A_1) = \frac{1}{2} (b_{1} +b_{2} -b_{3} -b_{4})$, $\phi_1(B_2) = \frac{1}{2} (b_{1} -b_{2} +b_{3} -b_{4})$, $\phi_2(B_2) = \frac{1}{2} (b_{1} -b_{2} -b_{3} +b_{4})$.
2. $A T_j(b_1)= \frac{1}{\sqrt{3}} (b_{1} +b_{2} +b_{3})$, $E^1 T_j(b_1)= \frac{1}{\sqrt{6}} (2b_{1} -b_{2} -b_{3})$, $E^2 T_j(b_1)= \frac{1}{\sqrt{2}} (b_{2} -b_{3})$
3.
X-ray Spectroscopy is a broadly used method to investigate atomic local structure as well as electronic states. Very generally, an X-ray strikes an atom and excites a core electron that can either be promoted to an unoccupied level, or ejected from the atom; both of these processes will create a core hole. • EXAFS - Theory • X-Rays Like light, X-rays are electromagnetic radiation with very short wavelengths. Thus, X-ray photons have high energy, and they penetrate opaque material, but are absorbed by materials containing heavy elements. • XANES X-ray Absorption Near Edge Structure (XANES), also known as Near edge X-ray Absorption Fine Structure (NEXAFS), is loosely defined as the analysis of the spectra obtained in X-ray absorption spectroscopy experiments. It is an element-specific and local bonding-sensitive spectroscopic analysis that determines the partial density of the empty states of a molecule. • XANES: Application XANES, short for X-ray Absorption Near-Edge Structure, is a subset of X-ray Absorption Spectroscopy. The absorption edge corresponding to the liberation of a core electron from an element will exhibit several identifiable features which change depending on the chemical environment of the element being probed. The study and modelling of the characteristics of near-edge features helps answer questions about the oxidation state, coordination, and spin state of the probed element. • XAS - Theory XAS, or X-ray Absorption Spectroscopy, is a broadly used method to investigate atomic local structure as well as electronic states. Very generally, an X-ray strikes an atom and excites a core electron that can either be promoted to an unoccupied level, or ejected from the atom. Both of these processes will create a core hole. If the electron dissociates, this produces an excited ion as well as photoelectron and is studied by X-ray Photoelectron Spectroscopy (XPS). 
X-ray Spectroscopy EXAFS (Extended X-ray Absorption Fine Structure) and XANES (X-ray Absorption Near Edge Structure) are regions of the spectrum obtained from XAS (X-ray Absorption Spectroscopy). EXAFS corresponds to the oscillating part of the spectrum to the right of the absorption edge (appearing as a sudden, sharp peak), starting at roughly 50 eV and extending to about 1000 eV above the edge (shown in Figure 1). Through mathematical analysis of this region, one can obtain local structural information for the atom in question.

Introduction

The ability of EXAFS to provide information about an atom's local environment has widespread application, particularly to the geometric analysis of amorphous and crystalline solids. Over time, EXAFS has become more applicable to quantitative analysis of noncrystalline materials. When analyzing a single atom within a material, the properties analyzed include the coordination number, the disorder of neighboring atoms, and the distance of neighboring atoms. Of these three properties, radial distance is the only one reliably measured. Theoretically obtained structural information becomes more accurate the further into the EXAFS region one analyzes, to ~0.02 Angstroms or better. Experimentally it has also been shown that application of the EXAFS technique is most accurate for systems of low thermal or static disorder.

Qualitative understanding of the EXAFS region

X-ray spectroscopy involves a process in which an X-ray beam is applied to an atom and causes the ejection of an electron, usually a core electron. This leaves a vacancy in the core shell and results in the relaxation of an outer electron to fill that vacancy. This phenomenon is only observed when the energy of the X-ray exceeds the ionization energy of the electrons in that shell. We can relate this occurrence to the X-ray absorption coefficient, which becomes the basis of EXAFS theory.
The X-ray absorption coefficient, μ, describes the relationship between the intensity of an X-ray beam entering a sample and its intensity after traveling a distance x within the sample. The absorption coefficient is given by

$\mu = -\frac{d\ln I}{dx}$

In this expression, dx is the traversed distance and I is the intensity. In a typical EXAFS spectrum, various sharp peaks are observed when energy (usually in eV) is plotted against absorbance. These peaks (called edges), which vary by atom, correspond to the ionization of a core orbital: K-edges describe the excitation of the innermost 1s electron, while L-edges and M-edges refer to the same process for higher-energy shells. A qualitative diagram of these energies, as well as their dependence on atomic number, is shown in Figure 2 below. After each edge, a series of decaying oscillations is observed. These oscillations correspond to wave interactions between the ejected photoelectron and the electrons surrounding the absorbing atom. The neighboring atoms are called backscattering atoms, since the waves emitted from the absorbing atom change paths when they hit these neighboring atoms and return to the original atom. Maxima in the oscillations result from constructive interference between these waves, and minima result from destructive interference. These oscillations are also characteristic of the surrounding atoms and their distances from the central atom, since varying distances result in different backscattering paths and, as a result, different wave interactions. The EXAFS fine structure begins roughly 30 eV past each edge, where the oscillations begin to decay. In addition, this wave interaction depends on the mechanism of scattering, since the path taken by a wave sometimes involves collision with an intermediate atom, or even multiple atoms, before it returns to the absorbing atom.
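Integrating the definition above gives the familiar exponential attenuation law $I = I_0 e^{-\mu x}$, so μ can be recovered from measured intensities. A minimal sketch (the intensities and thickness below are hypothetical values, not data from the text):

```python
import math

def absorption_coefficient(i_in, i_out, x_cm):
    """Linear absorption coefficient mu (cm^-1) from the integrated form
    of mu = -d(ln I)/dx, i.e. I = i_in * exp(-mu * x)."""
    return math.log(i_in / i_out) / x_cm

# Hypothetical example: the beam intensity halves over 0.5 cm of sample.
mu = absorption_coefficient(1.0, 0.5, 0.5)
print(round(mu, 3))  # ln(2)/0.5 ≈ 1.386 cm^-1
```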
Extraction of Data using the EXAFS Equation

To isolate the EXAFS region from the whole spectrum, a function χ can be roughly defined in terms of the absorption coefficient, as a function of energy:

$\chi (E)=\frac{\mu (E)-\mu _{0}(E)}{\Delta \mu _{0}}$

Here, the subtracted term $\mu_0(E)$ represents removal of the background, and the division by $\Delta\mu_0$ represents normalization of the function, where the normalization factor is approximated by the sudden increase in the absorption coefficient at the edge. When interpreting EXAFS data, it is general practice to use the photoelectron wave vector, k, an independent variable that is proportional to momentum rather than energy. We can solve for k by first assuming that the photon energy E is greater than E0 (the X-ray absorption energy at the edge). Since energy is conserved, the excess energy E - E0 is converted into the kinetic energy of the photoelectron, which propagates as a de Broglie wave with velocity ν. This gives the relation $(E - E_0) = \tfrac{1}{2}m_e\nu^2$. The de Broglie wavelength is inversely proportional to the photoelectron's momentum $m_e\nu$: $\lambda = h/m_e\nu$. Using simple algebraic manipulation, we obtain the following:

$k = \dfrac{2\pi}{\lambda} = \dfrac{2\pi m_{e}\nu}{h} = \left[\dfrac{8\pi^2m_{e}(E-E_0)}{h^2}\right]^{\frac{1}{2}}$

To amplify the oscillations graphically, χ(k) is commonly weighted by k³ when plotted. Now that we have an expression for k, we begin to develop the EXAFS equation by using Fermi's Golden Rule.
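The energy-to-wave-vector conversion can be evaluated numerically. The sketch below computes the boxed expression for k in SI units and converts to the customary inverse angstroms (the 9000 eV edge energy is a hypothetical example):

```python
import math

M_E = 9.1093837e-31   # electron mass (kg)
H = 6.62607015e-34    # Planck constant (J s)
EV = 1.602176634e-19  # one electronvolt (J)

def wave_vector(e_ev, e0_ev):
    """k = 2*pi*sqrt(2*m_e*(E - E0))/h, returned in inverse angstroms."""
    delta_j = (e_ev - e0_ev) * EV
    k_per_m = 2.0 * math.pi * math.sqrt(2.0 * M_E * delta_j) / H
    return k_per_m * 1e-10  # m^-1 -> Angstrom^-1

# 100 eV above a hypothetical 9000 eV edge:
k = wave_vector(9100.0, 9000.0)
print(round(k, 2))  # ≈ 5.12 Å^-1
```

A useful rule of thumb follows: 100 eV above the edge already corresponds to about 5 Å⁻¹, which is why EXAFS spectra span hundreds of eV.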
This rule states that the absorption coefficient is proportional to the square of the transition moment integral, $|\langle i|H|f\rangle|^2$, where i is the initial state with the core electron in its unperturbed level, H is the interaction, and f is the final state in which a core hole has been created and a photoelectron has been ejected. This integral can be related to the total wavefunction Ψk, which represents the sum of all interacting waves from the backscattering atoms and the absorbing atom. The integral is proportional to the square of the total wavefunction, which gives the probability that the photoelectron is found at the atom where the photon is absorbed, as a function of radius. This wavefunction describes the constructive/destructive nature of the wave interactions within it, and varies depending on the phase difference of the interacting waves. Since the photoelectron travels a round-trip distance of 2R, where R is the distance from the absorber to the scattering atom, this phase difference can be expressed in terms of the photoelectron wave number as 2kR. Another characteristic of this wave interaction is the amplitude of the waves backscattered to the central atom. This amplitude can provide the coordination number as well, since it is thought to be directly proportional to the number of scatterers. The physical description of all these properties is given in a final function for χ(k), called the EXAFS equation:

$\chi (k)=\sum_{j}\frac{N_{j}f_{j}(k)\exp[-2k^{2}\sigma _{j}^{2}]\exp[-2R_{j}/\lambda ]}{kR_{j}^{2}}\sin[2kR_{j}+\delta _{j}(k)]$

Looking at the qualitative relationships between the contributions to this equation shows how certain factors can be extracted from it. In this equation, f(k) represents the amplitude and δ(k) the phase shift. Since these two parameters can be calculated for a given k value, R, N, and σ are the remaining unknowns.
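The equation can be evaluated directly for a single shell of scatterers, and transforming the k³-weighted signal to distance space recovers the shell radius. In the sketch below the amplitude f and phase δ are held constant for simplicity (in practice they are k-dependent and tabulated or computed ab initio), and all parameter values are hypothetical:

```python
import numpy as np

def chi_single_shell(k, N, R, sigma2, lam, f=1.0, delta=0.0):
    """One-shell EXAFS contribution:
    chi(k) = N f exp(-2 k^2 sigma^2) exp(-2R/lambda) sin(2kR + delta) / (k R^2)"""
    return (N * f * np.exp(-2.0 * k**2 * sigma2) * np.exp(-2.0 * R / lam)
            * np.sin(2.0 * k * R + delta) / (k * R**2))

# Hypothetical shell: 6 neighbors at 2.0 Å, modest disorder.
R_true = 2.0
k = np.linspace(2.0, 14.0, 2048)  # Å^-1, a typical EXAFS fitting range
chi = chi_single_shell(k, N=6, R=R_true, sigma2=0.005, lam=8.0)

# Fourier transform of the k^3-weighted signal; since the phase is 2kR,
# the FFT frequency axis f maps to distance as R = pi * f.
weighted = k**3 * chi
dk = k[1] - k[0]
n_pad = 1 << 16  # zero-pad for a fine R grid
spectrum = np.abs(np.fft.rfft(weighted, n=n_pad))
R_axis = np.fft.rfftfreq(n_pad, d=dk) * np.pi
R_peak = R_axis[np.argmax(spectrum)]
print(round(R_peak, 2))  # peaks near R_true = 2.0 Å
```

The oscillation period in k is π/R, so more distant shells oscillate faster, and the two exponential factors damp the signal at high k and large R.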
These unknown values represent the information we can obtain from this equation: the radius, the coordination number (number of scattering atoms), and the measure of disorder in the neighboring atoms, respectively. These are all properties of the scattering atoms. We can also see the terms of this equation represented in a typical EXAFS spectrum: the sine term gives the origin of the sinusoidal shape, with the oscillation frequency set by the radial distance $R_j$ and offset by the phase shift $\delta_j(k)$. The Debye-Waller factor $\exp[-2k^{2}\sigma_j^{2}]$ explains the decay of the oscillations with increasing energy as well as with increasing disorder; this factor is partially due to thermal effects. We can also deduce why EXAFS only works over short distances (up to 4-5 Å): the $R_j^{-2}$ term, together with the exponential mean-free-path factor $\exp[-2R_j/\lambda]$, causes each contribution to fall off rapidly at large $R_j$ (larger distances between absorbing and scattering atoms), making EXAFS much weaker for distant atoms than for short-range neighbors. The last step in the data extraction is to take a Fourier transform of this expression from k-space into R-space, which results in a radial distribution function whose peaks correspond to the most likely distances of the nearest neighbors.

Multiple Scattering

The cases thus far have only dealt with single scattering pathways. Figure 4 shows a scenario in which two scattering atoms (denoted by s) are present with one photoelectron wave source. In this case, what would happen? There is a clear dependence on the distances and angles of the scattering atoms. Thus, it is no longer a simple matter of 1-D distance relations, but rather a 2-D polar coordinate system. In the case of Cu-Cu (shown by Kroneck et al.), the copper is not only the absorbing atom but also a scattering atom. Given this situation, what would happen to the amplitude, phase, and frequency with respect to the two coppers, and with respect to their neighboring atoms?
The effect of multiple scattering is a powerful tool, which is used to determine a variety of information on the local structure. Each individual atom will produce further scattered or absorbed waves depending on the type of atom and the type of wave that hits it. The angle at which the wave scatters back and its distance from the scattering atom also affect the overall intensity of the EXAFS spectrum. In scenario (a), shown in Figure 4, the absorbing atom is hit by an X-ray and produces an outgoing wave that scatters off S1 and hits S2, which then sends a scattered wave back to (a); this allows us to obtain further information on the local structure of the molecule. However, this only works if the absorbing atom is within 4-5 Å of the scattering atoms. Otherwise, the wave becomes very damped and the information retrieved becomes unreliable. In short, multiple scattering effects can be observed, but for the most part they affect the EXAFS spectrum only a small (but noticeable) amount. When an absorbing wave passes through another absorbing atom, as mentioned in Kroneck et al., it produces a damped absorbing wave (due to destructive interference) and gives a lower signal than normal. Kroneck et al. tested this on two types of systems: the first was a single copper scattering onto a neighboring atom; the second was two coppers scattering onto a scattering atom (all lined up linearly). When both radial distances were collected, the second test showed only a small difference (0.02 Å) compared to the first, suggesting experimentally that multiple scattering waves do have an effect on the determination of a molecular system.

Problems

Consider the crystal lattice of zinc sulfide, whose unit cell is pictured below. Zn = purple, S = blue.

1. An X-ray beam is applied to a sample of ZnS and excites a core sulfur electron. Sketch the raw spectrum you might expect to see, and indicate where the EXAFS/XANES regions may occur.
Be sure to indicate where the K-edge is, and include units.

2. For the phenomenon described, explain what happens to the energy associated with the ejection of the photoelectron.

3. Upon completing data extraction and transforming the final EXAFS equation into frequency space, describe what you would expect the radial distribution to look like for ZnS.

4. How would the amplitude of the oscillations for a Zn connected to a single sulfur compare to the amplitude of the oscillations for a Zn in ZnS?

5. When a photoelectron is emitted, spherical waves are produced. Redraw this unit cell and sketch two possible paths these waves may take, taking into account nearest-neighbor atoms. Label the absorbing atom, backscattering atom, and intermediate atom, if applicable, in each diagram.

Answers

1. Sulfur's K-edge occurs at about 2.472 keV (this can be found using online sources). If energy is plotted against the absorption coefficient, we would expect to see a constant flat line, then a sudden, sharp increase at roughly 2.472 keV. After this peak the spectrum would begin to decay, with a few slight peaks (XANES), then steady downward oscillations (EXAFS).

2. When the core sulfur electron is excited, the energy of the photon from the X-ray source exceeds the binding energy of the electron. The photon's energy is absorbed: part of it overcomes the binding energy, and the excess becomes the kinetic energy of the ejected photoelectron wave.

3. We would expect a series of peaks, with four being prominent. These correspond to the four radial distances most likely to be occupied around the sulfur atom. Since each sulfur atom's coordination number in this unit cell is four, we would expect to see four main peaks, though in practice other peaks would probably interfere.

4. The amplitude of backscattering is believed to be proportional to the coordination number.
Therefore, we would expect the amplitude of the oscillations to be about four times as high for this structure as for a single Zn-S bond.

5.

X-Rays

Learning Objectives

• Explain X-rays.
• Interpret the symbols used in the Bragg equation.

Like light, X-rays are electromagnetic radiation with very short wavelengths. Thus, X-ray photons have high energy, and they penetrate opaque material but are absorbed by materials containing heavy elements.

X-ray Diffraction

When light passes through a series of equally spaced pinholes, it gives rise to a pattern due to wave interference, and such a phenomenon is known as diffraction. X-rays have wavelengths comparable to the interatomic distances in crystals, and interference patterns develop when a beam of X-rays passes through a crystal or a sample of crystal powder. This phenomenon is known as the diffraction of X-rays by crystals. More theory is given in Introduction to X-ray Diffraction. X-ray diffraction, discovered by von Laue in 1912, is a well-established technique for material analysis. This link is the home page of Lambda Research, which provides various services using X-ray diffraction. For example:

• Residual Stress Measurement
• Qualitative Phase Analysis
• Quantitative Phase Analysis
• Precise Lattice Parameter Determination

In 1913, the father-and-son team of W.H. Bragg and W.L. Bragg gave the equation for the interpretation of X-ray diffraction, known as the Bragg equation:

$2\, d\, \sin \theta = n\, \lambda$

where d is the distance between crystallographic planes, $\theta$ is half the angle of diffraction, n is an integer, and $\lambda$ is the wavelength of the X-ray. A set of planes gives several diffracted beams; each is known as the nth order.

Example 1

The X-ray wavelength from a copper X-ray source is 154.2 pm. If the inter-planar distance in $\ce{NaCl}$ is 282 pm, what is the angle $\theta$?
Solution

\begin{align*} \sin \theta &= \dfrac{\lambda}{2 d}\\ &= \dfrac{154.2}{2\times282}\\ &= 0.2734 \end{align*}

$\theta = 15.9^\circ$

Example 2

An X-ray of unknown wavelength is used. If the inter-planar distance in $\ce{NaCl}$ is 282 pm, and the angle $\theta$ is found to be 7.23°, what is $\lambda$?

Solution

\begin{align*} \lambda &= 2\, d\, \sin\theta\\ &= 2\times282\times\sin(7.23^\circ)\\ &= \mathrm{71\: pm} \end{align*}

Example 3

An X-ray of wavelength 71 pm is used. If the inter-planar distance in $\ce{KI}$ is 353 pm, what is the angle $\theta$ for the second-order diffracted beam?

Solution

For the second-order beam, n = 2:

\begin{align*} \sin \theta &= \dfrac{n\lambda}{2 d}\\ &= \dfrac{2\times71}{2\times353}\\ &= 0.201\\ \theta &= 11.6^\circ \end{align*}

These examples illustrate some applications of X-ray diffraction to the study of solids.

Exercise $1$

If the wavelength is 150 pm and the inter-planar distance d is 300 pm, what is the angle $\theta$ in the Bragg equation for n = 2?

Answer

30 degrees
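The algebra in these worked examples can be collected into a small helper. This is a sketch of our own (the function name is not from the text), solving the Bragg equation for θ:

```python
import math

def bragg_angle(wavelength_pm, d_pm, n=1):
    """Solve 2 d sin(theta) = n lambda for theta, in degrees."""
    s = n * wavelength_pm / (2.0 * d_pm)
    if not 0.0 < s <= 1.0:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Cu X-rays (154.2 pm) on NaCl planes (d = 282 pm), first order:
theta1 = bragg_angle(154.2, 282)
# 71 pm X-rays on KI planes (d = 353 pm), second-order beam:
theta3 = bragg_angle(71, 353, n=2)
print(round(theta1, 1), round(theta3, 1))  # 15.9 11.6
```

The guard clause reflects a physical limit of the equation: diffraction is only possible when nλ ≤ 2d, which is why short X-ray wavelengths are needed to probe atomic-scale spacings.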
XANES, short for X-ray Absorption Near-Edge Structure, is a subset of X-ray Absorption Spectroscopy. The absorption edge corresponding to the liberation of a core electron from an element will exhibit several identifiable features which change depending on the chemical environment of the element being probed. The study and modelling of the characteristics of near-edge features helps answer questions about the oxidation state, coordination, and spin state of the probed element.

Introduction

X-ray Absorption Near-Edge Structure (XANES), though less developed and practiced than Extended X-ray Absorption Fine Structure (EXAFS), may provide valuable information about the oxidation state, coordination environment, and bonding characteristics of specific elements in a sample. The less common term Near-Edge X-ray Absorption Fine Structure (NEXAFS) is used generally in the context of solid-state studies and is synonymous with XANES. In practice, the technique requires a mix of qualitative and quantitative analysis to interpret the data and draw conclusions. The mechanisms and terminology in XANES are a subset of X-ray Absorption Spectroscopy (XAS) and will only be summarized here. The absorption edges of a material are the element-specific sudden increases in the absorption coefficient due to the promotion of a core-level electron to unoccupied orbitals or an unbound state. The core electron may be a member of the s, p, d, or higher-order orbitals, provided it is not a valence (outer-shell) electron. The edge nomenclature refers to the shell of the electron that is excited rather than the energy range of the exciting photon: edges resulting from the excitation of an n=1 (1s) electron are termed "K-edges", n=2 "L-edges", n=3 "M-edges", and so on. The figure below labels these shells and the corresponding allowed bound-state absorption levels that are seen as pre- and on-edge features in XANES. Dotted lines correspond to observed emission lines.
When an orbital at the top of a dotted line is not full, the dotted line also indicates an absorption from a lower shell (a bound-state transition). For example, the K-edge of Fe has an on-edge bound-state feature corresponding to the 1s -> 4p absorption, which would be the dotted line in the figure between K1 and N2,3. These transitions follow the dipole selection rule $\Delta l = \pm 1$. XANES is the study of the features immediately before and after the edge, within approximately 1% on either side of the main edge energy. Features include the edge position (a primary indicator of oxidation state), the presence and shape of small features just before the main edge ("pre-edge" features), and the intensity, number, position, and shape of peaks at the top of the main edge. The Fe K-edge shown below from Carpenter (2010) labels some of these features. The pre-edge feature is weak as it involves a forbidden transition (which is partially allowed due to mixing of ligand p-character), while the first line on the main edge is due to the allowed 1s to 4p bound-state transition. In some compounds this transition is much stronger than the rest of the edge (see the sulfur K-edge in the next figure) and was historically called the "white line" due to its saturated appearance on the photographic film used to record the spectrum. The power of XANES lies in the sensitivity of edge features to the chemical environment. The sensitivity varies among elements, from just detectable to pronounced. George et al. provide an excellent illustration of the range of the sulfur K-edge among various organic compounds. [3]

Instrumentation

Unlike visible, infrared, or microwave spectroscopy, no single off-the-shelf commercial instrument exists for XAS. The equipment must be assembled from multiple components to suit the needs of the experiment and may involve a significant amount of custom engineering. Laboratory X-ray sources were used before the advent of synchrotron radiation sources, but today they are rarely considered.
The typical XAS experiment at a synchrotron may be broken down into two general parts:

1. The "beam line": the set of components (bend magnet or undulator/wiggler, mirrors, monochromator(s), slits, diagnostics) that produce and deliver a controllable high-intensity monochromatic X-ray beam.
2. The "end station": the set of instruments specific to the type of measurement, including sample handling (gas cells, fluid cells, cryostats, positioning stages), detectors, diagnostics and support systems, and a radiation-protection enclosure.

XANES measurements use essentially the same equipment and setup as EXAFS, though as the energy range covered is much smaller, the emphasis may be on high resolution over wide-range capability. A typical beam line schematic is shown below.

Measurement Methods

Experimental considerations for the different measurement techniques:

                        Transmission   Electron Yield   Fluorescence Yield
  Sample Thickness      Thin           Thick/Any        Thick/Any
  Background            High           Moderate         Low
  Sensitivity           Bulk           Surface          Bulk
  Sample Concentration  High           High             Low

Transmission

As the name implies, this method involves passing X-rays through the sample and comparing the incident to the transmitted intensities. The sample thickness must be considered before measurement, as absorption coefficients vary greatly across the X-ray energy range and among materials. For too-thick samples the beam may be totally attenuated at the edge, while too-thin samples result in a poor signal-to-noise ratio. For ultra-soft measurements the sample may need to be less than 1 micron thick, while high-Z K-edges may need many centimeters of sample. Reference databases such as the LBNL Center for X-ray Optics database give attenuation data and calculation tools to help estimate the proper thickness, which will depend on the concentration of the measured species and the attenuation of the other species present.
Measurement of the intensity may be made with gas ionization chambers, photodiodes, PN junctions, metal grids, or from scattering off optics or windows. The incident beam (I0) must be only partially sampled, typically by an ion chamber or grid, while the transmitted beam (I1) may be blocked and absorbed completely. A helpful calibration technique is to place a reference standard and detector after the I1 detector, so that the beam path looks as in the following diagram and the reference spectrum is measured simultaneously with the sample. Transmission measurements may be performed on any type of sample (gas, liquid, solid) provided the thickness and density are controllable. For dilute samples the signal-to-noise ratio is typically poor.

Electron Yield

Absorption of X-rays results in the emission of electrons from the sample in proportion to the absorption coefficient, from both photoelectrons and Auger electrons. Photoelectrons are those ejected from core orbitals; they have a kinetic energy equal to the difference between the X-ray energy and their binding energy. Auger electrons are emitted as part of the relaxation process as a higher-orbital electron fills the hole left behind by the photoelectron. The Auger electron energy is characteristic of the element and core level being filled and is analogous to a fluorescence photon. Collecting all produced electrons is known as Total Electron Yield (TEY) and is measured with electron multipliers or as electrical current through a lead (in vacuum), or via gas ionization collected by a grid close to the sample surface (non-vacuum). Alternately, an electron energy analyzer may be used to discriminate between photoelectrons and Auger electrons, which helps reject background from other species at the expense of throughput; this method is Partial Electron Yield (PEY). Electron yield is only sensitive to the first few tens of nanometers of the sample surface due to electron scattering, and only works for solid samples.
This may be an advantage for thin films or monolayers on a substrate where other methods would suffer high background issues. Auger emission and fluorescence are competing processes, and the ratio between the two changes depending on the element. Auger yield predominates for light elements like C and N, but gradually decreases in favor of fluorescence as Z increases; the crossover where fluorescence exceeds Auger yield is at Z=30.[27]

Fluorescence Yield

As with electron yield, X-ray fluorescence is proportional to the absorption coefficient, as outer electrons emit photons when filling the core holes left by absorption. Fluorescence yield may be measured in total or partial modes, the latter requiring an energy-discriminating detector. For partial yield, solid-state detectors such as germanium and drifted-silicon detectors are often used due to their high efficiency and moderate energy resolution (enough to separate emission from different species). Dispersive spectrometers with gratings or Bragg crystals may also be used when higher resolution is needed to separate emission lines that are close together. The detector is typically oriented perpendicular to the incident beam polarization to suppress the elastic scattering peak and improve the signal-to-noise ratio. Total yield may be measured with any X-ray-sensitive detector not immediately in the incident or transmitted beam. Fluorescence yield is especially well suited for dilute samples due to its selectivity and bulk sensitivity. However, when used for concentrated samples, a phenomenon called "self-absorption" can lower the apparent absorption coefficient at high levels of absorption. This is partially due to the non-negligible reduction in penetration depth as the absorptivity increases (directly impacting the coefficient via Beer's law), and partially to re-absorption of the fluorescence photons by the same species before they can leave the sample.
Hard X-ray Instrumentation

Where "hard" X-rays begin varies widely, typically between 2 and 10 keV photon energy. They are the X-rays that are high enough in energy to penetrate significant distances through materials. This penetrating capacity is a blessing and a curse: many materials may be used as windows, the sample can be in a variety of environments, and measurements are less sensitive to thickness tolerances; however, optics must take this penetration into account, and complete shielding is needed to protect the user from radiation. The K-edges of first-row transition metals and the L-edges of rare-earth elements fall in this range. Solid samples may be contained in a metal or plastic holder with low-Z window materials. Liquids and gases may require custom cells; sample alignment is usually automated with motorized stages. The end station is shielded from the user by a protective hutch. Radiation-sensitive samples may be cooled to low temperatures (liquid nitrogen or helium temperature) in a cryostat modified with X-ray windows. The X-ray beam often passes from the high-vacuum beamline through a beryllium window and travels moderate distances through air or an exchange gas such as helium before impinging on the sample. Transmission and fluorescence yields are the primary measurement methods.

Soft X-ray Instrumentation

"Soft" X-rays are those readily absorbed by most materials and by air, with very short attenuation lengths, roughly in the range of ~100 eV to 2-5 keV. The entire experiment (sample, detector, diagnostics) must be contained in vacuum, up to ultra-high vacuum (UHV). Window materials are few and must be thin. Sample containment must take the vacuum and window materials into account. Most often, the sample is inserted directly into vacuum as a solid (as a powder, crystal, or amorphous solid), or dried onto a substrate. Gas cells are also used for appropriate samples and for reaction/catalysis experiments.
Electron and fluorescence yields predominate in soft X-rays.

Data Handling and Analysis

The treatment of data depends on the complexity of the compound and the nature of the problem being solved. Very complicated molecules which are difficult to simulate with software may be compared to simpler model compounds to determine coordination and electronic structure. Recent advances in theoretical calculations and software allow general users to fit the spectrum based on ionization state, coordination group, and various molecular parameters such as crystal-field splitting and degree of orbital hybridization. The post-edge region dominated by multiple scattering may provide structural information directly, similar to EXAFS, though by a different approach: while single-scattering information in EXAFS is derived from the Fourier transform of the post-edge oscillations, multiple-scattering analysis involves fully modeling the quantum-mechanical spherical-wave scattering from local neighbors via Green's functions.[16] Other software packages utilizing Density Functional Theory (DFT) or Charge-Transfer Multiplet (CTM) theory are used to derive the manifold of atomic or molecular orbital energies from bound-state transitions.[11] Figure 1 from Metzler et al.[9] illustrates the assignment of components from simulation which may be correlated with the observed spectrum. The assignments of the features by Metzler et al. are as follows: "peak 1 at 285 eV corresponds to the C 1s → π* transition in C=C double bonds; peak 2 at 287 eV is the C-H C 1s → σ*, mostly due to PLL side chains; peaks 3 and 4 at ~288 eV are both associated with C=O C 1s → π*; peak 5 at ~290 eV corresponds to the C 1s → π* transition in C=O double bonds in carbonates." They include a dotted feature for the absorption edge due to ionization (core electron → unbound continuum states). Investigation of multiple edges of the same element may also offer insight, as different features are emphasized at different edges.
The lower-energy edges are usually more highly resolved than the K-edge due to longer core-hole lifetimes (by the Heisenberg uncertainty principle, longer-lived states are narrower in energy) and allow the ionization state to be determined more accurately. A comparison of the K, L, and M-edges of molybdenum is shown below, in which the analogous feature (bound-state transitions to d orbitals) is lined up at 0 eV in each spectrum.

Applications

Biomolecules

The understanding of large molecules such as proteins can benefit greatly from XANES. Active-site clusters often contain one or several metal ions embedded in the bulk protein. The presence of large amounts of organic backbone may interfere with UV-Vis spectroscopy of the active site, while XANES, being element-specific, allows single atoms to be measured even in large proteins. Preparation of the enzyme in resting and active states before the measurement, and careful handling during the measurement, may elucidate the oxidation and coordination state. Depending on the substrate, the bound state may be measured from both the metal and ligand XAS. A literature example of the sort of information which can be derived from XANES is the paper by Ralston et al. [20] determining the spin state of Ni in the active site of CO-dehydrogenase (CODH). Through a careful study of multiple model compounds of known oxidation states and spin configurations, from Ni(I) to low- and high-spin Ni(III) up to Ni(IV), a relationship is derived between the position of the L3 edge and the ratio of the integrals of the L3 and L2 edges. A large ratio between the two edges indicates a high-spin complex while a lower ratio indicates low-spin, with the turning point occurring at a ratio of 0.71 L3:L2 intensity. This scheme is applied to CODH prepared as a film in its resting state, CO-bound state, and dithionite-reduced state to determine the Ni oxidation and spin states.
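The ratio criterion described above can be sketched numerically: integrate each edge over an energy window and compare the L3:L2 ratio to the 0.71 turning point. The spectrum, peak positions, and windows below are synthetic placeholders, not data from the Ralston et al. paper:

```python
import numpy as np

def l3_l2_spin_state(energy, intensity, l3_window, l2_window, turning=0.71):
    """Integrate the L3 and L2 regions (trapezoid rule) and classify the
    spin state by the ratio criterion described in the text."""
    def area(window):
        lo, hi = window
        m = (energy >= lo) & (energy <= hi)
        x, y = energy[m], intensity[m]
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    ratio = area(l3_window) / area(l2_window)
    return ratio, ("high-spin" if ratio > turning else "low-spin")

# Synthetic spectrum: two Gaussian edge features, L3 stronger than L2.
E = np.linspace(845.0, 885.0, 2000)
I = np.exp(-((E - 853.0) / 1.5) ** 2) + 0.8 * np.exp(-((E - 871.0) / 1.5) ** 2)
ratio, spin = l3_l2_spin_state(E, I, (848.0, 858.0), (866.0, 876.0))
print(spin)  # ratio = 1.25 > 0.71, so "high-spin"
```

In real data the integration windows and any background subtraction must be chosen consistently across the model compounds and the unknown for the ratio to be comparable.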
Solid-State Physics

Edge features corresponding to bound-state transitions will have the symmetries of the immediate coordination environment (that surrounding the probed element only). For solid-state samples in single-crystal form, this may be combined with the fact that synchrotron radiation is typically linearly polarized in the plane of the storage ring to probe specific orbitals of the element. Electronic structure and band-gap measurements are therefore possible. [21,25]

Materials Science

Materials scientists often care about the behavior of materials under extreme conditions. High-pressure diamond cells combined with powerful lasers allow the probing of materials in the regions of 5000 K and hundreds of GPa.[24] Small X-ray beam spot sizes on the order of microns allow so-called "micro-XANES" measurements to probe changes along pressure gradients of materials in these cells. Temperature-dependent XANES measurements can capture phase transitions in materials and give insight into structural changes.[22] Development of novel materials such as high-Tc superconductors and advanced scintillators can also benefit from XANES.[23] The low concentration of dopants in scintillators is perfectly suited to fluorescence-yield XAS to determine their concentration and ionization state. The plot below depicts the M4 and M5 edges of europium doped at 1%, measured by partial fluorescence yield.

Catalysis

Specially designed cells combined with XANES may provide insight into reaction dynamics and changes in electronic configuration and oxidation state. Performing the reaction under different temperatures and conditions gives insight into the dynamics. With the development of fast, continuous-scan beamlines capable of scanning an entire edge in less than a minute, time-resolved studies are now possible as well. Leoferti et al. followed the Cu oxidation state during the oxychlorination of ethylene catalyzed by copper chloride,[26] which was built upon later by Lamberti et
al with time-resolved XANES with 30-second resolution.[13] Further experiments are possible with the development of reaction cells which allow multiple simultaneous measurements, such as XANES, UV-Vis, and electrochemistry.[14]

Surface science

The shallow penetration depth of soft X-rays combined with the probing depth of electron-yield measurement allows for very sensitive studies on films as thin as a monolayer.[15] When deposited on a single-crystal surface under controlled conditions, the monolayer species can be made to bond with a particular orientation to that surface, allowing for polarization-dependent studies. As the X-rays emitted by bend magnets and standard wigglers or undulators are linearly polarized in the plane of the ring, changing the orientation of the crystal with respect to the incident beam can suppress or enhance features in the absorption edge. The monolayer bonding of the sample to the substrate may also alter the character of the edge independent of the polarization,[17] which opens up further possibilities for understanding the system.

Sample diagnostics

Experiments which benefit from synchrotron radiation, such as X-ray crystallography, Scanning Transmission X-ray Microscopy (STXM), and X-ray Photoemission Electron Microscopy (XPEEM), typically need large doses of radiation to be effective. For radiation-sensitive samples the dose may be an important consideration for the validity of the data: in crystallography, the high-energy electrons liberated from atoms after ionization may distort the very structure one is trying to measure, while STXM and PEEM may damage or destroy the large biostructures often probed with these methods. XANES, combined with an appropriate model for the absorption of radiation and its effects on the edge features, may be used both to set limits and parameters for the experiment[18,19] and to monitor the sample condition as the experiment progresses.

References

1.
Frank de Groot and Akio Kotani, "Core Level Spectroscopy of Solids", CRC Press, Boca Raton, FL (2008). 2. Joachim Stöhr, "NEXAFS Spectroscopy", Second Printing, Springer-Verlag, Heidelberg, Germany (2003). 3. Graham N. George, Martin L. Gorbaty, "Sulfur K-edge x-ray absorption spectroscopy of petroleum asphaltenes and model compounds"; J. Am. Chem. Soc. 111 (9), pp 3182–3186 (1988). 4. C. R. Natoli, D. K. Misemer, S. Doniach, and F. W. Kutzler, "First-principles calculation of x-ray absorption-edge structure in molecular clusters"; Phys. Rev. A 22, 1104–1108 (1980). 5. Owen B. Drury, "Development of High Resolution X-Ray Spectrometers for the Investigation of Bioinorganic Chemistry in Metalloproteins"; Ph.D. Thesis, University of California, Davis (2007). 6. H. Oyanagi, Z. H. Sun, Y. Jiang, M. Uehara, H. Nakamura, K. Yamashita, L. Zhang, C. Lee, A. Fukanoa, and H. Maeda, "In situ XAFS experiments using a microfluidic cell: application to initial growth of CdSe nanocrystals"; J. Synchrotron Radiation 18, 272–279 (2011). 7. Simon J. George, Owen B. Drury, Juxia Fu, Stephan Friedrich, Christian J. Doonan, Graham N. George, Jonathan M. White, Charles G. Young, Stephen P. Cramer, "Molybdenum X-ray absorption edges from 200 to 20,000 eV: The benefits of soft X-ray spectroscopy for chemical speciation"; J. Inorg. Biochem. 103, 157–167 (2009). 8. Matthew H. Carpenter, "Helium Atmosphere Chamber for Soft X-ray Spectroscopy of Biomolecules", MS Thesis, University of California, Davis (2010). 9. Rebecca A. Metzler, Ronke M. Olabisi, Mike Abrecht, Daniel Ariosa, Christopher J. Johnson, Benjamin Gilbert, Bradley H. Frazer, Susan N. Coppersmith, and P.U.P.A Gilbert, "XANES in Nanobiology"; Proceedings of X-ray Absorption Fine Structure (XAFS 13), American Institute of Physics, 51 (2007). 10. S. Della Longa, A. Arcovito, M. Girasole, J. L. Hazemann, and M.
Benfatto, "Quantitative Analysis of X-Ray Absorption Near Edge Structure Data by a Full Multiple Scattering Procedure: The Fe-CO Geometry in Photolyzed Carbonmonoxy-Myoglobin Single Crystal"; Phys. Rev. Lett. 87, 15, 155501 (2001). 11. E. Stavitski and F.M.F. de Groot, "The CTM4XAS program for EELS and XAS spectral shape analysis of transition metal L edges"; Micron 41, 687 (2010). 12. Takashi Fujikawa, "Basic Features of the Short-Range-Order Multiple Scattering XANES Theory"; J. Phys. Soc. Japan 62, 6, pp. 2115 (1993). 13. Carlo Lamberti, Carmelo Prestipino, Francesca Bonino, Luciana Capello, Silvia Bordiga, Giuseppe Spoto, Adriano Zecchina, Sofia Diaz Moreno, Barbara Cremaschi, Marco Garilli, Andrea Marsella, Diego Carmello, Sandro Vidotto, and Giuseppe Leofanti, "The Chemistry of the Oxychlorination Catalyst: an In Situ, Time-Resolved XANES Study"; Angew. Chem. Int. Ed. 41, No. 13, pp 2341 (2002). 14. Walkiria S. Schlindwein, Aristea Kavvada, Roger G. Linford, Roger J. Latham and J. Günter Grossmann, "Combined XAS/SAXS/Electrochemical studies on the conformation of poly(vinylferrocene) under redox conditions"; Ionics 8, 1-2, 85-91 (2002). 15. A. Bianconi, "Surface X-ray absorption spectroscopy: Surface EXAFS and surface XANES"; Applications of Surface Science 6, 3-4, 392-418 (1980). 16. M. Benfatto, C. R. Natoli, A. Bianconi, J. Garcia, A. Marcelli, M. Fanfoni, and I. Davoli, "Multiple-scattering regime and higher-order correlations in x-ray-absorption spectra of liquid solutions"; Phys. Rev. B 34, 5774–5781 (1986). 17. E. E. Doomes, P. N. Floriano, R. W. Tittsworth, R. L. McCarley, and E. D. Poliakoff, "Anomalous XANES Spectra of Octadecanethiol Adsorbed on Ag(111)"; J. Phys. Chem. B 107 (37), 10193–10197 (2003). 18. J. Wang, C. Morin, L. Li, A.P. Hitchcock, A. Scholl, A. Doran, "Radiation damage in soft X-ray microscopy"; Journal of Electron Spectroscopy and Related Phenomena 170, 25–36 (2009). 19. James W.
Murray, Enrique Rudiño-Piñera, Robin Leslie Owen, Martin Grininger, Raimond B. G. Ravelli, and Elspeth F. Garman, "Parameters affecting the X-ray dose absorbed by macromolecular crystals"; J. Synchrotron Rad. 12, 268-275 (2005). 20. C. Y. Ralston, Hongxin Wang, S. W. Ragsdale, M. Kumar, N. J. Spangler, P. W. Ludden, W. Gu, R. M. Jones, D. S. Patil, and S. P. Cramer, "Characterization of Heterogeneous Nickel Sites in CO Dehydrogenases from Clostridium thermoaceticum and Rhodospirillum rubrum by Nickel L-Edge X-ray Spectroscopy"; J. Am. Chem. Soc. 122, 10553-10560 (2000). 21. O. Seifarth, J. Dabrowski, P. Zaumseil, S. Müller, D. Schmeißer, H.-J. Müssig, and T. Schroeder, "On the band gaps and electronic structure of thin single crystalline praseodymium oxide layers on Si(111)"; J. Vac. Sci. Technol. B 27, 271 (2009). 22. B. Ravel and E.A. Stern, "Temperature and Polarization Dependent XANES Measurements on Single Crystal PbTiO3"; J. Phys. IV France 7, 1223 (1997). 23. Stephan Friedrich, Owen B. Drury, Shaopang Yuan, Piotr Szupryczynski, Merry A. Spurrier, and Charles L. Melcher, "A 36-Pixel Tunnel Junction Soft X-Ray Spectrometer for Scintillator Material Science"; IEEE Transactions on Applied Superconductivity 17, 2, 351 (2007). 24. Giuliana Aquilanti, Sakura Pascarelli, Olivier Mathon, Manuel Muñoz, Olga Narygina, and Leonid Dubrovinsky, "Development of micro-XANES mapping in the diamond anvil cell"; J. Synchrotron Rad. 16, 376–379 (2009). 25. V. L. Mazalova and A. V. Soldatov, "Geometrical and Electronic Structure of Small Copper Nanoclusters: XANES and DFT Analysis"; Journal of Structural Chemistry 49, Supplement, S107-S115 (2008). 26. G. Leofanti, A. Marsella, B. Cremaschi, M. Garilli, A. Zecchina, G. Spoto, S. Bordiga, P. Fisicaro, C. Prestipino, F. Villain, and C. Lamberti, "Alumina-Supported Copper Chloride: 4. Effect of Exposure to O2 and HCl"; Journal of Catalysis 205, 375–381 (2002). 27.
Krause, M.O., and Oliver, J.H., "Natural Widths of Atomic K and L Levels, Kα X-Ray Lines and Several KLL Auger Lines"; Journal of Physical and Chemical Reference Data 8(2), 329-337 (1979).

Problems

1. As the oxidation state of an atom increases, which direction does the absorption edge shift (higher or lower in energy)?
2. Name 2 characteristics which may be determined by XANES.
3. What measurement method is most appropriate for measuring the Cu K-edge of the protein Stellacyanin? What would be most appropriate for the Ni L-edge of NiO?
4. List the energies of the K- and L-edges of Si, Fe, and Zn, and the K-edges of C, N, P and S.
5. Why are pre-edge features corresponding to forbidden transitions sometimes observed on metal K-edges?

Solutions

1. The edge shifts higher in energy. The lesser shielding of the nucleus by the surrounding electrons increases its effective charge Zeff; the more tightly bound electrons require more energy to liberate.
2. Possible answers: oxidation state, valence (number of ligands), coordination geometry, near-neighbor distances (through multiple scattering).
3. Stellacyanin is a 20 kDa protein with 1 Cu, putting it in the "dilute" regime; it should be measured by partial fluorescence yield. NiO is stoichiometrically 1/2 Ni; the concentration, the high fluorescence yield of Ni, and the fact that the Ni L-edge lies in the soft X-ray region mean it should be measured by total or partial electron yield.
4. Si: K = 1849 eV; L1 = 149.7 eV; L2 = 99.8 eV; L3 = 99.4 eV. Fe: K = 7112 eV; L1 = 844.6 eV; L2 = 719.9 eV; L3 = 706.8 eV. Zn: K = 9659 eV; L1 = 1196.2 eV; L2 = 1044.9 eV; L3 = 1021.8 eV. C: K = 284.2 eV; N: K = 409.9 eV; P: K = 2145.5 eV; S: K = 2472 eV.
5. The 1s → 3d transition is quantum mechanically forbidden; however, in the view of molecular orbital theory, the metal 3d orbitals mix with ligand 2p or 3p orbitals and gain some "p orbital character", weakly allowing the transition.
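The edge energies tabulated in solution 4 can be collected into a small lookup structure, together with the soft X-ray cutoff that drives the detection-mode reasoning in solution 3. The dictionary layout and helper function below are illustrative choices, not part of the original text; the numbers are exactly those quoted above.

```python
# Edge energies in eV, taken from solution 4 above.
EDGE_ENERGIES_EV = {
    "Si": {"K": 1849.0, "L1": 149.7, "L2": 99.8, "L3": 99.4},
    "Fe": {"K": 7112.0, "L1": 844.6, "L2": 719.9, "L3": 706.8},
    "Zn": {"K": 9659.0, "L1": 1196.2, "L2": 1044.9, "L3": 1021.8},
    "C":  {"K": 284.2},
    "N":  {"K": 409.9},
    "P":  {"K": 2145.5},
    "S":  {"K": 2472.0},
}

def is_soft_xray(energy_ev):
    # "Soft" X-rays are conventionally those below ~3 keV; edges in this
    # range (like the Si L-edges) favor electron-yield detection.
    return energy_ev < 3000.0
```

A quick check shows, for example, that all three Si L-edges fall in the soft X-ray region while the Zn K-edge does not.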
X-ray Absorption Near Edge Structure (XANES), also known as Near Edge X-ray Absorption Fine Structure (NEXAFS), is loosely defined as the analysis of the spectra obtained in X-ray absorption spectroscopy experiments. It is an element-specific, local-bonding-sensitive spectroscopic analysis that determines the partial density of the empty states of a molecule.

Introduction

X-rays are ionizing electromagnetic radiation with sufficient energy to excite a core electron of an atom either to an empty state below the ionization threshold, called an excitonic state, or to the continuum above the ionization threshold. Different core electrons have distinct binding energies; consequently, if one plots the X-ray absorbance of a specific element as a function of energy [1], the resulting spectrum will appear similar to Figure $1$.

Theory of XANES

Core hole

As stated above, a core hole is the vacancy a core electron leaves behind after it absorbs an X-ray photon and is ejected from its core shell. Core holes are extremely energetic, which makes them unstable; the average lifespan of a core hole is close to 1 femtosecond. A core hole is created through processes in which a core electron either absorbs an X-ray photon (X-ray absorption) or absorbs part of an X-ray photon's kinetic energy (X-ray Raman scattering). The subsequent process is the decay of the core hole, which can take place either through Auger electron ejection or X-ray fluorescence.

Absorption edge

As the energy of the X-ray radiation is scanned through the binding-energy regime of a core shell, a sudden increase in absorption appears; this corresponds to absorption of the X-ray photon by a specific type of core electron (e.g., the 1s electrons of Cu). This gives rise to a so-called absorption edge in the XAS spectrum, named for its vertical appearance. The names of the absorption edges are given according to the principal quantum number, n, of the excited electrons (Table $1$).
Table $1$: Absorption edges.
K edge: 1s
L edge: 2s, 2p
M edge: 3s, 3p, 3d
N edge: 4s, 4p, 4d, 4f

The energies of the absorption edges in X-ray absorption spectra reveal the identity of the corresponding absorbing elements. However, more useful information can be obtained by a closer examination of a given absorption edge (Figure $1$). As illustrated by Figure $2$, the absorption edge is often much more complex than the simple abrupt increase in absorption shown in Figure $1$. There are weak transitions below the absorption edge, namely pre-edge structures, as well as significant absorption features in the immediate neighborhood of the absorption edge and well above the edge. The structure found in the immediate neighborhood of the absorption edge, conventionally within 50 eV of it, is referred to as X-ray Absorption Near Edge Structure (XANES). Beyond XANES, the oscillatory structure caused by interference between the outgoing and back-scattered photoelectron waves is referred to as Extended X-ray Absorption Fine Structure (EXAFS), which can extend to 1000 eV or more above the absorption edge [3].

Dipole selection rules

XANES directly probes the angular momentum of the unoccupied electronic states: these states can be bound (excitonic) or unbound (continuum), discrete or broad, atomic or molecular. The dipole selection rules for determining allowed transitions are $\Delta l = \pm 1$, $\Delta j = 0, \pm 1$, $\Delta s = 0$. Commonly observed allowed transitions are tabulated in Table $2$.

Table $2$: Spin- and orbitally-allowed transitions.
Initial state → Final state
s → p
p → s, d
d → p, f
f → d, g

White line

In certain XANES spectra, the rising absorption edge may lead to a sharp, intense peak referred to as a "white line".
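The $\Delta l = \pm 1$ dipole rule behind Table 2 can be encoded as a quick check. The orbital-label dictionary and function below are an illustrative sketch invented for this example, not a standard API.

```python
# Orbital angular momentum quantum number l for each subshell label.
L_OF = {"s": 0, "p": 1, "d": 2, "f": 3, "g": 4}

def dipole_allowed(initial, final):
    # Electric-dipole selection rule on orbital angular momentum: delta-l = +/-1.
    return abs(L_OF[final] - L_OF[initial]) == 1
```

Checking each row of Table 2 (s → p; p → s, d; d → p, f; f → d, g) against this rule confirms that the tabulated transitions are exactly the $\Delta l = \pm 1$ pairs.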
The reason is that in the past, X-ray absorption spectra were recorded on photographic plates, and the strong absorption of certain wavelengths left an unexposed band on the plate which, after being developed as a negative, appeared as a white vertical stripe, hence the term "white line".

X-ray absorption measurements

X-ray fluorescence

As a core electron absorbs an X-ray photon and is ejected from its core shell, the absence of the electron left in the core shell leads to a core-hole state. Core-hole states are highly excited and can relax mainly in two ways: Auger electron emission or X-ray fluorescence. For higher-energy excitation (e.g., for the K edges of elements with atomic numbers greater than 40), X-ray fluorescence is the primary relaxation process. The scheme of X-ray fluorescence is illustrated in Figure $3$. The intensity of X-ray fluorescence is described by Equation \ref{1}.

$A=\left( \dfrac{I_F}{I_0} \right) \label{1}$

The intensity of X-ray fluorescence is directly proportional to the X-ray absorption cross-section of the sample. In practice, however, as a beam of X-rays shines on a sample, a variety of X-rays are emitted: fluorescent X-rays from the sample as well as background X-rays from scattering. In order to improve the sensitivity, energy-resolving solid-state fluorescence detectors [2] are used to distinguish background radiation from the signal of interest.

Transmittance of X-ray flux

As X-rays are transmitted through a sample, they become attenuated. The intensity ratio of the incoming X-rays to the outgoing X-rays is given by the exponential of the absorption coefficient times the thickness. The spectrum will show a sudden decrease in transmittance as the scanning X-ray beam meets the absorption edge. The obtained transmittance spectra are usually converted to absorption spectra afterward. X-ray absorption is described by Equation \ref{2}.
$A=\ln \left( \dfrac{I_{0}}{I_1} \right) \label{2}$

This method is limited to moderately concentrated samples (e.g., greater than 500 ppm). For certain samples or solvents, the incoming X-ray photons may be almost completely absorbed, leaving the detector little signal to detect. For example, dichloromethane (CH2Cl2) is nearly opaque to low-energy X-rays, which complicates data interpretation if dichloromethane is a large component of the sample or is used as the solvent.

Interpretation of XANES

Oxidation state sensitivity

As the oxidation state of the absorption site increases, the absorption edge energy increases correspondingly. This observation can be explained using an electrostatic model: an atom with a higher oxidation state requires more energetic X-rays to excite its core electrons because the nucleus is less shielded and carries a higher effective charge. However, an alternative interpretation of edge energies is more suitable. This interpretation treats the edge features as continuum resonances. A continuum resonance refers to a short-lived excitation process in which a core electron is excited into a higher-energy state, usually above the continuum threshold, for example within the potential well created by the absorbing and scattering (nearest-neighbor) atoms. As the absorber–scatterer distance gets shorter, the energy of the continuum state increases as $1/R^2$. Since a higher oxidation state implies shorter bond lengths in molecules, the edge energy increases as the oxidation state increases. As stated above, XANES is oxidation sensitive. Moreover, multiple scattering is particularly important in the XANES region. In principle, one can argue that it is possible to determine the three-dimensional arrangement of the environment around the absorbing atom from analysis of the XANES features; experimentally, this has been proven true. The XANES region is quite sensitive to small structural variations.
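The two detection modes described above can be sketched numerically. The function names and the handling of the 500 ppm cutoff are illustrative choices made for this example: Equation 2 converts measured fluxes to absorbance, and the concentration check reflects the guidance that transmission suits moderately concentrated samples while dilute samples call for fluorescence detection.

```python
import math

def absorbance_from_transmission(i_0, i_1):
    # Equation 2: A = ln(I0 / I1), with I0 the incoming and I1 the
    # transmitted X-ray intensity.
    return math.log(i_0 / i_1)

def preferred_mode(concentration_ppm):
    # Transmission is limited to moderately concentrated samples
    # (the text suggests > ~500 ppm); dilute samples are better
    # measured in fluorescence.
    return "transmission" if concentration_ppm > 500 else "fluorescence"
```

For instance, a sample that attenuates the beam by a factor of $e^2$ gives an absorbance of 2, and a 50 ppm metalloprotein solution would be steered toward fluorescence detection.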
For instance, two sites with identical EXAFS spectra can nevertheless have distinct XANES spectra. Such sensitivity is, at least in part, due to the fact that geometrical differences between sites alter the multiple-scattering pathways, and thus the detailed structure in the immediate vicinity of the absorption edge.

Bound state transitions

Weak pre-edge structures usually result from bound-state transitions. The pre-edge structures prior to the K edges of first-row transition metals arise from the 1s to 3d transition. These pre-edge structures are observed for every first-row transition metal as long as its 3d orbitals are not fully occupied. Although the 1s to 3d transition is forbidden by the dipole selection rules, it is nevertheless observed due to 3d-4p orbital mixing as well as direct quadrupolar coupling. The intensity of the 1s to 3d transition increases as the 3d-4p mixing increases, which occurs as the geometry of the absorption site distorts away from a centrosymmetric geometry; this trend can therefore be utilized as a tool to probe the molecular geometry of the absorption site.

Example $1$

Examining the X-ray absorption spectrum of a first-row transition metal closely, one notices that the "L edge" contains three edges, namely the L1, L2 and L3 edges, in order of decreasing energy. The L3 edge intensity is twice that of the L2 and L1 edges. Rationalize this observation.

Solution

The so-called L1 edge corresponds to the excitation of a 2s electron, which requires more energy than a 2p electron. The 2p electron excitation is split into two edges, L2 and L3: as a 2p electron is excited, an open-shell 2p5 electronic configuration forms, and spin–orbit coupling occurs in such a system. A 2p5 excited state corresponds to two terms: 2P1/2, which has higher energy and gives rise to the L2 edge, and 2P3/2, which has lower energy and gives rise to the L3 edge.
Due to degeneracy, the L3 edge has twice the edge jump of the L2 and L1 edges.

XANES

XANES, X-ray absorption near edge structure, is sometimes also called NEXAFS, near edge X-ray absorption fine structure. It is defined as a partial analysis of X-ray absorption: the range of XANES lies between the threshold and the point where EXAFS begins. It can be used to determine the oxidation state and coordination number of the metal center in a complex, as well as the covalency and site symmetry of the molecule.

Introduction

XAS, X-ray absorption spectroscopy, is a widely used technique for determining local atomic structure. A crystalline monochromator is used to select radiation with the energy needed to excite a core electron. There are two main regions in a XAS spectrum, XANES and EXAFS. The difference between XANES and EXAFS is loosely defined in terms of which region of the spectrum each represents; the underlying physical principles of the two are the same, which is one limitation of this definition, because there is no clear physical principle that distinguishes them. In terms of energy, XANES is usually taken to be the range up to about 50 eV above the edge, while EXAFS lies between roughly 50 and 1000 eV above the edge. Based on this, Bianconi also suggested that when a core electron is excited in the XANES regime, the wavelength of the photoelectron is comparable to the distance between the absorbing atom and the closest atom/ligand.

Theory

During scanning, since the purpose is to excite a core electron, there is no absorption until the energy of the radiation matches the ionization energy of the core electron. A hole is then left in the inner shell, and an electron from a higher energy state falls into this vacancy, which causes a release of energy.
This released energy can either be emitted as a fluorescence photon, or it can further excite and eject an electron from a higher energy state; this ejected electron is called an Auger electron. Unlike traditional photoemission spectroscopy, in which the photoelectron is measured directly, XANES measures the intensity of the light transmitted through the sample. The effects of scattering from Auger electrons, photoelectrons, and even emitted photons are all included. In order to understand how XANES works within XAS and to extract useful information from it, it is important to learn how scattering occurs in X-ray absorption.

Absorption edge

The XANES portion of a spectrum includes three parts: the pre-edge, which is strongly affected by the bond lengths of the molecule because of the exponential decay of the wavefunctions; the edge, which is the big jump at 0 eV that indicates the ionization energy of the core electron; and the XANES region itself, from 0 to 50 eV above the threshold. As Figure 2 shows, the binding-energy regime of a core shell is scanned by radiation whose frequency corresponds to a certain energy. There is no absorption until the energy of the radiation matches the ionization energy of a core electron, at which point there is a large jump in the spectrum. What needs to be noted here is that there is no selection rule for this transition, because the photon not only excites the core electron but also gives it kinetic energy, forming a photoelectron that is ejected and leaves the atom. The most common edges, shown in Figure 1, are the K-edge (1s), L-edge (2s and 2p), and M-edge (3s, 3p and 3d). These edges correspond to the principal quantum numbers n=1 (K-edge), n=2 (L-edge) and n=3 (M-edge), which indicate the order of the electron shells. K, L, and M are all X-ray notation.
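The 2:1 L3:L2 intensity ratio worked out in Example 1 above follows from simple level degeneracies: a term with total angular momentum j contains 2j + 1 states. A minimal sketch:

```python
def degeneracy(j):
    # Number of m_j sublevels for total angular momentum j: 2j + 1.
    return int(2 * j + 1)

# 2P3/2 core-hole term (L3 edge) vs 2P1/2 (L2 edge):
l3_states = degeneracy(3 / 2)  # 4 states
l2_states = degeneracy(1 / 2)  # 2 states
```

The 4:2 count of final states reproduces the factor-of-two edge jump of L3 relative to L2.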
Scattering

From another perspective, X-ray absorption spectroscopy can be used to determine the coordination number of a metal center and the structure of a molecule based on how the scattering of photoelectrons or Auger electrons affects the intensity of the radiation that passes through the sample. When the energy of the photon is high, in EXAFS, the continuum states lie high in energy and the surrounding atoms have only a very weak effect on the system, because the photoelectron ejected from the core orbitals is only weakly scattered by the atoms farther from the absorbing atom. The electronic properties of interest in the absorbing atom are therefore centered on the low-lying extended states. The theoretical analysis of the electronic structure amounts to solving the Schrödinger equation. Based on scattering theory, one can say that at high energy, in the EXAFS region, where the scattering of the photoelectron or Auger electron is very weak, the main contribution to the final-state wavefunction comes from paths in which the excited electron is scattered only once. This process is referred to as single scattering. Since the excited electron scattered from a neighboring atom must be scattered back toward the absorbing atom, the scattering in EXAFS can be treated geometrically, and the information is much easier to extract. In the XANES region, where the energy is low, the neighboring atoms are not only close to the central atom but also relatively close to each other, and the excited electron tends to bounce between the neighboring atoms before returning to the absorbing atom. This is called multiple scattering in the XANES regime, in contrast to the single scattering of the EXAFS regime.
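The single- versus multiple-scattering distinction can be caricatured with a scalar toy model (all quantities here are invented toy numbers, not a real scattering calculation): take a scattering strength t and a propagation factor g, and sum paths with one bounce, two bounces, and so on. The resulting series t + t(gt) + t(gt)² + ... is geometric, so truncating after the first term (single scattering, the EXAFS picture) is adequate when gt is small, while the full sum (multiple scattering, the XANES picture) is needed when gt is large.

```python
def scattering_sum(t, g, n_paths):
    # Sum the first n_paths terms of the toy path expansion
    # t + t*(g*t) + t*(g*t)**2 + ..., one term per scattering path.
    total = 0.0
    term = t
    for _ in range(n_paths):
        total += term
        term *= g * t  # each extra bounce contributes another factor g*t
    return total

# Closed form of the full (infinite) multiple-scattering sum: t / (1 - g*t)
```

For weak scattering the first one or two paths already capture nearly the whole sum; for strong scattering many paths contribute, mirroring the multiple-scattering character of the near-edge region.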
The details of the spatial arrangement between the absorbing atom and its neighbors, such as radial distances, bond angles, and orientations relative to one another, are largely encoded in the multiple scattering. The multiple-scattering expansion is shown below:

$\tau^{ij}_{LL'}=t^{i}_{l}\delta^{ij}\delta_{LL'}+t^{i}_{l}G^{ij}_{LL'}t^{j}_{l'}(1-\delta^{ij})+\sum_{k,L''}t^{i}_{l}G^{ik}_{LL''}t^{k}_{l''}G^{kj}_{L''L'}t^{j}_{l'}+\cdots$

Here L stands for the pair of angular momentum quantum numbers, $t^{i}_{l}$ is the l-wave t-matrix of atom i, given in terms of the phase shift $\delta^{i}_{l}$ by

$t^{i}_{l}(\epsilon)=\dfrac{i}{2k}\left[e^{2i\delta^{i}_{l}(\epsilon)}-1\right]$

and $G^{ij}_{LL'}$ is a real-space structure constant. Each term in the expansion represents an individual scattering path, and the XANES cross-section is expressed as the sum over these individual paths. The scattering of excited electrons in the XANES regime is very sensitive: a change in charge distribution changes the chemical environment of the absorbing atom, and the absorption edge shifts in the XANES regime because the core-level energies change. Every molecule has its own distinct charge distribution, and this is how one is able to deduce structure and other spatial information from XANES. However, many physical effects are involved in the near-edge region, which may cause problems when trying to extract specific information from it. Figure 5.
Illustration of how different oxidation states give different X-ray absorption spectra in the XANES regime, due to the different chemical environments (figure from Wikipedia).

One-electron approximation

When there is more than one electron in a system, for example in a hydrogen molecule, the Hamiltonian including the interelectron term is

$H=-\frac{\hbar^{2}}{2m}\nabla^{2}_{1}-\frac{\hbar^{2}}{2m}\nabla^{2}_{2}-\frac{e^{2}}{4\pi\epsilon_{0}r_{a1}}-\frac{e^{2}}{4\pi\epsilon_{0}r_{a2}}-\frac{e^{2}}{4\pi\epsilon_{0}r_{b1}}-\frac{e^{2}}{4\pi\epsilon_{0}r_{b2}}+\frac{e^{2}}{4\pi\epsilon_{0}R}+\frac{e^{2}}{4\pi\epsilon_{0}r_{12}}$

The last six terms, which express the potential energies of the four particles (two protons and two electrons) in the system, make it impossible to solve the Schrödinger equation exactly. An approximation is therefore made: instead of considering all of these potential energies explicitly, a one-electron equation is written in which the potential energy is taken as the average of all the interactions,

$f_{i}\phi_{i}=\epsilon_{i}\phi_{i}$

where $f_{i}$ is a one-electron operator and $\phi_{i}$ is a one-electron wavefunction. This one-electron approximation, in which the wavefunction of many particles is represented as a product of single-particle wavefunctions, is employed in XANES theory. Even though density functional theory with the local density approximation (LDA), which includes the exchange-correlation effect, can be used to work out the electronic structure, it applies only to the ground state. For excited states a quasiparticle description is needed, and one-electron states give a fairly good approximation to the quasiparticle states. One caveat about the one-electron eigenvalues in density functional theory is that, rather than being quasiparticle energies themselves, they are auxiliary quantities whose central role is to yield the correct total energy.
From this, one knows that the excitation energy of an LDA one-electron state cannot be determined exactly, which, in XANES, means that the LDA core eigenvalue cannot accurately give the core electron's binding energy. However, the main spectral features of interest in the XANES regime lie in the energy region above the threshold, rather than at the threshold energy itself, so calculating X-ray absorption spectra using LDA one-electron eigenstates can give a rather good approximation.

Advantages and disadvantages

The most significant advantage of XANES, a form of XAS, is that the technique is element-specific. The signals in a XANES spectrum, like fingerprints, represent only specific elements. Metal ions in molecules may be "silent" in some spectroscopic techniques, such as EPR; XAS spectra, however, can be taken of samples containing the metal of interest, and they provide detailed information about the oxidation state of the metal atoms. The sensitivity to multiple scattering makes it possible to extract three-dimensional structural information from XANES spectra. Even though XANES can be used to determine local structure and many characteristics of atoms in a molecule, no technique is perfect, and XANES has its drawbacks. X-radiation can be destructive, damaging the sample during the measurement process. In XAS spectra, especially in the XANES regime, the oxidation state and the identity of the scattering atoms can be difficult to determine because of the effects of multiple scattering by photoelectrons and Auger electrons.

• Shuai Wang
XAS, or X-ray Absorption Spectroscopy, is a broadly used method to investigate atomic local structure as well as electronic states. Very generally, an X-ray strikes an atom and excites a core electron that can either be promoted to an unoccupied level or ejected from the atom. Both of these processes create a core hole. If the electron dissociates, this produces an excited ion as well as a photoelectron, and is studied by X-ray Photoelectron Spectroscopy (XPS).

Introduction

The electrons that are excited are typically from the 1s or 2p shell, so the energies are on the order of thousands of electron volts. XAS therefore requires high-energy X-ray excitation, which is available at synchrotron facilities. X-ray energies are about $10^4$ eV (where "soft X-rays" are between 100 eV and 3 keV and "hard X-rays" are above 3 keV), corresponding to wavelengths around 1 Angstrom. This wavelength is on the same order of magnitude as atom-atom separations in molecular structures, so XAS is a useful tool for deducing the local structure of atoms. XAS is also utilized in analyzing materials based on their characteristic X-ray absorption "fingerprints." It is possible to deduce the local atomic environment of each separate type of atom in a compound. XAS is particularly convenient because it is a non-destructive method for examining samples directly; structures can be determined from samples that are both heterogeneous and amorphous. When an X-ray strikes an atom, one of the core electrons is either excited to a higher-energy unoccupied state (a transition studied by XAS) or into an unbound state, called the continuum. When electrons are ejected from an atom of a solid material, this is essentially the photoelectric effect and is studied by X-ray Photoelectron Spectroscopy. Below is a diagram illustrating an atom absorbing an X-ray with the resultant ejection of a core electron into the continuum. Note that the level K refers to the n=1 level, L refers to the n=2 level, and M refers to the n=3 level.
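The energy-wavelength correspondence quoted here (X-ray energies around $10^4$ eV, wavelengths around 1 Angstrom) follows from $\lambda = hc/E$. A minimal sketch, using the common $hc \approx 12398$ eV·Å constant; the regime labels follow the soft/hard split given above:

```python
HC_EV_ANGSTROM = 12398.4  # h*c expressed in eV * Angstrom

def wavelength_angstrom(energy_ev):
    # lambda = h*c / E: about 1 Angstrom near 12.4 keV, comparable
    # to interatomic separations.
    return HC_EV_ANGSTROM / energy_ev

def xray_regime(energy_ev):
    # Conventional split used in the text: soft = 100 eV to 3 keV,
    # hard = above 3 keV.
    if energy_ev > 3000.0:
        return "hard"
    if energy_ev >= 100.0:
        return "soft"
    return "below the X-ray range"
```

For example, a 12.4 keV (hard) X-ray has a wavelength of about 1 Å, while a 500 eV photon falls in the soft regime.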
The wave vector of a photoelectron

Determining the wave vector of the electron dictates which subset of XAS will be used (EXAFS, NEXAFS, etc.). The kinetic energy $E_k$ and threshold energy $E_0$ (the amount of energy needed to promote the electron into the continuum) of the photoelectron can be related by considering the de Broglie equation: $\lambda = \dfrac{h}{p}$ where • $p$ is the momentum of the electron, • $\lambda$ is the wavelength, and • $h$ is Planck’s constant. The wave vector of the electron is denoted as $k$, and can be defined as $k = \dfrac{2\pi}{\lambda} = \dfrac{2\pi p}{h}$ and thus the electron kinetic energy is written $E_k=\frac{1}{2}mv^{2}=\frac{p^{2}}{2m}=\left(\frac{h}{2\pi}\right)^{2}\frac{k^{2}}{2m}$ In terms of the incident X-ray energy, $E$, and the threshold energy $E_0$, $E_k=E-E_{0}=h\nu-E_{0}$ The expression for the wave vector $k$ of the photoelectron can thus be written as $k=\sqrt{\left(\frac{2 \pi}{h}\right)^{2} 2 m\left(h \nu-E_{0}\right)}$ Depending on the value of $k$, different subsets of XAS will be used. For example, if $k=0$, or if $0<k<2/R$ (where $R$ is the distance between the X-ray absorbing atom and its nearest neighbor), the near-edge techniques (XANES and Near Edge X-ray Absorption Fine Structure, NEXAFS) are used. If $k>k_c$, where $k_c$ is equal to $2/R$, Extended X-ray Absorption Fine Structure (EXAFS) is used. It is apparent from the diagram below that there are three different types of excited photoelectrons upon absorption of an X-ray quantum. For the sake of completeness, these are briefly discussed below. The first type of photoelectron transitions to an unoccupied valence state, as it does not have enough energy to completely leave the atom.
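As a quick numerical sketch of the relation $k=\sqrt{(2\pi/h)^2\,2m(h\nu-E_0)}$, the snippet below converts a photon energy and threshold energy into a photoelectron wave vector. The 7112 eV threshold is only an illustrative Fe K-edge-like value, not a number taken from the text.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34      # Planck's constant, J s
M_E = 9.1093837015e-31  # electron rest mass, kg
EV = 1.602176634e-19    # 1 eV in joules

def wave_vector(photon_energy_eV, threshold_energy_eV):
    """Photoelectron wave vector k = (2*pi/h) * sqrt(2m(hv - E0)), in 1/m."""
    e_k = (photon_energy_eV - threshold_energy_eV) * EV  # kinetic energy, J
    if e_k < 0:
        raise ValueError("Photon energy below threshold: no photoelectron")
    return (2.0 * math.pi / H) * math.sqrt(2.0 * M_E * e_k)

# Hypothetical numbers: a 7200 eV photon against a 7112 eV threshold
k = wave_vector(7200.0, 7112.0)
print(f"k = {k:.3e} 1/m  ({k * 1e-10:.2f} 1/Angstrom)")
```

Note that $k$ comes out on the order of a few inverse Angstroms, consistent with the atomic-scale sensitivity discussed above.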
The second type of photoelectron actually has just enough kinetic energy to be able to escape into the continuum. Multiple scattering processes occur here, between multiple surrounding atoms that neighbor the absorbing atom. Lastly, the third type of photoelectron has a very high kinetic energy, and weak back-scattering occurs in a single scattering process between only one neighbor atom. Returning to the formula for the kinetic energy of an electron, $E_k=E-E_{0}=h\nu-E_{0}$ can be rewritten as $E_{k}=h\nu-E_{B}-\phi$ Here, $E_B$ is the binding energy of the core electron to the atom, while the expression $E_B + \phi$ is the ionization energy. A more in-depth analysis of the ionization energy can be found in the section regarding Koopmans' theorem.

Beer’s Law and the relation to XAS

Beer's Law can be applied to XAS for an X-ray beam that is both narrow and monochromatic, striking the absorber at a 90 degree angle. The absorber has a uniform composition and thickness. The important result of this derivation will be an expression describing the absorption coefficient, $\mu_M$, of X-ray absorption. Beginning with the basic model, Beer’s law can be written $\log \left(\frac{I_{1}}{I_{2}}\right)=k \Delta m=k_{d} \Delta d$ where $m$ and $d$ refer to the mass and thickness of the sample, respectively. The terms $k$ and $k_d$ are proportionality constants whose units are dependent on the units of $m$ or $d$, the wavelength, and also the elements comprising the sample. The previous equation can be written in terms of natural logarithms, in which case $\ln \left(\frac{I_{1}}{I_{2}}\right)=\mu_{M} \Delta m=\mu_{M} \rho \Delta d$ where the symbol $\mu_M$ has been introduced, and is termed the mass absorption coefficient. Units of $\mu_M$ are cm2/g, and the values are not dependent on the physical or chemical state of the element, as X-ray absorption is primarily an atomic property.
The term $\Delta m$ is the mass difference of two samples in units of g/cm2 of irradiated sample area, and $\Delta d$ is the thickness of the sample in units of centimeters. The density of the sample is $\rho$, in units of g/cm3. This equation is limited to the sample area receiving uniform radiation by the X-ray beam. In thickness measurements, the linear absorption coefficient $\mu_l$ is introduced, and the previous equation can be written $\ln \left(\frac{I_{1}}{I_{2}}\right)=\mu_{l} \Delta d$ The linear absorption coefficient has a characteristic value depending on both the element and the quantity of atoms that are in the beam path. The value of $\Delta d$ is the thickness of the sample that is irradiated, in units of centimeters. It is obvious that $\mu_{l}=\mu_{M} \rho$ Furthermore, one can define the atomic mass absorption coefficient that measures the “cross-section” of the absorbing atom as $\mu=\mu_{M} \frac{M}{N}$ where $M$ is the absorbing atom’s atomic mass and $N$ is Avogadro’s number. Values of $\mu_M$ for a sample can be calculated using $\mu_{M}=W_{A} \mu_{A}+W_{B} \mu_{B}+W_{C} \mu_{C}+\ldots$ where the sample contains elements A, B, and C of weight fractions $W_A$, $W_B$, and $W_C$, and mass absorption coefficients of $\mu_A$, $\mu_B$, and $\mu_C$. Values of $\mu_A$, $\mu_B$, and $\mu_C$ at different wavelengths are available in many sources. As stated previously, the value of the mass absorption coefficient is primarily an atomic property. It is not dependent on the state of the atom; for instance, a bromine atom in the form of vapor, potassium bromide or sodium bromate, liquid or solid bromide, has the same chance of absorbing an X-ray quantum in all of these forms.

More on the absorption coefficient, $\mu_M$

At higher photon energies, $\mu_M$ decreases steadily. However, when the necessary energy for a core electron transition is achieved, $\mu_M$ increases sharply and causes an absorption edge.
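The weighted-sum rule $\mu_M = W_A\mu_A + W_B\mu_B + W_C\mu_C + \ldots$ is straightforward to evaluate. The sketch below uses made-up coefficients purely for illustration; real values must come from published tables.

```python
def mixture_mass_absorption(weight_fractions, coefficients):
    """mu_M of a sample as the weight-fraction-weighted sum of elemental
    mass absorption coefficients (cm^2/g): mu_M = sum_i W_i * mu_i."""
    if abs(sum(weight_fractions) - 1.0) > 1e-6:
        raise ValueError("Weight fractions must sum to 1")
    return sum(w * mu for w, mu in zip(weight_fractions, coefficients))

# Hypothetical two-element sample: 60% of an element with mu = 50 cm^2/g
# and 40% of one with mu = 20 cm^2/g (illustrative values, not tabulated data).
mu_M = mixture_mass_absorption([0.6, 0.4], [50.0, 20.0])
print(mu_M)  # 38.0
```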
This occurs when the photon energy just matches the energy needed to either promote a core electron into an empty valence level, or to completely eject the electron from the atom (electron is promoted into the continuum). Each edge occurs at its own critical absorption wavelength. The associated energies are the electron binding energy in the K, L, and M… shells (based on the Bohr atomic model, $n=1$ for $K$ edges, $n=2$ for $L$ edges, $n=3$ for $M$ etc.) of the particular atom absorbing the X-rays. The resultant spectrum of a sample will show these edges at the X-ray photon energies that equal the ionization potentials of the bound electrons in the component atoms. Seen in the figure below are the K-edge and three L-edges of a typical X-ray absorption spectrum. This nomenclature indicates which core orbital the electron originated from. The K-edge is a consequence of the 1s – 3p transition, where the L-edges are from 2s – 5p ($L_I$), $2p_{1/2} – 5d_{3/2}$ ($L_{II}$), and $2p_{3/2} – 5d_{5/2}$ ($L_{III}$) transitions. Elements have characteristic edges, a few of which can be seen in Table 1. There are a few factors that directly influence the exact energy of the edge. The charge density of the absorbing atom is very important, and is itself influenced by the valence or oxidation state. The energy of the X-ray absorbing edge is proportional to the oxidation state of the absorbing atom; the edge will occur at a higher energy the more positive the oxidation state of the atom. This can be simply explained by considering that it is increasingly difficult to remove an electron from an atom that bears a higher positive charge. Another factor influencing the edge energy is the atomic number of the absorbing atom. As atomic number increases, so does the corresponding edge energy. A plot of the energy of the edge as a function of atomic number can be seen below. The X-ray photon energy region of approximately 2-30 keV limits the accessible $K$ edges. 
Because current synchrotron sources do not produce high-intensity X-ray energies greater than 30 keV, elements P to Sn can be analyzed, but anything past this cannot. The energy required to separate a 1s electron from elements of atomic numbers greater than that of Sn is too high for current synchrotron sources. However, because $L$ edges occur at lower energies than $K$ edges, it is possible to examine the rest of the elements using XAS. Lr, the heaviest element, has an $L_{III}$ edge occurring at 22 keV, an energy that synchrotrons can indeed produce.

Koopmans' theorem and the relation to X-ray absorption

As stated previously, X-ray absorption is capable of exciting and dissociating a core electron from a neutral atom. This aspect of X-ray absorption is studied by XPS, but we will discuss it here for completeness. Calculating the energy for this ionization potential is of interest. Koopmans' theorem (KT) is a method that relates experimental ionization potentials to the molecular orbital energy levels. Considering an orbital $\phi_i$ of energy $E_i$, Koopmans' theorem says that the value of $-E_i$ is equal to the ionization potential needed to remove an electron from $\phi_i$. Stated differently, the ionization potential is the negative value of the HOMO energy. Koopmans' theorem is an application of the Hartree-Fock approximation, as the value of $E_i$ is calculated via the HF approximation. This is the simplest method to evaluate ionization potentials. Koopmans' theorem assumes that a multi-electron atom has an electronic wavefunction which can be described using a Slater determinant. The Slater determinant is a set of one-electron wavefunctions, where each wavefunction in the determinant is the eigenfunction of the related Fock operator. Another assumption made is that when an electron is added or removed from the system, the remaining electrons’ corresponding Fock operators do not change.
Thus the Koopmans'-theorem-predicted value of a final wavefunction has a higher energy than the actual final wavefunction. This is because in reality an addition or removal of an electron to or from the initial wavefunction will change the system’s Fock operator. This would result in a re-organizing of the one-electron wavefunctions, which Koopmans' theorem disregards. Koopmans' theorem uses Hartree-Fock theory in describing the value of $E_i$, where Hartree-Fock theory describes ionization or electron-gain of a system. A more accurate treatment would include electron correlation, which is typically calculated by post-Hartree-Fock methods.

A review of the Hartree-Fock Approximation

Assume that a single Slater determinant can describe the closed-shell system having $N$ electrons. The Slater determinant, $|\psi_0\rangle$, can be written as a set of $N$ orthonormal spin orbitals, $\chi_1$, $\chi_2$,…$\chi_N$. $\left|\psi_{0}\right\rangle=\left|\chi_{1}, \chi_{2}, \ldots, \chi_{N}\right\rangle$ Recall that the Hartree-Fock method seeks to find the spin orbitals that minimize $E_0$, using the electrostatic Hamiltonian, $H$, operating on $|\psi_0\rangle$. $E_{0}=\left\langle\psi_{0}|H| \psi_{0}\right\rangle=\sum_{i=1}^{N}\langle i|h| i\rangle+\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N}\left\{[i i | j j]-[i j | j i]\right\}$ where $\langle i|h|i \rangle = \langle\chi_i|h|\chi_i\rangle = \int dx_1\,\chi_i^*(x_1)h(x_1)\chi_i(x_1)$ $\left[ ij|kl\right] = \left[\chi_i\chi_j|\chi_k\chi_l\right] = \int dx_1dx_2\,\chi_i^*(x_1)\chi_j(x_1)\frac{1}{r_{12}}\chi_k^*(x_2)\chi_l(x_2)$ and $h(x_1)$ is the usual one-electron core Hamiltonian. Based on the minimization condition on $E_0$, the following expression can be recognized, where the occupied spin orbitals span an invariant subspace of the Fock operator, $f$: $f\left|\chi_{a}\right\rangle=\sum_{j=1}^{N} \varepsilon_{ja}\left|\chi_{j}\right\rangle$ This holds true for any $a$ from $1, 2, 3, \ldots, N$.
Also, the Fock operator matrix elements between occupied spin orbitals and virtual spin orbitals must be zero. If there is a set of N spin orbitals that satisfies the previous equation, and a unitary transformation is performed on it, then a new set of spin orbitals will be produced that satisfies the same equation. The unitary transformation is chosen such that the matrix with elements $\epsilon_{ij}=\langle\chi_i|f|\chi_j\rangle$ becomes diagonal. The corresponding spin orbitals are referred to as "canonical spin orbitals." Also, the Fock operator is defined by $\langle\chi_i|f|\chi_j\rangle = \langle\chi_i|h|\chi_j\rangle + \sum^N_{k=1}\left\{[ij|kk]-[ik|kj]\right\}$

Evaluating the Ionization Potential

The energy needed to remove an electron from an atom can now be evaluated. To get a close approximation of the ionization potential, the neutral and ionic states require a good description. The neutral state is said to be at the HF level. A rough assumption can be made with respect to the electron density of the ionic system: the ionic system has an electron density of the neutral system minus the density of the orbital from which the electron has been removed. A “frozen orbital” approximation has been used here, in that it is assumed the other N-1 electrons do not change their spatial distributions upon ionization. Thus the ionic molecule can be described using the neutral molecule’s Hartree-Fock spin orbitals. To summarize thus far, 1. The value of $E_0$ is the HF energy of an N-electron system. 2. The HF canonical spin orbitals of the N-1 electron system are used to build up the mean value of H over an N-1 electron single determinant. Koopmans' theorem thus says that the difference between $E_0$ and the mean value of H is a good approximation of the ionization potential of the corresponding N-electron system.
If the ionized system is described by the (N-1)-electron single determinant, and the canonical HF spin orbitals of the neutral system are $\chi_i$, then it is possible to write an expression for the ionization potential as $-\epsilon_{cc}={}^{(N-1)}E_c - {}^NE_0 = \left\langle{}^{(N-1)}\Psi_c|H|{}^{(N-1)}\Psi_c\right\rangle - \left\langle{}^N\Psi_0|H|{}^N\Psi_0\right\rangle$ where $|{}^{N-1}\Psi_c\rangle = |\chi_1\chi_2,\dots,\chi_{c-1},\chi_{c+1},\dots,\chi_N\rangle$ The matrix element $\epsilon_{cc}$ is an eigenvalue of the Fock matrix if one uses the canonical spin orbitals in the calculation. Koopmans showed that the use of the canonical spin orbitals as HF spin orbitals produces the lowest energy of the ion. Thus it is possible to approximate the ionization potentials by the eigenvalues of the Fock matrix.

Fermi's Golden Rule applied to XAS

Analysis of both the initial and final states of the absorbing atom is necessary to determine the probability of a core electron absorbing an X-ray photon. XAS measures the transition of an electron initially in a deep core state and finally in a previously unoccupied state. Fermi’s golden rule gives the transition probability, to which the X-ray absorption coefficient is proportional. This relationship can be expressed as $\mu(E)\sim\sum_f|\langle\psi_i|A(r)\cdot p|\psi_f\rangle|^2\delta(E-E_f)$ where the photoelectron energy is $E = \hbar\omega - E_i$. $\psi_i$ and $\psi_f$ are the initial and final eigenstates and have energies $E_i$ and $E_f$. The initial state is the ground state of the atom. These wavefunctions are typically calculated using a Self-Consistent Field (SCF) approximation. The coupling to the X-ray field is represented by $A\cdot p$, where $p$ is the momentum operator. $A(r)$ is the vector potential of the applied electromagnetic field, and is considered to be a classical wave with polarization $\hat\epsilon\perp k$: $A(r,t)\cong\hat\epsilon A_0e^{ik\cdot r}$.
The entire expression is summed over unoccupied final states of energies $E_f$. Usually the Golden Rule is reduced to just a one-electron approximation, and calculations are based on the dipole approximation. The dipole approximation assumes that the spatial dependence of the electromagnetic field can be ignored, or $e^{ik\cdot r}=1$. However, the one-electron final state that should be used is still debated. The final state rule is often implemented in current research. In this approximation, the final state is calculated while considering the screened core-hole. The Hamiltonian used is the final-state, one-particle Hamiltonian. However, for transition metal $L_{2,3}$ edges (excitation of a 2p electron to a 3d shell), the one-electron approximation is rendered invalid. This is a result of the initial-state wave function containing a partially filled d-shell. Upon excitation of a 2p electron to a 3d shell, there are two partially filled shells that have large overlap. Mathematically, this can be written as $|\langle\psi_f|A(r)\cdot p|\psi_i\rangle|^2 = |\langle 2p^53d^{n+1}|A(r)\cdot p|3d^n\rangle|^2$ Note that the final state contains both a 2p core hole and an extra 3d electron, $2p^53d^{n+1}$.

A little more on the "Interaction Hamiltonian"

The overall interaction Hamiltonian can be written as a sum of the radiation field Hamiltonian, the atomic electron Hamiltonian, and the interaction Hamiltonian. $H=H_{RAD}+H_{ATOM}+H_{INT}$ where the radiation field Hamiltonian, $H_{RAD}$, can be written $H_{RAD}=\sum_{k,\lambda}\hbar\omega_k(n_{k,\lambda}+1/2)$ where the whole expression is summed over the wave vector, $k$, and degrees of freedom, $\lambda$. The term in parentheses contains the zero point energy. The kinetic term $p^2/2m$ and potential energy term $V(r_i)$ together make up the expression for the Hamiltonian of the atomic electron.
The potential energy term considers both the Coulombic interaction with the nucleus, as well as the electron-electron repulsion and spin-orbit interaction. $H_{ATOM}=\sum_i\left[\frac{p^2_i}{2m}+V(r_i)\right]$ Lastly, the interaction Hamiltonian is described as a slight perturbation comprised of two terms. The first of these terms shows the vector field $A$ acting on the momentum operator $p$; another way of stating this is the electron moments being acted on by the electric field $E$. The second term in the interaction Hamiltonian shows how the magnetic field $B$, where $B=\nabla\times A$, acts on the electron spin $\sigma$. $H_{INT(1)}=\frac{e}{mc}\sum_i p_i\cdot A(r_i)+\frac{e}{mc}\sum_i\sigma_i\cdot\nabla\times A(r_i)$ So Fermi's Golden Rule gives the transition probability, $W_{fi}$, for a transition between a system's initial state ($\phi_i$) and final state ($\phi_f$) upon absorption of a photon of energy $\hbar\Omega$. This is expressed in the equation $W_{fi}=\frac{2\pi}{\hbar}|\langle\phi_f|T|\phi_i\rangle|^2\delta(E_f-E_i-\hbar\Omega)$ The role of the delta function is to take care of the conservation of energy: a transition occurs if the energy of the final state is equal to the energy of the initial state plus the energy from the X-ray. The transition rate is given by the squared matrix element. The Lippmann-Schwinger equation gives an expression for the transition operator, $T$, as $T=H_{INT}+H_{INT}\frac{1}{E_i-H+i\Gamma/2}T$ where $H$ is the Hamiltonian of the unperturbed system, and $\Gamma$ stands for the excited state's lifetime broadening. In order to describe the one-photon process of X-ray absorption, the equation for $T$ is solved iteratively and in first order, where $T_1 = H_{INT(1)}$.
For an electric dipole transition, the transition operator is expressed as $T_1=\sum_q e\sqrt{\frac{2\pi\hbar\Omega}{V}}\,e_q\cdot r$ and for an electric quadrupole transition, the transition operator is described as $T_1(EQ)\propto e_{k,\lambda}\cdot Q\cdot\hat k$ where $Q$ is the quadrupole operator $Q=rr-\frac{1}{3}r^2\delta_{ij}$

Dipole Selection Rules

As stated previously, the dipole transition is able to describe X-ray absorption. When Fermi's golden rule is written in terms of this operator, the probability of a transition, $W_{fi}$, per unit time is $W_{fi}=\frac{e^{2}}{\hbar c} \frac{4 \Omega^{3}}{3 c^{2}} n\left|\left\langle\phi_{f}|r| \phi_{i}\right\rangle\right|^{2} \delta\left(E_{f}-E_{i}-\hbar \Omega\right)$ where the fine structure constant is the first term, $\frac{e^2}{\hbar c}$. The symbol $\Omega$ is the excitation frequency and $n$ is the number of photons of the radiation field. The wave functions of an atom can be given quantum numbers $J$ and $M$, where $J$ is the overall momentum quantum number and $M$ is the magnetic quantum number. If the integral term is written in terms of $J$ and $M$, the following equation arises, where the radial and angular parts of the matrix element are separated as stated by the Wigner-Eckart theorem: $\left\langle\phi_{f}(J M)\left|e_{q} \cdot r\right| \phi_{i}\left(J^{\prime} M^{\prime}\right)\right\rangle=(-1)^{J-M}\begin{pmatrix} J & 1 & J^{\prime} \\ -M & q & M^{\prime} \end{pmatrix}\left\langle\phi_{f}(J)\left\|e_{q} \cdot r\right\| \phi_{i}\left(J^{\prime}\right)\right\rangle$ The total momentum quantum number, $J$, cannot change by more than one unit. Therefore, $\Delta J=+1$, $0$ or $-1$. The magnetic quantum number, $M$, can change depending on the X-ray polarization. This can be stated as $\Delta M=q$. The incident X-ray has an angular momentum quantum number value of $l_{h\nu}=1$, and conservation of this angular momentum yields $\Delta l=+1$ or $-1$. The excited electron has an $l$ value different from the original core state by 1.
The spin quantum number, $s$, has the selection rule $\Delta s=0$, as the X-ray has no associated spin and spin must be conserved. When an electron in the 1s core state is excited, the only state that is accessible is the p state. However, from the p state, either the s or d final states are accessible. The line strength of the transition can be written in terms of the radial part of the previous equation, or as $S=e^2\cdot|\langle\phi_f(J)\|e_q\cdot r\|\phi_i(J^\prime)\rangle|^2$ Then the transition probability, $W_{fi}$, can be rewritten to include the $S$ term: $W_{fi}=\frac{1}{\hbar c} \frac{4 \Omega^{3}}{3 c^{2}} n\begin{pmatrix} J & 1 & J^{\prime} \\ -M & q & M^{\prime} \end{pmatrix}^{2} S\, \delta\left(E_{f}-E_{i}-\hbar \Omega\right)$ If the squared 3j-symbol is assumed to be unity, this can be written as $W_{fi}=\frac{1}{\hbar c} \frac{4 \Omega^{3}}{3 c^{2}} n S\, \delta\left(E_{f}-E_{i}-\hbar \Omega\right).$

Photon flux, the x-ray absorption cross-section, and the oscillator strength

As seen in the previous equation, the transition probability is proportional to $n$, the number of photons, which is directly related to the photon flux. The formula for photon flux $F_p$ can be written as $F_{p}=n \frac{\Omega^{2}}{\pi \hbar c^{2}}$ The X-ray absorption cross section, $\sigma$, is given in m2 in the following equation, and is directly proportional to the value of $f$, the oscillator strength. \begin{aligned} \sigma &= \frac{W_{fi}}{F_p}=\frac{4\pi^2\Omega}{3c}S\,\delta(E_f-E_i-\hbar\Omega) \\ \sigma &= \frac{2\pi^2e^2}{mc}f \end{aligned} The penetration depth, $\lambda_{p}$, of the X-rays is directly influenced by the cross section via $\lambda_{p}=\dfrac{1}{\rho \sigma}$ with the term $\rho$ being the density of the system. Here, the units of cross-section are typically given in values of angstroms squared.
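A small sketch of the penetration-depth relation $\lambda_p = 1/(\rho\sigma)$, where $1/\rho$ is the volume per absorbing atom. The atomic volume below is an assumed, roughly La-like value and the cross section matches the practice problem; neither is presented as tabulated data.

```python
def penetration_depth(cross_section_A2, density_per_A3):
    """lambda_p = 1/(rho * sigma), with sigma in Angstrom^2 and rho in
    atoms per cubic Angstrom (so 1/rho is the volume per atom, V_at)."""
    return 1.0 / (density_per_A3 * cross_section_A2)

# Illustrative numbers: V_at = 37 A^3 per atom (roughly La metal) and
# sigma = 0.15 A^2; both are assumptions for this sketch.
v_at = 37.0
lam = penetration_depth(0.15, 1.0 / v_at)
print(f"penetration depth ~ {lam:.1f} Angstrom")
```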
Also, it is common to define the term "inverse density", or the space occupied by one atom of interest, as $V_{at}=1 / \rho$ and the equation for penetration depth can be re-written as $\lambda_{p}=\frac{V_{at}}{\sigma}$

Practice Problems

1. For La metal, which has a $V_{at}$ value of cubic angstroms, and a cross section of 0.15 square angstroms, what is the penetration depth? 2. What is the difference between XAS and XPS? 3. What information does the absorption edge yield?
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/X-ray_Spectroscopy/XAS_-_Theory.txt
The Boltzmann average (sometimes known as the thermal average) for a given quantity or observable, let us say $A$, is given by $\langle A \rangle= \frac{\sum_{i}A_ie^{-E_i/k_BT}}{\sum_{i}e^{-E_{i}/k_BT}} \nonumber$ where $k_B$ is the Boltzmann constant, and $T$ is the temperature. This provides the expected value of the property in question at a given temperature. This equation assumes non-degenerate states. • Boltzmann Distribution The Maxwell-Boltzmann distribution function is a function f(E) which gives the probability that a system in contact with a thermal bath at temperature T has energy E. This distribution is classical and is used to describe systems with identical but distinguishable particles. • Fluctuations The methods developed allow us to calculate thermodynamic averages. The deviation of a mechanical variable from its mean value is called a fluctuation. The theory of fluctuations is useful for understanding how the different ensembles (NVT, NPT, etc.) are related. Fluctuations are also important in theories of light scattering and in the study of transport processes. • Ideal Gas Partition Function • Proof that β = 1/kT • The Boltzmann constant The Boltzmann constant (k or kB) is the physical constant relating temperature to energy. It is named after the Austrian physicist Ludwig Eduard Boltzmann.

Boltzmann Average

The Maxwell-Boltzmann distribution function is a function f(E) which gives the probability that a system in contact with a thermal bath at temperature T has energy E. This distribution is classical and is used to describe systems with identical but distinguishable particles. $f(E) \propto \Omega(E) \exp \left[ - E/k_B T \right]$ where $\Omega(E)$ is the degeneracy of the energy E; leading to $f(E) = \frac{1}{Z} \Omega(E) \exp \left[ -E/k_B T \right]$ where • $k_B$ is the Boltzmann constant, • $T$ is the temperature, and • the normalization constant $Z$ is the partition function of the system.
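A minimal sketch of the Boltzmann (thermal) average defined above, applied to a hypothetical two-level system where the observable is the energy itself:

```python
import math

def boltzmann_average(values, energies, temperature, k_B=1.380649e-23):
    """Thermal average <A> = sum_i A_i exp(-E_i/kT) / sum_i exp(-E_i/kT),
    for non-degenerate states as in the text. SI units assumed."""
    weights = [math.exp(-e / (k_B * temperature)) for e in energies]
    z = sum(weights)  # partition function
    return sum(a * w for a, w in zip(values, weights)) / z

# Two-level toy system: A = E itself, levels 0 and 1 k_B*T apart at 300 K.
kT = 1.380649e-23 * 300.0
avg_E = boltzmann_average([0.0, kT], [0.0, kT], 300.0)
print(avg_E / kT)  # e^-1/(1 + e^-1) ~ 0.269
```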
Fluctuations

The methods developed allow us to calculate thermodynamic averages. The deviation of a mechanical variable from its mean value is called a fluctuation. The theory of fluctuations is useful for understanding how the different ensembles (NVT, NPT, etc.) are related. Fluctuations are also important in theories of light scattering and in the study of transport processes. We will show that the fluctuations about the mean energy are very small for macroscopic systems. Because the number of particles in an MD simulation is much smaller, the relative magnitude of the fluctuations there is much larger. In part, we study fluctuations to have a criterion for determining the quality of an average property that we calculate in a computer simulation. The average of a property is also called the first moment of a distribution. The second moment of the distribution is called the variance $\langle (x- \langle x \rangle)^2 \rangle = \langle x^2 \rangle - \langle x \rangle^2 \label{var}$ The variance is a measure of the spread of the probability distribution about the mean value. The mean value is $\langle x \rangle = \int_{-\infty}^{\infty} x P(x)dx$ or, if discretized data are involved, $\langle x \rangle = \sum_i x_i P_i$ where $P(x)$ is a normalized probability distribution or $P_i$ is a discrete probability distribution. The mean square value is $\langle x^2 \rangle = \int_{-\infty}^{\infty} x^2 P(x)dx$ or, if discretized data are involved, $\langle x^2 \rangle = \sum_i x_i^2 P_i$ We consider the fluctuations in the NVT or canonical ensemble. The number, volume, and temperature are fixed, and we can calculate the fluctuations in the energy in this ensemble.
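The scaling of these energy fluctuations can be previewed numerically. This sketch assumes the standard monatomic-ideal-gas result $\sigma_E/\langle E\rangle = \sqrt{2/(3N)}$, obtained from $\sigma_E^2 = k_BT^2C_V$ with $C_V = \tfrac{3}{2}Nk_B$ and $\langle E\rangle = \tfrac{3}{2}Nk_BT$:

```python
import math

def relative_energy_fluctuation(n_particles):
    """sigma_E/<E> for a monatomic ideal gas in the canonical ensemble:
    sqrt(k_B T^2 C_V)/<E> with C_V = (3/2) N k_B and <E> = (3/2) N k_B T
    reduces to sqrt(2/(3N))."""
    return math.sqrt(2.0 / (3.0 * n_particles))

# Small MD-sized systems fluctuate strongly; macroscopic systems do not.
for n in (100, 10_000, 6.022e23):
    print(f"N = {n:.3g}: sigma_E/<E> = {relative_energy_fluctuation(n):.3g}")
```

The 1/√N behavior is why energy fluctuations are observable in a few-hundred-particle simulation yet utterly negligible for a mole of gas.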
The variance in the energy (Equation \ref{var}) is \begin{align} \sigma_E^2 & = \langle (E- \langle E \rangle)^2 \rangle = \langle E^2 \rangle - \langle E \rangle^2 \\ & = \sum_i E^2_iP_i - \left( \sum_i E_i P_i \right)^2 \end{align} where $P_i = e^{-E_i/k_BT}/Q$ is the canonical probability of state $i$. We need to evaluate the mean-square energy term, $\langle E^2 \rangle = \frac{1}{Q}\frac{\partial^2 Q}{\partial \beta^2}$ with $\beta = 1/k_BT$. Thus, $\sigma_E^2 = \frac{\partial^2 \ln Q}{\partial \beta^2} = -\frac{\partial \langle E \rangle}{\partial \beta} = k_BT^2C_V$ The spread in energies about the mean is given by the ratio of the square root of the variance relative to the energy, $\frac{\sigma_E}{\langle E \rangle} = \frac{\sqrt{k_BT^2C_V}}{\langle E \rangle}$ For an ideal gas, $C_V = \frac{3}{2}Nk_B$ and $E = \frac{3}{2}Nk_BT$, so $\frac{\sigma_E}{\langle E \rangle} = \sqrt{\frac{2}{3N}}$ The fluctuations are proportional to $1/\sqrt{N}$, which is an extremely small number for macroscopic systems where $N$ is of the order of Avogadro's number.

Ideal Gas Partition Function

The canonical ensemble partition function, Q, for a system of N identical particles each of mass m is given by $Q_{NVT}=\frac{1}{N!}\frac{1}{h^{3N}}\int\int d{\mathbf p}^N d{\mathbf r}^N \exp \left[ - \frac{H({\mathbf p}^N,{\mathbf r}^N)}{k_B T}\right]$ where h is Planck's constant, T is the temperature and $k_B$ is the Boltzmann constant. When the particles are distinguishable then the factor N! disappears. $H(p^N, r^N)$ is the Hamiltonian corresponding to the total energy of the system. H is a function of the 3N positions and 3N momenta of the particles in the system. The Hamiltonian can be written as the sum of the kinetic and the potential energies of the system as follows $H({\mathbf p}^N, {\mathbf r}^N)= \sum_{i=1}^N \frac{|{\mathbf p}_i |^2}{2m} + {\mathcal V}({\mathbf r}^N)$ Thus we have $Q_{NVT}=\frac{1}{N!}\frac{1}{h^{3N}}\int d{\mathbf p}^N \exp \left[ - \frac{|{\mathbf p}_i |^2}{2mk_B T}\right] \int d{\mathbf r}^N \exp \left[ - \frac{{\mathcal V}({\mathbf r}^N)} {k_B T}\right]$ This separation is only possible if ${\mathcal V}({\mathbf r}^N)$ is independent of velocity (as is generally the case).
The momentum integral can be solved analytically: $\int d{\mathbf p}^N \exp \left[ - \frac{|{\mathbf p} |^2}{2mk_B T}\right]=(2 \pi m k_B T)^{3N/2}$ Thus we have $Q_{NVT}=\frac{1}{N!} \frac{1}{h^{3N}} \left( 2 \pi m k_B T\right)^{3N/2} \int d{\mathbf r}^N \exp \left[ - \frac{{\mathcal V}({\mathbf r}^N)} {k_B T}\right]$ The integral over positions is known as the configuration integral, $Z_{NVT}$ (from the German Zustandssumme meaning "sum over states") $Z_{NVT}= \int d{\mathbf r}^N \exp \left[ - \frac{{\mathcal V}({\mathbf r}^N)} {k_B T}\right]$ In an ideal gas there are no interactions between particles so ${\mathcal V}({\mathbf r}^N)=0$. Thus $\exp(-{\mathcal V}({\mathbf r}^N)/k_B T)=1$ for every gas particle. The integral of 1 over the coordinates of each atom is equal to the volume so for N particles the configuration integral is given by $V^N$ where V is the volume. Thus we have $Q_{NVT}=\frac{V^N}{N!}\left( \frac{2 \pi m k_B T}{h^2}\right)^{3N/2}$ If we define the de Broglie thermal wavelength as $\Lambda$ where $\Lambda = \sqrt{h^2 / 2 \pi m k_B T}$ one arrives at (Eq. 4-12 in [1]) $Q_{NVT}=\frac{1}{N!} \left( \frac{V}{\Lambda^{3}}\right)^N = \frac{q^N}{N!}$ where $q= \frac{V}{\Lambda^{3}}$ is the single particle translational partition function. Thus the partition function for a real system can be built up from the contribution of the ideal system (the momenta) and a contribution due to particle interactions, i.e. $Q_{NVT}=Q_{NVT}^{\rm ideal} ~Q_{NVT}^{\rm excess}$ External links • Configuration integral page on VQWiki
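The thermal wavelength $\Lambda$ and the single-particle partition function $q = V/\Lambda^3$ can be evaluated directly. The argon-like mass and 1 cm³ volume below are illustrative choices, not values from the text:

```python
import math

H = 6.62607015e-34   # Planck's constant, J s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def thermal_wavelength(mass_kg, temperature):
    """de Broglie thermal wavelength Lambda = sqrt(h^2 / (2 pi m k_B T))."""
    return math.sqrt(H**2 / (2.0 * math.pi * mass_kg * K_B * temperature))

def single_particle_q(volume_m3, mass_kg, temperature):
    """Single-particle translational partition function q = V / Lambda^3."""
    return volume_m3 / thermal_wavelength(mass_kg, temperature) ** 3

# An argon-like atom (~6.63e-26 kg) in 1 cm^3 at 300 K (illustrative numbers).
lam = thermal_wavelength(6.63e-26, 300.0)
q = single_particle_q(1e-6, 6.63e-26, 300.0)
print(f"Lambda = {lam:.3e} m, q = {q:.3e}")
```

The enormous value of $q$ reflects how many translational states are thermally accessible, which is why classical statistics works so well for a dilute gas.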
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Statistical_Mechanics/Boltzmann_Average/Boltzmann_distribution.txt
The pressure in the state $j$ is given by $p_j = - (\partial E_j/\partial V)$. The average energy is $\bar{E}=\frac{\displaystyle \sum_{j} E_{j}(N, V) e^{-\beta E_{j}(N, V)}}{\displaystyle \sum_{j} e^{-\beta E_{j}(N, V)}} \label{I}$ The average pressure is $\bar{p}=\frac{\displaystyle \sum_j p_{j}(N, V) e^{-\beta E_{j}(N, V)}}{\displaystyle \sum_{j} e^{-\beta E_{j}(N, V)}} \label{II}$ According to the Gibbs postulate, the average energy, average pressure, and other average mechanical properties calculated using the partition function are equal to their thermodynamic counterparts. Note that some authors use the bar notation $\bar{E}$ and $\bar{p}$ for the average quantities and elsewhere the angle bracket notation is used. These are equivalent notations. If we differentiate the expression for the average energy we must treat the denominator, $Q$, as a function of $V$ as well, since it is a sum over $\exp(-\beta E_j(N,V))$. Since $E_j$ appears both in the exponent and as a function multiplying the exponent, we have $\left(\frac{\partial \bar{E}}{\partial V}\right)_{N, \beta}=\frac{\displaystyle \sum_j\left(\frac{\partial E_{j}}{\partial V}\right) e^{-\beta E_{j}(N, V)}}{Q}-\frac{\displaystyle \sum_j \beta\left(\frac{\partial E_{j}}{\partial V}\right) E_{j}\, e^{-\beta E_{j}(N, V)}}{Q}+\frac{\displaystyle \beta\left[\sum_{j} E_{j}\, e^{-\beta E_{j}(N, V)}\right]\left[\sum_{j}\left(\frac{\partial E_{j}}{\partial V}\right) e^{-\beta E_{j}(N, V)}\right]}{Q^{2}}$ Here we used the quotient rule to take the derivative.
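The quotient-rule result can be checked numerically with a toy spectrum. This sketch assumes particle-in-a-box volume scaling $E_j \propto V^{-2/3}$ (an illustrative choice) and compares a finite-difference $\partial\bar{E}/\partial V$ against $-\bar{p}+\beta\overline{Ep}-\beta\bar{E}\bar{p}$:

```python
import math

# Toy spectrum with particle-in-a-box volume scaling: E_j(V) = c_j * V**(-2/3),
# so p_j = -dE_j/dV = (2/3) * c_j * V**(-5/3). The c_j values are illustrative.
C = [1.0, 2.3, 4.1, 7.0]

def averages(beta, vol):
    """Canonical averages E_bar, p_bar, and (E*p)_bar for the toy spectrum."""
    e = [c * vol ** (-2.0 / 3.0) for c in C]
    p = [(2.0 / 3.0) * c * vol ** (-5.0 / 3.0) for c in C]
    w = [math.exp(-beta * ei) for ei in e]
    q = sum(w)
    e_bar = sum(ei * wi for ei, wi in zip(e, w)) / q
    p_bar = sum(pi * wi for pi, wi in zip(p, w)) / q
    ep_bar = sum(ei * pi * wi for ei, pi, wi in zip(e, p, w)) / q
    return e_bar, p_bar, ep_bar

beta, vol, h = 0.8, 1.5, 1e-6
# Central finite difference for dE_bar/dV at fixed beta
dEdV = (averages(beta, vol + h)[0] - averages(beta, vol - h)[0]) / (2.0 * h)
e_bar, p_bar, ep_bar = averages(beta, vol)
rhs = -p_bar + beta * ep_bar - beta * e_bar * p_bar
print(dEdV, rhs)  # the two values should agree
```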
This can be written compactly as

$\left(\frac{\partial \bar{E}}{\partial V}\right)_{N, \beta}=-\bar{p}+\beta \overline{E p}-\beta \bar{E} \bar{p}$

We can differentiate Equation \ref{II} with respect to $\beta$ to obtain

$\left(\frac{\partial \bar{p}}{\partial \beta}\right)_{N, V}=\bar{E} \bar{p}-\overline{E p}$

The two derivative expressions can be combined to give

$\left(\frac{\partial \bar{E}}{\partial V}\right)_{N, \beta}+\beta\left(\frac{\partial \bar{p}}{\partial \beta}\right)_{N, V}=-\bar{p}$

This can be compared to the thermodynamic equation of state

$\left(\frac{\partial E}{\partial V}\right)_{T}-T\left(\frac{\partial p}{\partial T}\right)_{V}=-p \label{IV}$

which can be derived from $dE = TdS - pdV$ as follows. First take the derivative of both sides with respect to $V$ at constant $T$. Then note that

$\left(\frac{\partial p}{\partial T}\right)_{V}=\left(\frac{\partial S}{\partial V}\right)_{T}$

This is known as a Maxwell relation. It is obtained from

$dA = -SdT - pdV \label{III}$

From the fact that $A$ (the Helmholtz free energy) is a state function, we know that the second cross derivatives must be equal. That is:

$\frac{\partial^{2} A}{\partial V \partial T}=\frac{\partial^{2} A}{\partial T \partial V}$

And from inspection of Equation \ref{III} we see that

$\left(\frac{\partial A}{\partial T}\right)_{V}=-S \quad\text{and}\quad \left(\frac{\partial A}{\partial V}\right)_{T}=-p$

Finally, we use the relation

$T\left(\frac{\partial}{\partial T}\right)=-\frac{1}{T}\left(\frac{\partial}{\partial(1 / T)}\right)$

Showing that this is true is a little tricky. Define $F = 1/T$, so that $\dfrac{\partial F}{\partial T} = \dfrac{-1}{T^2}$. By the chain rule, for any quantity regarded as a function of $F$,

$\dfrac{\partial }{\partial T} = \dfrac{\partial F}{\partial T}\,\dfrac{\partial }{\partial F} = \dfrac{-1}{T^2} \left(\dfrac{\partial }{\partial F}\right)$

which gives the relation above when both sides are multiplied by $T$.
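The combined identity $(\partial \bar{E}/\partial V)_{N,\beta}+\beta\,(\partial \bar{p}/\partial \beta)_{N,V}=-\bar{p}$ derived above can be checked numerically. The toy level structure below (particle-in-a-box-like levels $E_j(V)=c_j V^{-2/3}$, with arbitrary coefficients $c_j$) is an illustrative assumption, not from the text:

```python
import math

# Finite-difference check of the identity
#   (d Ebar/dV)_beta + beta (d pbar/d beta)_V = -pbar
# for toy levels E_j(V) = c_j V^(-2/3), so p_j = -dE_j/dV = (2/3) c_j V^(-5/3).
c = [1.0, 2.3, 3.7, 5.1]   # arbitrary illustrative level coefficients

def E(j, V):
    return c[j] * V**(-2.0 / 3.0)

def p(j, V):               # p_j = -dE_j/dV
    return (2.0 / 3.0) * c[j] * V**(-5.0 / 3.0)

def averages(V, beta):
    """Boltzmann-weighted averages Ebar and pbar at given V and beta."""
    w = [math.exp(-beta * E(j, V)) for j in range(len(c))]
    Q = sum(w)
    Ebar = sum(E(j, V) * w[j] for j in range(len(c))) / Q
    pbar = sum(p(j, V) * w[j] for j in range(len(c))) / Q
    return Ebar, pbar

V, beta, h = 1.7, 0.9, 1e-6
dE_dV = (averages(V + h, beta)[0] - averages(V - h, beta)[0]) / (2 * h)
dp_db = (averages(V, beta + h)[1] - averages(V, beta - h)[1]) / (2 * h)
pbar = averages(V, beta)[1]
lhs = dE_dV + beta * dp_db
print(lhs, -pbar)   # the two sides agree to finite-difference accuracy
```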
$\left(\frac{\partial E}{\partial V}\right)_{T}+\frac{1}{T}\left(\frac{\partial p}{\partial(1 / T)}\right)_{V}=-p$

Comparison with the statistical-mechanical equation above shows that $\beta \propto 1/T$; that is, $\beta = \text{constant}/T$. The constant turns out to be $k_B$, Boltzmann's constant, as one finds by comparing expressions for the average energy or average pressure with known thermodynamic equations.

The Boltzmann constant

The Boltzmann constant ($k$ or $k_B$) is the physical constant relating temperature to energy. It is named after the Austrian physicist Ludwig Eduard Boltzmann. Its experimentally determined value (in SI units, 2002 CODATA value) is:

$k_B =1.380\,6505(24) \times 10^{-23}\; \mathrm{J\,K^{-1}}$

History of Boltzmann's constant

"This constant is often referred to as Boltzmann's constant, although, to my knowledge, Boltzmann himself never introduced it - a peculiar state of affairs, which can be explained by the fact that Boltzmann, as appears from his occasional utterances, never gave thought to the possibility of carrying out an exact measurement of the constant." Max Planck, Nobel Lecture, June 2, 1920

Experimental determination of Boltzmann's constant

Boltzmann's constant can be obtained from the ratio of the molar gas constant to the Avogadro constant. The molar gas constant can be obtained via acoustic gas thermometry, and the Avogadro constant from either the silicon-sphere method or the watt balance. More recently, laser spectroscopy has been used to determine the constant (Refs. 3 and 4). Other techniques include Coulomb blockade thermometry (Refs. 5 and 6).

1. L. Storm "Precision Measurements of the Boltzmann Constant", Metrologia 22 pp. 229-234 (1986)
2. B. Fellmuth, Ch. Gaiser and J. Fischer "Determination of the Boltzmann constant—status and prospects", Measurement Science and Technology 17 pp. R145-R159 (2006)
3. C. Daussy, M. Guinet, A. Amy-Klein, K. Djerroud, Y. Hermier, S. Briaudeau, Ch. J. Bordé, and C. Chardonnet "Direct Determination of the Boltzmann Constant by an Optical Method", Physical Review Letters 98 250801 (2007)
4. G. Casa, A. Castrillo, G. Galzerano, R. Wehr, A. Merlone, D. Di Serafino, P. Laporta, and L. Gianfrani "Primary Gas Thermometry by Means of Laser-Absorption Spectroscopy: Determination of the Boltzmann Constant", Physical Review Letters 100 200801 (2008)
5. J. P. Pekola, K. P. Hirvi, J. P. Kauppinen, and M. A. Paalanen "Thermometry by Arrays of Tunnel Junctions", Physical Review Letters 73 pp. 2903-2906 (1994)
6. Jukka P. Pekola, Tommy Holmqvist, and Matthias Meschke "Primary Tunnel Junction Thermometry", Physical Review Letters 101 206801 (2008)
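The determination of $k_B$ from the ratio of the molar gas constant to the Avogadro constant mentioned above can be sketched in one line of arithmetic (the numerical values are standard SI constants, close to the 2002 CODATA value quoted in the text):

```python
# Boltzmann's constant as the ratio of the molar gas constant R to the
# Avogadro constant N_A.
R = 8.314462618      # molar gas constant, J mol^-1 K^-1
N_A = 6.02214076e23  # Avogadro constant, mol^-1
k_B = R / N_A
print(f"k_B = {k_B:.6e} J/K")   # ~1.380649e-23 J/K
```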
Statistical mechanics makes the connection between macroscopic dynamics and equilibrium states based on microscopic dynamics. For example, while thermodynamics can manipulate equations of state and fundamental relations, it cannot be used to derive them. Statistical mechanics can derive such equations and relations from first principles.

Before we study statistical mechanics, we need to introduce the concept of the density, referred to in classical mechanics as the density function, and in quantum mechanics as the density operator or density matrix. The key idea in statistical mechanics is that the system can have "microstates," and these microstates have a probability. For example, there may be a certain probability that all gas atoms are in a corner of the room, and this is much lower than the probability that they are evenly distributed throughout the room. Statistical mechanics deals with these probabilities, rather than with individual particles.

Three general contexts of probability are in common use:

a. Discrete systems: an example would be rolling dice. A die has 6 faces, each of which is a "microstate." Each outcome is equally likely if the die is not loaded, so $p_{i}=\dfrac{1}{W}=\dfrac{1}{6}$ is the probability of being in microstate "i", where W=6 is the total number of microstates.

b. Classical systems: here the microstate has to be specified by the positions $x_i$ and the momenta $p_i$ of all particles in the system, so $\rho\left ( x_{i},p_{i},t \right )$ is the time-dependent probability of finding the particles at $x_i$ and $p_i$. Do not confuse the momentum here with the probability in a.! It should be clear from the context. $\rho$ is the classical density function.

c. Quantum systems: here the microstate is specified by the density operator or density matrix $\hat{\rho}\left ( t \right )$.
The probability that the system is in quantum state "i" with state $|i\rangle$ is given by $p_{i}(t)=\int d x \Psi_{i}^{*}(x) \hat{\rho}(x, t) \Psi_{i}(x)=\langle i|\hat{\rho}(t)| i\rangle$. Of course the probability does not have to depend on time if we are in an equilibrium state.

In all three cases, statistical mechanics attempts to evaluate the probability from first principles, using the Hamiltonian of the closed system. In an equilibrium state, the probability does not depend on time, but still depends on x and p (classical) or just x (quantum). We need to briefly review basic concepts in classical and quantum dynamics to see how the probability evolves in time, and when it does not evolve (equilibrium is reached).

Mechanics: classical

Definition of phase space: Phase space is the 6n-dimensional space of the 3n coordinates and 3n momenta of a set of n particles, which, taken together, constitute the system.

Definition of a trajectory: The dynamics of a system of N degrees of freedom (N = 6n for n particles in 3-D) are specified by a trajectory {xj(t), pj(t)}, j = 1...3n, in phase space.

Note: a system of n particles in phase space is defined by a single point {x1(t), x2(t), ... x3n(t), p1(t), p2(t), ... p3n(t)} that evolves in time. For that specific system, the density function is a delta function in phase space centered at that single point, moving along as the phase-space trajectory moves. The phase-space trajectory is not to be confused with the 3-D trajectories of the individual particles.

In statistical mechanics, the system must satisfy certain constraints (e.g. all xi(t) must lie within a box of volume V; all speeds must be less than the speed of light; etc.). The density function $\rho\left ( x_{i},p_{i},t;\text{constraints} \right )$ that we usually care about in statistical mechanics is the average probability density of ALL systems satisfying the constraints.
We call the group of systems satisfying the same constraints an "ensemble", and the average density the "ensemble density":

$\rho=\dfrac{1}{W} \sum_{m=1}^{W} \rho_{m},$

where the sum is over all W possible systems in the microstates "m" satisfying the constraints. Unlike an individual $\rho_m$, which is a moving spike in phase space, $\rho$ sums over all microstates and looks continuous (at least after a small amount of smoothing).

For example, consider a single atom with position and momentum {x, p} in a small box. Each $\rho_m$ for each microstate "m" is a spike moving about in the phase space {x, p}. Averaging over all such microstates yields a $\rho$ that is uniformly spread over the positions in the box (independent of position x), with a Maxwell-Boltzmann distribution of momenta: $\rho \sim e^{-p^{2}/2mk_{B}T}$ (assuming the walls of the box can equilibrate the particle to a temperature T).

How does $\rho$ evolve in time? Each coordinate qi = xi evolves according to Newton's law, which can be recast as

$F_{i}=m \ddot{x}_{i} \quad\Rightarrow\quad -\dfrac{\partial V}{\partial x_{i}}=\dfrac{d}{d t}\left(m \dot{x}_{i}\right)=\dfrac{d}{d t}\left(\dfrac{\partial K}{\partial \dot{x}_{i}}\right)$

if the force is derived from a potential V, and where $K=\dfrac{1}{2} \sum_{i} m_{i} \dot{x}_{i}^{2}=\sum_{i} \dfrac{p_{i}^{2}}{2 m_{i}}$ is the kinetic energy. We can define the Lagrangian L = K − V and rewrite the above equation as

$\dfrac{d}{d t}\left(\dfrac{\partial L\left(x_{i}, \dot{x}_{i}, t\right)}{\partial \dot{x}_{i}}\right)-\dfrac{\partial L}{\partial x_{i}}=0.$

One can prove using variational calculus (see appendix A) that this differential equation (Lagrange's equation) is valid in any coordinate system, and is equivalent to the statement

$\min \left\{S=\int_{t_{0}}^{t_{f}} L\left(x_{i}, \dot{x}_{i}\right) d t\right\}$

where S is the action (not to be confused with the entropy!).
Let's say the particle moves from positions xi(t0) at t=t0 to xi(tf) at tf. Guess a trajectory xi(t). The trajectory xi(t) can be used to compute the velocity $\dot{x}_{i}=\partial x_{i}/\partial t$ and hence $L\left ( x_{i},\dot{x}_{i} \right )$. The actual trajectory followed by the classical particles is the one that minimizes the above integral, called "the action." From the Lagrangian we obtain the momenta:

$\dfrac{\partial L}{\partial \dot{x}_{i}}=p_{i} \quad\left(p_{i}=m \dot{x}_{i}\right)$

Thinking of $L$ as $L\left(\dot{x}_{i}\right)$ and of the $p_{i}$ as the derivatives, we can Legendre transform to a new representation $H$:

$-H \equiv L-\sum_{i=1}^{N} \dot{x}_{i} p_{i} .$

It will become obvious shortly why we define $H$ with a minus sign. According to the rules for Legendre transforms,

$\dfrac{\partial H}{\partial p_{i}}=\dot{x}_{i} \quad\text { and }\quad \dfrac{\partial H}{\partial x_{i}}=-\dot{p}_{i}$

These are Hamilton's equations of motion for a trajectory in phase space. They are equivalent to solving Newton's equations. Evaluating $H$,

$H\left(x_{i}, p_{i}\right)=-K+V+\sum_{i} \dfrac{p_{i}}{m_{i}} p_{i}=K+V$

The Hamiltonian is the sum of kinetic and potential energy, i.e. the total energy, and is conserved if $H$ is not explicitly time-dependent:

$\dfrac{d H(x, p)}{d t}=\dfrac{\partial H}{\partial x} \dfrac{d x}{d t}+\dfrac{\partial H}{\partial p} \dfrac{d p}{d t}=\dfrac{\partial H}{\partial x} \dfrac{\partial H}{\partial p}-\dfrac{\partial H}{\partial p} \dfrac{\partial H}{\partial x}=0 .$

Thus Newton's equations conserve energy. This is because all particles are accounted for in the Hamiltonian $H$ (closed system). Note that Lagrange's and Hamilton's equations hold in any coordinate system, so from now on we will write $H\left(q_{i}, p_{i}\right)$ instead of using $x_{i}$ (cartesian coordinates).

Let $\hat{A}\left(q_{i}, p_{i}, t\right)$ be any dynamical variable (many $\hat{A}$s of interest do not depend explicitly on t, but we include it here for generality).
Then

$\dfrac{d \hat{A}}{d t}=\sum_{i}\left(\dfrac{\partial \hat{A}}{\partial q_{i}} \dfrac{d q_{i}}{d t}+\dfrac{\partial \hat{A}}{\partial p_{i}} \dfrac{d p_{i}}{d t}\right)+\dfrac{\partial \hat{A}}{\partial t}=\sum_{i}\left(\dfrac{\partial \hat{A}}{\partial q_{i}} \dfrac{\partial H}{\partial p_{i}}-\dfrac{\partial \hat{A}}{\partial p_{i}} \dfrac{\partial H}{\partial q_{i}}\right)+\dfrac{\partial \hat{A}}{\partial t}=[\hat{A}, H]_{P}+\dfrac{\partial \hat{A}}{\partial t}$

gives the time dependence of $\hat{A}$. $[\;]_{P}$ is the Poisson bracket.

Now consider the ensemble density $\rho\left ( q_{i},p_{i},t \right )$ as a specific example of a dynamical variable. Because trajectories cannot be destroyed, we can normalize

$\iint d q_{i} d p_{i} \rho\left(q_{i}, p_{i}, t\right)=1$

Integrating the probability over all of phase space, we are guaranteed to find the system somewhere subject to the constraints. Since the above integral is a constant, we have

$\dfrac{d}{d t} \iint d q_{i} d p_{i} \rho=0=\iint d q_{i} d p_{i} \dfrac{d \rho}{d t}=\iint d q_{i} d p_{i}\left\{[\rho, H]_{P}+\dfrac{\partial \rho}{\partial t}\right\} \quad\Rightarrow\quad \dfrac{\partial \rho}{\partial t}=-[\rho, H]_{P}$

This is the Liouville equation; it describes how the density propagates in time. To calculate the average value of an observable $\hat{A}\left(q_{i}, p_{i}\right)$ in the ensemble of systems described by $\rho$, we calculate

$A(t)=\iint d q_{i} d p_{i} \rho\left(q_{i}, p_{i}, t\right) \hat{A}\left(q_{i}, p_{i}\right)=\langle A\rangle_{\text {ens }}$

Thus if we know $\rho$, we can calculate any average observable. For certain systems which are left unperturbed by outside influences (closed systems),

$\lim _{t \rightarrow \infty} \dfrac{\partial \rho}{\partial t}=0 .$

In this limit $\rho$ reaches an equilibrium distribution $\rho_{e q}\left(q_{i}, p_{i}\right)$ and

$\left[\rho_{e q}, H\right]_{P}=0 \quad\text { (definition of equilibrium) }$

In such a case, $A(t) \rightarrow A$, the equilibrium value of the observable. The goal of equilibrium statistical mechanics is to find the values A for a $\rho$ subject to certain imposed constraints; e.g. $\rho_{e q}\left(q_{i}, p_{i}; U, V\right)$ describes the set of all possible system trajectories such that $U$ and $V$ are constant. The more general goal of non-equilibrium statistical mechanics is to find $A(t)$ given an initial condition $\rho_{0}\left(q_{i}, p_{i}\right.$; constraints). So much for the classical picture.

Mechanics: quantum

Now let us rehearse the whole situation again for quantum mechanics. The quantum formulation is the one best suited to systems where the energy available to a degree of freedom becomes comparable to or smaller than the characteristic energy gap of that degree of freedom. The classical and quantum formulations are highly analogous.

A fundamental quantity in quantum mechanics is the density operator

$\hat{\rho}_{i}(t)=\left|\psi_{i}(t)\right\rangle\left\langle\psi_{i}(t)\right| .$

This density operator projects onto the microstate "i" of the system, $\left|\psi_{i}(t)\right\rangle$. If we have an ensemble of W systems, we can define the ensemble density operator $\hat{\rho}$ subject to some constraints as

$\hat{\rho}(t ; \text { constraints })=\dfrac{1}{W} \sum_{i=1}^{W} \hat{\rho}_{i}(t ; \text { constraints })$

For example, let the constraint be $U=$ const. Then we would sum over all microstates that are degenerate at the same energy $U$. This average is analogous to averaging the classical probability density over microstates subject to constraints. To obtain the equation of motion for $\hat{\rho}$, we first look at the wavefunction.
Its equation of motion is

$\hat{H} \psi=i \hbar \dfrac{\partial}{\partial t} \psi$

which, by splitting $\psi$ into its real and imaginary parts $\psi_{r}$ and $\psi_{i}$, can be written

$\dfrac{1}{\hbar} \hat{H} \psi_{r}=\dot{\psi}_{i} \quad\text { and }\quad \dfrac{1}{\hbar} \hat{H} \psi_{i}=-\dot{\psi}_{r} .$

Using any complete basis $\hat{H}\left|\varphi_{j}\right\rangle=E_{j}\left|\varphi_{j}\right\rangle$, the trace of $\hat{\rho}_i$ is conserved:

$\operatorname{Tr}\left\{\hat\rho_{i}(t)\right\}=\sum_{j}\left\langle\varphi_{j} \mid \psi_{i}(t)\right\rangle\left\langle\psi_{i}(t) \mid \varphi_{j}\right\rangle=\sum_{j}\left\langle\psi_{i} \mid \varphi_{j}\right\rangle\left\langle\varphi_{j} \mid \psi_{i}\right\rangle=\left\langle\psi_{i}(t) \mid \psi_{i}(t)\right\rangle=1$

if $\psi_{i}(t)$ is normalized, or

$\operatorname{Tr}\left\{\hat{\rho}_{i}\right\}=1 \text {, and } \operatorname{Tr}\{\hat{\rho}\}=1 \text {. }$

This basically means that probability density cannot be destroyed, in analogy to trajectory conservation in the classical case. Note that if

$\hat{\rho}_{i}=|\psi\rangle\langle\psi| \quad\Rightarrow\quad \hat{\rho}_{i}^{2}=\hat{\rho}_{i}, \text { so } \operatorname{Tr}\left(\hat{\rho}_{i}^{2}\right)=1$

A state described by a wavefunction $|\Psi\rangle$ that satisfies the latter equation is a pure state. Most states of interest in statistical mechanics are NOT pure states. If

$\hat{\rho}=\dfrac{1}{W} \sum_{i=1}^{W} \hat{\rho}_{i},$

the complex off-diagonal elements tend to cancel because of random phases and $\operatorname{Tr}\left(\hat{\rho}^{2}\right)<1$; this is an impure (mixed) state.

Example: Let $\psi=c_{0}^{i}|0\rangle+c_{1}^{i}|1\rangle$ be an arbitrary wavefunction for a two-level system.
$\Rightarrow \hat\rho_{i}=|\Psi\rangle\langle\Psi|=c_{0}^{i} c_{0}^{i^{*}}|0\rangle\langle 0|+c_{0}^{i} c_{1}^{i^{*}}|0\rangle\langle 1|+c_{1}^{i} c_{0}^{i^{*}}|1\rangle\langle 0|+c_{1}^{i} c_{1}^{i^{*}}|1\rangle\langle 1|$

or, in matrix form,

$\hat{\rho}_{i}=\left(\begin{array}{cc}\left|c_{0}^{i}\right|^{2} & c_{0}^{i} c_{1}^{i^{*}} \\ c_{1}^{i} c_{0}^{i^{*}} & \left|c_{1}^{i}\right|^{2} \end{array}\right), \quad \operatorname{Tr}\left(\hat{\rho}_{i}\right)=\left|c_{0}^{i}\right|^{2}+\left|c_{1}^{i}\right|^{2}=1$

Generally, macroscopic constraints (volume, spin population, etc.) do not constrain the phases of $c_{0}=\left|c_{0}\right| e^{i \varphi_{0}}$ and $c_{1}=\left|c_{1}\right| e^{i \varphi_{1}}$. Thus, ensemble averaging gives

$\hat{\rho}=\lim _{W \rightarrow \infty} \dfrac{1}{W} \sum_{i=1}^{W} \hat{\rho}_{i}=\lim _{W \rightarrow \infty}\left(\begin{array}{cc} \left|c_{0}\right|^{2} & \dfrac{e^{i \bar{\varphi}}}{\sqrt{W}} \\ \dfrac{e^{-i \bar{\varphi}}}{\sqrt{W}} & \left|c_{1}\right|^{2} \end{array}\right)=\left(\begin{array}{cc} \left|c_{0}\right|^{2} & 0 \\ 0 & \left|c_{1}\right|^{2} \end{array}\right) \quad\Rightarrow\quad \operatorname{Tr}\left(\hat{\rho}^{2}\right)=\left|c_{0}\right|^{4}+\left|c_{1}\right|^{4}<1$

Such a state is known as an impure state.
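The phase-averaging argument above is easy to verify numerically. In this sketch the populations $|c_0|^2 = 0.3$ and $|c_1|^2 = 0.7$ and the number of ensemble members W are illustrative choices:

```python
import numpy as np

# Ensemble average of pure-state projectors |psi><psi| for a two-level system
# with fixed populations but random phases: the off-diagonal (coherence)
# elements average away and Tr(rho^2) drops below 1.
rng = np.random.default_rng(0)
c0, c1 = np.sqrt(0.3), np.sqrt(0.7)   # fixed populations, arbitrary choice
W = 20000

rho = np.zeros((2, 2), dtype=complex)
for _ in range(W):
    phi0, phi1 = rng.uniform(0, 2 * np.pi, 2)   # unconstrained phases
    psi = np.array([c0 * np.exp(1j * phi0), c1 * np.exp(1j * phi1)])
    rho += np.outer(psi, psi.conj())
rho /= W

purity = np.trace(rho @ rho).real
print(np.round(rho, 3))  # off-diagonals ~ 1/sqrt(W); diagonals 0.3 and 0.7
print(purity)            # ~0.3^2 + 0.7^2 = 0.58 < 1: an impure (mixed) state
```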
To obtain the equation of motion of $\hat{\rho}$, use the time-dependent Schrödinger equation in propagator form,

$\left|\psi_{i}(t)\right\rangle=e^{-\frac{i}{\hbar} \hat{H} t}\left|\psi_{i}(0)\right\rangle$

and the definition of $\hat\rho_i$,

$\hat{\rho}_{i}(t)=\left|\psi_{i}(t)\right\rangle\left\langle\psi_{i}(t)\right|$

to obtain the equation of motion

\begin{aligned} \dfrac{\partial}{\partial t} \hat{\rho}_{i}(t) &=\dfrac{\partial}{\partial t}\left\{e^{-\frac{i}{\hbar} \hat{H} t}\left|\psi_{i}(0)\right\rangle\left\langle\psi_{i}(0)\right| e^{+\frac{i}{\hbar} \hat{H} t}\right\} \\ &=-\dfrac{i}{\hbar} \hat{H} e^{-\frac{i}{\hbar} \hat{H} t}\left|\psi_{i}(0)\right\rangle\left\langle\psi_{i}(0)\right| e^{+\frac{i}{\hbar} \hat{H} t}+e^{-\frac{i}{\hbar} \hat{H} t}\left|\psi_{i}(0)\right\rangle\left\langle\psi_{i}(0)\right| e^{+\frac{i}{\hbar} \hat{H} t}\left(+\dfrac{i}{\hbar} \hat{H}\right) \\ &=-\dfrac{i}{\hbar} \hat{H} \hat\rho_{i}+\dfrac{i}{\hbar} \hat\rho_{i} \hat{H} \\ &=-\dfrac{1}{i \hbar}\left[\hat\rho_{i}, \hat{H}\right] \end{aligned}

This is known as the Liouville-von Neumann equation. The commutator in the last line plays the role of the Poisson bracket in classical dynamics. Summing over all microstates to obtain the average density operator $\hat\rho=\dfrac{1}{W} \sum_{i=1}^{W} \hat{\rho}_{i}$ gives

$\dfrac{\partial \hat{\rho}}{\partial t}=-\dfrac{1}{i \hbar}[\hat{\rho}, \hat{H}]$

This Liouville-von Neumann equation is the quantum equation of motion for $\hat{\rho}$. If $\hat\rho$ represents an impure state, this propagation cannot be represented by the time-dependent Schrödinger equation. We are interested in average values of observables in an ensemble of systems.
Starting with a pure state,

$A(t)=\left\langle\psi_{i}(t)|\hat{A}| \psi_{i}(t)\right\rangle=\sum_{j}\left\langle\psi_{i}(t) \mid \varphi_{j}\right\rangle\left\langle\varphi_{j}|\hat{A}| \psi_{i}(t)\right\rangle=\sum_{j}\left\langle\varphi_{j}|\hat{A}| \psi_{i}(t)\right\rangle\left\langle\psi_{i}(t) \mid \varphi_{j}\right\rangle=\operatorname{Tr}\left\{\hat{A} \hat{\rho}_{i}(t)\right\}$

Summing over the ensemble,

$\dfrac{1}{W} \sum_{i=1}^{W} \operatorname{Tr}\left\{\hat{A} \hat\rho_{i}\right\}=\operatorname{Tr}\left\{\hat{A} \dfrac{1}{W} \sum_{i=1}^{W} \hat{\rho}_{i}\right\} \quad\Rightarrow\quad A(t)=\operatorname{Tr}\{\hat{A} \hat{\rho}(t)\}$

In particular,

$P_{j}=\operatorname{Tr}\left\{\hat{P}_{j} \hat{\rho}(t)\right\}=\operatorname{Tr}\left\{\left|\varphi_{j}\right\rangle\left\langle\varphi_{j}\right| \hat{\rho}(t)\right\}=\left\langle\varphi_{j}|\hat{\rho}(t)| \varphi_{j}\right\rangle=\rho_{j j}(t)$

is the probability of being in state $j$ at time $t$. Finally, $\hat{\rho}$ may evolve to long-time solutions $\hat{\rho}_{e q}$ such that $\dfrac{\partial \hat\rho}{\partial t}=0$, i.e.

$\left[\hat{\rho}_{e q}, \hat H\right]=0 \quad\text{(condition for equilibrium)}.$

In that case, the density matrix has relaxed to the equilibrium density matrix, which no longer evolves in time.

Example: Consider a two-level system again. Let

$\hat H|j\rangle=E_{j}|j\rangle \text { for } j=0,1, \text { or } \hat H=\left(\begin{array}{cc} E_{0} & 0 \\ 0 & E_{1} \end{array}\right) ; \text { if } \hat{\rho}=\left(\begin{array}{cc} \rho_{00} & 0 \\ 0 & \rho_{11} \end{array}\right) \Rightarrow[\hat{\rho}, \hat H]=0$

Thus a diagonal density matrix of a closed system does not evolve. The equilibrium density matrix must be diagonal so that $\left[\hat\rho_{e q}, \hat H\right]=0$. This corresponds to a completely impure state. Note that unitary evolution cannot change the purity of any closed system. Thus, the density matrix of a single closed system cannot evolve to diagonality unless $\hat{\rho}=\dfrac{1}{W} \sum_{i} \hat\rho_{i}$ for the ensemble is already diagonal.
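The statement that unitary (closed-system) evolution preserves both the trace and the purity can be checked directly by propagating a density matrix with $\hat\rho(t)=U\hat\rho(0)U^{\dagger}$, $U=e^{-i\hat H t/\hbar}$. The 2x2 Hamiltonian and initial state below are toy choices:

```python
import numpy as np

# Propagate a two-level density matrix with rho(t) = U rho(0) U^dagger and
# check that Tr(rho) and the purity Tr(rho^2) are conserved by unitary
# evolution. H and rho(0) are illustrative toy matrices (hbar = 1).
hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, 2.0]])
rho0 = np.array([[0.8, 0.2], [0.2, 0.2]], dtype=complex)  # Tr = 1, mixed

evals, V = np.linalg.eigh(H)          # diagonalize H to build the propagator

def U(t):
    """U(t) = exp(-i H t / hbar) via the eigendecomposition of H."""
    return V @ np.diag(np.exp(-1j * evals * t / hbar)) @ V.conj().T

t = 3.7
rho_t = U(t) @ rho0 @ U(t).conj().T
print(np.trace(rho_t).real)           # stays 1: probability is conserved
print(np.trace(rho_t @ rho_t).real)   # purity is unchanged by the evolution
```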
In reality, single systems still decohere because they are open to the environment: let i denote the degrees of freedom of the system, and j those of a bath (e.g. a heat reservoir). $\hat{\rho}$ then depends on both sets of indices and can be written as a matrix $\hat{\rho}_{i j, i^{\prime} j^{\prime}}(t)$. For example, for a two-level system coupled to a large bath, the indices i and i' only go from 1 to 2, but the indices j and j' could go to $10^{20}$. We can average over the bath by taking the partial trace

$\hat{\rho}^{(red)}=\operatorname{Tr}_{j}\{\hat{\rho}\}$

$\hat{\rho}^{(red)}$ only has matrix elements for the system degrees of freedom; e.g. for a two-level system in contact with a bath of $10^{20}$ states, $\hat{\rho}^{(red)}$ is still a $2 \times 2$ matrix. We will show later that for a bath at constant $T$,

$\hat{\rho}^{(red)} \rightarrow \hat\rho_{e q}=\left(\begin{array}{cc} \dfrac{e^{-E_{0} / k_{B} T}}{Q} & 0 \\ 0 & \dfrac{e^{-E_{1} / k_{B} T}}{Q} \end{array}\right)$

Note that "reducing" was not necessary in the classical discussion because quantum coherences and phases do not exist there.

To summarize the analogous entities:

Classical | Quantum | Meaning
$\rho\left(q_{i}, p_{i}; \text{constr.}\right)$ | $\hat{\rho}(t ; \text{constr.})$ | Density function or operator
$\hat{A}\left(q_{i}, p_{i}\right)$ | operator $\hat{A}$ | Observable (hermitian)
$\dfrac{\partial H}{\partial p}=\dot{q},\ \dfrac{\partial H}{\partial q}=-\dot{p}$ | $\dfrac{1}{\hbar} \hat H \psi_{r}=\dot{\psi}_{i},\ \dfrac{1}{\hbar} \hat H \psi_{i}=-\dot{\psi}_{r}$ | Equation of motion for trajectory or $\psi$
$\iint d p_{i} d q_{i}$ | $\operatorname{Tr}\{\}$ | Averaging
$\iint d p_{i} d q_{i}\, \rho=1$ | $\operatorname{Tr}\{\hat{\rho}\}=1$ | Conservation of probability
$A(t)=\iint d p_{i} d q_{i} \hat{A}\left(q_{i}, p_{i}\right) \rho\left(q_{i}, p_{i}, t\right)$ | $A(t)=\operatorname{Tr}\{\hat{A} \hat{\rho}(t)\}$ | Expectation value in ensemble
$\dfrac{\partial \rho}{\partial t}=-[\rho, H]_{P}$ | $\dfrac{\partial \hat\rho}{\partial t}=-\dfrac{1}{i \hbar}[\hat\rho, \hat H]$ | Equation of motion for $\rho$
$\left[\rho_{e q}, H\right]_{P}=0$ | $\left[\hat{\rho}_{e q}, \hat H\right]=0$ | Necessary equilibrium condition (closed system)

To illustrate basic ideas, we will often go back to simple discrete models with a probability $p_{i}$ for each microstate, but to get accurate answers, one may have to work with the full classical or quantum probability density $\rho$.
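The thermal reduced density matrix quoted above, $\hat\rho_{eq}=\mathrm{diag}(e^{-E_j/k_BT})/Q$, is easy to construct and check for stationarity; the two level energies below (in units of $k_BT$) are illustrative values:

```python
import numpy as np

# Equilibrium (thermal) density matrix for a two-level system in contact with
# a bath at temperature T: rho_eq = diag(exp(-E_j / k_B T)) / Q.
E = np.array([0.0, 1.5])       # E_j / (k_B T), illustrative values
w = np.exp(-E)                 # Boltzmann factors
Q = w.sum()                    # partition function
rho_eq = np.diag(w / Q)
H = np.diag(E)

comm = H @ rho_eq - rho_eq @ H
print(rho_eq)                  # diagonal; populations sum to 1
print(np.abs(comm).max())      # 0: [rho_eq, H] = 0, so rho_eq is stationary
```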
Thermodynamics puts constraints on the behavior of macroscopic systems without referencing the underlying microscopic properties. In particular, it does not provide a quantitative connection to the origin of its fundamental quantities $U$ and $S$. For $U$, this is less of a problem because we know from mechanics that

$U=\dfrac{1}{2} \sum m_i v_i^2 + V(x_i),$

and the macroscopic formula arises by integrating over most coordinates and velocities. Somehow the thermal motions end up as $TS$, and the mechanical and electrical motions end up as terms such as $-PV+\mu n$.

Statistical mechanics makes the macro-micro connection and provides a quantitative description of U and S in terms of microscopic quantities. For large systems (except near the critical point), its results are in agreement with thermodynamics: one can derive thermodynamic postulates 0 - 3 from statistical mechanics. For systems undergoing large fluctuations (small systems, or systems near a critical point), its predictions are different and more accurate. In addition, as the 'mechanics' implies, statistical mechanics can deal with time-varying systems and systems out of equilibrium. Averages over x(t) and p(t)=mv(t) of the microscopic particles are taken, but not in such a way that all time-dependent information is lost, as in thermodynamics.

Unlike mechanics, statistical mechanics is not intended to discuss the time-dependence of an isolated particle. Rather, the time-dependent (e.g. diffusion coefficient) and time-independent properties of whole systems of particles, and the averaged properties of whole ensembles of such systems, are of interest. We begin with an introduction to important facts from mechanics and statistics, then proceed to the postulates of statistical mechanics, consider in detail equilibrium systems, and finally non-equilibrium systems.
Goal of statistical mechanics:

• have a system of many particles with positions $x_{i}$ and velocities $\dot{x}_{i}$ (or wavefunctions $\Psi\left(x_{i}\right)$);
• want values $A(t)$ of any observable $A\left(x_{i}, \dot{x}_{i}\right)$ as the particles move about, averaged over all microstates of the system consistent with constraints, such as energy $U=$ constant, or $V=$ constant.

Goal of thermodynamics:

• find relations among extensive observables $X$ and their derivatives at equilibrium only.

The postulates of statistical mechanics and connection to thermodynamics:

Postulate I: Extension of microscopic laws
Hamiltonian dynamics applies to the density operator $\hat{\rho}_{i}$ of any finite closed system, fully specified by its extensive constraint parameters and the Hamiltonian.

Postulate II: Principle of equal probabilities
The principle of equal probabilities holds in its ensemble (weak) form and is assumed in its strong (time) form.

i) Weak form: all W microscopic realizations of a system satisfying I have equal probability. The ensemble density matrix is therefore given by $\hat{\rho}=\frac{1}{W} \sum_{i=1}^{W} \hat{\rho}_{i}$. The ensemble of these $W$ systems is the microcanonical ensemble.

ii) Strong form: for any ensemble satisfying i) at equilibrium, $\left\langle\hat{\rho}_{i}\right\rangle_{t}=\left\langle\hat{\rho}_{i}\right\rangle_{e}=\hat{\rho}$ (ergodic principle). This states that averaging over time is equivalent to averaging over the ensemble of $W$ microstates.

Postulate III: Entropy
The entropy of an ensemble of systems satisfying postulates I and II.i) is given by $S=-k_B\operatorname{Tr}\{\hat{\rho} \ln \hat{\rho}\}$

Before we use them, these postulates require some explanation.

I) This is a strong statement; the system usually has a $>10^{20}$-dimensional phase space, and we assume that the dynamics are the same as for a few degrees of freedom!
Classically, $\hat{\rho}_{i}$ corresponds to a specific trajectory; quantum mechanically, to a specific initial condition of the system. Among the extensive variables fixed in a closed finite system is U (always, by postulate I). Other constrained variables: $V$ (or $L$, $A$, $N_{i}=$ particle number ... depending on the system). Note that if $\hat{H}$ is independent of time, the system is closed, and $U$ is therefore constant (as it needs to be for 'full' specification of the system), so P1 of thermodynamics is automatically satisfied.

II) This is the postulate that lets us perform macroscopic averages over the individual density matrices, so we can derive properties for the energy-conserving (microcanonical) ensemble.

i. Classically, this says that as long as a trajectory satisfies the constraints in I (has specific energy U), we can combine it with equal weight with all other such trajectories to obtain $\rho\left(q_{i}, p_{i}, t\right)$, the classical density function. Quantum-mechanically, this means that all the linearly independent pure density matrices $\hat{\rho}_{i}$ characterizing a system with the same extensive parameters (i.e. all the members of the microcanonical ensemble) can be averaged with equal weights to obtain the ensemble density matrix.

Example: consider a state of energy $U$ that can be realized in $W$ ways (W-fold degenerate, i.e. W microstates). One set of initial conditions $\hat{\rho}_{i}$ would be

$\rho_{1}=\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right), \quad \rho_{2}=\left(\begin{array}{ccc} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right), \ldots$

These are pure states. All of these are equally likely because they have the same energy (and volume, etc.), so

$\hat{\rho}=\frac{1}{W} \sum_{i=1}^{W} \hat{\rho}_{i}=\left(\begin{array}{ccc} 1 / W & & 0 \\ & \ddots & \\ 0 & & 1 / W \end{array}\right)$

This is a 'mixed' state of constant energy U.
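The averaging in this example can be reproduced in a few lines; W = 5 is an arbitrary small illustrative degeneracy:

```python
import numpy as np

# Average the W pure-state projectors |i><i| of a W-fold degenerate level with
# equal weights: the result is the mixed microcanonical density matrix
# (1/W) * identity, as in the example above.
W = 5
rho = np.zeros((W, W))
for i in range(W):
    rho_i = np.zeros((W, W))
    rho_i[i, i] = 1.0        # pure state: projector onto microstate i
    rho += rho_i / W
print(rho)                   # 1/W on the diagonal, zeros elsewhere
```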
Note that there is a potentially embarrassing problem with this: a finite quantum system (e.g. a particle in a box) for which all extensive parameters (e.g. $U$, or $L$ for particles in a 1-D box) have been specified has a discrete energy spectrum given by

$H\left|\varphi_{i}\right\rangle=E_{i}\left|\varphi_{i}\right\rangle .$

For a large system, the level spacing may be very narrow, but it is nonetheless discrete. Thus, at almost every U we pick, there is likely to be no state, so we have nothing to average! In practice, this is resolved by having an energy window $\delta U$, and by considering all W levels within it. As discussed in more detail in III below, as the number of degrees of freedom $N=6 n$ of the system approaches infinity, the size of $\delta U$ rigorously has no effect on the result.

ii. This says we could take a single trajectory, or a single initial condition $\hat{\rho}_{i}(0)$, propagate it in time, and all the possible microscopic states will be visited in turn to yield again $\rho\left(q_{i}, p_{i}, t\right)$ (classically) or $\hat{\rho}(t)$ (quantum mechanically). This is a much stronger statement than i): the full ensemble of W microstates by definition includes all realizations i of the macroscopic system compatible with $H$ and the constraints; on the other hand, ii) says a single microstate will, in time, evolve to explore all the others, or at least come arbitrarily close to them. This property is known as 'ergodicity.' In practice, ergodicity cannot really be satisfied, but we can use ii) for 'all practical purposes.'

Example showing why ergodicity cannot be satisfied: We will use a discrete system to illustrate. Consider a box with $M=\frac{V}{V_{0}}$ cells, filled with $N \ll M$ particles of volume $V_{0}$. The dynamics is that the particles hop randomly to unoccupied neighboring cells at each time step $\Delta t$. This model is called a lattice ideal gas.
The number of arrangements for $N$ identical particles is $W=\frac{M !}{(M-N) ! \, N !}$ Large factorials $n!$ (or Gamma functions, $\Gamma(n+1)=n!$) can be approximated by Stirling's formula, which yields (for $N \ll M$, to logarithmic accuracy) $W=\frac{M !}{(M-N) ! \, N !} \approx \frac{M^{M}}{(M-N)^{M-N} N^{N}} \approx\left(\frac{M}{N}\right)^{N}$ Let us plug realistic numbers into this: $V_{0}=10\ \mathrm{\AA}^{3}$, $V=1\ \mathrm{cm}^{3}$ $\rightarrow M=\frac{V}{V_{0}}=10^{23}$. For $N \sim 10^{19}$ gas molecules ($\sim$1 atm) $\rightarrow M/N = 10^{4}$. With a gas velocity $v_{\text{gas}} \approx 300\ \mathrm{m/s}$ (O$_{2}$ at room temperature), the hopping time is $\Delta t=\frac{L_{0}}{v}=\frac{V_{0}^{1 / 3}}{v} \approx 10^{-12}\ \mathrm{s}=1\ \mathrm{ps}$ The lifetime of the universe is $\lesssim 10^{11}\ \mathrm{a} \approx 10^{18}\ \mathrm{s}$, so the number of microstates a single trajectory can visit is at most $W_{\text {possible }}=\frac{10^{18}\ \mathrm{s}}{10^{-12}\ \mathrm{s}}=10^{30},$ whereas $W_{\text {actual }}=\left(10^{4}\right)^{10^{19}}=10^{4 \times 10^{19}} \gg 10^{30}$ (vastly more than a googol, $10^{100}$). The possible number of microstates that can be visited during the lifetime of the universe is a mere $10^{30}$, negligible compared to the actual number of microstates $W_{\text{actual}}$ at constant energy. Clearly, not even a warm gas, a system about as random as conceivable, even touches the true microcanonical degeneracy W. Although the a priori probability of microstates (classically: of trajectories) may be the same (i), they simply cannot all be sampled in finite time. As discussed in III, this also provides a practical solution to the quantum dilemma outlined in i). Why assume ii) at all? In real life $\hat{\rho}_{i}(t)$ is always what is observed, but it is difficult to compute. W or $\hat{\rho}(t)$ are often much easier to compute. Although ii) fails by an enormous factor, surprisingly it still works in most situations: most microstates in the ensemble of $W_{\text{actual}}$ microstates are indistinguishable (e.g. the gas atoms in the room right now vs. 10 seconds from now), so leaving many of them out of the average still yields the same average; sampling only one in $10^{27}$ still gives the same result as true ensemble averaging.
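The arithmetic behind these estimates is easy to reproduce (in log base 10, since the numbers themselves overflow any float); the inputs are the ones quoted above:

```python
import math

# Lattice ideal gas estimates from the text (all arithmetic in log10,
# since the actual W overflows floating point).
M = 1e23          # number of cells, V/V0 for V = 1 cm^3, V0 = 10 A^3
N = 1e19          # number of particles (~1 atm in 1 cm^3)

# W_actual ~ (M/N)^N  ->  log10 W_actual = N * log10(M/N)
log10_W_actual = N * math.log10(M / N)      # 4e19

# one hop per Delta t ~ 1 ps; lifetime of universe ~ 1e18 s
log10_W_possible = math.log10(1e18 / 1e-12) # 30

print(f"log10 W_actual   ~ {log10_W_actual:.3g}")
print(f"log10 W_possible ~ {log10_W_possible:.3g}")
```

Even on a log scale the mismatch is grotesque: $4\times10^{19}$ decades of microstates exist, of which a trajectory can visit only 30 decades.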
There are cases where this reasoning fails: in glasses, members of the ensemble can be so slowly interconverting and so different from one another that $\hat{\rho}$ is not at all like $\left\langle\rho_{i}\right\rangle_{t}$ unless very special care is taken. III) This definition of the entropy was made plausible in our mathematical review, on grounds of information content: a system with many microstates has a greater potential for disorder than a system with a few microstates. But instead of measuring disorder multiplicatively, we want an additive (extensive) quantity. This postulate provides the microscopic definition of thermodynamic entropy $\left(\hat{\rho}=\hat{\rho}_{eq}\ \&\ N \rightarrow \infty\right)$, just as energy is microscopically defined as $U=\langle H\rangle_{eq}, \text { where } H=\frac{1}{2} \sum_{j} \frac{p_{j}^{2}}{m_{j}}+V\left(x_{i}\right).$ Thus $S_{eq} \equiv S=-k_{B} \operatorname{Tr}\left\{\hat{\rho}_{eq} \ln \hat{\rho}_{eq}\right\}$ gives the thermodynamic entropy $S$ in terms of the equilibrium density matrix. We must have $\operatorname{Tr}\left\{\hat{\rho}_{eq}\right\}=1,\left[\hat{\rho}_{eq}, H\right]=0$, and by postulate II.i), all elements of $\hat{\rho}_{eq}$ must be of equal size if we are in the microcanonical (constant energy U) ensemble. This is satisfied only by $\hat{\rho}_{eq}=\left(\begin{array}{ccc}\frac{1}{W} & & 0 \\ & \ddots & \\ 0 & & \frac{1}{W}\end{array}\right),$ where $\hat{\rho}_{eq}$ is a diagonal $W \times W$ matrix. Inserting into $S$ and evaluating the trace in the eigenfunction basis of $\hat{H}$ (and $\hat{\rho}$), which we can call $|j\rangle:$ \begin{aligned}&S=-k_{B} \sum_{j=1}^{W}\left\langle j\left|\hat{\rho}_{eq} \ln \hat{\rho}_{eq}\right| j\right\rangle=-k_{B} \sum_{j=1}^{W} \frac{1}{W} \ln \frac{1}{W} \\ &\Rightarrow S=k_{B} \ln W, \quad \text { where } k_{B} \approx 1.38 \cdot 10^{-23}\ \mathrm{J/K} \text { is Boltzmann's constant. } \end{aligned} This is Boltzmann's famous formula for the entropy.
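The trace evaluation above can be cross-checked in a few lines of code: build the diagonal equilibrium density matrix and evaluate $-k_B \operatorname{Tr}\{\hat\rho\ln\hat\rho\}$ directly (a sketch, with an arbitrary W):

```python
import math

kB = 1.380649e-23  # J/K

def microcanonical_entropy(W):
    """S = -kB * Tr(rho ln rho) for rho = diag(1/W, ..., 1/W)."""
    p = 1.0 / W
    # the trace is a sum over the W equal diagonal elements
    return -kB * sum(p * math.log(p) for _ in range(W))

W = 1000
S = microcanonical_entropy(W)
print(S, kB * math.log(W))   # the two agree: S = kB ln W
```

The explicit sum and the closed form $k_B \ln W$ coincide to floating-point precision, as the derivation requires.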
Postulate III is more general, but at equilibrium Boltzmann's formula holds. It secures for $S$ all the properties in postulates 2 and 3 of thermodynamics, and provides a microscopic interpretation for $S$: W specifies the disorder in a system; the more microstates correspond to the same macrostate, the more disorder a system has. For two independent systems, $W_{\text {tot }}=W_{1} \cdot W_{2} .$ However, thermodynamic entropy has the property of additivity: $S_{\text {tot }}=S_{1}+S_{2}$. The function that uniquely effects the transformation from multiplication to addition is the logarithm (within a constant factor) $\Rightarrow S_{i}=k_{B} \ln W_{i}$ must hold so that both relations at the beginning of this paragraph are satisfied. The constant factor $k_{\mathrm{B}}$ is provided to match the energy and temperature scales, which were independently defined in the early $19^{\text {th }}$ century when the equivalence of temperature and average energy was not understood. Consider a system divided into subsystems by constraints, with $W=W_{0} .$ When the constraints are removed at $t=0$, then by II.ii) the system now explores additional ensemble members as time goes on. Thus $W(t>0)=W_{1}>W_{0}$. If macroscopic equilibrium is reached, $W_{eq} \geq W_{1}>W_{0} \Rightarrow S_{eq}>S_{0}$. Thus S in stat mech postulate III satisfies all requirements of postulate P2 of thermodynamics, which is seen to be a simple consequence of the fact that microscopic degrees of freedom tend to explore all available states (= all the available phase space in classical mechanics). Also, because $W$ is monotonic in $U$ (at higher energy $U$, there are always more quantum states in a multidimensional system) and because $S$ is monotonic in $W$ (a property of the ln function), $S$ is monotonic in $U$.
Finally, we shall see in detail later that when $\left(\frac{\partial U}{\partial S}\right)_{V, N}=T \rightarrow 0$, only the ground state is populated, so $W \rightarrow 1 \Rightarrow \lim _{T \rightarrow 0} S=k_{B} \ln (1)=0$. Thus, the third postulate is also satisfied, as long as the ground state is singly degenerate and the system can get to it during the experiment. (Glasses again would be a problem here!) The error of thermodynamics: it identifies the most probable value of a quantity with its average, by assuming the spread is negligible. We will derive examples of this spread later on. Thermodynamic limit: $N$ goes to infinity but $N/V$ or any other ratio of extensive quantities remains constant. To conclude this chapter, we turn to the problem of computing $W$ in the quantum case. A closed finite quantum system has a discrete spectrum $E_{i}$. The figure below shows the number of states below energy $U$ as a function of $U$. Because of quantum mechanics, the density of states $\Omega(U)=\frac{\partial \tilde{\Omega}}{\partial U}$ is discontinuous, and the integrated density of states (= total number of states up to energy $U$) has steps in it: $\tilde{\Omega}=\sum_{j} \operatorname{Step}\left(U-E_{j}\right) \Rightarrow \Omega=\sum_{j} \delta\left(U-E_{j}\right)$ At any randomly picked $U$, $\Omega$ is most likely zero, so $W=0$ also! However, because $\Omega$ increases so enormously rapidly with energy, the states are very (understatement!) closely spaced in energy for any system with even just a few particles. If a system is observed for a finite time $\delta t$, the states are broadened by the uncertainty principle: $\delta E \sim \frac{\hbar}{2 \delta t} \Rightarrow \Omega=\sum_{i} L_{i}\left(U-E_{i}, \delta t\right),$ where $L$ indicates a broadened profile of finite width that replaces the delta function.
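Each broadened profile should still integrate to one state. Taking $L$ to be a Lorentzian of half-width $\delta U$ (a standard choice), a quick trapezoid-rule check over $\pm 100$ linewidths:

```python
import math

def lorentzian(u, E_i, dU):
    """L_i(U - E_i) = (1/pi) * dU / ((U - E_i)^2 + dU^2)."""
    return dU / math.pi / ((u - E_i) ** 2 + dU ** 2)

E_i, dU = 0.0, 1.0                       # arbitrary level position and width
a, b, n = E_i - 100 * dU, E_i + 100 * dU, 200_000
h = (b - a) / n

# trapezoid rule over +/- 100 linewidths
area = 0.5 * (lorentzian(a, E_i, dU) + lorentzian(b, E_i, dU))
area += sum(lorentzian(a + k * h, E_i, dU) for k in range(1, n))
area *= h
print(area)  # ~0.994; the remaining fraction sits in the slowly decaying tails
```

The integral is very close to 1; the small deficit is the weight in the far tails beyond the cutoff, a reminder that the Lorentzian decays only as $1/(U-E_i)^2$.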
L still counts a single state, so $\int_{0}^{\infty} d U\, L_{i}\left(U-E_{i}, \delta t\right)=1$; often $L_{i}$ is taken as a Lorentzian $L_{i}=\frac{1}{\pi} \frac{\delta U}{\left(U-E_{i}\right)^{2}+\delta U^{2}} .$ Thus, $\Omega$ can be taken as a smooth function, and $W(U)=\Omega(U, \delta t)\,\delta U$ tells us how many states contribute to the degeneracy at energy U. It is clear from the above that if $\delta U \gg\left|E_{j}-E_{i}\right|$ (the broadening is greater than the spacing of adjacent levels), then $\Omega(U)$ is indeed independent of the choice of $\delta U$ or $\delta t$. This is guaranteed by the astronomical number of states for a macroscopic system (see the example in II.ii). Because $\tilde{\Omega}$ in the above figure grows so fast, $\Omega(U)\, \delta U \approx \tilde{\Omega}(U)$, as illustrated in the bottom right panel of the figure. Another way to look at it is in state space (classically: action space), which has $N$ coordinates for $N$ degrees of freedom. $\tilde{\Omega}$ is the number of states under the surface $U=$ constant.
If $U \gg U_{0}$ (where $U_{0}$ is the average characteristic energy step for one degree of freedom), then $\tilde{\Omega} \sim\left(\frac{U}{U_{0}}\right)^{N}$ Letting $\delta U$ now be an uncertainty in U instead of in individual energy levels, the number of states in the interval $(U-\delta U, U)$ is $\tilde{\Omega}(U)-\tilde{\Omega}(U-\delta U) \sim\left(\frac{U}{U_{0}}\right)^{N}-\left(\frac{U-\delta U}{U_{0}}\right)^{N} \approx\left(\frac{U}{U_{0}}\right)^{N}\left\{1-\left[1-\frac{\delta U}{U}\right]^{N}\right\} \approx\left(\frac{U}{U_{0}}\right)^{N}$ Because $N \sim 10^{20}$, as long as $\delta U<U$ (even if only by a small amount!), the number of states in a shell of any width $\delta U$ is the same as the total number of states up to U: in a hyperspace of $10^{20}$ dimensions, all states lie near the surface. Thus $\tilde{\Omega}(U) \approx \Omega(U, \delta U) \approx W(U)$ to extreme precision. This topic will be taken up once more in the examples of microcanonical calculations given in the next chapter.
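The factor $1-\left[1-\delta U / U\right]^{N}$ in the estimate above can be evaluated directly; the numbers below are hypothetical, chosen to contrast everyday and thermodynamic dimensionalities:

```python
import math

def shell_fraction(N, rel_width):
    """Fraction of the states (U/U0)^N lying in the shell (U - dU, U):
       1 - (1 - dU/U)^N, evaluated stably via log1p/expm1."""
    return -math.expm1(N * math.log1p(-rel_width))

# For everyday dimensions, a 0.1% shell holds almost nothing...
print(shell_fraction(3, 1e-3))       # ~0.003
# ...but for N ~ 10^20 degrees of freedom it holds everything:
print(shell_fraction(1e20, 1e-3))    # 1.0 to machine precision
```

The `log1p`/`expm1` pair avoids catastrophic cancellation when the shell is thin, which a naive `1 - (1 - x)**N` would suffer for small `x`.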
The goal of equilibrium statistical mechanics is to calculate the diagonal elements of $\rho_{eq}$ so we can evaluate average observables $\langle A\rangle=\operatorname{Tr}\left\{A \rho_{eq}\right\}$ that give us fundamental relations or equations of state. Just as thermodynamics has its potentials U, A, H, G etc., so statistical mechanics has its ensembles, which are useful depending on what macroscopic variables are specified. We first consider the microcanonical ensemble because it is the one directly defined in postulate II of statistical mechanics. In the microcanonical ensemble $U$ is fixed (postulate I), and other constraints that are fixed are the volume $V$ and mole number $n$ (for a simple system), or other extensive parameters (for more complicated systems). 1. Definition of the partition function The 'partition function' of an ensemble describes how probability is partitioned among the available microstates compatible with the constraints imposed on the ensemble. In the case of the microcanonical ensemble, the partitioning is equal among all microstates at the same energy: according to postulate II, $p_{i}=\rho_{i i}^{(eq)}=1 / W(U)$ for each microstate "i" at energy U. Using just this, we can evaluate equations of state and fundamental relations. 2. Calculation of thermodynamic quantities from W(U) Example 1: Fundamental relation for a lattice gas: entropy-volume part. Consider again the model system of a box with $M=V/V_{0}$ volume elements $V_{0}$ and $N$ particles of volume $V_{0}$, so each particle can fill one volume element. The particles can randomly hop among unoccupied volume elements to randomly sample the full volume of the box. This is a simple model of an ideal gas. As shown in the last chapter, $W=\dfrac{M !}{(M-N) ! \, N !}$ for identical particles, and we can approximate this, if $N \ll M$, by $W \approx \dfrac{1}{N !}\left(\dfrac{V}{V_{0}}\right)^{N}$ since $M ! /(M-N) ! \approx M^{N}$ in that case. Assuming the hopping samples all microstates so the system reaches equilibrium, we compute the equilibrium entropy, as proved in chapter 10 from postulate III, as $S=k_{B} \ln W \approx S_{0}+N k_{B} \ln \left(V / V_{0}\right)$ where $S_{0}$ is independent of volume. This gives the volume dependence of the entropy of an ideal gas. Note that by taking the derivative $(\partial S / \partial V)_{U,N}=k_{B} N / V=P / T$ we can immediately derive the ideal gas law $P V=N k_{B} T=n R T$. Example 2: Fundamental relation for a lattice gas: entropy-energy part. The above model does not give us the energy dependence, since we did not explicitly consider the energy of the particles, other than to assume there was enough energy for them to randomly hop around. We now remedy this by considering the energy levels of particles in a box. The result will also demonstrate once more that $\tilde{\Omega}$ increases so ferociously fast that it is equal to $W$ with incredibly high accuracy for more than a handful of particles. Let the total energy $U$ be randomly distributed among $N$ particles in a box of volume $L^{3}=V$. The energy is given by $U=\dfrac{1}{2 m} \sum_{i=1}^{3 N} p_{i}^{2},$ where $i=1,2,3$ are the x, y, z coordinates of particle #1, and so forth until $i=3 N-2,3 N-1,3 N$ are the x, y, z coordinates of particle #$N$. In quantum mechanics, the momentum of a free particle is given by $p=h / \lambda$, where $h$ is Planck's constant. Only certain waves $\Psi(x)$ are allowed in the box, such that $\Psi(x)=0$ at the boundaries of the box, as shown in the figure below.
The allowed wavelengths $\lambda_{n}=2L/n$ (i.e. $2L,\ L,\ 2L/3, \cdots$) can be inserted in the equation for total energy, yielding $U=\dfrac{1}{2 m} \sum_{i=1}^{3 N}\left(\dfrac{h n_{i}}{2 L}\right)^{2}=\sum_{i=1}^{3 N} \dfrac{h^{2} n_{i}^{2}}{8 m L^{2}}, \quad n_{i}=1,2,3 \cdots,$ the energy for a bunch of particles in a box. W(U) is the number of states at energy U. Looking at the figure again, all the energy levels are "dots" in a $3N$-dimensional cartesian space, called the "state space", "action space", or sometimes "quantum number space." The surface of constant energy $U$ is the surface of a hypersphere of dimension $3N-1$ in state space. The reason is that the above equation is of the form constant $=x^{2}+y^{2}+\cdots$, where the variables are the quantum numbers. The number of states within a thin shell of energy $U$ at the surface of the sphere is $W(U)$, and $\lim _{N \rightarrow \infty} W(U)=\tilde{\Omega}$. Here $\tilde{\Omega}$ is the total number of states inside the sphere, which at first glance would seem to be much larger than W(U), the number of states in the shell. In fact, for a very high dimensional hypervolume, a thin shell at the surface contains essentially all the volume, so $\tilde{\Omega}$ is essentially equal to W(U) and we can just calculate the former to a good approximation when $N$ is large. If this is hard to believe, consider an analogous example of a hypercube instead of a hypersphere. Its volume is $L^{m}$, where $m$ is the number of dimensions. The change in volume with side length $L$ is $\partial V / \partial L=m L^{m-1}$, so $\Delta V=m L^{m-1} \Delta L$ is the volume of a shell of width $\Delta L$ at the surface of the cube. The ratio of that volume to the total volume is $\Delta V / V=m \Delta L / L$.
Let's take the example our intuition is built on, $m=3$, and assume $\Delta L / L=0.001$, just a $0.1 \%$ surface layer. Then $\Delta V / V=3 \cdot 10^{-3} \ll 1$ indeed. But now consider $m=10^{20}$, a typical number of particles in a statistical mechanical system. Now $\Delta V / V=10^{20} \cdot 10^{-3}=10^{17}$. The little increment in volume is much greater than the original volume of the cube, and contains essentially all the volume of the new "slightly larger" cube. It may be "slightly" larger in side length, but it is astronomically larger in volume. Now back to our hypersphere in the figure. Its volume, which is essentially equal to W(U), the number of states just at the surface of the sphere, is $W(U)=\left(\dfrac{1}{2}\right)^{3 N} V_{\text {hypersphere }}=\left(\dfrac{1}{2}\right)^{3 N} \dfrac{\pi^{3 N / 2}}{\Gamma(3 N / 2+1)} R^{3 N}=\left(\dfrac{U}{U_{0}}\right)^{3 N / 2} .$ The $(1 / 2)^{3 N}$ is there because all quantum numbers must be greater than zero, so only the positive octant of the sphere should be counted. The Gamma function $\Gamma$ is related to the factorial function, and $R$ is the radius of the sphere, which is given by $R=n_{\max }=\sqrt{\dfrac{8 m L^{2} U}{h^{2}}},$ the largest quantum number, reached if all the energy is in a single mode. The key is that $R \sim \sqrt{U}$, so $U$ is raised to the $3N/2$ power, where $N$ is the number of particles, the 3 is because there are three modes per particle, and the $1/2$ is because the energy of a free particle depends on the square of the quantum number. Thus $S(U)=k_{B} \ln W(U)=S_{0}+\dfrac{3}{2} N k_{B} \ln U=S_{0}+\dfrac{3}{2} n R \ln U \text {, }$ where the constant $S_{0}$ is not the same as in the previous example.
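The octant-counting picture can be sanity-checked by brute force for a single particle (3 quantum numbers): count the integer triples inside a sphere of radius $R$ and compare with the continuum estimate $\frac{1}{8} \cdot \frac{4 \pi}{3} R^{3}$. The test radius R = 60 below is arbitrary:

```python
import math

def count_states(R):
    """Count positive-integer triples with n1^2 + n2^2 + n3^2 <= R^2."""
    R2, count = R * R, 0
    for n1 in range(1, R + 1):
        for n2 in range(1, R + 1):
            rem = R2 - n1 * n1 - n2 * n2
            if rem >= 1:
                # number of n3 >= 1 with n3^2 <= rem
                count += math.isqrt(rem)
    return count

R = 60
exact = count_states(R)
octant = (1 / 8) * (4 / 3) * math.pi * R**3
print(exact, octant)   # the counts agree to within a few percent
```

The discrepancy is a surface effect of order $1/R$ (states sitting near the coordinate planes), and shrinks as $R$ grows, which is why the continuum volume formula is safe for macroscopic quantum numbers.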
We used the volume equation from the previous example to obtain an equation of state (PV=nRT), and we can obtain another equation of state here: $\left(\dfrac{\partial S}{\partial U}\right)_{V, n}=\dfrac{1}{T}=\dfrac{3}{2} n R \dfrac{1}{U} \text { or } U=\dfrac{3}{2} n R T$ This equation relates the energy of an ideal gas to its temperature. $3n$ is the number of modes or degrees of freedom (3 velocities per particle times $n$ moles of particles), whereas the factor of 2 comes directly from the particle-in-a-box energy function - in case you ever wondered where that comes from. So, for a harmonic oscillator, $n \sim U$ ($E=\hbar \omega(n+1 / 2)$, as you may recall) instead of $n \sim U^{1 / 2}$, and you might expect $U=3 n R T$ for $3N$ oscillators holding particles together by springs in a solid crystal lattice. And indeed, that is true for an ideal lattice at high temperature (in analogy to an ideal gas at high temperature). Unlike for free particles, the exponent for oscillators does not carry the factor of 1/2. The 'deep' reason is that an oscillator has two degrees of freedom to store energy in each direction, not just one: there is still the kinetic energy, but there is also potential energy. Example 3: A system of $N$ uncoupled spins $s_{z}=\pm 1 / 2$ The Hamiltonian for this system in a magnetic field is given by $H=\sum_{j=1}^{N} s_{z j} B+\dfrac{N B}{2} \text {, }$ where the extra term at the end is added so the energy equals zero when all the spins are pointing down. At energy $U=0$, no spin is excited. For each excited spin, the energy increases by $B$, so at energy $U$, $U/B$ spins are excited.
These $U / B$ excitations are indistinguishable and can be distributed among $N$ sites: $W(U)=\dfrac{N !}{\left(N-\dfrac{U}{B}\right) !\left(\dfrac{U}{B}\right) !}=\dfrac{\Gamma(N+1)}{\Gamma\left(N+1-\dfrac{U}{B}\right) \Gamma\left(\dfrac{U}{B}+1\right)} .$ This is our usual formula for permutations; the right side is in terms of Gamma functions, which are defined even when $U / B$ is not an integer. Gamma functions basically interpolate the factorial function for noninteger values. This formula has a potential problem built in: clearly, when $U$ starts out at 0 and then increases, $W$ initially increases. But for $U=N B$ (the maximum energy), $W=1$ again. In fact, $W$ reaches its maximum for $U=N B / 2$. But if $W(U)$ is not monotonic in $U$, then $S$ isn't either, violating P3 of thermodynamics. Let's see how this works out. For large $N$, and temperature neither so low that $\dfrac{U}{B} \sim O(1)$, nor so high that $\dfrac{U}{B} \sim O(N)$, we can use the Stirling expansion $\ln N ! \approx N \ln N-N$, yielding \begin{aligned} \dfrac{S}{k_{B}}=\ln W &\approx N \ln N-N-\left(N-\dfrac{U}{B}\right) \ln \left(N-\dfrac{U}{B}\right)+N-\dfrac{U}{B}-\dfrac{U}{B} \ln \dfrac{U}{B}+\dfrac{U}{B} \\ &\approx N \ln N-\left(N-\dfrac{U}{B}\right) \ln \left(N-\dfrac{U}{B}\right)-\dfrac{U}{B} \ln \dfrac{U}{B}+\dfrac{U}{B} \ln N-\dfrac{U}{B} \ln N \\ &\approx-N \ln \left(1-\dfrac{U}{N B}\right)+\dfrac{U}{B} \ln \left(1-\dfrac{U}{N B}\right)-\dfrac{U}{B} \ln \left(\dfrac{U}{N B}\right) \\ &=\left(\dfrac{U}{B}-N\right) \ln \left(1-\dfrac{U}{N B}\right)-\dfrac{U}{B} \ln \left(\dfrac{U}{N B}\right) \end{aligned} after canceling terms as much as possible.
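The Stirling result above can be compared against the exact Gamma-function form of $\ln W$ using `math.lgamma`; the values of $N$ and $U/B$ below are hypothetical, chosen in the intermediate regime where the expansion is valid:

```python
import math

def lnW_exact(N, m):
    """ln W for m = U/B excited spins among N sites (exact, via lgamma)."""
    return math.lgamma(N + 1) - math.lgamma(N - m + 1) - math.lgamma(m + 1)

def lnW_stirling(N, m):
    """(m - N) ln(1 - m/N) - m ln(m/N): the Stirling form of S/kB."""
    return (m - N) * math.log(1 - m / N) - m * math.log(m / N)

N, m = 10**6, 3 * 10**5   # hypothetical: U/B = 0.3 N, far from both edges
print(lnW_exact(N, m), lnW_stirling(N, m))  # agree to ~5 significant figures
```

The absolute discrepancy is only $O(\ln N)$ (the dropped $\tfrac12\ln$ terms of Stirling's series), which is utterly negligible next to an entropy of order $N$.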
We can now calculate the temperature and obtain the equation of state $U(T)$: $\dfrac{1}{T}=\left(\dfrac{\partial S}{\partial U}\right)_{N} \approx \dfrac{k_{B}}{B} \ln \left(\dfrac{N B}{U}-1\right) \Rightarrow U \approx \dfrac{N B}{1+e^{B / k T}}$ In this equation, at $T \sim 0$, $U \rightarrow 0$; and as $T \rightarrow \infty$, $U \rightarrow N B / 2$. So even at infinite temperature the energy can only go up to half the maximum value, where $W(U)$ is still monotonic. The population cannot be 'inverted' to have more spins point up than down. At most half the spins can be made to point up by heating. This should come as no surprise: if the number of microstates is maximized by having only half the spins point up when energy is added, then that's the state you will get (this is true even in the exact solution). Note that this does not mean that it is impossible to get all spins to point up. It is just not an equilibrium state at any temperature between 0 and $\infty .$ Such nonequilibrium states with more spins up (or atoms excited) than down are called "inverted" populations. In lasers, such states are created by putting the system (like a laser crystal) far out of equilibrium. Such a state will then relax back to an equilibrium state, releasing a pulse of energy as the spins (or atoms) drop from the excited to the ground state. The heat capacity of the above example system is $c_{v}=\left(\dfrac{\partial U}{\partial T}\right)_{N} \approx \dfrac{N B^{2}}{k T^{2}} \dfrac{e^{B / k T}}{\left(1+e^{B / k T}\right)^{2}}, \text { peaked near } k_{B} T \approx 0.42\, B \text {, }$ so we can calculate thermodynamic quantities as input for thermodynamic manipulations. As we shall see in detail later (actually, we saw it in the previous example!), in any real system the heat capacity must eventually approach $c_{v}=N k_{B} / 2$, where $N$ is the number of degrees of freedom. However, a broad peak near $k_{B} T \approx 0.42\, B$ is the signature of two low-lying energy levels spaced by $B$.
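The equation of state $U(T)$ and the heat capacity can be explored numerically; a sketch in reduced units (B = k_B = 1 and N = 1 are hypothetical normalizations), scanning for the broad maximum:

```python
import math

def U(T, N=1.0, B=1.0, k=1.0):
    """U = N B / (1 + exp(B/kT)) for the two-level spin system."""
    return N * B / (1.0 + math.exp(B / (k * T)))

def c_v(T, N=1.0, B=1.0, k=1.0):
    """c_v = (N B^2 / k T^2) e^{B/kT} / (1 + e^{B/kT})^2."""
    x = B / (k * T)
    return N * k * x * x * math.exp(x) / (1.0 + math.exp(x)) ** 2

# limits: U -> 0 as T -> 0, and U -> NB/2 as T -> infinity
print(U(0.02), U(1e6))           # ~0 and ~0.5

# locate the broad maximum of c_v by a simple grid scan
Ts = [0.01 * i for i in range(1, 10000)]
T_peak = max(Ts, key=c_v)
print(T_peak)                    # ~0.42, i.e. k_B T ~ 0.42 B (B/k_B T ~ 2.4)
```

The scan confirms both the saturation $U \to NB/2$ and the single broad heat-capacity maximum characteristic of a two-level system.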
Levels at higher energy will eventually contribute to $c_{v}$, making sure it does not drop. Example 4: Let us check that the T derived from $\left(\dfrac{\partial S}{\partial U}\right)_{N}=\left(\dfrac{\partial k_{B} \ln W}{\partial U}\right)_{N}=\dfrac{1}{T}$ indeed agrees with the intuitive concept of temperature. Consider two baths within a closed system, so $U=U_{1}+U_{2}=$ const. $\Rightarrow d U=0 \Rightarrow d U_{1}=-d U_{2}$. If we know $W_{i}\left(U_{i}\right) \Rightarrow d W_{i}=\dfrac{\partial W_{i}}{\partial U_{i}} d U_{i}$ for each bath, then \begin{aligned} W_{\text {tot }} &=W_{1} W_{2} \\ d W_{\text {tot }} &=\left(W_{1}+d W_{1}\right) \cdot\left(W_{2}+d W_{2}\right)-W_{1} W_{2} \\ &=W_{1} d W_{2}+W_{2} d W_{1}+O\left(d W^{2}\right) \\ &=\left(-W_{1} \dfrac{\partial W_{2}}{\partial U_{2}}+W_{2} \dfrac{\partial W_{1}}{\partial U_{1}}\right) d U_{1}=0 \end{aligned} at equilibrium, because the maximum number of states is already occupied. For this to be true for any infinitesimal energy flow $d U_{1}$, \begin{aligned} &\Rightarrow \dfrac{1}{W_{2}}\left(\dfrac{\partial W_{2}}{\partial U_{2}}\right)_{V, N}=\dfrac{1}{W_{1}}\left(\dfrac{\partial W_{1}}{\partial U_{1}}\right)_{V, N} \\ &\Rightarrow\left(\dfrac{\partial \ln W_{2}}{\partial U_{2}}\right)_{V, N}=\left(\dfrac{\partial \ln W_{1}}{\partial U_{1}}\right)_{V, N} \text { or }\left(\dfrac{\partial S_{2}}{\partial U_{2}}\right)_{V, N}=\dfrac{1}{T_{2}}=\left(\dfrac{\partial S_{1}}{\partial U_{1}}\right)_{V, N}=\dfrac{1}{T_{1}} \end{aligned} At equilibrium, the temperatures are equal, fitting our thermodynamic definition that "temperature is equalized when heat flow is allowed."
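The same equilibrium condition can be seen in a toy model: take power-law degeneracies $W_i(U_i)=U_i^{a_i}$ (the exponents and total energy below are hypothetical), maximize $W_1 W_2$ over the energy split, and verify that $\partial \ln W_i / \partial U_i$ match at the maximum:

```python
import math

# Toy baths with W_i(U_i) = U_i^a_i; maximize W1*W2 subject to U1 + U2 = U.
a1, a2, U = 300.0, 500.0, 10.0   # hypothetical exponents and total energy

def ln_W_tot(U1):
    return a1 * math.log(U1) + a2 * math.log(U - U1)

# grid scan for the most probable split of the energy
U1s = [U * i / 10000 for i in range(1, 10000)]
U1_star = max(U1s, key=ln_W_tot)

# analytic maximum: a1/U1 = a2/(U - U1)  ->  U1 = U a1/(a1 + a2)
print(U1_star, U * a1 / (a1 + a2))         # both ~3.75

# the 'temperatures' 1/T_i ~ d ln W_i / d U_i agree at the maximum
print(a1 / U1_star, a2 / (U - U1_star))    # ~80 each
```

For these power-law baths $\partial \ln W_i/\partial U_i = a_i/U_i$, so the condition $a_1/U_1 = a_2/(U-U_1)$ is exactly the equal-temperature condition derived above.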
Statistical Mechanical Ensembles The central application of statistical mechanics rests on the assumption that the average of a property over a large number of systems will give the same value as the thermodynamic quantity of interest. We can distinguish between mechanical properties such as pressure, energy, volume etc. and non-mechanical properties such as entropy. Although there are a large number of particles and an extremely large number of quantum states accessible to even a small system, the state of the system can be characterized by just a few thermodynamic variables.
These statements are known as the Gibbs postulate: The ensemble average of a property corresponds to the thermodynamic quantity. For example, the average energy corresponds to the internal energy, the average pressure corresponds to the thermodynamic pressure, etc. To accomplish the kind of averaging needed to calculate the pressure, for example, we must consider a large number of systems. The concept of a collection of systems, or an ensemble, was first introduced by Gibbs. An ensemble consists of a very large number of systems, each constructed to be a replica on the macroscopic level. We will introduce several types of ensemble in this course depending on which variables are held fixed. Corresponding to each ensemble there is a partition function that represents the average number of states accessible at a given temperature.

Symbol   Ensemble Name         Fixed Variables
Ω        Microcanonical        N, V, E
Q        Canonical             N, V, T
Δ        Isobaric-isothermal   N, P, T
Ξ        Grand canonical       μ, V, T

The magnitude of the partition function Ω(E) for the microcanonical ensemble is the same as the degeneracy of the system at the given energy E. The principle of equal a priori probabilities states that each and every quantum state of the system must be represented an equal number of times. Another way to put this is to state that in an isolated system (N, V, E fixed) any one of the Ω possible quantum states is equally likely. The partition function Ω of the microcanonical ensemble is the same as the thermodynamic probability W introduced in the Boltzmann equation for the statistical entropy, S = k ln W. The same equation thus holds for the microcanonical partition function, S = k ln Ω, and represents a direct thermodynamic connection between the partition function and the entropy. The difficulty with application of this information is that it is very difficult to obtain a set of molecules at a constant energy.
In spite of this experimental difficulty, the microcanonical ensemble is useful for illustrating the number of degenerate (equal energy) states in systems of interest. The conclusion from these investigations will be that the number of quantum states accessible to a system is vast, and that provides the motivation for application of statistical techniques to the calculation of average quantities, fluctuations and transport properties. Molecular motion: the quantum view We consider the three types of molecular motion: 1. translation, 2. vibration, 3. rotation. From quantum mechanics we have $H\Psi = E\Psi$, where H is the hamiltonian and $\hbar$ is $h/2\pi$, that is, Planck's constant divided by $2\pi$. The hamiltonian comprises both the kinetic and potential energy of the system. For the purposes of statistical mechanics it is the energy levels and their degeneracies that are of interest. The wavefunctions $\Psi$ do not appear in averages and are therefore not given below. Translation Translation is calculated using the particle-in-a-box treatment. Setting the potential U(x) = 0 over the range x = 0 to a, and U(x) equal to infinity outside this range, the one-dimensional energy levels are $E_{n}=\dfrac{n^{2} h^{2}}{8 m a^{2}}, \quad n=1,2,3, \ldots$ The extension to three dimensions is easily made and is shown below. Vibration The classical Hooke's law potential is $\frac{1}{2} k x^{2}$, and this is exactly what is used as the potential in the quantum mechanical hamiltonian. In one dimension this becomes $H=-\dfrac{\hbar^{2}}{2 m} \dfrac{d^{2}}{d x^{2}}+\dfrac{1}{2} k x^{2}.$ The energy levels are $E_{v}=\hbar \omega\left(v+\dfrac{1}{2}\right), \quad v=0,1,2, \ldots,$ where $\omega=(k / m)^{1 / 2}$. Rotation The rigid rotor hamiltonian is $H=\dfrac{\hat{J}^{2}}{2 I}.$ The energy levels are $E_{J}=\dfrac{\hbar^{2}}{2 I} J(J+1), \quad J=0,1,2, \ldots,$ where I is the moment of inertia of the rotor. The essence of statistical mechanics is to connect these quantum mechanical energy levels to the macroscopically measured thermodynamic energies, pressure, and entropy. There are two important aspects of these energy levels. First, there is a ladder of increasing energy states. Second, in some cases there is a degeneracy associated with the states. For the rigid rotor solutions the degeneracy is 2J + 1.
In the case of the solutions for the particle-in-a-box there is an enormous degeneracy because of the three-dimensional solution. This is important for understanding ensembles and the strategy of statistical mechanics. The degeneracy of translational energy levels is very large The solution of the particle-in-a-box problem in three dimensions is $\varepsilon=\dfrac{h^{2}}{8 m a^{2}}\left(n_{x}^{2}+n_{y}^{2}+n_{z}^{2}\right).$ The degeneracy is given by the number of ways that the integer $M = 8ma^{2}\varepsilon/h^{2}$ can be written as the sum of the squares of three positive integers. The degeneracy becomes a smooth function for large M. Consider a three-dimensional space spanned by $n_x$, $n_y$, and $n_z$. There is a one-to-one correspondence between the energy states given by the above energy equation and the points in this space. A radius in this space is given by $R^{2}=n_{x}^{2}+n_{y}^{2}+n_{z}^{2}$, so that $R=\left(8 m a^{2} \varepsilon / h^{2}\right)^{1 / 2}$. We wish to calculate the number of lattice points that are at some fixed distance from the origin in this space. In practice, this means that we want the number of states between energy $\varepsilon$ and $\varepsilon+d\varepsilon$. To obtain the total number of states with energy less than $\varepsilon$ we consider the volume of one octant (recall that the quantum numbers must be positive). This number of states is $\Phi(\varepsilon)=\dfrac{1}{8} \cdot \dfrac{4 \pi}{3}\left(\dfrac{8 m a^{2} \varepsilon}{h^{2}}\right)^{3 / 2}=\dfrac{\pi}{6}\left(\dfrac{8 m a^{2} \varepsilon}{h^{2}}\right)^{3 / 2}.$ The number of states between $\varepsilon$ and $\varepsilon+d\varepsilon$ is $w(\varepsilon, d \varepsilon)=\Phi(\varepsilon+d \varepsilon)-\Phi(\varepsilon).$ Expanding $\Phi(\varepsilon+d\varepsilon)$ about $d\varepsilon=0$ and keeping only the first two terms, we have $w(\varepsilon, d \varepsilon)=\dfrac{\pi}{4}\left(\dfrac{8 m a^{2}}{h^{2}}\right)^{3 / 2} \varepsilon^{1 / 2}\, d \varepsilon.$ This derivation is valid for the degeneracy of a single particle. A simple calculation taking $\varepsilon = 3kT/2 \approx 6 \times 10^{-21}$ J, $m = 10^{-25}$ kg, $a = 1$ m and $d\varepsilon = 10^{-9}$ J gives $w(\varepsilon, d\varepsilon) \approx 10^{44}$ for a single particle in a narrow band about the energy $\varepsilon$. For an N-particle system, the degeneracy is tremendously greater. To see this consider N noninteracting particles in a cube. The energy of the system is $E=\dfrac{h^{2}}{8 m a^{2}} \sum_{i=1}^{3 N} n_{i}^{2}.$ Defining the space of the quantum numbers as a 3N-dimensional sphere, the number of states with energy less than E is $\Phi(E)=\left(\dfrac{1}{2}\right)^{3 N} \dfrac{\pi^{3 N / 2}}{\Gamma(3 N / 2+1)}\left(\dfrac{8 m a^{2} E}{h^{2}}\right)^{3 N / 2},$ where $\Gamma(n)$ is the gamma function.
The number of states between $E$ and $E + \Delta E$ is $\omega(E, \Delta E) = \Phi(E + \Delta E) - \Phi(E) \approx \dfrac{d\Phi}{dE}\,\Delta E$ Even though there are a large number of levels, and we will assume that they are all equally probable, there is an energetic constraint on the system. This leads to the concept of a most probable distribution among these levels. Although the microcanonical ensemble is useful for illustrating the number of levels, the most convenient ensemble for determining the most probable distribution is the canonical ensemble (constant N, V, and T).
The most practical ensemble is the canonical ensemble, with N, V, and T fixed. We can imagine a collection of boxes with equal volumes and equal numbers of particles, the entire collection kept in thermal equilibrium. Note that this is exactly the condition we have when using periodic boundary conditions and running dynamics at constant volume. The difference between an MD simulation and the theoretical concept of an N, V, T ensemble is that the MD simulation must use a trick to keep the temperature constant (to maintain thermal equilibrium). In practice, the ensemble is sampled by running the dynamics for a period of time sufficiently long that phase space is sampled. Here, phase space means all distributions of the positions and momenta of the particles. The methods used to maintain constant temperature will not be found in Allen & Tildesley; they have been investigated extensively.

The Boltzmann Distribution

We are ultimately interested in the probability that a given distribution will occur, because we must have this information in order to obtain useful thermodynamic averages. The method used to obtain the distribution function of the ensemble of systems is known as the method of the most probable distribution. We begin with the statistical entropy, $S = k\ln W$. The weight $W$ (or thermodynamic probability) is the number of ways that $A$ distinguishable systems can be arranged into groups such that $a_0$ is the number in the zeroth group, $a_1$ is the number in the first group, and so on: $W = \dfrac{A!}{a_0!\,a_1!\,a_2!\cdots} = \dfrac{A!}{\prod_j a_j!}$ Here $A$ is the total number of systems in the ensemble, and $a_0, a_1, a_2, \ldots$ are the occupation numbers for systems in each quantum state. The overall probability $P_j$ that a system is in the $j$th quantum state is obtained by averaging $a_j/A$ over all the allowed distributions. Thus $P_j$ is given by $P_j = \dfrac{\langle a_j \rangle}{A}$ where the angle brackets indicate an ensemble average. Using this definition we can calculate any average property (i.e.
any thermodynamic property) using the Gibbs postulate. The method of the most probable distribution is based on the idea that the average $\langle a_j \rangle / A$ is identical to the most probable distribution (i.e., that the distribution is arbitrarily narrow in width). Physically, this results from the fact that there are so many particles in a typical system that the fluctuations from the mean are extremely (immeasurably) small. This point is confusing to many students. If we think only of translational motion, McQuarrie shows in Chapter 1 that the number of states increases dramatically as the energy (and quantum number) increase. Although the number of states is an increasing function, the kinetic energy is fixed and must be distributed in some statistical manner among all of the available molecules. The equivalence of the average probability of an occupation number and the most probable distribution is expressed as $P_j = \dfrac{\langle a_j \rangle}{A} = \dfrac{a_j^*}{A}$ where the asterisk denotes the most probable distribution. To find the most probable distribution we maximize the probability function subject to two constraints. Conservation of energy requires $\sum_j a_j\varepsilon_j = E$ where $\varepsilon_j$ is the energy of the $j$th system in its quantum state. Conservation of mass requires $\sum_j a_j = A$ which says only that the total number of all of the systems in the ensemble must be $A$. Using $S = k\ln W$ we can reason that the system will tend toward the distribution among the $a_j$ that maximizes $S$, i.e. the one satisfying $\partial \ln W/\partial a_j = 0$ subject to the constraints. Using the method of Lagrange undetermined multipliers we have $\dfrac{\partial}{\partial a_j}\left(\ln W + \alpha\sum_i a_i - \beta\sum_i a_i\varepsilon_i\right) = 0$ We can evaluate $\partial \ln W/\partial a_j = \partial \ln A!/\partial a_j - \sum_i \partial \ln a_i!/\partial a_j$ using Stirling's approximation, $\ln x! \approx x\ln x - x$, as outlined below.

Simplification of $\partial \ln W/\partial a_j$

The first step is to note that $\ln W = \ln A! - \sum_j \ln a_j! = A\ln A - A - \sum_j a_j\ln a_j + \sum_j a_j$ Since $A = \sum_j a_j$, the terms $-A$ and $+\sum_j a_j$ cancel to give $\ln W = A\ln A - \sum_j a_j\ln a_j$ The derivative is $\dfrac{\partial \ln W}{\partial a_j} = \ln A - \ln a_j = -\ln\left(\dfrac{a_j}{A}\right)$ These derivatives result from the fact that $\partial a_i/\partial a_i = 1$ and $\partial a_j/\partial a_i = 0$ for $j \neq i$ (so that $\partial A/\partial a_j = 1$).
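The Stirling approximation invoked above can be checked numerically. This is an illustrative sketch, not from the text: the relative error of $\ln x! \approx x\ln x - x$ shrinks rapidly as $x$ grows, which is why it is safe for the astronomically large occupation numbers of an ensemble.

```python
import math

# Accuracy of Stirling's approximation ln x! ≈ x ln x - x.
for x in (10, 100, 1000):
    exact = math.lgamma(x + 1)            # ln(x!)
    stirling = x * math.log(x) - x
    rel_err = abs(exact - stirling) / exact
    print(f"x={x:5d}  ln x! = {exact:9.2f}  Stirling = {stirling:9.2f}  "
          f"rel err = {rel_err:.3%}")
```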
The simple expression that results from these manipulations is $-\ln\left(\dfrac{a_j}{A}\right) + \alpha - \beta\varepsilon_j = 0$ The most probable distribution is therefore $\dfrac{a_j^*}{A} = e^{\alpha - \beta\varepsilon_j}$ Now we need to find the undetermined multipliers $\alpha$ and $\beta$. Summing both sides over $j$, the left-hand side is 1. Thus we have $e^{-\alpha} = \sum_j e^{-\beta\varepsilon_j} \quad\Rightarrow\quad \dfrac{a_j^*}{A} = \dfrac{e^{-\beta\varepsilon_j}}{\sum_j e^{-\beta\varepsilon_j}}$ This determines $\alpha$ and defines the Boltzmann distribution. We will show that $\beta = 1/kT$; this identification reveals the importance of temperature in the Boltzmann distribution. The distribution represents a thermally equilibrated most probable distribution over all energy levels. The sum over all factors $e^{-\beta\varepsilon_j}$ is given a name: it is called the molecular partition function, $q = \sum_j e^{-\beta\varepsilon_j}$ The molecular partition function $q$ gives an indication of the average number of states that are thermally accessible to a molecule at the temperature of the system.

The ensemble partition function

We distinguish here between the partition function of the ensemble, $Q$, and that of an individual molecule, $q$. Since $Q$ represents a sum over all states accessible to the system, it can be written as $Q = \sum_{i,j,k,\ldots} e^{-\beta(\varepsilon_i + \varepsilon_j + \varepsilon_k + \cdots)}$ where the indices $i, j, k, \ldots$ represent the energy levels of different particles. Regardless of the type of particle, the molecular partition function $q$ represents the energy levels of one individual molecule. We can rewrite the above sum as $Q = q_iq_jq_k\cdots$, or $Q = q^N$ for $N$ particles. Note that $q_i$ means a sum over the states or energy levels accessible to molecule $i$, and $q_j$ means the same for molecule $j$. The molecular partition function $q$ counts the energy levels accessible to molecule $i$ only. $Q$ counts not only the states of all of the molecules, but all of the possible combinations of occupations of those states. However, if the particles are indistinguishable, then we will have counted $N!$ states too many. The factor of $N!$ is exactly the number of times we can swap the indices in $Q(N,V,T)$ and get the same value (again, provided that the particles are indistinguishable). Example: if we consider 3 particles, the orderings $ijk$, $jik$, $kij$, $kji$, $jki$, $ikj$ give $6 = 3!$ identical terms.
Thus we write the partition function as $Q = \dfrac{q^N}{N!}$ The sum of all of the probabilities must equal 1; this is called normalization. The normalization constant of the above probability is $1/Q$, where $Q$ is called the system partition function. The population of a particular state $J$ with energy $E_J$ is given by $P_J = \dfrac{e^{-\beta E_J}}{Q}$ This expression is the Boltzmann distribution for the entire system.

The molecular partition function

We are concerned with the calculation of average thermodynamic properties using the partition function. For an ideal gas of non-interacting particles, only the translational partition function matters. In polyatomic gases, solutions, or solids, the vibrational, rotational, and electronic states can also contribute to the molecular partition function. The molecular energy levels are $\varepsilon = \varepsilon_a^{\rm trans} + \varepsilon_b^{\rm vib} + \varepsilon_c^{\rm rot} + \varepsilon_d^{\rm elec}$ where the indices $a, b, c, d$ run over the levels of one particular molecule. Because the energies add, we can write the molecular partition function as a product: $q = q_{\rm trans}\,q_{\rm vib}\,q_{\rm rot}\,q_{\rm elec}$ We will treat the individual contributions to the molecular partition function as needed. For a monatomic gas only $q_{\rm trans}$ contributes, so we will consider the molecular partition function due to translational motion. We will consider the remaining molecular energy levels in subsequent lectures.

The translational partition function

Translational energy levels are so closely spaced that they form an essentially continuous distribution. The quantum mechanical description of the energy levels is obtained from the quantum mechanical particle in a box. The energy levels are $\varepsilon_{n_xn_yn_z} = \dfrac{h^2}{8ma^2}\left(n_x^2 + n_y^2 + n_z^2\right)$ The box is a cube of edge length $a$, $m$ is the mass of the molecule, $h$ is Planck's constant, and $n_x, n_y, n_z$ are quantum numbers. The average quantum numbers will be very large for a typical molecule. This is very different from what we find for vibrational and electronic levels, where the quantum numbers are small (i.e., only one or a few levels are populated). Many translational levels are populated thermally.
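The normalized Boltzmann populations and the partition function defined above can be illustrated numerically. This is a toy sketch with assumed, nondegenerate energy levels expressed in units of $kT$; it is not a system from the text.

```python
import math

# Boltzmann populations for an assumed set of three nondegenerate
# levels, with energies given as beta*epsilon_j (i.e. in units of kT).
beta_eps = [0.0, 1.0, 2.5]                     # assumed values

# Molecular partition function q = sum_j exp(-beta eps_j)
q = sum(math.exp(-x) for x in beta_eps)

# Boltzmann probabilities P_j = exp(-beta eps_j) / q
probs = [math.exp(-x) / q for x in beta_eps]

print(f"q = {q:.4f}")
for j, p in enumerate(probs):
    print(f"P_{j} = {p:.4f}")
```

The probabilities sum to one by construction, and the population falls off exponentially with increasing level energy.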
The translational partition function is $q_{\rm trans} = \sum_{n_x=1}^{\infty}\sum_{n_y=1}^{\infty}\sum_{n_z=1}^{\infty} e^{-\beta h^2(n_x^2+n_y^2+n_z^2)/8ma^2}$ The three summations are identical, so they can be written as the cube of one summation: $q_{\rm trans} = \left(\sum_{n=1}^{\infty} e^{-\beta h^2n^2/8ma^2}\right)^3$ The fact that the energy levels are essentially continuous and that the average quantum number is very large allows us to rewrite the sum as an integral. The sum starts at 1 and the integral at 0; this difference is not important when the average value of $n$ is of order $10^9$. With the substitution $\alpha = h^2/8ma^2kT$ we can rewrite the integral as $q_{\rm trans} = \left(\int_0^{\infty} e^{-\alpha n^2}\,dn\right)^3 = \left(\dfrac{\pi}{4\alpha}\right)^{3/2}$ This is a Gaussian integral; the solution of Gaussian integrals is discussed in the review section of the website. If we now plug in for $\alpha$ and recognize that the volume of the box is $V = a^3$, we have $q_{\rm trans} = \left(\dfrac{2\pi mkT}{h^2}\right)^{3/2}V$ This is the molecular partition function. The system partition function for $N$ indistinguishable gas molecules is $Q = \dfrac{q^N}{N!} = \dfrac{1}{N!}\left(\dfrac{V}{\Lambda^3}\right)^N$ where $\Lambda$ is the thermal wavelength, $\Lambda = \dfrac{h}{\sqrt{2\pi mkT}}$ We will use this partition function to calculate average thermodynamic quantities for a monatomic ideal gas.

The Canonical Ensemble

If we denote the average energy $\langle E \rangle$, then $\langle E \rangle = \sum_j P_jE_j = \dfrac{1}{Q}\sum_j E_je^{-\beta E_j}$ We use the notation $\langle E \rangle = U - U(0)$, where $U(0)$ is the energy at zero kelvin. Recalling that $\beta = 1/kT$, this can be rewritten as $\langle E \rangle = kT^2\left(\dfrac{\partial \ln Q}{\partial T}\right)_V$ This can be written compactly as $\langle E \rangle = -\left(\dfrac{\partial \ln Q}{\partial \beta}\right)_V$

Entropy within the Canonical Ensemble

We have calculated $\langle E \rangle = U - U(0)$, the internal energy referenced to its value $U(0)$ at absolute zero ($T = 0$ K). We can now calculate the entropy, starting from $S = k\ln W$, which for the canonical distribution takes the Gibbs form $S = -Nk\sum_i p_i\ln p_i$. Recalling the definition of the Boltzmann distribution, $\ln p_i = -\beta\varepsilon_i - \ln q$, the entropy is $S = \dfrac{U - U(0)}{T} + Nk\ln q$ The entropy can also be expressed in terms of the system partition function $Q$: $S = \dfrac{U - U(0)}{T} + k\ln Q$

Heat Capacity within the Canonical Ensemble

The heat capacity is a coefficient that gives the amount of energy required to raise the temperature of a substance by one degree. The heat capacity can also be described as the temperature derivative of the average energy. The constant-volume heat capacity is defined by $C_V = \left(\dfrac{\partial U}{\partial T}\right)_V = \left(\dfrac{\partial \langle E \rangle}{\partial T}\right)_V$ using the notation $\langle E \rangle = U - U(0)$, where $U(0)$ is the energy at zero kelvin.
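The translational partition function and thermal wavelength are easy to evaluate. This sketch uses argon at 300 K in a 1 L box as an assumed example (parameter values are not from the text).

```python
import math

# Thermal wavelength and translational partition function for an
# assumed example: argon at 300 K in a 1 L box.
h = 6.626e-34            # J s
k_B = 1.381e-23          # J/K
m = 6.63e-26             # kg, mass of an argon atom
T = 300.0                # K
V = 1e-3                 # m^3 (1 liter)

Lambda = h / math.sqrt(2 * math.pi * m * k_B * T)  # thermal wavelength
q_trans = V / Lambda**3                            # = (2 pi m k T/h^2)^{3/2} V

print(f"Lambda  = {Lambda:.3e} m")
print(f"q_trans = {q_trans:.3e}")
```

The thermal wavelength comes out far smaller than the box dimensions, so $q_{\rm trans}$ is enormous: a huge number of translational states are thermally accessible, as argued above.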
The molar internal energy of a monatomic ideal gas is $\langle E \rangle = \frac{3}{2}RT$; the heat capacity of a monatomic ideal gas is therefore $C_V = \frac{3}{2}R$. For a monatomic gas there are three degrees of freedom per atom (the translations along the x, y, and z directions), and each of these translations contributes $\frac{1}{2}RT$ of energy. For an ideal diatomic gas some of the energy used to heat the gas may also go into rotational and vibrational degrees of freedom. In solids there is no translation or rotation, and therefore the entire contribution to the heat capacity comes from vibrations. Given their extended nature, the vibrations in solids are much lower in frequency than those of gases. Therefore, while vibrations in typical diatomic gases contribute little to the heat capacity, the vibrational contribution to the heat capacity of solids is the largest contribution. As the temperature is increased, more levels of the solid become accessible by thermal energy and therefore $Q$ increases. This also means that $U$ increases, and finally that $C_V$ increases. In the high-temperature limit of an ideal solid there are $3N$ accessible vibrational modes, giving rise to a contribution to the molar heat capacity of $3R$.

Helmholtz Energy within the Canonical Ensemble

The Helmholtz free energy is $A = U - TS$. Substituting in for $U$ and $S$ from above, we have $A = -kT\ln Q$.
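The approach of a solid's vibrational heat capacity to the $3R$ limit discussed above can be sketched with the Einstein model, which assumes a single vibrational frequency (characterized here by an assumed Einstein temperature; this model is not developed in the text).

```python
import math

# Einstein-model heat capacity of a solid: C_v -> 3R at high T.
R = 8.314                # J/(mol K)

def einstein_cv(T, theta_E):
    """Molar heat capacity of an Einstein solid with
    Einstein temperature theta_E (K)."""
    x = theta_E / T
    return 3 * R * x**2 * math.exp(x) / (math.exp(x) - 1) ** 2

theta_E = 200.0          # K, assumed Einstein temperature
for T in (50, 200, 1000, 5000):
    print(f"T = {T:5d} K   C_v = {einstein_cv(T, theta_E):6.2f} J/(mol K)")
```

As the temperature rises, more vibrational levels become accessible and $C_V$ climbs, saturating near $3R \approx 24.9$ J/(mol K), in line with the high-temperature limit stated above.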
To consider theories for fluctuations in the number of particles we require an ensemble that keeps $V$, $T$, and the chemical potential $\mu$ constant: the grand canonical ensemble. To construct the grand canonical ensemble, the system is enclosed in a container that is permeable both to heat and to the passage of particles. The number of particles in the system can range over all possible values. As in the canonical ensemble, we have occupation numbers $a_{Nj}$ describing the number of systems that have energy $E_{Nj}$ and $N$ particles. There are two indices for summation, since neither the energy of the system nor the number of particles is the same in all of the systems. We can specify the state of the ensemble by specifying that $a_{N1}, a_{N2}, a_{N3}, \ldots$ of the systems are in states $1, 2, 3, \ldots$, respectively, with energies $E_{N1}, E_{N2}, E_{N3}, \ldots$ depending on $N$, the number of particles in each system. In the grand canonical ensemble the occupation numbers obey three conditions: $\sum_N\sum_j a_{Nj} = A \qquad \sum_N\sum_j a_{Nj}E_{Nj} = E \qquad \sum_N\sum_j a_{Nj}N = \mathcal{N}$ Following the principle of equal a priori probabilities, we assume that every distribution of occupation numbers is equally probable. As we have seen for the canonical ensemble, we can use the method of the most probable distribution to derive the form of the distribution function and the partition function for the grand ensemble. The number of ways $W(\mathbf{a})$ or $W(a_{N1}, a_{N2}, a_{N3}, \ldots)$ that any particular distribution of the $a_{Nj}$ can be achieved is given by $W(\mathbf{a}) = \dfrac{A!}{\prod_N\prod_j a_{Nj}!}$ As before, we assume that the systems are macroscopic, distinguishable objects that can be distributed among the available states. In any particular distribution, $a_{Nj}/A$ is the fraction of systems of the ensemble in the $j$th energy state containing $N$ particles. The overall probability $P_{Nj}$ that a system is in the $j$th quantum state with $N$ particles is obtained by averaging $a_{Nj}/A$ over all the allowed distributions. The notation of summing over $\mathbf{a}$ means that the value of $a_{Nj}$ depends on the distribution and that the summations are over all distributions that satisfy the constraints.
The most probable distribution is the distribution that maximizes $W$. The maximum will be found by setting the derivative $\partial \ln W/\partial a_{Nj} = 0$ subject to the constraints above: the $a_{Nj}$ must sum to $A$, the total energy $E$ is equal to $\sum_N\sum_j a_{Nj}E_{Nj}$, and the total number of particles is $\sum_N\sum_j a_{Nj}N$. This implies that there is no change in the total number of systems $A$, the total energy $E$, or the total number of particles $\mathcal{N}$ with respect to changes in the occupation numbers. The procedure followed here is analogous to that used for the canonical ensemble: we maximize $W$ subject to the constraints. The difference is that there is one additional constraint on the number of particles that was not present in the canonical ensemble. To maximize subject to constraints we use the method of Lagrange undetermined multipliers: $\dfrac{\partial}{\partial a_{Nj}}\left[\ln W + \sum_N\sum_j\left(\alpha - \beta E_{Nj} - \gamma N\right)a_{Nj}\right] = 0$ where we have moved the summation symbol in front of the three terms. The constants $\alpha$, $-\beta$, and $-\gamma$ are the undetermined multipliers. We first carry out the derivative and then find the values of the multipliers. We can evaluate $\partial \ln W/\partial a_{Nj} = \partial \ln A!/\partial a_{Nj} - \sum_N\sum_j \partial \ln a_{Nj}!/\partial a_{Nj}$ using Stirling's approximation, $\ln x! \approx x\ln x - x$. To simplify, the first step is to note that $\ln W = \ln A! - \sum_N\sum_j \ln a_{Nj}! = A\ln A - A - \sum_N\sum_j a_{Nj}\ln a_{Nj} + \sum_N\sum_j a_{Nj}$ Since $A = \sum_N\sum_j a_{Nj}$, the terms $-A$ and $+\sum_N\sum_j a_{Nj}$ cancel to give $\ln W = A\ln A - \sum_N\sum_j a_{Nj}\ln a_{Nj}$ Exactly the same procedure and algebra used for the canonical ensemble then yield the most probable distribution: $\dfrac{a_{Nj}^*}{A} = e^{\alpha}e^{-\beta E_{Nj}}e^{-\gamma N}$ Now we only need to find the undetermined multipliers $\alpha$, $\beta$, and $\gamma$. By summing both sides over the indices $N$ and $j$ we can obtain $\alpha$: the left-hand side is equal to one, so $e^{\alpha} = 1/\Xi$, where $\Xi = \sum_N\sum_j e^{-\beta E_{Nj}}e^{-\gamma N}$ is the grand canonical partition function, or the grand partition function (for short).
The Boltzmann distribution in this ensemble can be written $P_{Nj} = \dfrac{a_{Nj}^*}{A} = \dfrac{e^{-\beta E_{Nj}}e^{-\gamma N}}{\Xi}$ The star indicates that this is the most probable distribution, as shown above by maximizing $W$ with respect to the occupation numbers. The averages of the mechanical properties $E$, $P$, and $N$ are $\langle E \rangle = \sum_N\sum_j P_{Nj}E_{Nj} \qquad \langle P \rangle = -\sum_N\sum_j P_{Nj}\left(\dfrac{\partial E_{Nj}}{\partial V}\right) \qquad \langle N \rangle = \sum_N\sum_j P_{Nj}N$ We have shown that $\beta = 1/k_BT$, where $k_B$ is Boltzmann's constant; this is true for all of the ensembles. A similar approach can be used to show that $\gamma = -\mu/k_BT$. The differential of the grand partition function is $d\ln\Xi = -\langle E\rangle\,d\beta - \langle N\rangle\,d\gamma + \beta\langle P\rangle\,dV$ where the last term is $\beta$ times the ensemble-averaged work done by the system. We add $d(\beta\langle E\rangle) + d(\gamma\langle N\rangle)$ to both sides: $d\left(\ln\Xi + \beta\langle E\rangle + \gamma\langle N\rangle\right) = \beta\,d\langle E\rangle + \gamma\,d\langle N\rangle + \beta\langle P\rangle\,dV$ The thermodynamic equation $dE = TdS - PdV + \mu\,dN$ can be rearranged to $TdS = dE - \mu\,dN + PdV$ Comparing these two equations after dividing the first by $\beta$, we find from the particle-number terms that $\gamma/\beta = -\mu$, i.e. $\gamma = -\mu\beta = -\mu/kT$. Since we can replace $\gamma/\beta$ by $-\mu$, we have $TdS = \dfrac{1}{\beta}\,d\left(\ln\Xi + \beta\langle E\rangle + \gamma\langle N\rangle\right)$ which gives the entropy as $S = k\ln\Xi + \dfrac{E}{T} - \dfrac{\mu N}{T}$ Since $G = \mu N$ and $G = E + PV - TS$, we can use the above equation to determine that $PV = kT\ln\Xi$ There are several ways to express the grand partition function. Starting with the definition, we can define the canonical partition function for $N$ particles as $Q(N,V,T) = \sum_j e^{-\beta E_{Nj}}$ and then insert this expression into the grand partition function: $\Xi = \sum_N Q(N,V,T)\,e^{-\gamma N}$ We know also that $e^{-\gamma} = e^{\mu/kT}$. The quantity $e^{\mu/kT}$ is often denoted $\lambda$. For indistinguishable particles $Q = q^N/N!$, where $q$ is the molecular partition function. Therefore, $\Xi = \sum_N \dfrac{(\lambda q)^N}{N!} = e^{\lambda q}$
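The closed form $\Xi = \sum_N (\lambda q)^N/N! = e^{\lambda q}$, and the resulting average particle number $\langle N \rangle = \lambda q$, can be verified numerically. This sketch uses assumed toy values for the activity $\lambda$ and the molecular partition function $q$.

```python
import math

# Grand partition function for indistinguishable, noninteracting
# particles: Xi = sum_N (lambda q)^N / N! = exp(lambda q).
lam = 0.5          # absolute activity e^{mu/kT} (assumed)
q = 20.0           # molecular partition function (assumed)

# Accumulate terms t_N = (lambda q)^N / N! iteratively (avoids
# overflowing factorials), truncating the sum over N.
N_max = 100
terms = []
t = 1.0
for N in range(N_max):
    terms.append(t)
    t *= lam * q / (N + 1)

Xi = sum(terms)
# Average particle number <N> = sum_N N * P_N with P_N = t_N / Xi
N_avg = sum(N * t for N, t in enumerate(terms)) / Xi

print(f"Xi  = {Xi:.6e}   exp(lambda q) = {math.exp(lam * q):.6e}")
print(f"<N> = {N_avg:.4f}   lambda q     = {lam * q:.4f}")
```

The truncated sum reproduces $e^{\lambda q}$ essentially exactly, and the average occupation follows a Poisson distribution with mean $\lambda q$.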
Advanced Thermodynamics

Thermodynamics is the study of stable equilibrium in macroscopic systems. The terms "macroscopic" and "stable equilibrium" require some definition.

2. The Postulates of Thermodynamics

Thermodynamics has three important ingredients: fundamental relations, which describe the relationship between state functions for a particular system; postulates, which make statements about the state functions from which all other thermodynamic statements can be derived; and a body of mathematical manipulations that allows one to derive theorems from the postulates and to manipulate the fundamental relations in order to obtain the desired result. It is worth noting at this stage that fundamental relations cannot be derived from the postulates. The postulates place some constraints on what is a valid fundamental relation, but otherwise leave a lot of freedom. Fundamental relations must be determined either empirically or through a model. Statistical mechanics provides a means of deriving fundamental relations from the most generally valid of models, dynamics, either classical or quantum. For now, we will assume that fundamental relations have been obtained via measurement or from a model, and we will only manipulate them. We begin with the postulates of thermodynamics before considering their mathematical manipulation. The postulates are like axioms in mathematics, but with one important difference: they can actually be derived as special cases of the postulates of statistical mechanics, by letting the particle number go to infinity.

Postulate 0: Equilibrium States

Simple systems have equilibrium states that are fully characterized by a unique set of extensive state functions $\{U, X_i\}$, where $U$ is the internal energy of the system (energy for short) and the $X_i$ are the other required positive extensive state functions (e.g. $V$, $A$, $L$, $n_i$, etc.)
Lemma

A composite system also has a unique set $\{U, X_i\}$, where $U=\sum_k U_k$ and $X_i=\sum_k X_{ik}$; however, this set does not fully characterize the composite system unless the constraints are also specified.

Postulate 1: Conservation of Energy

The quantity $U$ is conserved for a closed system.

Notes:

• $U$ is usually a relative energy, not an absolute energy. For example, stating that $U = 0$ under standard conditions for O2 neglects the nuclear energy, which however does not change during a chemical reaction. For the second and third postulates, only relative energies $U$ are important.

• Relativistically, $mc^2$ is a form of energy, and mass is not conserved by itself. Chemically, mass is conserved; actually, even atomic nuclei are conserved. Strictly speaking, even for a free particle, $U^2 = p^2c^2 + m^2c^4$.

Postulate 2: Closed Systems

For a set of simple systems $\{S_k\}$, there exist single-valued, continuous, and differentiable extensive state functions $S_k(U_k, X_{ik})$, defined for stable equilibrium states, such that for a closed composite system $\{S\}=\sum_k \oplus \{S_k\}$, the state functions $U_k$ and $X_{ik}$ take on those values that maximize the entropy $S=\sum_k S_k$ of the composite system, subject to its internal constraints.

Notes:

• $S_k=S_k(U_k, X_{ik})$ or $S=\sum_k S_k$ are called the fundamental relations. Think of the $S_k$ as entropies of the subsystems, and $S$ as the total entropy of the closed system.

• When dealing only with simple systems, the subscript $k$ will usually be dropped.

• The total energy $U=\sum_k U_k$ of the closed composite system is of course conserved even while $S$ is maximized; alternatively, if $S$ is held constant, we shall see that $U$ is minimized.

• "Stable" means $d^2S < 0$, so a well-defined maximum exists.

Postulate 3

$S$ is a monotonically increasing function of $U$ and $\vec{X}$, where $\vec{X}$ is a vector of all independent extensive variables of the closed composite system.
Note: This will later be seen to be equivalent to the statement $\lim_{T \rightarrow 0} S=0$ because $\left( \dfrac{\partial U}{\partial S} \right)_x = T$. We now can outline a method for the general solution of thermodynamic problems:

1. Identify subsystems $\{S_k\}$ of the system $\{S\}$ (e.g. open system and reservoir).
2. Determine the fundamental relations $S_k(U_k, X_{ik})$ (empirically or from a model).
3. Differentiate to maximize $S=\sum_kS_k$, subject to constraints (e.g. by Lagrange multipliers).
4. The values of $U_k$ and $X_{ik}$ at the maximum are the equilibrium values.

Example 2.1

Consider the following example of this method: a closed box is partitioned into volumes $V_1$ and $V_2$ by an impermeable wall, each side of which is filled with $n_1$ and $n_2$ moles of a gas having the fundamental relation $S_k = c + n_kR\ln V_k$. When equilibrium is reached, what is the relationship between the volumes $V_1$ and $V_2$?

1. The two partitions with volumes $V_1$ and $V_2$ are the subsystems.
2. The fundamental relations are given (based on empirical formulas derived in detail in the next chapter).
3. $S = 2c + n_1 R\ln V_1+n_2 R\ln (V-V_1)$, making use of the fact that the box is closed so total volume is conserved. Differentiating yields $\partial S/ \partial V_1 = n_1R/V_1 - n_2R/(V-V_1) = 0$
4. Thus at equilibrium, $V_1/V_2 = n_1/n_2$. As we suspect, the volumes equilibrate in the same ratio as the number of moles of gas on each side of the impermeable wall.

The main problem with this approach of using the postulates directly is that the fundamental relations usually are unknown! Instead, partial information about the system in the form of equations of state such as $PV=nRT$, $U = 3/2\, nRT$ is usually available, and one must see what information can be extracted from them subject to the known constraints. Note that thermodynamics provides no clue as to the functional form of $S(U, X_i)$, except that it must be compatible with the postulates.
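The entropy maximization in Example 2.1 can be checked by brute force. This sketch assumes specific mole numbers and a total volume (arbitrary values, not from the text) and compares a numerical maximum of $S(V_1)$ against the analytic result $V_1/V_2 = n_1/n_2$.

```python
import math

# Numerical check of Example 2.1: maximize
# S(V1) = n1 R ln V1 + n2 R ln(V - V1)  over V1.
R = 8.314
n1, n2 = 1.0, 3.0        # moles (assumed)
V = 4.0                  # total volume, arbitrary units (assumed)

def S(V1):
    return n1 * R * math.log(V1) + n2 * R * math.log(V - V1)

# Simple grid search for the entropy maximum
V1_best = max((0.001 * i for i in range(1, 4000)), key=S)
V1_analytic = V * n1 / (n1 + n2)
print(f"numerical V1 = {V1_best:.3f}, analytic V1 = {V1_analytic:.3f}")
```

Both routes give $V_1 = 1$ and $V_2 = 3$, i.e. $V_1/V_2 = n_1/n_2 = 1/3$, as derived above.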
As we will see in the next chapter, the fundamental relations can be obtained if enough equations of state are known.
3.1 Energy minimum principle

$S$ can be written as $S(U,\vec{X})$, where $\vec{X}$ is a vector of all independent internal extensive variables (e.g. all but one $U_k$, and all other $X_i$). Because $S$ is monotonic in $U$ and continuous, we can invert to $U(S,\vec{X})$. This relation is fully equivalent to the fundamental relation. Because of the shape of $S(U,\vec{X})$ or $U(S,\vec{X})$, as shown in the figure, maximizing the entropy at constant $U$ is equivalent to minimizing the energy at constant $S$. This is the familiar version from mechanics, where system properties are usually formulated in terms of energies instead of entropies.

Postulate 2': Minimizing Energy

The internal energy of a composite system at constant $S$ is minimized at equilibrium.

3.2 Intensive parameters: Temperature

Working for now with $U$ for a simple system, $U(S,\vec{X})$, we can write $dU=\left(\dfrac{\partial U}{\partial S}\right)_{\vec{X}}dS + \left(\dfrac{\partial U}{\partial \vec{X}}\right)_S \cdot d\vec{X}$ with appropriate constraints on each $\dfrac{\partial U}{\partial X_i}$ derivative, or $dU = TdS + \vec{I} \cdot d\vec{X}$ where $T \equiv \left( \dfrac{\partial U}{\partial S} \right)_{\vec{X}} > 0$ by Postulate 3. By construction, $T$ and the $\{I_i\}$ are intensive variables. For example, $U \rightarrow \lambda U$ and $S \rightarrow \lambda S \Rightarrow T \rightarrow \left( \dfrac{\partial \lambda U}{\partial \lambda S} \right) =T$ We consider in detail the properties of the energy derivative $T$, and then briefly, by analogy, the other intensive variables $\{I_i\}$. Let all the $d\vec{X}$ (such as $dV$, $dn_i$, $dM$, etc.) equal zero: no mechanical macroscopic variables are being altered, only the energy. It then follows that $dU = dq$ because $dw = 0$. Therefore $dq= TdS$: for small (quasistatic) changes in heat, the change in system entropy is linearly proportional to the heat increment.
Thus as we add energy to the uncontrollable degrees of freedom of our system, entropy increases, in accord with the notion that entropy is disorder. Furthermore, we can rewrite this as $dS = \dfrac{dq}{T}.$ When $T$ is larger, the entropy increases less for a given heat input. What is this quantity $T$? Consider a closed composite system $\{S\}$ of two subsystems $\{S_1\}$ and $\{S_2\}$ separated by a diathermal wall. A diathermal wall allows only heat flow, so $d\vec{X}=0$ again. At equilibrium, $dS = 0 = \left( \dfrac{\partial S_1}{\partial U_1} \right)_x dU_1 + \left(\dfrac{\partial S_2}{\partial U_2} \right)_x dU_2$ according to P2, or $dS = \dfrac{1}{T_1} dU_1 + \dfrac{1}{T_2} dU_2.$ But $dU = 0$ for a closed system by P1, from which follows $dU_2 = -dU_1$, so $dS = \left ( \dfrac{1}{T_1} - \dfrac{1}{T_2} \right) dU_1.$ At equilibrium, $dS = 0$ for any variation of $dU_1$, which can only be true if $\left ( \dfrac{1}{T_1} - \dfrac{1}{T_2} \right) = 0 \Rightarrow T_1=T_2$ Thus, $T$ is the quantity that is equalized between two subsystems when heat is allowed to flow between them. This is the most straightforward definition of temperature: the thing that becomes equal when heat stops flowing from one place to another. We can thus identify the intensive variable $T$ as the temperature of the system. Temperature is always guaranteed to be positive by P3, because entropy is a monotonically increasing function of energy. Finally, if $T = (\partial U/ \partial S)_X$, we can rewrite the third postulate as $\lim_{T \rightarrow 0} S =0,$ more commonly known as the "third law of thermodynamics." As all the energy is removed from a system by lowering its temperature, the system becomes completely ordered. It is worth noting that there are systems (glasses) where reaching this limit takes an inordinate amount of time.
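The temperature-equalization result derived above can be illustrated with a toy fundamental relation. This sketch assumes $S_k = C_k \ln U_k$ for each subsystem (an invented form, chosen only because it gives the simple temperature $T_k = (\partial U_k/\partial S_k) = U_k/C_k$) and maximizes the total entropy numerically.

```python
import math

# Two subsystems sharing fixed total energy U, with assumed
# fundamental relations S_k = C_k ln(U_k), hence T_k = U_k / C_k.
C1, C2 = 2.0, 5.0        # assumed parameters
U = 7.0                  # total energy, arbitrary units

def S_total(U1):
    return C1 * math.log(U1) + C2 * math.log(U - U1)

# Grid search for the entropy maximum over the energy split
U1_best = max((0.001 * i for i in range(1, 7000)), key=S_total)
T1 = U1_best / C1
T2 = (U - U1_best) / C2

print(f"U1 = {U1_best:.3f}, T1 = {T1:.3f}, T2 = {T2:.3f}")
```

At the entropy maximum the energy splits as $U_1 = C_1U/(C_1+C_2)$, and the two temperatures come out equal, exactly as the diathermal-wall argument predicts.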
A very general principle of quantum mechanics guarantees that the third law holds even in those cases, if we can actually get the system to equilibrium: a coordinate or spin Hamiltonian always has a single ground state of $A_1$ symmetry. This is the state any system reaches as $T \rightarrow 0$. In practice, this state may just not be reachable even approximately in glasses, and heuristic replacements of the third law have been developed for this case, which is really a non-equilibrium case. To summarize:

$\Delta S _{closed} > 0$ always, by postulate P2
$dS = \dfrac{dq}{T}$ by P2, for a quasistatic process when no work is done
$T>0$ always, by postulate P3
$T_1=T_2$ for two systems in thermal equilibrium
$\lim _{T \rightarrow 0} S =0$ always by P3; difficult to reach even approximately in some cases

Thus $T$ and $S$ have all the intuitive characteristics of temperature and disorder, and we can take them as representing temperature and disorder. The latter can be justified even more deeply by making use of statistical mechanics in later chapters, where the second postulate follows from microscopic properties of the system. A note on units: $TS$ must have units of energy. It would be convenient to let $T$ have units of energy (as an "energy per unit size of the system") and to let $S$ be unitless, but for historical reasons $T$ has arbitrary units of kelvin, and $S$ has units of joules/kelvin to compensate.

3.3 Other extensive-intensive Variable Pairs

The more complex a composite system becomes, the more extensive variables it requires beyond $U$, leading to additional intensive variables. For example:

Pressure

$V$ (volume) leads to an energy change $dU_V = \left( \dfrac{\partial U}{\partial V} \right)_{\vec{X}} dV \equiv -PdV.$ The intensive derivative is called the pressure of the system. $PV$ has units of joules, so $P$ must have units of joules/m3 or N/m2. Thus $P$ certainly has the units we normally associate with pressure, or force per unit area.
Usually $\partial U/ \partial V < 0$ because squeezing a system increases its energy. Thus $P$ is generally a positive quantity, again in accord with our intuition. Note however that there is no postulate that says $P$ must be positive. In fact, we can bring systems to negative pressure by pulling on the system, or putting tension on it. Is $P$ in fact the pressure? It is easy to see that it is, by applying the minimum energy principle to a diathermal flexible wall, in analogy to what was done for temperature above:

$dU=0 = dU_1 +dU_2$ by Postulate 1
$dU= T_1dS_1 - P_1dV_1 + T_2dS_2 -P_2dV_2$ by Postulate 2'
$dU = (T_1-T_2)dS_1 - (P_1 - P_2) dV_1$

In the third line, we assume a closed system and a reversible process, so $dV_2 = -dV_1$ and $dS_2 = -dS_1$. When the energy has reached equilibrium, the equation must hold for any small perturbation of the entropy or volume of subsystem 1, which can only be satisfied if $T_1 = T_2$ (again) and $P_1 = P_2$. Thus, $P$ is the quantity that is the same in two subsystems when they are connected by a flexible wall. This is the most straightforward definition of pressure: the thing that is equalized between two systems when the volume can change to whatever it wants. $P$ is a pressure not just in units; it agrees with our intuitive notion of what a pressure should be.

Surface Area

$A$ (area in a surface system) $\Rightarrow dU_A = \left( \dfrac{\partial U}{\partial A} \right)_X dA = -\Gamma dA$ where $\Gamma$ has units of N/m and is therefore the surface tension.

Magnetization

$M$ (magnetization) $\Rightarrow dU_M = \left(\dfrac{\partial U}{\partial M}\right)_{\vec{X}} dM \equiv HdM$ where $H$ is the externally applied magnetic field.

Mole Number

$n_i$ (mole number) $\Rightarrow dU_{n_i} = \left(\dfrac{\partial U}{\partial n_i}\right)_{\vec{X}} dn_i \equiv \mu_i dn_i$ where $\mu_i$ is the chemical potential, equalized when particles are allowed to flow.
Length

$L$ (length) $\Rightarrow dU_L = \left(\dfrac{\partial U}{\partial L}\right)_{\vec{X}} dL \equiv FdL$ where $F$ is the linear tension force.

In general

Many more conjugate pairs of extensive and intensive variables are possible, but this gives the general picture. For an arbitrary variation in $U$ we have $dU = TdS + \vec{I} \cdot d\vec{X},$ where $\vec{I}$ is the vector of all intensive variables except temperature. Often, we will use $dU = TdS - PdV + \mu dn$ as an example, when dealing with a simple 3-dimensional 1-component system.

3.4 First order homogeneity

Consider $S$ for a closed system. Because $S$ is extensive, $S(\lambda U,\lambda \vec{X}) = \lambda S(U, \vec{X})$. This agrees with the intuitive notion that 2 identical disordered systems amount to twice as much disorder as a single one. Similarly, $U(\lambda S,\lambda \vec{X}) = \lambda U(S, \vec{X})$. Differentiating both sides with respect to $\lambda$ yields $\left( \dfrac{\partial U}{\partial (\lambda S)}\right)_{\lambda\vec{X}} \left( \dfrac{\partial (\lambda S)}{\partial \lambda }\right) + \left( \dfrac{\partial U}{\partial (\lambda \vec{X})}\right)_{\lambda S} \cdot \left( \dfrac{\partial (\lambda \vec{X})}{\partial \lambda}\right) = U(S,\vec{X})$ or $\left( \dfrac{\partial U}{\partial (\lambda S)}\right)_{\lambda\vec{X}} S + \left( \dfrac{\partial U}{\partial (\lambda \vec{X})}\right)_{\lambda S} \cdot \vec{X} =U(S,\vec{X})$ When $\lambda = 1$, this yields $\left( \dfrac{\partial U}{\partial S}\right)_{\vec{X}} S + \left( \dfrac{\partial U}{\partial \vec{X}}\right)_{S} \cdot \vec{X} = U$ or $U=TS + \vec{I} \cdot \vec{X}$ Thus the energy has a surprisingly simple form: it is simply a bilinear function of the intensive and extensive parameters; it is known as the Euler form. The formula for the energy looks like the formula for $dU$ with the differentials removed. For example, $U=TS-PV + \mu n$ for a simple one-component system.
Solving for $S$ yields an analogous formula in the entropy representation, $S=\left( \dfrac{1}{T} \right) U - \left(\dfrac{\vec{I}}{T} \right) \cdot \vec{X}$ for example $S=\left( \dfrac{1}{T} \right) U + \left(\dfrac{P}{T} \right) V - \dfrac{\mu}{T} n$ The entropy is also a simple bilinear function of its intensive and extensive parameters. 3.5 Gibbs-Duhem relation The differential of $U$ combined with first order homogeneity requires that not all intensive parameters be independent. For a completely arbitrary variation of $U$, $dU =TdS + SdT + \vec{I} \cdot d\vec{X} + \vec{X} \cdot d\vec{I}$ But we know from earlier that $dU =TdS + \vec{I} \cdot d\vec{X} \Rightarrow SdT + \vec{X} \cdot d\vec{I} = 0$ Using this Gibbs-Duhem relation, one intensive parameter can be expressed in terms of the others. For example, consider a simple multicomponent system: $U = TS-PV + \sum_{i=1}^r \mu_in_i \Rightarrow SdT - VdP + \sum_{i=1}^r n_id\mu_i = 0$ $\Rightarrow d\mu_1 = \left( \dfrac{V}{n_1}\right) dP- \left( \dfrac{S}{n_1}\right) dT - \sum_{i=2}^r {\dfrac{n_i}{n_1} d\mu_i}$ One chemical potential change can be expressed in terms of pressure, temperature, and the other chemical potentials. In general, an $r$-component simple 3-D system has only $2 + (r-1) = r+1$ degrees of freedom. This will be useful for multi-phase systems. For example, let two phases of the same substance be at equilibrium, and particle flow is allowed from one phase to another. Then $\mu_1 = \mu_2$ (otherwise particles would flow to the phase of lower chemical potential), and to remain at equilibrium when the chemical potential changes, $d\mu_1 = d\mu_2$. Combining the Gibbs-Duhem relations for each phase, $S_1dT - V_1dP =-d\mu_1$ and $S_2dT - V_2dP =-d\mu_2$ $\overset{d\mu_1=d\mu_2} {\longrightarrow} (S_1-S_2)dT=(V_1-V_2)dP$ or $\dfrac{dP}{dT}=\dfrac{\Delta S_{12}}{\Delta V_{12}}$ Thus letting $d\mu_1 = d\mu_2$ traces out the $T$, $P$ conditions where the two phases are at equilibrium.
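As a sanity check on the coexistence slope $dP/dT = \Delta S_{12}/\Delta V_{12}$ just derived, here is a minimal numerical sketch for the liquid-vapor transition of water at its normal boiling point. The thermodynamic data are standard values; treating the vapor as an ideal gas and neglecting the liquid volume are simplifying assumptions of the sketch, not of the derivation.

```python
# Numerical estimate of dP/dT = ΔS_12/ΔV_12 for water at its normal
# boiling point. Vapor treated as ideal; liquid molar volume neglected.
R = 8.314          # J/(mol K)
T_b = 373.15       # K, normal boiling point of water
P = 101325.0       # Pa
dH_vap = 40700.0   # J/mol, enthalpy of vaporization at T_b

dS = dH_vap / T_b      # ΔS_12 per mole for the reversible transition, J/(mol K)
dV = R * T_b / P       # ΔV_12 ≈ V_gas for an ideal gas, m^3/mol

dPdT = dS / dV         # slope of the coexistence curve, Pa/K
print(f"dP/dT ≈ {dPdT:.0f} Pa/K")
```

The result, roughly 3.6 kPa/K, is close to the tabulated slope of the water liquid-vapor coexistence curve near 100 °C.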
This is known as the Clapeyron equation. 3.6 Equations of State and the Fundamental Relation Often we do not know the fundamental relation $U(S,\vec{X})$ or $S(U,\vec{X})$; instead we know equations involving intensive variables, known as equations of state. For example, $U = U(S,X) \Rightarrow T=\left(\dfrac{\partial U}{\partial S}\right)_X=T(S,X).$ Similarly, the derivative with respect to any other $X$ yields the corresponding equation of state $I(S, X)$. These are called equations of state in normal form, and express one intensive variable in terms of all the extensive variables. There are as many equations of state as there are extensive variables for the system (e.g. $r+2$ for a simple $r$-component system). Note that a single equation of state does not contain the same amount of information as the fundamental relation: it can be integrated only up to a constant that depends on all the extensive variables except the one involved in the derivative, and that part of $U$ (or of $S$, if we derive equations of state from $S(U, X)$) cannot simply be left out. If all the equations of state in normal form are known, we can reconstruct the fundamental relation by using the Euler form from section 3.4, $U= T(S,\vec{X})S+\vec{I}(S,\vec{X})\cdot \vec{X}$; this is also solvable for $S$ because of Postulate 3. If they are not known in normal form, we may also be able to obtain the fundamental relation by integrating a differential form, such as $dS =\left(\dfrac{1}{T} \right) dU - \left(\dfrac{\vec{I}}{T}\right) \cdot d\vec{X}.$ If needed, we can compute one intensive variable from the Gibbs-Duhem relation, so we need one less equation of state (only $r+1$ for a simple $r$-component system) to evaluate the fundamental relation. Finally, equations of state may also be substituted into one another, yielding equations that depend on more than one intensive variable. These are also referred to as equations of state, but they are not in normal form.
Let us consider two examples of how to determine a fundamental relation. We start with the fundamental relation for a rubber band, where we can write down reasonable guesses for both equations of state needed. $dU = TdS + FdL \Rightarrow dS = \dfrac{1}{T} dU - \dfrac{F}{T} dL$ We need equations of state so $T$ and $F$ can be eliminated to yield $S(U,L)$: a) $F=c_1T(L-L_0)$; $L_0$ is the relaxed length of the rubber band, and we are treating it like a linear spring once stretched. An unusual feature is that $F$ increases with $T$. At higher $T$ polymer chains wrinkle into more random coils, causing shrinkage, and increasing the tension for the same length. b) $U=c_2L_0T$; this is consistent as long as $F$ depends only linearly on $T$, so that $F/T$ is a function of $L$ only. The reason is that $\dfrac{\partial^2 S(U,L)}{\partial U \partial L} =\dfrac{\partial}{\partial U} \left( \dfrac{-F}{T} \right) = \dfrac{\partial}{\partial L} \left( \dfrac{1}{T} \right) =0$ so $\dfrac{1}{T}$ can be any single-valued function of $U$ as long as it is independent of $L$; for simplicity we pick $U \sim T$, as for an ideal gas. We can now insert the two equations of state into the differential form, and integrate it $dS = \dfrac{c_2L_0}{U} dU -c_1(L-L_0)dL \Rightarrow$ $S=S_0 + c_2L_0 \ln \dfrac{U}{U_0} - \dfrac{c_1}{2} (L-L_0)^2$ The constant can be determined by invoking the third law. However, note that this can lead to singularities if the equations of state themselves are not correct at low temperature, as is the case in this example. Moreover, note that $c_2$ must be intensive, and $c_1^{-1}$ must be extensive so that $S$ is extensive. From the fundamental relation we can calculate any desired properties of the rubber band. Alternatively, we could try to obtain the fundamental relation in terms of $U = TS + FL$, but then we would need $T(S,L)$ and $F(S,L)$ instead of $\frac{1}{T}(U,L)$ and $\frac{F}{T}(U,L)$, which were not available.
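As a consistency check, a short numerical sketch (with arbitrary illustrative values for $c_1$, $c_2$, $L_0$, $U_0$) can differentiate the integrated $S(U,L)$ and confirm that the two assumed equations of state are recovered:

```python
# Check that S(U, L) = S0 + c2*L0*ln(U/U0) - (c1/2)(L - L0)^2 returns
# the assumed equations of state: 1/T = ∂S/∂U = c2*L0/U and
# F/T = -∂S/∂L = c1*(L - L0). Constants are illustrative, not physical.
import math

c1, c2, L0, S0, U0 = 2.0, 5.0, 1.0, 0.0, 1.0

def S(U, L):
    return S0 + c2 * L0 * math.log(U / U0) - 0.5 * c1 * (L - L0) ** 2

U, L, h = 3.0, 1.4, 1e-6
dS_dU = (S(U + h, L) - S(U - h, L)) / (2 * h)   # central difference in U
dS_dL = (S(U, L + h) - S(U, L - h)) / (2 * h)   # central difference in L

assert abs(dS_dU - c2 * L0 / U) < 1e-6          # 1/T equation of state
assert abs(-dS_dL - c1 * (L - L0)) < 1e-6       # F/T equation of state
print("both equations of state recovered from S(U, L)")
```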
Similarly, to plug into $S = U/T – FL/T$, we would need $T(U, L)$ and $F(U, L)$; we have the former, but not the latter: the equation of state $F=c_1T(L-L_0)$ is in terms of another intensive variable, and not in the basic form required for the Euler form. Note that plugging $T = U/c_2L_0$ into $F(T, L)$ to get an $F(U, L)$ will not help either, because this does not yield an equation of state in normal form as it would have been obtained by taking the derivative of $S$. As another example, consider the fundamental relation for an ideal monatomic gas. In this case, we will derive one of the equations of state from the others, bringing all three equations of state into normal form, before inserting all three to obtain the fundamental relation. The gas has 1 component, so we need $r+1=2$ equations of state to get started: $Pv=RT \tag{1}$ $u=\dfrac{3}{2}RT \tag{2}$ Here the two well-known equations of state for an ideal gas are written in terms of the molar quantities $u = U/n$ and $v = V/n$. Again the first equation depends on two intensive variables and is not in standard form. We can bring both equations into standard form as follows: $P=\dfrac{R}{v}T=\dfrac{R}{v} \left(\dfrac{2u}{3R}\right) = \dfrac{2}{3}\dfrac{u}{v} = -\left( \dfrac{\partial u}{\partial v} \right)_{s} \tag{1'}$ $T=\dfrac{2u}{3R} = \left( \dfrac{\partial u}{\partial s}\right)_{v} \tag{2'}$ We now need $\mu(u,v)$ as the third equation of state. Proceeding with the Gibbs-Duhem relation (per mole), $d\mu = -s\,dT +v\,dP.$ We must eliminate $s$ since we formulated $P$ and $T$ as functions of $u$, not $s$.
Using the bilinear form of $s$, $d\mu = -\left( \dfrac{u}{T} + \dfrac{Pv}{T} -\dfrac{\mu}{T} \right) dT + vdP.$ Next we eliminate $P$ and $T$ by using equations 1’ and 2’: $d\mu = -du -\dfrac{2}{3} du + \mu \dfrac{du}{u} + \dfrac{2}{3}du - \dfrac{2}{3}u\dfrac{dv}{v}.$ We then divide by $u$ on both sides, rearrange, and integrate: $\dfrac{d\mu}{u}- \mu \dfrac{du}{u^2} = d\left(\dfrac{\mu}{u}\right) = -\dfrac{du}{u} - \dfrac{2}{3} \dfrac{dv}{v}$ $\int_0^{final} d\left(\dfrac{\mu}{u}\right) = \left(\dfrac{\mu}{u}\right)-\left(\dfrac{\mu}{u}\right)_0= -\ln \dfrac{u}{u_0}-\dfrac{2}{3}\ln \dfrac{v}{v_0}$ or $\mu = -u \ln \dfrac{u}{u_0} - \dfrac{2}{3} u \ln \dfrac{v}{v_0} + u\left(\dfrac{\mu}{u}\right)_0$ This is the third equation of state, for the chemical potential. We now have all intensive parameters as normal form equations of state, to construct the fundamental relations $s(u,v)$ or $u(s, v)$. (Of course, the homogeneous first order property means that to get $S$ and $U$, we just multiply by $n$.) Doing $s$, for example, $s = \dfrac{1}{T} u + \dfrac{P}{T} v - \dfrac{\mu}{T}$ $= \dfrac{3R}{2u} u + \dfrac{2u}{3v}\dfrac{3R}{2u} v - \dfrac{3R}{2u} u \left\{ -\ln \dfrac{u}{u_0} - \dfrac{2}{3} \ln \dfrac{v}{v_0} + \left( \dfrac{\mu}{u}\right)_0 \right\}$ $= \dfrac{5}{2}R - \dfrac{3}{2} R \left(\dfrac{\mu}{u} \right)_0 +\dfrac{3}{2}R \ln \dfrac{u}{u_0} + R\ln \dfrac{v}{v_0}$ $=\dfrac{3}{2}R\ln u + R\ln v +c$ Note that this fundamental relation violates Postulate 3: $\left(\dfrac{\partial u}{\partial s} \right)_v=T=\dfrac{2u}{3R}$ so $T \rightarrow 0$ implies $u \rightarrow 0$; but then $s$ does not approach 0 as $T \rightarrow 0$: instead it approaches $-\infty$. Thus, either $PV=nRT$, or $U=\dfrac{3}{2} nRT$, or both must be high-temperature approximations that fail as $T \rightarrow 0$. At low $T$, excluded volume effects, particle interactions, and quantum effects come into play.
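A quick numerical check of this result: differentiating $s(u,v) = \frac{3}{2}R\ln u + R\ln v + c$ should return the two ideal-gas equations of state, $1/T = \partial s/\partial u = 3R/2u$ and $P/T = \partial s/\partial v = R/v$. The molar values of $u$ and $v$ below are illustrative choices.

```python
# Recover T and P from the monatomic ideal-gas fundamental relation
# s(u, v) = (3R/2) ln u + R ln v + const (molar quantities throughout).
import math

R = 8.314  # J/(mol K)

def s(u, v):
    return 1.5 * R * math.log(u) + R * math.log(v)   # additive constant omitted

u, v, h = 3700.0, 0.0248, 1e-6   # illustrative molar energy (J/mol), volume (m^3/mol)
ds_du = (s(u + h, v) - s(u - h, v)) / (2 * h)
ds_dv = (s(u, v + h) - s(u, v - h)) / (2 * h)

T = 1.0 / ds_du   # from 1/T = ∂s/∂u
P = ds_dv * T     # from P/T = ∂s/∂v
print(f"T ≈ {T:.1f} K, P ≈ {P:.0f} Pa")
```

The recovered $T$ and $P$ satisfy $u = \frac{3}{2}RT$ and $Pv = RT$, as they must.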
The ideal gas equation would have to be replaced by a more accurate equation, such as the van der Waals equation, to satisfy the third law closer to $T = 0$. In that sense, thermodynamics can point out to us when approximate equations of state break down. 3.7 Stability and Second Derivatives The first derivatives (intensive parameters) are very useful because they correspond to quantities that are equalized among equilibrated subsystems. However, the first order condition $dS=0$, although necessary by Postulate 2 at equilibrium, is not sufficient. The extremum in $S$ must be a maximum: $d^2S < 0$ or according to Postulate 1: $d^2U > 0$ Extrema with $d^2S > 0$ or $d^2S = 0$ are also possible (minima, saddles, degenerate points). However, thermodynamics cannot make statements about such points without some further assumptions that go beyond the postulates. This suggests that the study of second derivatives will be fruitful, to ensure that one is working near a stable equilibrium point. Three of these second derivatives encountered later are $\alpha =\dfrac{1}{V} \left(\dfrac{\partial V}{\partial T}\right)_{P,n_i}$ $\kappa = -\dfrac{1}{V} \left( \dfrac{\partial V}{\partial P} \right)_{T,n_i}$ $c_P = \dfrac{T}{n} \left ( \dfrac{\partial S}{\partial T} \right)_P = \dfrac{1}{n}\left( \dfrac{đq}{dT} \right)_P$ For a simple system, only three second derivatives are linearly independent if we exclude ones based on $\dfrac{\partial }{\partial n_i}$. The reason is that the terms in the energy $U = TS – PV + …$ have only three second derivatives, 1. $\left( \dfrac{\partial T}{\partial S} \right)_V = \dfrac{\partial^2 U}{\partial S^2}$ 2. $-\left( \dfrac{\partial P}{\partial V} \right)_S = \dfrac{\partial^2 U}{\partial V^2}$ 3. $\left( \dfrac{\partial T}{\partial V} \right)_S = -\left( \dfrac{\partial P}{\partial S} \right)_V=\dfrac{\partial^2 U}{\partial V \partial S}$ where the equality of the two mixed derivatives in 3. holds because $dU$ is a perfect differential.
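For an ideal gas these quantities are easy to evaluate explicitly. The sketch below computes $\alpha$ and $\kappa$ by finite differences from $V(T,P) = nRT/P$ and confirms the well-known ideal-gas results $\alpha = 1/T$ and $\kappa = 1/P$; the state point is an illustrative choice.

```python
# Expansion coefficient α = (1/V)(∂V/∂T)_P and isothermal compressibility
# κ = -(1/V)(∂V/∂P)_T for an ideal gas, evaluated by central differences.
R, n = 8.314, 1.0

def V(T, P):
    return n * R * T / P

T, P, h = 300.0, 1.0e5, 1e-3   # illustrative state point; h is the step size
alpha = (V(T + h, P) - V(T - h, P)) / (2 * h) / V(T, P)
kappa = -(V(T, P + h) - V(T, P - h)) / (2 * h) / V(T, P)

assert abs(alpha - 1.0 / T) < 1e-9    # ideal gas: α = 1/T
assert abs(kappa - 1.0 / P) < 1e-12   # ideal gas: κ = 1/P
print(f"alpha = {alpha:.6g} 1/K, kappa = {kappa:.3g} 1/Pa")
```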
Rather than picking those three, we will usually work with the first set ($\alpha$, $\kappa$, $c_P$), corresponding to quantities with more obvious physical interpretations for chemists working at constant pressure and temperature. We consider the corresponding fundamental relations in the next chapter.
We have seen that max{$S(U,\vec{X})$} and min{$U(S,\vec{X})$} imply one another. Under certain conditions, these principles are very convenient. For example, $dS = \dfrac{1}{T} dU + \dfrac{P}{T} dV - \sum \dfrac{\mu_i}{T} dn_i$, so maximizing $-\sum \dfrac{\mu_i}{T} dn_i$ maximizes $S$ at constant $U$ and $V$. But what do we do if we are working at constant (T,V) or (T,P)? We then have open systems to deal with, where energy must flow (to keep $T$ constant), or the volume must change (to keep $P$ constant), etc. This problem arises for many laboratory reactions in chemistry. As it turns out, if thermodynamic variables other than $U$ and $V$ are held constant, thermodynamic potentials other than entropy or energy of the open system are extremized. (Of course, $S$ is still maximized for a closed system containing our open system of interest.) Very conveniently, these potentials can be computed just from the properties of the open system of interest alone (e.g. one where $T$ is constant, and therefore energy must be allowed to flow in and out), and the added assumption that the corresponding variable (e.g. $T$) is always constant in the environment. The environment is thus treated as a “bath” or “reservoir” for the intensive variable of interest. That is, we assume the closed system containing our open system of interest is so vast that a change in an extensive variable (e.g. $U$) of the small open system does not affect the conjugate intensive variable in the environment (e.g. $T$; the energy deposited or taken from the environment by the open system is too small to change $T$ in the environment). Our goal: we want potentials that have our chosen intensive variable among their independent (natural) variables. Let’s begin by discussing an apparently obvious way of doing this, which does not work.
Let’s say we have the fundamental relation for $U$, but we want to hold $T$ constant, not $S$: $U = U(S,V,n_i) \Rightarrow T=\left( \dfrac{\partial U}{\partial S} \right)_{V,n_i} = T(S,V,n_i)$ Solving for $S=S(T,V,n_i)$ and inserting into $U$, we seem to be able to get $U =U(T,V,n_i).$ Now we can hold $T$ constant. The problem with this: $T$ is the slope of $U(S)$, and expressing a function in terms of its own slope leaves the intercept indeterminate: $U$ is no longer completely defined, and we do not have a fundamental relation to which the laws of thermodynamics can be applied. We do however want an equation in terms of the slope. The solution is to express the intercept $\phi$ of $y(x)$ in terms of the slope $m$ (or vice-versa). The intercept as a function of slope does contain all the original information about the function $y(x)$. The process of transforming a function $y(x)$ into $\phi(m)$ is called a Legendre transform. 5. Thermodynamic Processes Although thermodynamics strictly speaking refers only to equilibria, by introducing the concepts of work flow and heat flow, as discussed in chapter 1, we can discuss processes by which a system is moved from one state to another. The concepts of heat and work are only meaningful because certain highly averaged variables are stable as a function of time. Energy changes related to such variables, like volume, are considered work. Microscopic variables, like the position of a single particle, are unstable and unpredictable as a function of time. If we treat the system classically, we would say such a variable has a Lyapunov coefficient $L > 0$. That is, its orbit diverges from prediction as $\Delta X = \Delta X_0 \exp[Lt]$, where $\Delta X_0$ is the initial measurement error in the variable, and $\Delta X$ is the error at time $t$. Here’s an example from meteorology: the Lyapunov coefficient for seasonal temperatures is 0; they have predictable averages (colder in January, warmer in July in the northern hemisphere).
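Returning for a moment to the Legendre transform defined above: a minimal numerical sketch, using the illustrative choice $y(x) = x^2$, shows that the intercept function $\phi(m)$ retains all the information in $y(x)$, so the original function can be recovered exactly.

```python
# Legendre transform of y(x) = x^2: slope m = y'(x) = 2x, intercept of the
# tangent line φ(m) = y - m·x = -m^2/4. Reversing the transform via
# y(x) = φ(m) + m·x recovers the original function.
def y(x):
    return x * x

def phi(m):
    x = m / 2.0            # invert m = y'(x) = 2x
    return y(x) - m * x    # intercept of the tangent with slope m

for x in [0.5, 1.0, 2.0]:
    m = 2.0 * x
    assert abs(phi(m) + m * x - y(x)) < 1e-12   # y recovered from φ(m)
assert phi(2.0) == -1.0                          # φ(m) = -m²/4
print("y(x) = x² fully recovered from its Legendre transform φ(m)")
```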
Small weather patterns (e.g. motion of a cloud front) have a Lyapunov coefficient of about (2 days)$^{-1}$. This is not a question of better measurements, but a fundamental limitation: the error grows exponentially, so a multiplicative improvement in measurement buys you only a linear improvement in prediction time. The situation is no better in quantum mechanics, where a particle initially started out in a position eigenstate $\delta(x-x_0)$ evolves to ever greater position uncertainty as a matter of Heisenberg’s principle. Thus both classical and quantum motions are inherently unpredictable, for different reasons; the corresponding energy flow is heat flow. But when one averages over enough degrees of freedom, the averaged variables may be well behaved; that energy flow is work flow. Types of processes: Definition: Quasistatic Processes A quasistatic process lies on the surface $S(U,x_i)$ Note: this cannot be achieved in reality, but approximated by taking small steps whose endpoints lie on the fundamental surface. Definition: Reversible Processes A reversible process is a quasistatic process with $S$ = constant. Note: according to postulate 2, upon change of constraints, any process must satisfy $S_{final}>S_{initial}$. The reverse of such a process would violate postulate 2 ($S_{final}<S_{initial}$), and real processes are therefore irreversible. A reversible process is the idealized quasistatic limit where $S_{final} = S_{initial}$. Fig. 5.1: Irreversible, quasistatic and reversible processes Thermodynamics only makes statements about equilibrium states, when the fundamental equation is satisfied. However, by using quasistatic and reversible processes as idealized limits, we can derive inequalities satisfied by real processes. As seen earlier, $dU = đQ = TdS$ for small (quasistatic) heat transfers in the absence of work.
The best we can do in the presence of work is therefore that all of $dU = TdS - PdV + \sum_i \mu_idn_i + \Gamma \,dA + H\,dM + ...$ goes into work except for the first term, which corresponds to the infinitesimal heat transfer. $–PdV$ would be simple bulk volume work (e.g. expansion of a gas), $\mu_i dn_i$ would be chemical work (e.g. electrochemical if $n_i$ refers to the mole number of an ion), $\Gamma dA$ would be surface tension work (e.g. blowing a soap bubble), $H dM$ would be magnetic work, etc. The heat transfer cannot be reduced below $TdS$. However, part or all of the energy flow $đW$ can of course be converted to heat $đQ_W$. The entropy will rise further by $T\,dS_W=đQ_W$, and correspondingly less work is done: $dU=TdS + đQ_W + đW_{\text{left over}}$ If all possible work is converted to heat, $đW_{\text{left over}}=0$ and heat flow is maximized. The following theorem tells us how much work at most we can extract from a system.
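Before that, a concrete numerical contrast between the reversible limit and a fully irreversible process helps fix ideas. The sketch below compares the isothermal expansion of one mole of ideal gas from $V_1$ to $2V_1$ along a reversible path and along a free (unresisted) expansion; the state values are illustrative.

```python
# Reversible vs. fully irreversible isothermal doubling of an ideal gas's
# volume. ΔS of the gas is a state function, so it is the same on both
# paths; the work extracted is path-dependent and maximal on the
# reversible path, W_rev = nRT ln(V2/V1).
import math

R, n, T = 8.314, 1.0, 300.0    # illustrative: 1 mol at 300 K
V1, V2 = 0.010, 0.020          # m^3

dS_gas = n * R * math.log(V2 / V1)     # J/K, same for both paths
W_rev = n * R * T * math.log(V2 / V1)  # J, maximum extractable work (= Q_rev)
W_free = 0.0                           # free expansion against vacuum: no work

print(f"ΔS_gas = {dS_gas:.3f} J/K on either path")
print(f"reversible work {W_rev:.0f} J vs. free-expansion work {W_free:.0f} J")
```

On the reversible path the bath entropy change exactly cancels $\Delta S_{gas}$; in the free expansion no work is extracted and the same $\Delta S_{gas}$ is pure entropy production.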
With $U$, $A$, $H$ and $G$ in hand we have potentials as functions of whichever variable pair we want, from $S$ and $V$ to $T$ and $P$. Additional Legendre transforms will provide us with further potentials in case we have other variables (such as surface area $A$, length $L$, magnetic moment $M$, etc.). Thermodynamic problems always involve computing a variable of interest. It may be a derivative if it is an intensive variable, or even a second derivative (higher derivatives are rarely of interest). For example, 1st order ones like $\left( \dfrac{\partial G}{\partial P} \right)_T=V$ or 2nd order ones like $\left( \dfrac{\partial^2 G}{\partial P^2} \right)_T= \left( \dfrac{\partial V}{\partial P} \right)_T = -\kappa V$ The solution procedure is thus: 1. Select the derivative or variable to be computed; 2. Select the potential representation that makes it easiest, or corresponds to variables you already have in hand. 3. Manipulate the thermodynamic derivative you know to get the one you want. Easy as 1-2-3! We now turn to two methods to manipulate the thermodynamic derivatives: 8. Phase Transitions When chemical reactions occur, the system makes transitions among multiple minima at the molecular level. The figure below illustrates the molecular connection between the free energy $G$, its derivative $\Delta G$, and the free energy as a function of reaction coordinate, $G(x)$. Figure 8.1: The free energy $G(x)$ as a function of coordinate (e.g. bond distance) has two local minima, A’ being lower in free energy. When A converts to A’, $\Delta G$ kJ/mole are released per infinitesimal amount of reaction $d\xi$. Note that both $x$ and $\xi$ can vary between 0 and 1, but the meaning is very different: when all of the substance is in state A, $x=\xi=0$, and when it is all in state A', $x=\xi=1$. However, while the substance can have $\xi=0.5$ when the reaction is half completed, almost none of it will ever be at $x=0.5$. Rather, half will be at $x=0$ and half at $x=1$.
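As a quick check of the 1st-order identity $\left(\partial G/\partial P\right)_T = V$ quoted above, the sketch below uses the standard ideal-gas form $G = G_0 + nRT\ln(P/P_0)$; the reference values $G_0$ and $P_0$ and the state point are illustrative choices.

```python
# Verify (∂G/∂P)_T = V for an ideal gas by differentiating
# G(P) = G0 + nRT ln(P/P0) numerically at fixed T.
import math

R, n, T = 8.314, 1.0, 298.15
G0, P0 = 0.0, 1.0e5            # illustrative reference values

def G(P):
    return G0 + n * R * T * math.log(P / P0)

P, h = 2.0e5, 1.0              # state point and finite-difference step, Pa
dG_dP = (G(P + h) - G(P - h)) / (2 * h)   # numerical (∂G/∂P)_T
V = n * R * T / P                          # ideal-gas volume at (T, P)

assert abs(dG_dP - V) < 1e-9
print(f"(∂G/∂P)_T = {dG_dP:.6f} m³ = V = {V:.6f} m³")
```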
So far, we have treated macroscopic pure substances and mixtures as though they had a single minimum in the free energy as a function of reaction coordinate. However, thermodynamics does not forbid multiple minima even at the macroscopic level, and can be used to make comparative statements about the minima. Definition: Phase A phase is a local minimum in the free energy surface. Unlike ordinary chemical reactions, transitions between phases can occur even when only one pure substance is present: $A^{(1)} \rightarrow A^{(2)}$ For phase transitions, we call the reaction coordinate the “order parameter.” The superscripts refer to the phases. Definition: Order Parameter An order parameter is a thermodynamic variable scaled to zero in one phase, nonzero in an(other) phase. Example 8.1: for a gas-liquid transition, the order parameter is $O = \rho - \rho_{gas}$ or $O = \dfrac{ \rho - \rho_{gas}}{ \rho_{liq} - \rho_{gas}}$; in general $O = X - X^{(2)}$ or $O = \dfrac{ X - X^{(2)}}{ X^{(1)} - X^{(2)}}$ Thermodynamics cannot make statements about the details of the barrier (e.g. its height), or how fast a transition can occur. The transition itself is a rather delicate matter – it appears to violate P2, since temporarily $\Delta G >0$ if the transition occurs at constant $T$ and $P$. The solution to this dilemma: if climbing the barrier were required of the entire macroscopic system, phase transitions could indeed never occur. Rather, a small portion of phase (1), called a nucleus, fluctuates to look like phase (2). The nucleus is at the barrier top in fig. 8.1. From this nucleus, phase (2) grows downhill in chemical potential if it is at lower free energy (P2). Thus the transition itself relies on microscopic fluctuations, and microscopic information is required to determine the barrier height, which is rather small. We need statistical mechanics to compute rates.
If we are interested only in equilibrium, not how we get there, we can treat the phase transition like any other chemical reaction: $A^{(1)}$ and $A^{(2)}$ interconvert to yield mole numbers $n^{(1)}_{eq}$ & $n^{(2)}_{eq}$ or concentrations or pressures that minimize $G$: $A^{(1)} \rightarrow A^{(2)}$ $G(T,P,n^{(i)})=\mu^{(1)}n^{(1)} +\mu^{(2)}n^{(2)} = \mu^{(1)}n^{(1)} + \mu^{(2)}(n-n^{(1)})$ where $n$ is a constant. At equilibrium $dG = 0$: since $dn^{(1)} = -dn^{(2)}$, $dG = \mu^{(1)} dn^{(1)} +\mu^{(2)} dn^{(2)} = (\mu^{(2)}-\mu^{(1)}) dn^{(2)} = 0$ $\Rightarrow \mu^{(1)} = \mu^{(2)}$ Fig. 8.2: Chemical potential of two phases as a function of temperature and pressure (Gibbs ensemble). At high $T$, phase 1 is more stable, at (3)-(5) both phases coexist, at low $T$ phase 2 is more stable. In this diagram at high $T$ and $P$, the two chemical potentials become degenerate and only one phase exists at (6). Definition: First Order Phase Transition A 1st order phase transition occurs when the chemical potential difference $\Delta \mu$ between two phases separated by a barrier vanishes. Definition: Critical Phase Transition A critical phase transition occurs when the chemical potential barrier between two phases just vanishes.
Calorimetry is the process of measuring the amount of heat released or absorbed during a chemical reaction. By knowing the change in heat, it can be determined whether a reaction is exothermic (releases heat) or endothermic (absorbs heat). Calorimetry also plays a large part in everyday life, governing the metabolic rates in humans and consequently maintaining functions such as body temperature. • Constant Pressure Calorimetry Because calorimetry is used to measure the heat of a reaction, it is a crucial part of thermodynamics. In order to measure the heat of a reaction, the reaction must be isolated so that no heat is lost to the environment. This is achieved by use of a calorimeter, which insulates the reaction to better contain heat. Coffee cups are often used as a quick and easy-to-make calorimeter for constant pressure measurements. More sophisticated bomb calorimeters are built for use at constant volume. • Constant Volume Calorimetry Constant volume (bomb) calorimetry is used to measure the heat of a reaction while holding volume constant and resisting large amounts of pressure. Although these two aspects of bomb calorimetry make for accurate results, they also contribute to the difficulty of bomb calorimetry. Here, the basic assembly of a bomb calorimeter will be addressed, as well as how bomb calorimetry relates to the heat of reaction and heat capacity and the calculations involved in regard to these two topics. • Differential Scanning Calorimetry Differential scanning calorimetry is a specific type of calorimetry including both a sample substance and a reference substance, residing in separate chambers. While the reference chamber contains only a solvent, the sample chamber contains an equal amount of the same solvent in addition to the substance of interest, of which the ΔH is being determined. The ΔH due to the solvent is constant in both chambers, so any difference can be attributed to the presence of the substance of interest.
Contributors and Attributions • Michelle Dube, Allison Billings (UCD), Rachel Morris (UCD), Ryan Starr (UCD) Constant Volume Calorimetry Learning Objectives Make sure you thoroughly understand the following essential concept: • Describe a simple calorimeter and explain how it is employed and how its heat capacity is determined. Constant volume calorimetry, also known as bomb calorimetry, is used to measure the heat of a reaction while holding volume constant and resisting large amounts of pressure. Although these two aspects of bomb calorimetry make for accurate results, they also contribute to the difficulty of bomb calorimetry. In this module, the basic assembly of a bomb calorimeter will be addressed, as well as how bomb calorimetry relates to the heat of reaction and heat capacity and the calculations involved in regard to these two topics. Introduction Calorimetry is used to measure quantities of heat, and can be used to determine the heat of a reaction through experiments. Usually a coffee-cup calorimeter is used since it is simpler than a bomb calorimeter, but to measure the heat evolved in a combustion reaction, constant volume or bomb calorimetry is ideal. A constant volume calorimeter is also more accurate than a coffee-cup calorimeter, but it is more difficult to use since it requires a well-built reaction container that is able to withstand the large pressure changes that occur in many chemical reactions.
Most serious calorimetry carried out in research laboratories involves the determination of heats of combustion $\Delta H_{combustion}$, since these are essential to the determination of standard enthalpies of formation of the thousands of new compounds that are prepared and characterized each month. In a constant volume calorimeter, the system is sealed and isolated from its surroundings, so its volume is fixed and no pressure-volume work is done. A bomb calorimeter structure consists of the following: • Steel bomb which contains the reactants • Water bath in which the bomb is submerged • Thermometer • A motorized stirrer • Wire for ignition Since the process takes place at constant volume, the reaction vessel must be constructed to withstand the high pressure resulting from the combustion process, which amounts to a confined explosion. The vessel is usually called a “bomb”, and the technique is known as bomb calorimetry. The reaction is initiated by discharging a capacitor through a thin wire which ignites the mixture. Another consequence of the constant-volume condition is that the heat released corresponds to $q_v$, and thus to the internal energy change $ΔU$ rather than to $ΔH$. The enthalpy change is calculated according to the formula $ΔH = q_v + Δn_gRT$ where $Δn_g$ is the change in the number of moles of gases in the reaction. Example $1$: Combustion of Biphenyl A sample of biphenyl ($\ce{(C6H5)2}$) weighing 0.526 g was ignited in a bomb calorimeter initially at 25°C, producing a temperature rise of 1.91 K. In a separate calibration experiment, a sample of benzoic acid ($\ce{C6H5COOH}$) weighing 0.825 g was ignited under identical conditions and produced a temperature rise of 1.94 K. For benzoic acid, the heat of combustion at constant volume is known to be 3,226 kJ mol$^{–1}$ (that is, $ΔU = –3,226$ kJ mol$^{–1}$). Use this information to determine the standard enthalpy of combustion of biphenyl.
Solution Begin by working out the calorimeter constant: • Moles of benzoic acid: $\dfrac{0.825\; g}{122.1 \;g/mol} = 0.00676\; mol \nonumber$ • Heat released to calorimeter: $(0.00676\; mol) \times (3226\; kJ/mol) = 21.80\; kJ \nonumber$ • Calorimeter constant: $\dfrac{21.80\; kJ}{1.94\; K} = 11.24\; kJ/K \nonumber$ Now determine $ΔU_{combustion}$ of the biphenyl ("BP"): • moles of biphenyl: $\dfrac{0.526\; g}{154.21\; g/mol} = 0.00341 \; mol \nonumber$ • heat released to calorimeter: $(1.91\; K) \times (11.24\; kJ/K) = 21.46\; kJ \nonumber$ • heat released per mole of biphenyl: $\dfrac{21.46\; kJ}{0.00341\; mol} = 6,293\; kJ/mol \nonumber$ $ΔU_{combustion} (BP) = –6,293\; kJ/mol \nonumber$ This is the heat change at constant volume, $q_v$; the negative sign indicates that the reaction is exothermic, as all combustion reactions are. From the balanced reaction equation $\ce{(C6H5)2(s) + 29/2 O2(g) \rightarrow 12 CO2(g) + 5 H2O(l)} \nonumber$ we can calculate the change in the moles of gases for this reaction $Δn_g = 12 - \frac{29}{2} = \frac{-5}{2} \nonumber$ Thus the volume of the system decreases when the reaction takes place. Converting to $ΔH$, we can write the following equation. Additionally, recall that at constant volume, $ΔU = q_V$. \begin{align*} ΔH &= q_V + Δn_gRT \\[4pt] &= ΔU -\left( \dfrac{5}{2}\right) (8.314\; J\; mol^{-1}\; K^{-1}) (298 \;K) \\[4pt] &= (-6,293 \; kJ/mol)–(6,194\; J/mol) \\[4pt] &= (-6,293-6.2)\;kJ/mol \\[4pt] &= -6299 \; kJ/mol \end{align*} A common mistake here is to forget that the subtracted term is in J, not kJ. Note that the additional 6.2 kJ in $ΔH$ compared to $ΔU$ reflects the work that the surroundings do on the system as the volume of gases decreases according to the reaction equation. Determining the Heat of Reaction The amount of heat that the system gives up to its surroundings so that it can return to its initial temperature is the heat of reaction.
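The arithmetic of this example is easy to script. The sketch below reproduces the worked values above to rounding; the molar masses are standard data.

```python
# Bomb-calorimetry arithmetic for the biphenyl example:
# calibrate with benzoic acid, then convert q_v to ΔU and ΔH.
m_ba, M_ba = 0.825, 122.1      # benzoic acid sample mass (g), molar mass (g/mol)
dU_ba = 3226.0                 # kJ/mol released at constant volume (calibration)
dT_cal, dT_bp = 1.94, 1.91     # K temperature rises: calibration and biphenyl runs
m_bp, M_bp = 0.526, 154.21     # biphenyl sample mass (g), molar mass (g/mol)

C_cal = (m_ba / M_ba) * dU_ba / dT_cal   # calorimeter constant, kJ/K
q_bp = C_cal * dT_bp                     # heat released by biphenyl, kJ
dU_bp = -q_bp / (m_bp / M_bp)            # kJ/mol (negative: exothermic)

dn_g = 12 - 29 / 2                       # Δn_gas from the balanced equation
dH_bp = dU_bp + dn_g * 8.314e-3 * 298.15 # ΔH = ΔU + Δn_g·R·T, kJ/mol

print(f"C_cal = {C_cal:.2f} kJ/K")
print(f"ΔU = {dU_bp:.0f} kJ/mol, ΔH = {dH_bp:.0f} kJ/mol")
```

Note that the $\Delta n_g RT$ term must be converted from J to kJ before it is added, the same pitfall flagged in the worked solution.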
The heat of reaction is just the negative of the thermal energy gained by the calorimeter and its contents ($q_{calorimeter}$) through the combustion reaction. $q_{rxn} = -q_{calorimeter} \label{2A}$ where $q_{calorimeter} = q_{bomb} + q_{water} \label{3A}$ If the constant volume calorimeter is set up the same way as before (same steel bomb, same amount of water, etc.), then the heat capacity of the calorimeter can be used in the following formula: $q_{calorimeter} = \text{( heat capacity of calorimeter)} \times \Delta{T} \label{4A}$ Heat capacity is defined as the amount of heat needed to increase the temperature of the entire calorimeter by 1 °C. The $q_{calorimeter}$ obtained from Equation \ref{4A} can then be inserted into Equation \ref{2A} to obtain $q_{rxn}$. The heat capacity of the calorimeter itself is determined in a calibration experiment, by burning a sample with a known heat of combustion. Example $4$: Heat of Combustion 1.150 g of sucrose goes through combustion in a bomb calorimeter. If the temperature rose from 23.42 °C to 27.64 °C and the heat capacity of the calorimeter is 4.90 kJ/°C, then determine the heat of combustion of sucrose, $\ce{C12H22O11}$ (in kJ per mole of $\ce{C12H22O11}$).
Solution Given: • mass of $C_{12}H_{22}O_{11}$: 1.150 g • $T_{initial}$: 23.42 °C • $T_{final}$: 27.64 °C • Heat Capacity of Calorimeter: 4.90 kJ/°C Using Equation \ref{4A} to calculate $q_{calorimeter}$: \begin{align*} q_{calorimeter} &= (4.90\; kJ/°C) \times (27.64 - 23.42)°C \\[4pt] &= (4.90 \times 4.22) \;kJ = 20.7\; kJ \end{align*} Plug into Equation \ref{2A}: \begin{align*} q_{rxn} &= -q_{calorimeter} \\[4pt] &= -20.7 \; kJ \end{align*} But the question asks for kJ/mol $\ce{C12H22O11}$, so this needs to be converted: \begin{align*} q_{rxn} &= \dfrac{-20.7 \; kJ}{1.150 \; g \; C_{12}H_{22}O_{11}} \\[4pt] &= \dfrac{-18.0 \; kJ}{g\; C_{12}H_{22}O_{11}} \end{align*} Convert to per mole $\ce{C12H22O11}$: \begin{align*} q_{rxn} &= \dfrac{-18.0 \; kJ}{\cancel{g \; \ce{C12H22O11}}} \times \dfrac{342.3 \; \cancel{ g \; \ce{C12H22O11}}}{1 \; mol \; \ce{C12H22O11}} \\[4pt] &= \dfrac{-6.16 \times 10^3 \; kJ \;}{mol \; \ce{C12H22O11}} \end{align*} "Ice Calorimeter" Although calorimetry is simple in principle, its practice is a highly exacting art, especially when applied to processes that take place slowly or involve very small heat changes, such as the germination of seeds. Calorimeters can be as simple as a foam plastic coffee cup, which is often used in student laboratories. Research-grade calorimeters, able to detect minute temperature changes, are more likely to occupy table tops, or even entire rooms. The ice calorimeter is an important tool for measuring the heat capacities of liquids and solids, as well as the heats of certain reactions. This simple yet ingenious apparatus is essentially a device for measuring the change in volume due to melting of ice. To measure a heat capacity, a warm sample is placed in the inner compartment, which is surrounded by a mixture of ice and water. The heat withdrawn from the sample as it cools causes some of the ice to melt. Since ice is less dense than water, the volume of the ice-water mixture in the insulated chamber decreases. 
This causes an equivalent volume of mercury to be sucked into the inner reservoir from the outside container. The loss in weight of this container gives the decrease in volume of the water, and thus the mass of ice melted. This, combined with the heat of fusion of ice, gives the quantity of heat lost by the sample as it cools to 0°C.
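The ice-calorimeter bookkeeping can be sketched numerically. The mercury reading below is a made-up illustration (the text gives no numbers); the densities and the heat of fusion of ice (334 J/g) are standard values.

```python
# Hypothetical reading: 0.50 g of mercury drawn into the inner reservoir
m_hg = 0.50         # g (illustrative value, not from the text)
rho_hg = 13.53      # g/mL, density of mercury
rho_ice = 0.917     # g/mL, density of ice
rho_water = 1.000   # g/mL, density of water
Lf = 334.0          # J/g, heat of fusion of ice

dV = m_hg / rho_hg  # mL decrease in volume of the ice-water mixture
# Melting m grams of ice shrinks the volume by m*(1/rho_ice - 1/rho_water)
m_ice = dV / (1/rho_ice - 1/rho_water)   # g of ice melted
q_sample = m_ice * Lf                    # J of heat given up by the sample
```

Note the direction of the inference: mercury mass → volume change → mass of ice melted → heat lost by the sample.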
Calorimetry involves the experimental quantification of heat released in a chemical process, either a reaction or a conformational alteration. It can be used to determine parameters such as the Heat of Reaction ($Δ_{r}H$), which is the change in enthalpy associated with the process of a chemical reaction. When $Δ_{r}H$ is a negative value, the process is exothermic and releases heat; when $Δ_{r}H$ is a positive value, the process is endothermic and requires heat input. Calorimetry uses a closed system, meaning there is a system separated from its surroundings by some boundary, through which heat and energy but not mass are able to flow. Calorimetry may be conducted at either constant pressure or volume and allows one to monitor the change in temperature as a result of the chemical process being investigated. Introduction Differential scanning calorimetry is a specific type of calorimetry including both a sample substance and a reference substance, residing in separate chambers. While the reference chamber contains only a solvent (such as water), the sample chamber contains an equal amount of the same solvent in addition to the substance of interest, of which the ΔrH is being determined. The $Δ_{r}H$ due to the solvent is constant in both chambers, so any difference between the two can be attributed to the presence of the substance of interest. Each chamber is heated by a separate source in a way that their temperatures are always equal. This is accomplished through the use of thermocouples; the temperature of each chamber is constantly monitored and if a temperature difference is detected, then heat will be added to the cooler chamber to compensate for the difference. The heating rate used to maintain equivalent temperatures is logged as a function with respect to the temperature. 
For example, if the experimental goal is to determine the $Δ_{r}H$ of a protein denaturation process, the reference cell could contain 100 mL H2O, and the sample cell could contain 1 mg of the protein in addition to the same 100 mL H2O. Therefore, the contribution of the solvent (H2O) to the heat capacity of each cell would be equal, and the only difference would be the presence of the protein in the sample chamber. Equations The enthalpy change is obtained by integrating the heat capacity over the temperature range of the scan: $\Delta H = \int^{T_f}_{T_i}nC_p \,dT$ where $\Delta H$ is the change in enthalpy, $C_p$ is the molar heat capacity at constant pressure, $n$ is the number of moles of material, and $T_i$ and $T_f$ are the initial and final temperatures, respectively. If $C_p$ is constant over the interval, this integrates to $\Delta H = nC_p \Delta T$ where $\Delta T = T_f - T_i$ is the change in temperature. Differential Thermograms The output yielded by differential scanning calorimetry is called a differential thermogram, which plots the required heat flow against temperature. Data analysis is highly dependent on the assumption that both the reference and sample cells are constantly and accurately maintained at equal temperatures. This graph indicates the change in power (electrical heat) as the temperatures of the two cells are gradually increased. A change in specific heat results in a small change in power, and can be either positive or negative depending on the particular process. The onset of an endothermic reaction will cause an increase in power as temperature increases, since additional heat is required to drive the reaction and still maintain the reference temperature. When an exothermic reaction occurs, the opposite effect is observed; power decreases because heat is released by the reaction and less power is required to maintain equivalent temperatures in the chambers. 
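The integrated relation can be sketched numerically; the sample size and heat capacity below are assumed illustrative values (the molar heat capacity is roughly that of liquid water).

```python
# ΔH = ∫ n Cp dT; for a Cp that is constant over the scan, ΔH = n Cp ΔT
n = 0.10                # mol of sample (assumed)
Cp = 75.3               # J mol^-1 K^-1 (≈ molar heat capacity of liquid water)
Ti, Tf = 298.0, 308.0   # K, scan limits

dH = n * Cp * (Tf - Ti)   # total enthalpy change, J
```

For a temperature-dependent Cp, the integral would instead be evaluated numerically over the recorded thermogram.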
Examples Differential scanning calorimetry can be used to study many different fields including biopolymer energetics where it is used to find the enthalpy of the protein denaturation process. A protein can be changed from its native state, in which it has a specific conformation due to non-covalent intramolecular interactions, to a denatured state where this characteristic structure is altered. Analysis of proteins through DSC can provide both the enthalpy of denaturation and information about the cooperativity of the denaturation process. A sharper peak in the thermogram indicates a higher level of cooperativity, meaning that when one structural association is disturbed, the likelihood of disruption at other points of association will be enhanced. DSC is also used in conjunction with differential thermal analysis. Through the combination of these two techniques, thermal behavior of inorganic compounds can be studied while the melting, boiling and decomposition points of organic compounds and polymers are found. Contributors and Attributions • Alyssa Cassabaum and Valerie Winton (HOPE)
Contributors and Attributions Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook Chemical Energetics What is Energy? Energy is one of the most fundamental and universal concepts of physical science, but one that is remarkably difficult to define in a way that is meaningful to most people. This perhaps reflects the fact that energy is not a “thing” that exists by itself, but is rather an attribute of matter (and also of electromagnetic radiation) that can manifest itself in different ways. It can be observed and measured only indirectly through its effects on matter that acquires, loses, or possesses it. The concept that we call energy was very slow to develop; it took more than a hundred years just to get people to agree on the definitions of many of the terms we use to describe energy and the interconversion between its various forms. But even now, most people have some difficulty in explaining what it is; somehow, the definition we all learned in elementary science ("the capacity to do work") seems less than adequate to convey its meaning. Although the term "energy" was not used in science prior to 1802, it had long been suggested that certain properties related to the motions of objects exhibit an endurance which is incorporated into the modern concept of "conservation of energy". In the 17th century, the great mathematician Gottfried Leibniz (1646-1716) suggested the distinction between vis viva ("live force") and vis mortua ("dead force"), which later became known as kinetic energy (1829) and potential energy (1853). Kinetic energy and potential energy Whatever energy may be, there are basically two kinds. Kinetic energy is associated with the motion of an object, and its direct consequences are part of everyone's daily experience; the faster the ball you catch in your hand, and the heavier it is, the more you feel it. Quantitatively, a body with a mass m and moving at a velocity v possesses the kinetic energy mv²/2. 
Example 1 A rifle shoots a 4.25 g bullet at a velocity of 965 m s–1. What is its kinetic energy? Solution The only additional information you need here is that 1 J = 1 kg m² s–2: KE = ½ × (0.00425 kg) × (965 m s–1)² = 1980 J Potential energy is energy a body has by virtue of its location. But there is more: the body must be subject to a "restoring force" of some kind that tends to move it to a location of lower potential energy. Think of an arrow that is subjected to the force from a stretched bowstring; the more tightly the arrow is pulled back against the string, the more potential energy it has. More generally, the restoring force comes from what we call a force field— a gravitational, electrostatic, or magnetic field. We observe the consequences of gravitational potential energy all the time, such as when we walk, but seldom give it any thought. If an object of mass m is raised off the floor to a height h, its potential energy increases by mgh, where g is a proportionality constant known as the acceleration of gravity; its value at the earth's surface is 9.8 m s–2. Example 2 Find the change in potential energy of a 2.6 kg textbook that falls from the 66-cm height of a table top onto the floor. Solution PE = m g h = (2.6 kg)(9.8 m s–2)(0.66 m) = 16.8 kg m² s–2 = 16.8 J Similarly, the potential energy of a particle having an electric charge q depends on its location in an electrostatic field. "Chemical energy" Electrostatic potential energy plays a major role in chemistry; the potential energies of electrons in the force field created by atomic nuclei lie at the heart of the chemical behavior of atoms and molecules. "Chemical energy" usually refers to the energy that is stored in the chemical bonds of molecules. These bonds form when electrons are able to respond to the force fields created by two or more atomic nuclei, so they can be regarded as manifestations of electrostatic potential energy. 
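Examples 1 and 2 above can be checked with two one-line functions; this is just a sketch of the formulas KE = ½mv² and PE = mgh.

```python
def kinetic_energy(m_kg, v_ms):
    """KE = ½ m v², in joules."""
    return 0.5 * m_kg * v_ms**2

def potential_energy(m_kg, h_m, g=9.8):
    """PE = m g h, in joules (g in m s^-2)."""
    return m_kg * g * h_m

ke = kinetic_energy(0.00425, 965)   # Example 1: the 4.25 g bullet
pe = potential_energy(2.6, 0.66)    # Example 2: the falling textbook
```

Both values agree with the hand calculations (about 1980 J and 16.8 J).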
In an exothermic chemical reaction, the electrons and nuclei within the reactants undergo rearrangement into products possessing lower energies, and the difference is released to the environment in the form of heat. Interconversion of potential and kinetic energy Transitions between potential and kinetic energy are such an intimate part of our daily lives that we hardly give them a thought. It happens in walking as the body moves up and down. Our bodies utilize the chemical energy in glucose to keep us warm and to move our muscles. In fact, life itself depends on the conversion of chemical energy to other forms. Energy is conserved: it can neither be created nor destroyed. So when you go uphill, your kinetic energy is transformed into potential energy, which gets changed back into kinetic energy as you coast down the other side. And where did the kinetic energy you expended in pedaling uphill come from? By conversion of some of the chemical potential energy in your breakfast cereal. • When you drop a book, its potential energy is transformed into kinetic energy. When it strikes the floor, this transformation is complete. What happens to the energy then? The kinetic energy that at the moment of impact was situated exclusively in the moving book now becomes shared between the book and the floor, in the form of randomized thermal motions of the molecular units of which they are made; we can observe this effect as a rise in temperature. • Much of the potential energy of falling water can be captured by a water wheel or other device that transforms its kinetic energy into useful mechanical work. The output of a hydroelectric plant is directly proportional to the height of the water above the level of the generator turbines in the valley below. At this point, the kinetic energy of the exit water is transferred to that of the turbine, most of which (up to 90 percent in the largest installations) is then converted into electrical energy. 
• Will the temperature of the water at the bottom of a waterfall be greater than that at the top? James Joule himself predicted that it would be. It has been calculated that at Niagara Falls, complete conversion of the potential energy of 1 kg of water at the top into kinetic energy when it hits the plunge pool 58 meters below will result in a temperature increase of about 0.14 C°. (But there are lots of complications. For example, some of the water breaks up into tiny droplets as it falls, and water evaporates from droplets quite rapidly, producing a cooling effect.) • Chemical energy can also be converted, at least partially, into electrical energy: this is what happens in a battery. If a highly exothermic reaction also produces gaseous products, the latter may expand so rapidly that the result is an explosion — a net conversion of chemical energy into kinetic energy (including sound). Thermal energy Kinetic energy is associated with motion, but in two different ways. For a macroscopic object such as a book or a ball, or a parcel of flowing water, it is simply given by ½ mv². However, as we mentioned above, when an object is dropped onto the floor, or when an exothermic chemical reaction heats surrounding matter, the kinetic energy gets dispersed into the molecular units in the environment. This "microscopic" form of kinetic energy, unlike that of a speeding bullet, is completely random in the kinds of motions it exhibits and in its direction. We refer to this as "thermalized" kinetic energy, or more commonly simply as thermal energy. We observe the effects of this as a rise in the temperature of the surroundings. The temperature of a body is a direct measure of the quantity of thermal energy it contains. Thermal energy is never completely recoverable Once kinetic energy is thermalized, only a portion of it can be converted back into potential energy. The remainder simply gets dispersed and diluted into the environment, and is effectively lost. 
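The Niagara Falls figure quoted above follows from equating the potential energy mgh with the heat mcΔT, so that ΔT = gh/c:

```python
g = 9.8        # m s^-2, acceleration of gravity
h = 58.0       # m, height of the plunge
c = 4184.0     # J kg^-1 K^-1, specific heat of water

dT = g * h / c   # K; the mass cancels out of m*g*h = m*c*ΔT
```

This reproduces the ≈ 0.14 C° estimate, ignoring spray and evaporation.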
To summarize, then: • Potential energy can be converted entirely into kinetic energy. • Potential energy can also be converted, with varying degrees of efficiency, into electrical energy. • The kinetic energy of macroscopic objects can be transferred between objects (barring the effects of friction). • Once kinetic energy becomes thermalized, only a portion of it can be converted back into either potential energy or the organized kinetic energy of a macroscopic object. This limitation, which has nothing to do with technology but is a fundamental property of nature, is the subject of the second law of thermodynamics. • A device that is intended to accomplish the partial transformation of thermal energy into organized kinetic energy is known as a heat engine. Energy scales are always arbitrary You might at first think that a book sitting on the table has zero kinetic energy since it is not moving. But if you think about it, the earth itself is moving; it is spinning on its axis, it is orbiting the sun, and the sun itself is moving away from the other stars in the general expansion of the universe. Since these motions are normally of no interest to us, we are free to adopt an arbitrary scale in which the velocity of the book is measured with respect to the table; on this so-called laboratory coordinate system, the kinetic energy of the book can be considered zero. We do the same thing with potential energy. If the book is on the table, its potential energy with respect to the surface of the table will be zero. If we adopt this as our zero of potential energy, and then push the book off the table, its potential energy will be negative after it reaches the floor. Energy units Energy is measured in terms of its ability to perform work or to transfer heat. Mechanical work is done when a force f displaces an object by a distance d: \[w = f × d\] The basic unit of energy is the joule. 
One joule is the amount of work done when a force of 1 newton acts over a distance of 1 m; thus 1 J = 1 N-m. The newton is the amount of force required to accelerate a 1-kg mass by 1 m/sec², so the basic dimensions of the joule are kg m² s–2. The other two units in wide use, the calorie and the BTU (British thermal unit), are defined in terms of the heating effect on water. Because of the many forms that energy can take, there are a correspondingly large number of units in which it can be expressed, a few of which are summarized below. 1 calorie will raise the temperature of 1 g of water by 1 C°. The “dietary” calorie is actually 1 kcal. An average young adult expends about 1800 kcal per day just to stay alive. (you should know this definition) 1 cal = 4.184 J 1 BTU (British Thermal Unit) will raise the temperature of 1 lb of water by 1 F°. 1 BTU = 1055 J The erg is the c.g.s. unit of energy and a very small one; the work done when a 1-dyne force acts over a distance of 1 cm. 1 J = 10⁷ ergs 1 erg = 1 dyne-cm = 1 g cm² s–2 The electron-volt is even tinier: 1 eV is the work required to move one electronic charge (1.602 × 10–19 C) through a potential difference of 1 volt. 1 J = 6.24 × 10¹⁸ eV The watt is a unit of power, which measures the rate of energy flow in J sec–1. Thus the watt-hour is a unit of energy. An average human consumes energy at a rate of about 100 watts; the brain alone runs at about 5 watts. 1 J = 2.78 × 10–4 watt-hr 1 w-h = 3.6 kJ The liter-atmosphere is a variant of force-displacement work associated with volume changes in gases. 1 L-atm = 101.325 J The huge quantities of energy consumed by cities and countries are expressed in quads; the therm is a similar but smaller unit. 1 quad = 10¹⁵ BTU = 1.05 × 10¹⁸ J If the object is to obliterate cities or countries with nuclear weapons, the energy unit of choice is the ton of TNT equivalent. 
1 ton of TNT = 4.184 GJ (by definition) In terms of fossil fuels, we have the barrel-of-oil equivalent, cubic-meter-of-natural-gas equivalent, and ton-of-coal equivalent. 1 bboe = 6.1 GJ 1 cmge = 37-39 MJ 1 toce = 29 GJ Heat and Work Heat and work are both measured in energy units, so they must both represent energy. How do they differ from each other, and from just plain “energy” itself? In our daily language, we often say that "this object contains a lot of heat", but this is gibberish in thermodynamic terms, although it is ok to say that the object is "hot", indicating that its temperature is high. The term "heat" has a special meaning in thermodynamics: it is a process in which a body (the contents of a tea kettle, for example) acquires or loses energy as a direct consequence of its having a different temperature than its surroundings. Hence, thermal energy can only flow from a higher temperature to a lower temperature. It is this flow that constitutes "heat". Use of the term "flow" of heat recalls the incorrect 18th-century notion that heat is an actual substance called “caloric” that could flow like a liquid. Note: Heat We often say that "this object contains a lot of heat," however, this makes no sense since heat represents an energy transfer. Transfer of thermal energy can be accomplished by bringing two bodies into physical contact (the kettle on top of the stove, or through an electric heating element inside the kettle). Another mechanism of thermal energy transfer is by radiation; a hot object will convey energy to any body in sight of it via electromagnetic radiation in the infrared part of the spectrum. In many cases, both modes will be active. Work refers to the transfer of energy by some means that does not depend on a temperature difference. Work, like energy, can take various forms, the most familiar being mechanical and electrical. 
• Mechanical work arises when an object moves a distance Δx against an opposing force f: \[w = f Δx\] • Electrical work is done when a body having a charge q moves through a potential difference ΔV: \[w = q ΔV\] Note: Work A transfer of energy to or from a system by any means other than heat is called “work”. Work can be completely converted into heat (by friction, for example), but heat can only be partially converted to work. Conversion of heat into work is accomplished by means of a heat engine, the most common example of which is an ordinary gasoline engine. The science of thermodynamics developed out of the need to understand the limitations of steam-driven heat engines at the beginning of the Industrial Age. The Second Law of Thermodynamics states that the complete conversion of heat into work is impossible. Something to think about when you purchase fuel for your car! Contributors and Attributions Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook
Learning Objectives You are expected to be able to define and explain the significance of terms identified in italic type. • As a homogeneous chemical reaction proceeds, the Gibbs energies of the reactants become more negative and those of the products more positive as the composition of the system changes. • The total Gibbs energy of the system (reactants + products) always decreases as the reaction proceeds. Eventually it reaches a minimum value at a system composition that defines the equilibrium composition of the system, after which time no further net change will occur. • The equilibrium constant for the reaction is determined by the standard Gibbs energy change: ΔG° = -RT ln Kp • The sign of the temperature dependence of the equilibrium constant is governed by the sign of ΔH°. This is the basis of the Le Chatelier Principle. • The Gibbs energies of solid and liquid components are constants that do not change with composition. Thus in heterogeneous reactions such as phase changes, the total Gibbs energy does not pass through a minimum and when the system is not at equilibrium only all-products or all-reactants will be stable. • Two reactions are coupled when the product of one reaction is consumed in the other. If ΔG° for the first reaction is positive, the overall process can still be spontaneous if ΔG° for the second reaction is sufficiently negative— in which case the second reaction is said to "drive" the first reaction. Under conditions of constant temperature and pressure, chemical change will tend to occur in whatever direction leads to a decrease in the value of the Gibbs energy. In this lesson we will see how G varies with the composition of the system as reactants change into products. When G falls as far as it can, all net change comes to a stop. The equilibrium composition of the mixture is determined by ΔG°, which also defines the equilibrium constant K. 
The Road to Equilibrium is Down the Gibbs Energy Hill This means, of course, that if the total Gibbs energy $G$ of a mixture of reactants and products goes through a minimum value as the composition changes, then all net change will cease— the reaction system will be in a state of chemical equilibrium. You will recall that the relative concentrations of reactants and products in the equilibrium state is expressed by the equilibrium constant. In this lesson we will examine the relation between the Gibbs energy change for a reaction and the equilibrium constant. To keep things as simple as possible, we will consider a homogeneous chemical reaction of the form $A + B \rightleftharpoons C + D$ in which all components are gases at the temperature of interest. If the sum of the standard Gibbs energies of the products is less than that of the reactants, ΔG° for the reaction will be negative and the reaction will proceed to the right. But how far? If the reactants are completely transformed into products, the equilibrium constant would be infinity. The equilibrium constants we actually observe all have finite values, implying that even if the products have a lower Gibbs energy than the reactants, some of the latter will always remain when the process comes to equilibrium. A homogeneous reaction is one in which everything takes place in a single gas or liquid phase. 
To understand how equilibrium constants relate to ΔG° values, assume that all of the reactants are gases, so that the Gibbs energy of gas A, for example, is given at all times by $G_A = G_A^° + RT \ln P_A \label{5-1}$ The Gibbs energy change for the reaction is the sum of the Gibbs energies of the products, minus the sum of the Gibbs energies of the reactants: $\Delta G = \underbrace{G_C + G_D}_{\text{products}} \underbrace{– G_A – G_B}_{\text{reactants}} \label{5-2}$ Using Equation $\ref{5-1}$ to expand each term on the right of Equation \ref{5-2}, we have $\Delta G = (G^°_C + RT \ln P_C) + (G^°_D + RT \ln P_D) – (G^°_B + RT \ln P_B) – (G^°_A + RT \ln P_A) \label{5-3}$ We can now express the $G^°$ terms collectively as $\Delta G^°$, and combine the logarithmic pressure terms into a single fraction $\Delta G = \Delta G° + RT \ln \left( \dfrac{P_CP_D}{P_AP_B} \right) \label{5-4}$ which is more conveniently expressed in terms of the reaction quotient $Q$. $\Delta{G} = \Delta G^° + RT \ln Q \label{5-5}$ The Gibbs energy $G$ is a quantity that decreases during the course of any natural process. Thus as a chemical reaction takes place, $G$ only falls and will never become more positive. Eventually a point is reached where any further transformation of reactants into products would cause $G$ to increase. At this point $G$ is at a minimum (see below), and no further net change can take place; the reaction is then at equilibrium. Although Equations \ref{5-1}-\ref{5-5} are strictly correct only for perfect gases, we will see later that equations of similar form can be applied to many liquid solutions by substituting concentrations for pressures. Example $1$: Dissociation of Dinitrogen Tetroxide Consider the gas-phase dissociation reaction $\ce{N_2O_4 \rightarrow 2 NO_2 } \nonumber$ which is a simple example of the Gibbs energy relationships in a homogeneous reaction. 
The Gibbs energy of 1 mole of N2O4 (1) is smaller than that of 2 moles of NO2 (2) by 5.3 kJ; thus $\Delta G^o = +5.3\, \text{kJ}$ for the complete transformation of reactants into products. The straight diagonal line shows the Gibbs energy of all possible compositions if the two gases were prevented from mixing. The red curved line show the Gibbs energy of the actual reaction mixture. This passes through a minimum at (3) where 0.814 mol of $N_2O_4$ are in equilibrium with 0.372 mol of $NO_2$. The difference (4) corresponds to the Gibbs energy of mixing of reactants and products which always results in an equilibrium mixture whose Gibbs energy is lower than that of either pure reactants or pure products. Thus some amount of reaction will occur even if ΔG° for the process is positive. What’s the difference between ΔG and ΔG°? It’s very important to be aware of this distinction; that little ° symbol makes a world of difference! First, the standard Gibbs energy change ΔG° has a single value for a particular reaction at a given temperature and pressure; this is the difference $\sum G^°_{f} (\text{products}) – \sum G^°_{f}(\text{reactants})$ that are tabulated in thermodynamic tables. It corresponds to the Gibbs energy change for a process that never really happens: the complete transformation of pure N2O4 into pure NO2 at a constant pressure of 1 atm. The other quantity $\Delta G$, defined by Equation $\ref{5-5}$, represents the total Gibbs energies of all substances in the reaction mixture at any particular system composition. In contrast to $\Delta G^°$ which is a constant for a given reaction, $\Delta G$ varies continuously as the composition changes, finally reaching zero at equilibrium. $\Delta G$ is the “distance” (in Gibbs energy) from the equilibrium state of a given reaction. 
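The distinction between the fixed ΔG° and the composition-dependent ΔG can be made concrete with a short sketch of Equation 5-5, using the ΔG° ≈ +5.3 kJ of the N2O4 example (pressures in atm, Q taken as a pressure quotient):

```python
import math

R, T = 8.314, 298.0
dG0 = 5300.0   # J/mol for N2O4 -> 2 NO2, from this example

def delta_G(P_NO2, P_N2O4):
    """ΔG = ΔG° + RT ln Q, with Q = P_NO2**2 / P_N2O4 (Equation 5-5)."""
    Q = P_NO2**2 / P_N2O4
    return dG0 + R * T * math.log(Q)
```

At Q = 1, ΔG equals ΔG°; at the equilibrium composition, where Q = K = exp(−ΔG°/RT), ΔG vanishes.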
Thus for the limiting cases of pure $\ce{N_2O_4}$ or pure $\ce{NO_2}$ (as far from the equilibrium state as the system can be!), the reaction quotient $Q = \dfrac{[NO_2]^2}{[N_2O_4]}$ goes to 0 or to ∞ respectively, so the logarithm in Equation $\ref{5-5}$, and with it the value of $\Delta G$, approaches the corresponding asymptotic limits (1) or (2). As the reaction proceeds in the appropriate direction $\Delta G$ approaches zero; once there (3), the system is at its equilibrium composition and no further net change will occur. Example $2$: Isomerization of Butane The standard molar Gibbs energy change for this very simple reaction is –2.26 kJ, but mixing of the unreacted butane with the product brings the Gibbs energy of the equilibrium mixture down to about –3.1 kJ mol–1 at the equilibrium composition corresponding to 77 percent conversion. Notice particularly that • The sum of the Gibbs energies of the two gases (n-butane and iso-butane) separately varies linearly with the composition of the mixture (red line). • The green curve adds the Gibbs energy of mixing to the above sum; its minimum defines the equilibrium composition. • As the composition approaches the equilibrium value, $ΔG$ (which denotes how much farther the Gibbs energy of the system can fall) approaches zero. The detailed calculations that lead to the values shown above can be found here. Why reactions lead to mixtures of reactants and products We are now in a position to answer the question posed earlier: if ΔG° for a reaction is negative, meaning that the Gibbs energies of the products are more negative than those of the reactants, why will some of the latter remain after equilibrium is reached? The answer is that no matter how low the Gibbs energy of the products, the Gibbs energy of the system can be reduced even more by allowing some of the products to be "contaminated" (i.e., diluted) by some reactants. Owing to the entropy associated with mixing of reactants and products, no homogeneous reaction will be 100% complete. 
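The mixing effect described above can be reproduced approximately from the entropy of mixing. This is an independent sketch, not the source's own calculation: x is the fraction converted to iso-butane and ΔG° = −2.26 kJ/mol is taken from the butane example; the minimum it finds (about 71% conversion, ≈ −3.1 kJ/mol) is close to, though not identical with, the 77% quoted for the figure.

```python
import math

R, T = 8.314, 298.0
dG0 = -2260.0   # J/mol for n-butane -> iso-butane

def G(x):
    """Gibbs energy per mole of mixture, relative to pure n-butane,
    for fraction x converted, including the entropy of mixing."""
    return x * dG0 + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

# Setting dG/dx = 0 gives x_eq = K/(1 + K) with K = exp(-dG0/RT)
K = math.exp(-dG0 / (R * T))
x_eq = K / (1 + K)
```

Because the mixing term is negative for all 0 < x < 1, the minimum of G always lies at an interior composition, never at pure products.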
An interesting corollary of this is that any reaction for which a balanced chemical equation can be written can in principle take place to some extent, however minute that might be. Gibbs energies of mixing of products with reactants tend to be rather small, so for reactions having ΔG° values that are highly negative or positive (±20 kJ mol–1, say), the equilibrium mixture will, for all practical purposes, be either [almost] "pure" reactants or products. The Equilibrium Constant Now let us return to Equation $\ref{5-5}$ which we reproduce here: $\Delta{G} = \Delta{G^°} + RT \ln Q$ As the reaction approaches equilibrium, $\Delta G$ becomes less negative and finally reaches zero. At equilibrium $\Delta{G} = 0$ and $Q = K$, so we can write (must know this!) $\Delta{G^°} = –RT \ln K_p \label{5-6}$ in which $K_p$, the equilibrium constant expressed in pressure units, is the special value of $Q$ that corresponds to the equilibrium composition. This equation is one of the most important in chemistry because it relates the equilibrium composition of a chemical reaction system to measurable physical properties of the reactants and products. If you know the entropies and the enthalpies of formation of a set of substances, you can predict the equilibrium constant of any reaction involving these substances without the need to know anything about the mechanism of the reaction. Instead of writing Equation $\ref{5-6}$ in terms of Kp, we can use any of the other forms of the equilibrium constant such as Kc (concentrations), Kx (mole fractions), Kn(numbers of moles), etc. Remember, however, that for ionic solutions especially, only the Ka, in which activities are used, will be strictly valid. 
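Equation 5-6 turns tabulated thermodynamic data directly into an equilibrium constant. As a sketch, here are the numbers for the neutralization reaction worked in Example 3 below (ΔH° = −55.8 kJ/mol; ΔS° = 70.0 − (−10.9) = +80.9 J K⁻¹ mol⁻¹):

```python
import math

R, T = 8.314, 298.0
dH = -55.8e3    # J/mol, from the tabulated enthalpies of formation
dS = 80.9       # J K^-1 mol^-1 (= 70.0 - (-10.9))

dG = dH - T * dS              # standard Gibbs energy change, J/mol
K = math.exp(-dG / (R * T))   # Equation 5-6 solved for K
```

The result, K ≈ 1 × 10¹⁴, is the reciprocal of the ion product of water, as it should be for H⁺ + OH⁻ → H₂O.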
It is often useful to solve Equation $\ref{5-6}$ for the equilibrium constant, yielding $K = \exp {\left ( {-\Delta G^° \over RT} \right )} \label{5-7}$ This relation is most conveniently plotted as $\Delta G^°$ against the logarithm of $K$, as shown in Figure $3$, where it appears as a straight line that passes through the point (0,0). Example $3$ Calculate the equilibrium constant for the reaction from the following thermodynamic data: $\ce{H^{+}(aq) + OH^{–}(aq) <=> H_2O(l)} \nonumber$ $H^+(aq)$ $OH^–(aq)$ $H_2O(l)$ ΔHf°, kJ mol–1 0 –230.0 –285.8 S°, J K–1 mol–1 0* –10.9 70.0 * Note that the standard entropy of the hydrogen ion is zero by definition. This reflects the fact that it is impossible to carry out thermodynamic studies on a single charged species. All ionic entropies are relative to that of $\ce{H^{+}(aq)}$, which explains why some values (as for aqueous hydroxide ion) are negative. Solution From the above data, we can evaluate the following quantities: \begin{align*} \Delta{H}^o &= \sum \Delta H^o_{f}(\text{products}) - \sum \Delta H^o_{f}(\text{reactants}) \\[4pt] &= (–285.8) - (-230) \\[4pt] &= –55.8\, kJ \; mol^{-1} \end{align*} \begin{align*}\Delta{S}^o &= \sum S^o (\text{products}) - \sum S^o (\text{reactants}) \\[4pt] &= (70.0) – (–10.9) \\[4pt] &= +80.9\, J \; K^{-1}\; mol^{-1} \end{align*} The value of $\Delta{G}°$ at 298 K is \begin{align*} \Delta H^o – T\Delta S^o &= (–55800) – (298)(80.9) \\[4pt] &\approx –79900\, J\, mol^{–1} \end{align*} From Equation $\ref{5-7}$ we have \begin{align*} K &= \exp\left(\dfrac{-(-79900)}{8.314 \times 298}\right) \\[4pt] &= e^{32.2} = 1.01 \times 10^{14} \end{align*} Equilibrium and Temperature We have already discussed how changing the temperature will increase or decrease the tendency for a process to take place, depending on the sign of ΔS°.
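The arithmetic of this example is easy to check with a few lines of Python (a sketch of the same calculation, not part of the original worked solution):

```python
import math

R = 8.314   # J K^-1 mol^-1
T = 298.0   # K

# Formation data from the table above: H2O(l) minus [H+(aq) + OH-(aq)]
dH = (-285.8) - (0.0 + (-230.0))   # kJ mol^-1  -> -55.8
dS = 70.0 - (0.0 + (-10.9))        # J K^-1 mol^-1 -> +80.9

dG = dH * 1000.0 - T * dS          # J mol^-1, about -79,900
K = math.exp(-dG / (R * T))        # Equation 5-7

print(f"dG = {dG:.0f} J/mol,  K = {K:.2e}")
```

The result, K ≈ 1.0 × 10^14, is just the reciprocal of the ion product of water, Kw ≈ 1.0 × 10^–14, as expected for the reverse of the autoionization reaction.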
This relation can be developed formally by differentiating the relation $\Delta G^° = \Delta H^° – T\Delta S^° \label{5-8}$ with respect to the temperature: $\dfrac{d(-\Delta G^°)}{dT} = \Delta S^° \label{5-9}$ Hence, the sign of the entropy change determines whether the reaction becomes more or less allowed as the temperature increases. We often want to know how a change in the temperature will affect the value of an equilibrium constant whose value is known at some fixed temperature. Suppose that the equilibrium constant has the value $K_1$ at temperature $T_1$ and we wish to estimate $K_2$ at temperature $T_2$. Expanding Equation $\ref{5-6}$ in terms of $\Delta H^°$ and $\Delta S^°$, we obtain $–RT_1 \ln K_1 = \Delta H^ ° – T_1 \Delta S^°$ and $–RT_2 \ln K_2 = \Delta H ^° – T_2 \Delta S^°$ Dividing the first equation by $–RT_1$ and the second by $–RT_2$, then subtracting, we obtain $\ln K_1 - \ln K_2 = - \left( \dfrac{\Delta H^°}{RT_1} -\dfrac{\Delta H^°}{RT_2} \right) \label{5-10}$ which is most conveniently expressed as the ratio $\ln \dfrac{K_1}{K_2} = - \dfrac{\Delta H^°}{R} \left( \dfrac{1}{T_1} -\dfrac{1}{T_2} \right) \label{5-11}$ This is the theoretical foundation of Le Chatelier's Principle with respect to the effect of temperature on equilibrium: • If the reaction is exothermic ($\Delta H^° < 0$), then raising the temperature makes $\ln K$, and thus $K$, smaller. The equilibrium will then “shift to the left”. • If $\Delta H^° > 0$, then raising the temperature makes $\ln K$ larger, so $K$ will increase and the equilibrium will “shift to the right”. This is an extremely important relationship, but not just because of its use in calculating the temperature dependence of an equilibrium constant. Even more important is its application in the “reverse” direction to experimentally determine ΔH° from two values of the equilibrium constant measured at different temperatures.
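Equation $\ref{5-11}$ is simple enough to implement directly. The Python sketch below (the numerical values in the usage example are illustrative assumptions, roughly in the range of the N2O4 dissociation) uses it in both directions: predicting K at a new temperature from ΔH°, and recovering ΔH° from two measured equilibrium constants:

```python
import math

R = 8.314  # J K^-1 mol^-1

def K2_from_K1(K1, T1, T2, dH):
    """Equation 5-11 solved for K2: ln K2 = ln K1 + (dH/R)(1/T1 - 1/T2)."""
    return K1 * math.exp((dH / R) * (1.0 / T1 - 1.0 / T2))

def dH_from_two_K(K1, T1, K2, T2):
    """The 'reverse' use: extract dH (J/mol) from K measured at two temperatures."""
    return -R * math.log(K1 / K2) / (1.0 / T1 - 1.0 / T2)

# Endothermic example (dH > 0): raising T raises K ("shift to the right").
K1 = 0.15                                    # assumed value at T1 = 298 K
K2 = K2_from_K1(K1, 298.0, 350.0, 57200.0)   # dH = +57.2 kJ/mol (illustrative)
print(K2 > K1)   # True
```

Feeding the two constants back into `dH_from_two_K` recovers the ΔH° that generated them, which is exactly how the "reverse" application described above works in practice.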
Direct calorimetric determinations of heats of reaction are not easy to make; relatively few chemists have the equipment and experience required for this rather exacting task. Measurement of an equilibrium constant is generally much easier, and often well within the capabilities of anyone who has had an introductory Chemistry course. Once the value of ΔH° is determined, it can be combined with the Gibbs energy change (obtained from a single observation of K through Equation $\ref{5-6}$) to allow ΔS° to be calculated through Equation $\ref{5-8}$. Equilibrium Without Mixing: it's all or nothing You should now understand that for homogeneous reactions (those that take place entirely in the gas phase or in solution) the equilibrium composition will never be 100% products, no matter how much lower their Gibbs energy relative to the reactants, as was illustrated by the N2O4-dissociation example discussed previously. This is due to "dilution" of the products by the reactants. In heterogeneous reactions (those which involve more than one phase) this dilution, and the effects that flow from it, may not be possible. A particularly simple but important type of heterogeneous process is phase change. Consider, for example, an equilibrium mixture of ice and liquid water. The concentration of H2O in each phase is dependent only on the density of the phase; there is no way that ice can be “diluted” with water, or vice versa. This means that at all temperatures other than the freezing point, the lowest Gibbs energy state will be that corresponding to pure ice or pure liquid. Only at the freezing point, where the Gibbs energies of water and ice are identical, can both phases coexist, and they may do so in any proportion. Gibbs energy of the ice-water system Only at 0°C can ice and liquid water coexist in any proportion. Note that in contrast to the homogeneous N2O4 example, there is no Gibbs energy minimum at intermediate compositions.
Coupled Reactions Two reactions are said to be coupled when the product of one of them is the reactant in the other: $A \rightarrow B \nonumber$ and $B \rightarrow C \nonumber$ If the standard Gibbs energy of the first reaction is positive but that of the second reaction is sufficiently negative, then ΔG° for the overall process will be negative and we say that the first reaction is “driven” by the second one. This, of course, is just another way of describing an effect that you already know as the Le Chatelier principle: the removal of substance B by the second reaction causes the equilibrium of the first to “shift to the right”. Similarly, the equilibrium constant of the overall reaction is the product of the equilibrium constants of the two steps. 1 Cu2S(s) → 2 Cu(s) + S(s) ΔG° = + 86.2 kJ ΔH° = + 79.5 kJ 2 S(s) + O2(g) → SO2(g) ΔG° = –300.1 kJ ΔH° = – 296.8 kJ 3 Cu2S(s) + O2(g) → 2 Cu(s) + SO2(g) ΔG° = –213.9 kJ ΔH° = – 217.3 kJ In the above example, reaction 1 is the first step in obtaining metallic copper from one of its principal ores. This reaction is endothermic and it has a positive Gibbs energy change, so it will not proceed spontaneously at any temperature. If Cu2S is heated in air, however, the sulfur is removed as rapidly as it is formed by oxidation in the highly spontaneous reaction 2, which supplies the Gibbs energy required to drive 1. The combined process, known as roasting, is of considerable industrial importance and is one of a large class of processes employed for winning metals from their ores. Free Energy and Equilibrium Students often wonder why many chemical reactions yield an equilibrium mixture in which a significant amount of the reactants is present, even though the products have a lower standard free energy than the reactants. One might at first think that as long as any reactants are present, the free energy could be reduced further if conversion of reactants to products were complete.
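Both rules for coupled reactions — the ΔG° values of the steps add, and their equilibrium constants multiply — follow directly from Equation $\ref{5-6}$, and can be checked with the copper-roasting Gibbs energies above (a Python sketch):

```python
import math

R, T = 8.314, 298.0   # J K^-1 mol^-1, K

dG1 = +86.2e3    # Cu2S(s) -> 2 Cu(s) + S(s), J
dG2 = -300.1e3   # S(s) + O2(g) -> SO2(g), J

dG3 = dG1 + dG2                  # coupled steps: Gibbs energies add
K1 = math.exp(-dG1 / (R * T))    # Equation 5-7 applied to each step
K2 = math.exp(-dG2 / (R * T))
K3 = math.exp(-dG3 / (R * T))

# Because the exponents add, the overall K is the product of the step K's.
print(f"dG3 = {dG3/1000:.1f} kJ")          # matches the -213.9 kJ in the table
print(f"K3 / (K1*K2) = {K3/(K1*K2):.6f}")
```

The first step alone has a vanishingly small K1, yet K3 for the overall roasting reaction is enormous: the second reaction "pays for" the first.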
The short answer is that by "contaminating" some of the product with reactants, the free energy of the system (including both reactants and products) can be reduced below that of the pure products alone. This additional drop in the free energy is due to the free energy of mixing of reactants with products. Unless you are enrolled in a more advanced course, you are probably not expected to know how to calculate free energies of mixing. All you really need to know is that it is formally equivalent to the expansion of gases (or to the dilution of a solute) into a larger volume. This example illustrates how the free energies of the reaction components combine with the free energies of mixing reactants with products to minimize the Gibbs function in the equilibrium mixture. To keep things as simple as possible, we will deal with the isomerization equilibrium between the two butanes C4H10 at 298 K: n-butane iso-butane S°, J K–1 mol–1 310 295 ΔGf°, kJ mol–1 –15.71 –17.97 Contributors and Attributions Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook Thermochemistry and Calorimetry The heat that flows across the boundaries of a system undergoing a change is a fundamental property that characterizes the process. It is easily measured, and if the process is a chemical reaction carried out at constant pressure, it can also be predicted from the difference between the enthalpies of the products and reactants. The quantitative study and measurement of heat and enthalpy changes is known as thermochemistry.
Contributors and Attributions Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook
• Chemical Energy Chemical reactions involve the making and breaking of chemical bonds (ionic and covalent) and the chemical energy of a system is the energy released or absorbed due to the making and breaking of these bonds. Breaking bonds requires energy, forming bonds releases energy, and the overall reaction can be either endergonic (ΔG>0) or exergonic (ΔG<0) based on the overall changes in stability from reactants to products. • Differential Forms of Fundamental Equations The fundamental thermodynamic equations follow from five primary thermodynamic definitions and describe internal energy, enthalpy, Helmholtz energy, and Gibbs energy in terms of their natural variables. Here they will be presented in their differential forms. • Enthalpy When a process occurs at constant pressure, the heat evolved (either released or absorbed) is equal to the change in enthalpy. Enthalpy (H) is the sum of the internal energy (U) and the product of pressure and volume (PV). • Free Energy Free energy is a composite function that balances the influence of energy vs. entropy. • Internal Energy The internal energy of a system is identified with the random, disordered motion of molecules; the total (internal) energy in a system includes potential and kinetic energy. This is in contrast to external energy, which is a function of the sample with respect to the outside environment (e.g. kinetic energy if the sample is moving or potential energy if the sample is at a height from the ground, etc.). • Potential Energy Potential Energy is energy due to position, composition, or arrangement. Also, it is the energy associated with forces of attraction and repulsion between objects. Any object that is lifted from its resting position has stored energy; it is called potential energy because it has the potential to do work when released. • Thermal Energy Thermal Energy, also known as random or internal Kinetic Energy, is the energy due to the random motion of molecules in a system.
Kinetic Energy is seen in three forms: vibrational, rotational, and translational. Vibrational is the energy caused by an object or molecule moving in a vibrating motion, rotational is the energy caused by rotating motion, and translational is the energy caused by the movement of one molecule to another location. Energies and Potentials Chemical reactions involve the making and breaking of chemical bonds (ionic and covalent) and the chemical energy of a system is the energy released or absorbed due to the making and breaking of these bonds. Breaking bonds requires energy, forming bonds releases energy, and the overall reaction can be either endergonic (\(\Delta G>0\)) or exergonic (\(\Delta G < 0\)) based on the overall changes in stability from reactants to products. Introduction Simply put, chemical energy is the potential of a chemical system to undergo a transformation from one system to another and to impart a transformation on another system (this may be chemical, but can also involve other energy-requiring processes like electron current or pressure-volume work). Chemical energy is related to every process of life on earth and plays a crucial role in our everyday lives, from simple reactions to the redox chemistry that powers the cars we drive. Through the breaking and forming of bonds, energy can be extracted and harnessed in a usable fashion. Contributors and Attributions • Solomon Koo, Ben Nolte
The fundamental thermodynamic equations follow from five primary thermodynamic definitions and describe internal energy, enthalpy, Helmholtz energy, and Gibbs energy in terms of their natural variables. Here they will be presented in their differential forms. Introduction The fundamental thermodynamic equations describe the thermodynamic quantities U, H, G, and A in terms of their natural variables. The term "natural variable" simply denotes a variable that is one of the convenient variables to describe U, H, G, or A. When considered as a whole, the four fundamental equations demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like $G$ or $H$. First Law of Thermodynamics The first law of thermodynamics is represented below in its differential form $dU = đq + đw$ where • $U$ is the internal energy of the system, • $q$ is heat flow of the system, and • $w$ is the work of the system. The "đ" symbol represents an inexact differential and indicates that both $q$ and $w$ are path functions. Recall that $U$ is a state function. The first law states that internal energy changes occur only as a result of heat flow and work done. It is assumed that w refers only to PV work, where $w = -\int{p\,dV}$ The fundamental thermodynamic equation for internal energy follows directly from the first law and the principle of Clausius: $dU = đq + đw$ $dS = \dfrac{đq_{rev}}{T}$ we have $dU = TdS + đw$ Since only $PV$ work is performed, $dU = TdS - pdV \label{DefU}$ The above equation is the fundamental equation for $U$ with natural variables of entropy $S$ and volume $V$.
Principle of Clausius The Principle of Clausius states that the entropy change of a system is equal to the ratio of heat flow in a reversible process to the temperature at which the process occurs. Mathematically this is written as $dS = \dfrac{đq_{rev}}{T}$ where • $S$ is the entropy of the system, • $q_{rev}$ is the heat flow of a reversible process, and • $T$ is the temperature in Kelvin. Enthalpy Mathematically, enthalpy is defined as $H = U + pV \label{DefEnth}$ where $H$ is enthalpy of the system, p is pressure, and V is volume. The fundamental thermodynamic equation for enthalpy follows directly from its definition (Equation $\ref{DefEnth}$) and the fundamental equation for internal energy (Equation $\ref{DefU}$): $dH = dU + d(pV)$ $= dU + pdV + Vdp$ Since $dU = TdS - pdV$ $dH = TdS - pdV + pdV + Vdp$ $dH = TdS + Vdp$ The above equation is the fundamental equation for H. The natural variables of enthalpy are S and p, entropy and pressure. Gibbs Energy The mathematical description of Gibbs energy is as follows $G = U + pV - TS = H - TS \label{Defgibbs}$ where $G$ is the Gibbs energy of the system. The fundamental thermodynamic equation for Gibbs energy follows directly from its definition (Equation $\ref{Defgibbs}$) and the fundamental equation for enthalpy derived above: $dG = dH - d(TS)$ $= dH - TdS - SdT$ Since $dH = TdS + Vdp$ $dG = TdS + Vdp - TdS - SdT$ $dG = Vdp - SdT$ The above equation is the fundamental equation for G. The natural variables of Gibbs energy are p and T, pressure and temperature. Helmholtz Energy Mathematically, Helmholtz energy is defined as $A = U - TS \label{DefHelm}$ where $A$ is the Helmholtz energy of the system, which is often written as the symbol $F$.
The fundamental thermodynamic equation for Helmholtz energy follows directly from its definition (Equation $\ref{DefHelm}$) and the fundamental equation for internal energy (Equation $\ref{DefU}$): $dA = dU - d(TS)$ $= dU - TdS - SdT$ Since $dU = TdS - pdV$ $dA = TdS - pdV - TdS - SdT$ $dA = -pdV - SdT$ The above equation is the fundamental equation for A with natural variables of $V$ and $T$. For the definitions to hold, it is assumed that only PV work is done and that only reversible processes are used. These assumptions are required for the first law and the principle of Clausius to remain valid. Also, these equations do not include n, the number of moles, as a variable. When $n$ is included, the equations appear different, but the essence of their meaning is captured without including the n-dependence. Chemical Potential The fundamental equations derived above were not dependent on changes in the amounts of species in the system. Below the n-dependent forms are presented1,4. $dU = TdS - pdV + \sum_{i=1}^{N}\mu_i dn_i$ $dH = TdS + Vdp + \sum_{i=1}^{N}\mu_i dn_i$ $dG = -SdT + Vdp + \sum_{i=1}^{N}\mu_i dn_i$ $dA = -SdT - pdV + \sum_{i=1}^{N}\mu_i dn_i$ where μi is the chemical potential of species i and dni is the change in number of moles of substance i. Importance/Relevance of Fundamental Equations The differential fundamental equations describe U, H, G, and A in terms of their natural variables. The natural variables become useful in understanding not only how thermodynamic quantities are related to each other, but also in analyzing relationships between measurable quantities (i.e. P, V, T) in order to learn about the thermodynamics of a system.
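Relationships of this kind are easy to check symbolically. The sketch below uses the sympy library (an added illustration, not part of the original text) to confirm, for an ideal gas, the identity that follows from the fundamental equation for H together with the Maxwell relation obtained from dG: $(\partial H/\partial p)_T = V - T(\partial V/\partial T)_p$, which vanishes for an ideal gas.

```python
import sympy as sp

n, R, T, p = sp.symbols('n R T p', positive=True)

V = n * R * T / p          # ideal-gas equation of state solved for V

# From dH = T dS + V dp and the Maxwell relation (dS/dp)_T = -(dV/dT)_p
# (which follows from dG = -S dT + V dp):
dH_dp = V - T * sp.diff(V, T)

print(sp.simplify(dH_dp))  # 0 -> H of an ideal gas does not depend on p
```

This is the symbolic version of Problem 4 below: because ideal-gas molecules do not interact, changing the pressure at constant temperature leaves the enthalpy unchanged.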
Below is a table summarizing the natural variables for U, H, G, and A: Thermodynamic Quantity Natural Variables U (internal energy) S, V H (enthalpy) S, P G (Gibbs energy) T, P A (Helmholtz energy) T, V Maxwell Relations The fundamental thermodynamic equations are the means by which the Maxwell relations are derived1,4. The Maxwell Relations can, in turn, be used to group thermodynamic functions and relations into more general "families"2,3. See the sample problems and the Maxwell Relation section for details. Problems 1. If the assumptions made in the derivations above were not made, what effect would that have? Try to think of examples where these assumptions would be violated. Could the definitions, principles, and laws used to derive the fundamental equations still be used? Why or why not? 2. For what kind of system does the number of moles not change? This said, do the fundamental equations without n-dependence apply to a wide range of processes and systems? 3. Derive the Maxwell Relations. 4. Derive the expression $\left (\dfrac{\partial H}{\partial P} \right)_{T,n} = -T \left(\dfrac{\partial V}{\partial T} \right)_{P,n} +V$ Then apply this equation to an ideal gas. Does the result seem reasonable? 5. Using the definition of Gibbs energy and the conditions observed at phase equilibria, derive the Clapeyron equation. Answers 1. If it was not assumed that PV-work was the only work done, then the work term in the second law of thermodynamics equation would include other terms (e.g. for electrical work, mechanical work). If reversible processes were not assumed, the Principle of Clausius could not be used. One example of such a situation could be the movement of charged particles towards a region of like charge (electrical work) or an irreversible process like combustion of hydrocarbons or friction. 2. In general, a closed system of non-reacting components would fit this description.
For example, the number of moles would not change for a closed system in which a gas is sealed (to prevent leaks) in a container and allowed to expand/is contracted. 3. See the Maxwell Relations section. 4. $(\dfrac{\partial H}{\partial P})_{T,n} = 0$ for an ideal gas. Since there are no interactions between ideal gas molecules, changing the pressure will not involve the formation or breaking of any intermolecular interactions or bonds. 5. See the third outside link. Contributors and Attributions • Andreana Rosnik, Hope College
When a process occurs at constant pressure, the heat evolved (either released or absorbed) is equal to the change in enthalpy. Enthalpy ($H$) is the sum of the internal energy ($U$) and the product of pressure and volume ($PV$) given by the equation: $H = U + PV$ When a process occurs at constant pressure, the heat evolved (either released or absorbed) is equal to the change in enthalpy. Enthalpy is a state function which depends entirely on the state functions $T$, $P$ and $U$. Enthalpy is usually expressed as the change in enthalpy ($\Delta H$) for a process between initial and final states: $\Delta H = \Delta U + \Delta (PV)$ If temperature and pressure remain constant through the process and the work is limited to pressure-volume work, then the enthalpy change is given by the equation: $\Delta H = \Delta U + P\Delta V$ Also at constant pressure the heat flow ($q$) for the process is equal to the change in enthalpy defined by the equation: $\Delta H = q$ By looking at whether q is exothermic or endothermic we can determine a relationship between $\Delta H$ and $q$. If the reaction absorbs heat it is endothermic, meaning the reaction consumes heat from the surroundings, so $q > 0$ (positive). Therefore, at constant temperature and pressure, by the equation above, if q is positive then $\Delta H$ is also positive. The same holds if the reaction releases heat: then it is exothermic, meaning the system gives off heat to its surroundings, so $q < 0$ (negative). If $q$ is negative, then $\Delta H$ will also be negative. Enthalpy Change Accompanying a Change in State When a liquid vaporizes the liquid must absorb heat from its surroundings to replace the energy taken by the vaporizing molecules in order for the temperature to remain constant. This heat required to vaporize the liquid is called the enthalpy of vaporization (or heat of vaporization).
For example, for the vaporization of one mole of water the enthalpy change is: ΔH = 44.0 kJ at 298 K When a solid melts, the required energy is similarly called the enthalpy of fusion (or heat of fusion). For example, for the melting of one mole of ice the enthalpy change is: ΔH = 6.01 kJ at 273.15 K $\Delta{H} = \Delta{U} + p\Delta{V} \label{1}$ Enthalpy can also be expressed as a molar enthalpy, $\Delta{H}_m$, by dividing the enthalpy or change in enthalpy by the number of moles. Enthalpy is a state function. This implies that when a system changes from one state to another, the change in enthalpy is independent of the path between two states of a system. If there is no non-expansion work on the system and the pressure is still constant, then the change in enthalpy will equal the heat consumed or released by the system (q). $\Delta{H} = q \label{2}$ This relationship can help to determine whether a reaction is endothermic or exothermic. At constant pressure, an endothermic reaction is when heat is absorbed. This means that the system consumes heat from the surroundings, so $q$ is greater than zero. Therefore according to the second equation, the $\Delta{H}$ will also be greater than zero. On the other hand, an exothermic reaction at constant pressure is when heat is released. This implies that the system gives off heat to the surroundings, so $q$ is less than zero. Furthermore, $\Delta{H}$ will be less than zero. Effect of Temperature on Enthalpy When the temperature increases, the amount of molecular interactions also increases. When the number of interactions increases, then the internal energy of the system rises. According to the first equation given, if the internal energy ($U$) increases then the $\Delta{H}$ increases as temperature rises. We can use the equation for heat capacity and Equation \ref{2} to derive this relationship.
$C = \dfrac{q}{\Delta{T}} \label{3}$ Under constant pressure, substitute Equation \ref{2} into Equation \ref{3}: $C_p = \left( \dfrac{\Delta{H}}{\Delta{T}} \right)_P \label{4}$ where the subscript $P$ indicates the derivative is done under constant pressure. The Enthalpy of Phase Transition Enthalpy can be represented as the standard enthalpy, $\Delta{H}^{o}$. This is the enthalpy of a substance at standard state. The standard state is defined as the pure substance held constant at 1 bar of pressure. Phase transitions, such as ice to liquid water, require or absorb a particular amount of standard enthalpy: • Standard Enthalpy of Vaporization ($\Delta{H^o_{vap}}$) is the energy that must be supplied as heat at constant pressure per mole of molecules vaporized (liquid to gas). • Standard Enthalpy of Fusion ($\Delta{H^o_{fus}}$) is the energy that must be supplied as heat at constant pressure per mole of molecules melted (solid to liquid). • Standard Enthalpy of Sublimation ($\Delta{H^o_{sub}}$) is the energy that must be supplied as heat at constant pressure per mole of molecules converted to vapor from a solid. $\Delta{H^o_{sub}} = \Delta{H^o_{fus}} + \Delta{ H^o_{vap}}$ The enthalpy of condensation is the reverse of the enthalpy of vaporization and the enthalpy of freezing is the reverse of the enthalpy of fusion. The enthalpy change of a reverse phase transition is the negative of the enthalpy change of the forward phase transition. Also the enthalpy change of a complete process is the sum of the enthalpy changes for each of the phase transitions incorporated in the process. Outside Links • Canagaratna, Sebastian G. "A Visual Aid in Enthalpy Calculations " J. Chem. Educ. 2000 77 1178. • Kennedy Sr., Alvin P. "Determination of Enthalpy of Vaporization Using a Microwave Oven " J. Chem. Educ. 1997 74 1231. • Treptow, Richard S. "How Thermodynamic Data and Equilibrium Constants Changed When the Standard-State Pressure Became 1 Bar " J. Chem. Educ. 1999 76 212. 
• Yi, L., Sheng-Lu, K., Song-Sheng, Qu. "Some Views in the Internal Energy and Enthalpy of Gases." J. Chem. Educ. 1995: 72, 408. Problems 1. Calculate the enthalpy (ΔH) for the process in which 45.0 grams of water is converted from liquid at 10°C to vapor at 25°C. Solution Part 1: Heating water from 10.0 to 25.0 °C ΔH = 45.0 g H2O x (4.184 J/g H2O °C) x (25.0 - 10.0) °C x 1 kJ/1000 J = 2.82 kJ Part 2: Vaporizing water at 25.0 °C ΔH = 45.0 g H2O x 1 mol H2O/18.02 g H2O x 44.0 kJ/1 mol H2O = 110 kJ Part 3: Total Enthalpy Change ΔH = 2.82 kJ + 110 kJ = 113 kJ Contributors and Attributions • Katherine Hurley (UCD), Jennifer Shamieh (UCD) Enthalpy This page explains what an enthalpy change is, and then gives a definition and brief comment for three of the various kinds of enthalpy change that you will come across. Enthalpy changes Enthalpy change is the name given to the amount of heat evolved or absorbed in a reaction carried out at constant pressure. It is given the symbol ΔH, read as "delta H". Standard enthalpy changes Standard enthalpy changes refer to reactions done under standard conditions, and with everything present in their standard states. Standard states are sometimes referred to as "reference states". Standard conditions Standard conditions are: • 298 K (25°C) • a pressure of 1 bar (100 kPa). • where solutions are involved, a concentration of 1 mol dm-3 Standard states For a standard enthalpy change everything has to be present in its standard state. That is the physical and chemical state that you would expect to find it in under standard conditions. That means that the standard state for water, for example, is liquid water, H2O(l) - not steam or water vapour or ice. Oxygen's standard state is the gas, O2(g) - not liquid oxygen or oxygen atoms. For elements which have allotropes (two different forms of the element in the same physical state), the standard state is the most energetically stable of the allotropes.
For example, carbon exists in the solid state as both diamond and graphite. Graphite is energetically slightly more stable than diamond, and so graphite is taken as the standard state of carbon. Similarly, under standard conditions, oxygen can exist as O2 (simply called oxygen) or as O3 (called ozone - but it is just an allotrope of oxygen). The O2 form is far more energetically stable than O3, so the standard state for oxygen is the common O2(g). The symbol for standard enthalpy changes The symbol for a standard enthalpy change is ΔH°, read as "delta H standard" or, perhaps more commonly, as "delta H nought". Standard enthalpy change of reaction, ΔH°r Remember that an enthalpy change is the heat evolved or absorbed when a reaction takes place at constant pressure. The standard enthalpy change of a reaction is the enthalpy change which occurs when equation quantities of materials react under standard conditions, and with everything in its standard state. That needs exploring a bit. Here is a simple reaction between hydrogen and oxygen to make water: • First, notice that the symbol for a standard enthalpy change of reaction is ΔH°r. For enthalpy changes of reaction, the "r" (for reaction) is often missed off - it is just assumed. • The "kJ mol-1" (kilojoules per mole) doesn't refer to any particular substance in the equation. Instead it refers to the quantities of all the substances given in the equation. In this case, 572 kJ of heat is evolved when 2 moles of hydrogen gas react with 1 mole of oxygen gas to form 2 moles of liquid water. • Notice that everything is in its standard state. In particular, the water has to be formed as a liquid. • And there is a hidden problem! The figure quoted is for the reaction under standard conditions, but hydrogen and oxygen don't react under standard conditions. Whenever a standard enthalpy change is quoted, standard conditions are assumed. 
If the reaction has to be done under different conditions, a different enthalpy change would be recorded. That has to be calculated back to what it would be under standard conditions. Fortunately, you don't have to know how to do that at this level. Standard enthalpy change of formation, ΔH°f The standard enthalpy change of formation of a compound is the enthalpy change which occurs when one mole of the compound is formed from its elements under standard conditions, and with everything in its standard state. The equation showing the standard enthalpy change of formation for water is: When you are writing one of these equations for enthalpy change of formation, you must end up with 1 mole of the compound. If that needs you to write fractions on the left-hand side of the equation, that is OK. (In fact, it is not just OK, it is essential, because otherwise you will end up with more than 1 mole of compound, or else the equation won't balance!) The equation shows that 286 kJ of heat energy is given out when 1 mole of liquid water is formed from its elements under standard conditions. Standard enthalpy changes of formation can be written for any compound, even if you can't make it directly from the elements. For example, the standard enthalpy change of formation for liquid benzene is +49 kJ mol-1. The equation is: If carbon won't react with hydrogen to make benzene, what is the point of this, and how does anybody know what the enthalpy change is? What the figure of +49 shows is the relative positions of benzene and its elements on an energy diagram: How do we know this if the reaction doesn't happen? It is actually very simple to calculate it from other values which we can measure - for example, from enthalpy changes of combustion (coming up next). We will come back to this again when we look at calculations on another page. 
Knowing the enthalpy changes of formation of compounds enables you to calculate the enthalpy changes in a whole host of reactions and, again, we will explore that in a bit more detail on another page.

And one final comment about enthalpy changes of formation: The standard enthalpy change of formation of an element in its standard state is zero. That's an important fact. The reason is obvious . . . For example, if you "make" one mole of hydrogen gas starting from one mole of hydrogen gas you aren't changing it in any way, so you wouldn't expect any enthalpy change. That is equally true of any other element. The enthalpy change of formation of any element has to be zero because of the way enthalpy change of formation is defined.

Standard enthalpy change of combustion, ΔH°c

The standard enthalpy change of combustion of a compound is the enthalpy change which occurs when one mole of the compound is burned completely in oxygen under standard conditions, and with everything in its standard state. The enthalpy change of combustion will always have a negative value, of course, because burning always releases heat. Two examples:

Notice:

• Enthalpy of combustion equations will often contain fractions, because you must start with only 1 mole of whatever you are burning.
• If you are talking about standard enthalpy changes of combustion, everything must be in its standard state. One important result of this is that any water you write amongst the products must be there as liquid water. Similarly, if you are burning something like ethanol, which is a liquid under standard conditions, you must show it as a liquid in any equation you use.
• Notice also that the equation and amount of heat evolved in the hydrogen case is exactly the same as you have already come across further up the page. At that time, it was illustrating the enthalpy of formation of water. That can happen in some simple cases.
Talking about the enthalpy change of formation of water is exactly the same as talking about the enthalpy change of combustion of hydrogen.

Contributors and Attributions

Jim Clark (Chemguide.co.uk)
One of the most confusing things about this is the way the words are used. These days, the term "bond enthalpy" is normally used, but you will also find it described as "bond energy" - sometimes in the same article. An even older term is "bond strength". So you can take all these terms as being interchangeable.

Molecules

A diatomic molecule is one that only contains two atoms. They could be the same (for example, Cl2) or different (for example, HCl). The bond dissociation enthalpy is the energy needed to break one mole of the bond to give separated atoms - everything being in the gas state.

Important! The point about everything being in the gas state is essential; you cannot use bond enthalpies to do calculations directly from substances starting in the liquid or solid state.

As an example of bond dissociation enthalpy, to break up 1 mole of gaseous hydrogen chloride molecules into separate gaseous hydrogen and chlorine atoms takes 432 kJ. The bond dissociation enthalpy for the H-Cl bond is +432 kJ mol-1.

What happens if the molecule has several bonds, rather than just 1? Consider methane, CH4. It contains four identical C-H bonds, and it seems reasonable that they should all have the same bond enthalpy. However, if you took methane to pieces one hydrogen at a time, you would find that it takes a different amount of energy to break each of the four C-H bonds. Every time you break a hydrogen off the carbon, the environment of those left behind changes. And the strength of a bond is affected by what else is around it. In cases like this, the bond enthalpy quoted is an average value.

In the methane case, you can work out how much energy is needed to break a mole of methane gas into gaseous carbon and hydrogen atoms. That comes to +1662 kJ and involves breaking 4 moles of C-H bonds. The average bond energy is therefore +1662/4 kJ, which is +415.5 kJ per mole of bonds.
That means that many bond enthalpies are actually quoted as mean (or average) bond enthalpies, although it might not actually say so. Mean bond enthalpies are sometimes referred to as "bond enthalpy terms".

In fact, tables of bond enthalpies give average values in another sense as well, particularly in organic chemistry. The bond enthalpy of, say, the C-H bond varies depending on what is around it in the molecule. So data tables use average values which will work well enough in most cases. That means that if you use the C-H value in some calculation, you can't be sure that it exactly fits the molecule you are working with. So don't expect calculations using mean bond enthalpies to give very reliable answers.

You may well have to know the difference between a bond dissociation enthalpy and a mean bond enthalpy, and you should be aware that the word mean (or average) is used in two slightly different senses. But for calculation purposes, it isn't something you need to worry about. Just use the values you are given.

Finding enthalpy changes of reaction from bond enthalpies

Case 1: Everything present is gaseous

Remember that you can only use bond enthalpies directly if everything you are working with is in the gas state.

Using the same method as for other enthalpy sums

We are going to estimate the enthalpy change of reaction for the reaction between carbon monoxide and steam. This is a part of the manufacturing process for hydrogen.

$CO(g) + H_2O(g) \rightarrow CO_2 (g) +H_2(g)$

The bond enthalpies are:

bond: enthalpy (kJ mol-1)
C≡O in carbon monoxide: +1077
C=O in carbon dioxide: +805
O-H: +464
H-H: +436

So let's do the sum. Here is the cycle - make sure that you understand exactly why it is the way it is. And now equate the two routes, and solve the equation to find the enthalpy change of reaction.
ΔH + 2(805) + 436 = 1077 + 2(464)
ΔH = 1077 + 2(464) - 2(805) - 436
ΔH = -41 kJ mol-1

Using a short-cut method for simple cases

You could do any bond enthalpy sum by the method above - taking the molecules completely to pieces and then remaking the bonds. If you are happy doing it that way, just go on doing it that way. However, if you are prepared to give it some thought, you can save a bit of time - although only in very simple cases where the changes in a molecule are very small.

Example 2: Chlorine + Ethane

For example, chlorine reacts with ethane to give chloroethane and hydrogen chloride gases (all of these are gases).

Solution

It is always a good idea to draw full structural formulae when you are doing bond enthalpy calculations. It makes it much easier to count up how many of each type of bond you have to break and make. If you look at the equation carefully, you can see what I mean by a "simple case". Hardly anything has changed in this reaction. You could work out how much energy is needed to break every bond, and how much is given out in making the new ones, but quite a lot of the time, you are just remaking the same bond. All that has actually changed is that you have broken a C-H bond and a Cl-Cl bond, and made a new C-Cl bond and a new H-Cl bond. So you can just work those out.

Work out the energy needed to break C-H and Cl-Cl: +413 + 243 = +656 kJ mol-1
Work out the energy released when you make C-Cl and H-Cl: -346 - 432 = -778 kJ mol-1
So the net change is +656 - 778 = -122 kJ mol-1

Case 2: A Liquid is Present

You can only use bond enthalpies directly if everything you are working with is in the gas state. If you have one or more liquids present, you need an extra energy term to work out the enthalpy change when you convert from liquid to gas, or vice versa. That term is the enthalpy change of vaporization, and is given the symbol ΔHvap or ΔHv.
This is the enthalpy change when 1 mole of the liquid converts to gas at its boiling point with a pressure of 1 bar (100 kPa). (Older sources might quote 1 atmosphere rather than 1 bar.) For water, the enthalpy change of vaporization is +41 kJ mol-1. That means that it takes 41 kJ to change 1 mole of water into steam. If 1 mole of steam condenses into water, the enthalpy change would be -41 kJ. Changing from liquid to gas needs heat; changing gas back to liquid releases exactly the same amount of heat. You can only use bond enthalpies directly if everything you are working with is in the gas state.

Example 3: Combustion of Methane

To see how this fits into bond enthalpy calculations, we will estimate the enthalpy change of combustion of methane - in other words, the enthalpy change for this reaction:

$CH_4(g) + 2O_2(g) \rightarrow CO_2(g) + 2H_2O(l)$

Notice that the product is liquid water. You cannot apply bond enthalpies to this. You must first convert it into steam. To do this you have to supply 41 kJ mol-1. The bond enthalpies you need are:

bond: enthalpy (kJ mol-1)
C-H: +413
O=O: +498
C=O in carbon dioxide: +805
O-H: +464

The cycle looks like this: This obviously looks more confusing than the cycles we've looked at before, but apart from the extra enthalpy change of vaporization stage, it isn't really any more difficult. Before you go on, make sure that you can see why every single number and arrow on this diagram is there. In particular, make sure that you can see why the first 4 appears in the expression "4(+464)". That is an easy thing to get wrong. (In fact, when I first drew this diagram, I carelessly wrote 2 instead of 4 at that point!)

That's the hard bit done - now the calculation:

ΔH + 2(805) + 2(41) + 4(464) = 4(413) + 2(498)
ΔH = 4(413) + 2(498) - 2(805) - 2(41) - 4(464)
ΔH = -900 kJ mol-1

The measured enthalpy change of combustion is -890 kJ mol-1, and so this answer agrees to within about 1%. As bond enthalpy calculations go, that's a pretty good estimate.
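The two bond-enthalpy estimates above (the carbon monoxide reaction and the combustion of methane) can be checked with a short script. The bond-value dictionary and the `delta_h` helper are just a sketch built from this page's numbers, not a general-purpose tool:

```python
# Estimating reaction enthalpies from (mean) bond enthalpies, in kJ mol-1.
# Values are the ones quoted in the tables above.

BOND = {
    "C#O (in CO)": 1077,   # the triple bond in carbon monoxide
    "C=O (in CO2)": 805,
    "O-H": 464,
    "H-H": 436,
    "C-H": 413,
    "O=O": 498,
}

def delta_h(bonds_broken, bonds_made):
    """Energy in to break bonds (positive) minus energy out making bonds."""
    return sum(BOND[b] * n for b, n in bonds_broken.items()) \
         - sum(BOND[b] * n for b, n in bonds_made.items())

# CO(g) + H2O(g) -> CO2(g) + H2(g)
shift = delta_h({"C#O (in CO)": 1, "O-H": 2},
                {"C=O (in CO2)": 2, "H-H": 1})
print(shift)       # -41 kJ mol-1, as in the cycle above

# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(g), then condense 2 mol of steam,
# releasing a further 2 x 41 kJ (the enthalpy change of vaporization).
combustion = delta_h({"C-H": 4, "O=O": 2},
                     {"C=O (in CO2)": 2, "O-H": 4}) - 2 * 41
print(combustion)  # -900 kJ mol-1
```

Note the sign convention: breaking bonds costs energy (positive), making bonds releases it (negative), which is exactly how the cycles on this page are set up.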
The standard enthalpy change of neutralization is the enthalpy change when solutions of an acid and an alkali react together under standard conditions to produce 1 mole of water. Notice that enthalpy change of neutralization is always measured per mole of water formed. Enthalpy changes of neutralization are always negative - heat is released when an acid and an alkali react. For reactions involving strong acids and alkalis, the values are always very closely similar, with values between -57 and -58 kJ mol-1. That varies slightly depending on the acid-alkali combination (and also on what source you look it up in!).

Why do strong acids reacting with strong alkalis give closely similar values?

We make the assumption that strong acids and strong alkalis are fully ionized in solution, and that the ions behave independently of each other. For example, dilute hydrochloric acid contains hydrogen ions and chloride ions in solution. Sodium hydroxide solution consists of sodium ions and hydroxide ions in solution. The equation for any strong acid being neutralized by a strong alkali is essentially just a reaction between hydrogen ions and hydroxide ions to make water. The other ions present (sodium and chloride, for example) are just spectator ions, taking no part in the reaction. The full equation for the reaction between hydrochloric acid and sodium hydroxide solution is:

$NaOH(aq) + HCl(aq) \rightarrow NaCl(aq) + H_2O (l)$

but what is actually happening is:

$OH^-(aq) + H^+(aq) \rightarrow H_2O (l)$

If the reaction is the same in each case of a strong acid and a strong alkali, it is not surprising that the enthalpy change is similar.

In a weak acid, such as acetic acid, at ordinary concentrations, something like 99% of the acid is not actually ionized. That means that the enthalpy change of neutralization will include other enthalpy terms involved in ionizing the acid as well as the reaction between the hydrogen ions and hydroxide ions.
And in a weak alkali like ammonia solution, the ammonia is also present mainly as ammonia molecules in solution. Again, there will be other enthalpy changes involved apart from the simple formation of water from hydrogen ions and hydroxide ions. For reactions involving acetic acid or ammonia, the measured enthalpy change of neutralization is a few kJ less exothermic than with strong acids and bases. For example, one source gives the enthalpy change of neutralization of sodium hydroxide solution with HCl as -57.9 kJ mol-1:

$NaOH_{(aq)} + HCl_{(aq)} \rightarrow Na^+_{(aq)} + Cl^-_{(aq)} + H_2O$

The enthalpy change of neutralization for sodium hydroxide solution being neutralized by acetic acid is -56.1 kJ mol-1:

$NaOH_{(aq)} + CH_3COOH_{(aq)} \rightarrow Na^+_{(aq)} + CH_3COO^-_{(aq)} + H_2O$

For very weak acids, like hydrogen cyanide solution, the enthalpy change of neutralization may be much less. A different source gives the value for hydrogen cyanide solution being neutralized by potassium hydroxide solution as -11.7 kJ mol-1, for example:

$KOH_{(aq)} + HCN_{(aq)} \rightarrow K^+_{(aq)} + CN^-_{(aq)} + H_2O$

Neutralizing Strong Acids vs. Weak Acids

For weak acids and bases, the experimentally measured enthalpy change of neutralization is a few kJ less exothermic than with strong acids and bases.
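To see what a figure like -57.9 kJ mol-1 means in practice, here is a rough sketch of a mixing calculation. The volumes and concentrations are invented for illustration, and treating the mixture as 100 g of water with a specific heat of 4.18 J g-1 K-1 is a common simplifying assumption, not an exact treatment:

```python
# Hypothetical experiment: mix 50 cm3 of 1.0 mol dm-3 HCl with
# 50 cm3 of 1.0 mol dm-3 NaOH, using the -57.9 kJ mol-1 value above.

dH_neut = -57.9e3        # J per mole of water formed
n_water = 0.050 * 1.0    # mol of H2O formed (acid and alkali react 1:1)
q = n_water * dH_neut    # heat released by the reaction, in J

mass, c = 100.0, 4.18    # g of solution, J g-1 K-1 (assumed same as water)
dT = -q / (mass * c)     # temperature rise of the solution, in K

print(round(dT, 1))      # a rise of about 6.9 K
```

This is why strong acid/strong alkali mixtures of similar concentration all warm up by roughly the same amount: the underlying reaction is the same H+ + OH- in every case.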
This page looks at the relationship between enthalpies of solution, hydration enthalpies and lattice enthalpies.

Enthalpy change of solution

The enthalpy change of solution is the enthalpy change when 1 mole of an ionic substance dissolves in water to give a solution of infinite dilution. Enthalpies of solution may be either positive or negative - in other words, some ionic substances dissolve endothermically (for example, NaCl); others dissolve exothermically (for example NaOH). An infinitely dilute solution is one where there is a sufficiently large excess of water that adding any more does not cause any further heat to be absorbed or evolved. So, when 1 mole of sodium chloride crystals is dissolved in an excess of water, the enthalpy change of solution is found to be +3.9 kJ mol-1. The change is slightly endothermic, and so the temperature of the solution will be slightly lower than that of the original water.

Thinking about dissolving as an energy cycle

Why is heat sometimes evolved and sometimes absorbed when a substance dissolves in water? To answer that it is useful to think about the various enthalpy changes that are involved in the process. You can think of an imaginary process where the crystal lattice is first broken up into its separate gaseous ions, and then those ions have water molecules wrapped around them. That is how they exist in the final solution.

• The heat energy needed to break up 1 mole of the crystal lattice is the lattice dissociation enthalpy.
• The heat energy released when new bonds are made between the ions and water molecules is known as the hydration enthalpy of the ion.

The hydration enthalpy is the enthalpy change when 1 mole of gaseous ions dissolve in sufficient water to give an infinitely dilute solution. Hydration enthalpies are always negative.
Factors affecting the size of hydration enthalpy

Hydration enthalpy is a measure of the energy released when attractions are set up between positive or negative ions and water molecules.

• With positive ions, there may only be loose attractions between the slightly negative oxygen atoms in the water molecules and the positive ions, or there may be formal dative covalent (co-ordinate covalent) bonds.
• With negative ions, hydrogen bonds are formed between lone pairs of electrons on the negative ions and the slightly positive hydrogens in water molecules.

The size of the hydration enthalpy is governed by the amount of attraction between the ions and the water molecules.

• The attractions are stronger the smaller the ion. For example, hydration enthalpies fall as you go down a group in the Periodic Table. The small lithium ion has by far the highest hydration enthalpy in Group 1, and the small fluoride ion has by far the highest hydration enthalpy in Group 7. In both groups, hydration enthalpy falls as the ions get bigger.
• The attractions are stronger the more highly charged the ion. For example, the hydration enthalpies of Group 2 ions (like Mg2+) are much higher than those of Group 1 ions (like Na+).

Estimating enthalpies of solution from lattice enthalpies and hydration enthalpies

The hydration enthalpies for calcium and chloride ions are given by the equations:

$Ca^{2+}(g) \rightarrow Ca^{2+}(aq) \;\;\; \Delta H_{hyd} = -1650 \text{ kJ mol}^{-1}$

$Cl^-(g) \rightarrow Cl^-(aq) \;\;\; \Delta H_{hyd} = -364 \text{ kJ mol}^{-1}$

The following cycle is for calcium chloride, and includes a lattice dissociation enthalpy of +2258 kJ mol-1. We have to use double the hydration enthalpy of the chloride ion because we are hydrating 2 moles of chloride ions. Make sure you understand exactly how the cycle works. So . . .

ΔHsol = +2258 - 1650 + 2(-364)
ΔHsol = -120 kJ mol-1

Whether an enthalpy of solution turns out to be negative or positive depends on the relative sizes of the lattice enthalpy and the hydration enthalpies.
In this particular case, the negative hydration enthalpies more than made up for the positive lattice dissociation enthalpy.
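The calcium chloride cycle reduces to one line of arithmetic; a minimal sketch using the values quoted above:

```python
# Enthalpy of solution of CaCl2 from the cycle above, all in kJ mol-1.
lattice_dissociation = +2258   # CaCl2(s) -> Ca2+(g) + 2Cl-(g), always positive
hyd_Ca = -1650                 # Ca2+(g) -> Ca2+(aq)
hyd_Cl = -364                  # Cl-(g) -> Cl-(aq), needed twice (2 mol Cl-)

dH_sol = lattice_dissociation + hyd_Ca + 2 * hyd_Cl
print(dH_sol)   # -120 kJ mol-1: dissolving CaCl2 is exothermic
```

The sign of `dH_sol` falls straight out of the tug-of-war the text describes: a large positive lattice term against the sum of the negative hydration terms.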
Solids can be heated to the point where the bonds holding their molecules together break apart, and the solid becomes a liquid. The most common example is solid ice turning into liquid water. This process is better known as melting, and the energy it requires is the heat of fusion; it results in the molecules within the substance becoming less organized. When a substance converts from a solid state to a liquid state, the change in enthalpy (\(ΔH\)) is positive. However, if the substance is transforming from a liquid state to a solid state, the change in enthalpy (ΔH) is negative. This process is commonly known as freezing, and results in the molecules within the substance becoming more ordered.

Introduction

Determining the heat of fusion is fairly straightforward. When a solid undergoes melting or freezing, the temperature stays constant until the entire phase change is complete. One can visualize this process by examining the heating/cooling chart. By drawing this chart before conducting a heat of fusion analysis, one can easily map out the required steps in completing the analysis. The equation for determining the enthalpy of fusion (\(ΔH\)) is listed below:

\[\Delta{H}=n \,\Delta{H_{fus}}\]

with

• \(n\) = number of moles
• \(\Delta{H_{fus}}\) = the molar heat of fusion of the substance

Example \(1\)

Calculate the heat when 36.0 grams of water at 113 °C is cooled to 0 °C.

Given:
• Heat of fusion = 6.0 kJ/mol
• Heat of vaporization = 40.7 kJ/mol
• Csp(s) = 2.10 J/gK
• Csp(l) = 4.18 J/gK
• Csp(g) = 1.97 J/gK

Answer

\[q = -110.6\, kJ\]

Sublimation

In some cases, the solid will bypass the liquid state and transition into the gaseous state. This direct transformation from solid to gas is called sublimation. The opposite reaction, when a gas directly transforms into a solid, is known as deposition.
Therefore, these two processes can be summarized in the following equation:

\[\Delta{H_{sub}}= \Delta{H_{fus}}+\Delta{H_{vap}}\]

with

• \(ΔH_{sub}\) is the enthalpy change of sublimation
• \(ΔH_{fus}\) is the enthalpy change of fusion
• \(ΔH_{vap}\) is the enthalpy change of vaporization

Applications

The heat of fusion process can be seen in countless applications and evidenced in the creation of many common household items. As mentioned in the opening paragraph, the most common application of the heat of fusion is the melting of ice to water. The vast majority of examples where heat of fusion is commonplace can be seen in the manufacturing industry. The following processes have been used for hundreds of years and are still being perfected to this day. The processes of coin making, glassblowing, forging metal objects, and transforming blow-molded plastics into household products all rely on the heat of fusion to reach a final product. The change in your wallet, the glass vase on your fireplace mantel, and the plastic soda bottle from the vending machine all went through a heat of fusion manufacturing process.

In coin making, solid zinc and copper (the metals in American pennies) are placed into a casting furnace and heated past the heat of fusion until they reach the liquid phase. Once in the liquid phase, the molten zinc and copper are poured into a mold, and cast into long bars. In the casting process, the molten metal transforms from the liquid phase to the solid phase, becoming a solid bar. The long bars are flattened by heavy machinery and stamped into thousands of coins. Without the heat of fusion process, a monetary system would not exist in the United States.

Contributors and Attributions

• Maxim Hnojewyj (UCD)
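The relation ΔH = n ΔHfus from the section above can be applied directly. As a sketch, here is the heat needed to melt 36.0 g of ice at 0 °C, using the 6.0 kJ/mol heat of fusion quoted in Example 1 (the 18.0 g/mol molar mass of water is an assumed round value):

```python
# Heat absorbed when melting 36.0 g of ice at 0 degC, via dH = n * dH_fus.
mass = 36.0          # g of ice
molar_mass = 18.0    # g/mol for water (assumed round value)
dH_fus = 6.0         # kJ/mol, as quoted in Example 1

n = mass / molar_mass    # moles of water
q = n * dH_fus           # kJ absorbed; positive because melting is endothermic
print(q)                 # 12.0 kJ
```

Reversing the sign gives the heat released on freezing the same sample, consistent with the negative ΔH for liquid-to-solid described in the opening paragraph.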
The Heat of Reaction (also known as the Enthalpy of Reaction) is the change in the enthalpy of a chemical reaction that occurs at a constant pressure. It is a thermodynamic quantity useful for calculating the amount of energy per mole either released or absorbed in a reaction. Since enthalpy is derived from pressure, volume, and internal energy, all of which are state functions, enthalpy is also a state function.

Introduction

$ΔH$, or the change in enthalpy, arose as a way to express the change in energy of a system when it became too difficult to find ΔU, the change in the internal energy of a system, by simultaneously measuring the amount of heat and work exchanged. Given a constant pressure, the change in enthalpy can be measured as

$ΔH=q$

See the section on enthalpy for a more detailed explanation. The notation ΔHº or ΔHºrxn then arises to specify the precise temperature and pressure of the heat of reaction ΔH. The standard enthalpy of reaction is symbolized by ΔHº or ΔHºrxn and can take on both positive and negative values. The units for ΔHº are kilojoules per mole, or kJ/mol.

ΔH and ΔHºrxn

• Δ represents the change in the enthalpy (ΔHproducts - ΔHreactants)
• a positive value indicates the products have greater enthalpy, or that it is an endothermic reaction (heat is required)
• a negative value indicates the reactants have greater enthalpy, or that it is an exothermic reaction (heat is produced)
• º signifies that the reaction is a standard enthalpy change, and occurs at a preset pressure/temperature
• rxn denotes that this change is the enthalpy of reaction

The Standard State: The standard state of a solid or liquid is the pure substance at a pressure of 1 bar (10^5 Pa) and at a relevant temperature. The ΔHºrxn is the standard heat of reaction or standard enthalpy of a reaction, and like ΔH also measures the enthalpy of a reaction.
However, ΔHºrxn takes place under "standard" conditions, meaning that the reaction takes place at 25 ºC and 1 atm. The benefit of measuring ΔH under standard conditions lies in the ability to relate one value of ΔHº to another, since they occur under the same conditions.

How to Calculate ΔH Experimentally

Enthalpy can be measured experimentally through the use of a calorimeter. A calorimeter is an isolated system which has a constant pressure, so

$ΔH = q = c_{sp} \times m \times \Delta T$

How to Calculate ΔH Numerically

To calculate the standard enthalpy of reaction, the standard enthalpy of formation must be utilized. Another, more detailed, form of the standard enthalpy of reaction includes the use of the standard enthalpy of formation ΔHºf:

$\Delta H^\ominus = \sum v_p \Delta H^\ominus_f\;(products) - \sum v_r \Delta H^\ominus_f\; (reactants)$

with

• vp = stoichiometric coefficient of the product from the balanced reaction
• vr = stoichiometric coefficient of the reactants from the balanced reaction
• ΔHºf = standard enthalpy of formation for the reactants or the products

Since enthalpy is a state function, the heat of reaction depends only on the final and initial states, not on the path that the reaction takes. For example, the reaction $A \rightarrow B$ may go through intermediate steps (i.e. $A \rightarrow C$, $C \rightarrow D$, $D \rightarrow B$), but A and B remain the initial and final states. Therefore, one can measure the enthalpy of reaction as the sum of the ΔH of the three intermediate reactions by applying Hess' Law.

Additional Notes

Since ΔHº represents the total energy exchange in the reaction, this value can be either positive or negative.

• A positive ΔHº value represents an addition of energy to the reaction (from the surroundings), resulting in an endothermic reaction.
• A negative value for ΔHº represents a removal of energy from the reaction (and into the surroundings), and so the reaction is exothermic.
Example $1$: The Combustion of Acetylene

Calculate the enthalpy change for the combustion of acetylene ($\ce{C2H2}$).

Solution

1) The first step is to make sure that the equation is balanced and correct. Remember, the combustion of a hydrocarbon requires oxygen and results in the production of carbon dioxide and water.

$\ce{2C2H2(g) + 5O2(g) -> 4CO2(g) + 2H2O(g)}$

2) Next, locate a table of Standard Enthalpies of Formation to look up the values for the components of the reaction (Table 7.2, Petrucci Text).

3) First find the enthalpies of the products:

ΔHºf CO2 = -393.5 kJ/mol. Multiply this value by the stoichiometric coefficient, which in this case is 4 mol: vpΔHºf CO2 = 4 mol × (-393.5 kJ/mol) = -1574 kJ

ΔHºf H2O = -241.8 kJ/mol. The stoichiometric coefficient of this compound is 2 mol, so: vpΔHºf H2O = 2 mol × (-241.8 kJ/mol) = -483.6 kJ

Now add these two values in order to get the sum of the products: Σ vpΔHºf(products) = (-1574 kJ) + (-483.6 kJ) = -2057.6 kJ

Now, find the enthalpies of the reactants:

ΔHºf C2H2 = +227 kJ/mol. Multiply this value by the stoichiometric coefficient, which in this case is 2 mol: vrΔHºf C2H2 = 2 mol × (+227 kJ/mol) = +454 kJ

ΔHºf O2 = 0.00 kJ/mol. The stoichiometric coefficient of this compound is 5 mol, so: vrΔHºf O2 = 5 mol × (0.00 kJ/mol) = 0.00 kJ

Add these two values in order to get the sum of the reactants: Σ vrΔHºf(reactants) = (+454 kJ) + (0.00 kJ) = +454 kJ

The sums of the products and reactants can now be inserted into the formula:

ΔHº = Σ vpΔHºf(products) - Σ vrΔHºf(reactants) = (-2057.6 kJ) - (+454 kJ) = -2511.6 kJ

Practice Problems

1. Calculate ΔH if a piece of metal with a specific heat of 0.98 kJ·kg−1·K−1 and a mass of 2 kg is heated from 22 ºC to 28 ºC.
2. If a calorimeter's ΔH is +2001 Joules, how much heat did the substance inside the cup lose?
3.
Calculate the ΔH of the following reaction: CO2(g) + H2O(g) → H2CO3(g), if the standard values of ΔHf are as follows: CO2(g): -393.509 kJ/mol, H2O(g): -241.83 kJ/mol, and H2CO3(g): -275.2 kJ/mol.
4. Calculate ΔH if a piece of aluminum with a specific heat of 0.9 kJ·kg−1·K−1 and a mass of 1.6 kg is heated from 286 K to 299 K.
5. If the calculated value of ΔH is positive, does that correspond to an endothermic reaction or an exothermic reaction?

Solutions

1. ΔH = q = csp × m × ΔT = (0.98) × (2) × (+6) = 11.76 kJ
2. Since the heat gained by the calorimeter is equal to the heat lost by the system, the substance inside must have lost the negative of +2001 J, which is -2001 J.
3. ΔHº = Σ vpΔHºf(products) - Σ vrΔHºf(reactants), so you add up the ΔH's of the products and subtract away the sum for the reactants: (-275.2 kJ) - (-393.509 kJ + -241.83 kJ) = (-275.2) - (-635.339) = +360.139 kJ
4. ΔH = q = csp × m × ΔT = (0.9) × (1.6) × (13) = 18.72 kJ
5. Endothermic, since a positive value indicates that the system GAINED heat.

Contributors and Attributions

• Rachel Martin (UCD), Eleanor Yu (UCD)
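The formation-enthalpy sums in the worked example and in practice problem 3 can be checked with a short script. The `dH_rxn` helper is just a sketch of the Σ vpΔHºf(products) - Σ vrΔHºf(reactants) formula, using the values quoted on this page:

```python
# Standard enthalpy of reaction from standard enthalpies of formation,
# all values in kJ/mol, taken from the examples above.

def dH_rxn(products, reactants):
    """Each argument is a list of (coefficient, dHf) pairs."""
    return sum(v * h for v, h in products) - sum(v * h for v, h in reactants)

# 2 C2H2(g) + 5 O2(g) -> 4 CO2(g) + 2 H2O(g)  (worked example)
acetylene = dH_rxn(products=[(4, -393.5), (2, -241.8)],
                   reactants=[(2, +227), (5, 0.0)])
print(round(acetylene, 1))   # -2511.6 kJ

# CO2(g) + H2O(g) -> H2CO3(g)  (practice problem 3)
carbonic = dH_rxn(products=[(1, -275.2)],
                  reactants=[(1, -393.509), (1, -241.83)])
print(round(carbonic, 3))    # +360.139 kJ
```

Because elements in their standard states have ΔHºf = 0, terms like the O2 entry contribute nothing, exactly as in the worked example.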
The molar heat (or enthalpy) of sublimation is the amount of energy that must be added to a mole of solid at constant pressure to turn it directly into a gas (without passing through the liquid phase). Sublimation requires that all the forces between the molecules (or other species, such as ions) in the solid be broken as the solid is converted into a gas. The heat of sublimation is generally expressed as $\Delta H_{sub}$, in units of joules per mole (or per kilogram) of substance.

Introduction

Sublimation is the process of changing a solid into a gas without passing through the liquid phase. To sublime a substance, a certain energy must be transferred to the substance via heat (q) or work (w). The energy needed to sublime a substance is particular to the substance's identity and temperature and must be sufficient to do all of the following:

1. Excite the solid substance so that it reaches its maximum heat (energy) capacity (q) in the solid state.
2. Sever all the intermolecular interactions holding the solid substance together.
3. Excite the unbonded atoms of the substance so that it reaches its minimum heat capacity in the gaseous state.

Decomposing $\Delta H_{sub}$

Although the process of sublimation does not involve a solid evolving through the liquid phase, the fact that enthalpy is a state function allows us to construct a "thermodynamic cycle" and add the various energies associated with the solid, liquid, and gas phases together (e.g., Hess' Law). The energies involved in sublimation can be expressed by the sum of the enthalpy changes for each step:

$\Delta H_{sub} = \Delta E_{therm(s)} + \Delta E_{bond(s \rightarrow l)} + \Delta E_{therm(l)} + \Delta E_{bond(l \rightarrow g)}$

Recall that for state functions, only the initial and final states of the substance are important. Say for example that state A is the initial state and state B is the final state.
How a substance goes from state A to state B does not matter so much as what state A and what state B are. Concerning the state function of enthalpy, the energies associated with enthalpies (whose associated states of matter are contiguous to one another) are additive. Though in sublimation a solid does not pass through the liquid phase on its way to the gas phase, it takes the same amount of energy that it would to first melt (fuse) and then vaporize.

ΔEthermal (state of matter)

A change in thermal energy is indicated by a change in temperature (in Kelvin) of a substance at any particular state of matter. The change in thermal energy is expressed by the equation

$\Delta E_{therm}=C_p \times \Delta T$

with

• $C_p = \text{heat capacity}_{\text{(of a particular state of matter)}}$
• $\Delta T = T_{(final)}-T_{(initial)}$

For more information on heat capacity and specific heat capacity, see heat capacity.

ΔEbond (going from state 1 to state 2)

Bond energy is the amount of energy that a group of atoms must absorb so that it can undergo a phase change (going from a state of lower energy to a state of higher energy). It is given by

$\Delta E_{bond}=\Delta H_{substance_{\text{phase change}}} \times \Delta m_{(substance)}$

in which $\Delta H_{substance_{phase change}}$ is the enthalpy associated with a specific substance at a specific phase change. Common types of enthalpies include the heat of fusion (melting) and the heat of vaporization. Recall that fusion is the phase change that occurs between the solid state and the liquid state, and vaporization is the phase change that occurs between the liquid state and the gas state. Note that if the substance has more than one type of intermolecular force holding the solid together, then the substance must absorb enough energy to break all the different types of intermolecular forces before the substance can sublime.
$\Delta H_{sub}$ is always greater than $\Delta H_{vap}$1 Vaporization is the transfer of molecules of a substance from the liquid phase to the gas phase. Sublimation is the transfer of molecules from the solid phase to the gas phase. The solid phase is at a lower energy than the liquid phase: that is why substances always release heat when freezing, and hence $\Delta E_{fus \, (s \rightarrow l)} > 0$. So, although both sublimation and vaporization involve changing a substance into its gaseous state, the enthalpy change associated with sublimation is always greater than that of vaporization. This is because the particles of a solid have less energy than those of a liquid, meaning it takes more energy to excite a solid to its gaseous phase than it does to excite a liquid to its gaseous phase. Another way to look at this phenomenon is to consider the different energies involved in the heat of sublimation: 1. $\Delta E_{therm\, (s)}$ 2. $\Delta E_{fus \, (s \rightarrow l)}$ 3. $\Delta E_{therm \, (l)}$ and 4. $\Delta E_{vap \, (l\rightarrow g) }$ We already know that $\Delta E_{bond}=\Delta H_{\text{(phase change)}} \times m_{\text{(changed substance)}}$, so $\Delta E_{bond \, (l \rightarrow g)}=\Delta H_{(l \rightarrow g)} \times m_{\text{(gas created)}}$. Hence, $\Delta E_{vap \, (l\rightarrow g) }$ is actually one component of $\Delta H_{sub}$. Example 1 Consider the sublimation of ice: $H_2O_{(s)} \rightarrow H_2O_{(g)}$ Sublimation can be decomposed into two steps (assuming no change of temperature, i.e., no heat capacity issues): • Step 1: The melting of solid water to generate liquid water $H_2O_{(s)} \rightarrow H_2O_{(l)}$ • Step 2: The evaporation of liquid water to generate gaseous water $H_2O_{(l)} \rightarrow H_2O_{(g)}$ The enthalpy change of Step 1 is the molar heat of fusion, $\Delta H_{fus}$, and the enthalpy change of Step 2 is the molar heat of vaporization, $\Delta H_{vap}$.
Combining these two equations and canceling out anything that appears on both sides of the equation (i.e., liquid water), we're back to the sublimation equation: Step 1 + Step 2 = Sublimation Therefore the heat of sublimation, $\Delta H_{sub}$ is equal to the sum of the heats of fusion and vaporization: $\Delta H_{fus} + \Delta H_{vap} = \Delta H_{sub}$ Hence, unless $\Delta H_{fus}$ is equal to or less than zero (which it NEVER is), $\Delta H_{sub}$ must be greater than $\Delta H_{vap}$. Where does the added energy go? Energy can be observed in many different ways. As shown above, ΔEtot can be expressed as ΔEthermal + ΔEbond. Another way in which ΔEtot can be expressed is change in potential energy, ΔPE, plus change in kinetic energy, ΔKE. Potential energy is the energy associated with random movement, whereas kinetic energy is the energy associated with velocity (movement with direction). ΔEtot = ΔEthermal + ΔEbond and ΔEtot = ΔPE + ΔKE are related by the equations ΔPE = (0.5)ΔEthermal + ΔEbond ΔKE = (0.5)ΔEthermal for substances in the solid and liquid states. Note that ΔEthermal is divided between ΔPE and ΔKE for substances in the solid and liquid states. This is because the intermolecular and intramolecular forces that exist between the atoms of the substance (i.e. atomic bond, van der Waals forces, etc) have not yet been dissociated and prevent the atomic particles from moving freely about the atmosphere (with velocity). Potential energy is just a way to have energy, and it generally describes the random movement that occurs when atoms are forced to be close to one another. Likewise, kinetic energy is just another way to have energy, which describes an atom's vigorous struggle to move and to break away from the group of atoms. 
The thermal energy that is added to the substance is thus divided equally between the potential and the kinetic energies, because all aspects of the atoms' movement must be excited equally. However, once the intermolecular and intramolecular forces which restrict the atoms' movement are dissociated (when enough energy has been added), potential energy no longer exists (for monatomic gases), because the atoms of the substance are no longer forced to vibrate and be in contact with other atoms. When a group of atoms is in the gaseous state, its atoms can devote all their energies to moving away from one another (kinetic energy). Practical Applications of the Heat of Sublimation The heat of sublimation can be useful in determining the effectiveness of medicines. Medicine is often administered in pill (solid) form, and the substances that pills contain can sublime over time if the pill absorbs too much energy. This is why you may see the phrase "avoid excessive heat" on the bottles of common painkillers (e.g. Advil).2 In high-temperature conditions, the pills can absorb heat energy, and sublimation can occur. Practice Problems 1. If the heat of fusion for H2O is 333.5 kJ/kg, the specific heat capacity of H2O(l) is 4.18 J/(g*K), and the heat of vaporization for H2O is 2257 kJ/kg, then calculate the heat required to convert 1.00 kg of H2O(s) with an initial temperature of 273 K into steam at 373 K. Hint: 273 K is the solid-liquid phase change temperature and 373 K is the liquid-gas phase change temperature. 2. Using the information given in question one, calculate the heat of sublimation for 1.00 mole H2O when the initial temperature of the solid is 273 K. Hint: the molar mass of H2O is ~18.0 g/mol or 0.018 kg/mol. 3. Using the information given in question one, calculate the heat of sublimation for 1.00 kg H2O when the initial temperature is 200 K. The specific heat capacity for H2O(s) is 2.05 kJ/(kg*K). 4.
If the heat of fusion for Au is 12.6 kJ/mol, the specific heat capacity of Au(l) is 25.4 J/(mol*K), and the heat of vaporization for Au is 1701 kJ/kg, then calculate the heat of sublimation for 1.00 mol of Au(s) with an initial temperature of 1336 K. Hint: 1336 K is the solid-liquid phase change temperature, and 3243 K is the liquid-vapor phase change temperature. 5. If the heat of sublimation for Cu is 349.9 kJ/mol, the specific heat capacity of Cu(l) is 0.0245 kJ/(mol*K), and the heat of vaporization for Cu is 300.3 kJ/mol, then calculate the heat of fusion at 1357 K for 1.00 mol of Cu(s). (Hint: 1357 K is the solid-liquid phase change temperature, and 2835 K is the liquid-vapor phase change temperature.) Solutions 1. Break the evolution down into the constituent steps: • Melting of 1 kg of H2O (ice) at $T_i=273\; K$: $(333.5\; kJ/\cancel{kg})(1.0\; \cancel{ kg})= 333.5\;kJ$ • Heating up 1 kg of H2O (water) from $T_i=273\;K$ to $T_f=373\;K$: $(1.0\; \cancel{ kg})(4.18\; \cancel{J}/\cancel{g} \cdot K)\left(\dfrac{1000\;\cancel{g}}{1 \; \cancel{kg}}\right) \left(\dfrac{1\;kJ}{1000 \; \cancel{J}}\right) (373 \;K - 273 \;K)=418\;kJ$ • Boiling 1 kg of H2O (water) into vapor: $(2257 \;kJ/\cancel{kg})(1.0 \;\cancel{kg}) = 2257\; kJ$ • Add them all together to get the total enthalpy added $= 333.5\;kJ + 418\;kJ + 2257\; kJ = 3008.5\; kJ$ 2. $\Delta H_{sub}$ for 1 mol H2O (at Ti = 273 K) = (3008.5 kJ/kg)(0.018 kg/mol) = 54.153 kJ/mol 3. $\Delta H_{sub}$ for 1 kg H2O (at Ti = 200 K) = 3008.5 kJ/kg + (2.05 kJ/(kg*K))(273 K - 200 K) = 3158.15 kJ/kg 4. $\Delta H_{sub}$ for 1 mol Au (at Ti = 1336 K) = 12.6 kJ/mol + (0.0254 kJ/(mol*K))(3243 K - 1336 K) + (1701 kJ/kg)(0.197 kg/mol) = 396.1 kJ/mol 5. $\Delta H_{fus}$ for Cu (at T = 1357 K) = 349.9 kJ/mol - (0.0245 kJ/(mol*K))(2835 K - 1357 K) - 300.3 kJ/mol = 13.4 kJ/mol Footnotes 1 Dmitry Bedrov, Oleg Borodin, Grant D. Smith, Thomas D. Sewell, Dana M. Dattelbaum, and Lewis L. Steven.
"A molecular dynamics simulation study of crystalline 1,3,5-triamino-2,4,6-trinitrobenzene as a function of pressure and temperature." The Journal of Chemical Physics 131 (2009). 2 Advil bottle. 24 Ibuprofen tablets, 200 mg. EXP 12/08. 3 Pascal Taulelle, Georges Sitja, Gerard Pepe, Eric Garcia, Christian Hoff, and Stephane Veesler. "Measuring Enthalpy of Sublimation for Active Pharmaceutical Ingredients: Validate Crystal Energy and Predict Crystal Habit." Crystal Growth & Design (2009): 4706–4709. Print. Contributors and Attributions • Kasey Nakajima (UCD)
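The arithmetic in solutions 1–3 of the practice problems above can be double-checked with a short script. This is an illustrative sketch using only the data given in the problems; the variable names are our own.

```python
# Verify practice problems 1-3: heat to convert 1 kg of ice at 273 K to
# steam at 373 K, then the per-mole value, then the value for ice at 200 K.
dH_fus = 333.5      # kJ/kg, heat of fusion of water
cp_liq = 4.18       # kJ/(kg*K), specific heat of liquid water
dH_vap = 2257.0     # kJ/kg, heat of vaporization of water
cp_ice = 2.05       # kJ/(kg*K), specific heat of ice
M_H2O = 0.018       # kg/mol, approximate molar mass of water

# Problem 1: melt, warm the liquid from 273 K to 373 K, then boil (per kg)
q_total = dH_fus + cp_liq * (373 - 273) + dH_vap
print(round(q_total, 1))              # 3008.5 kJ/kg

# Problem 2: convert to a per-mole basis
print(round(q_total * M_H2O, 3))      # 54.153 kJ/mol

# Problem 3: starting at 200 K, first warm the solid to 273 K
print(round(q_total + cp_ice * (273 - 200), 2))  # 3158.15 kJ/kg
```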
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Energies_and_Potentials/Enthalpy/Heat_of_Sublimation.txt
Because the molecules of a liquid are in constant motion and possess a wide range of kinetic energies, at any moment some fraction of them has enough energy to escape from the surface of the liquid to enter the gas or vapor phase. This process, called vaporization or evaporation, generates a vapor pressure above the liquid. The Heat of Vaporization (also called the Enthalpy of Vaporization) is the heat required to induce this phase change. Since vaporization requires heat to be added to the system, it is an endothermic process, and therefore $\Delta H_{vap} > 0$ as defined: $\Delta H_{vap} = H_{vapor} - H_{liquid}$ where • $\Delta H_{vap}$ is the change in enthalpy of vaporization • $H_{vapor}$ is the enthalpy of the gas state of a compound or element • $H_{liquid}$ is the enthalpy of the liquid state of a compound or element Heat is absorbed when a liquid boils because molecules, which are held together by intermolecular attractive interactions, are jostled free of each other as the gas is formed. Such a separation requires energy (in the form of heat). In general the energy needed differs from one liquid to another depending on the magnitude of the intermolecular forces. We can thus expect liquids with strong intermolecular forces to have larger enthalpies of vaporization. The list of enthalpies of vaporization given in Table T5 bears this out. Example $1$ If a liquid uses 50 Joules of heat to vaporize one mole of liquid, then what is the enthalpy of vaporization? Solution The heat in the process is equal to the change of enthalpy, which involves vaporization in this case, $q_{tot} = \Delta H_{vap}$ so $q_{tot} = 50 \; J= \Delta H_{vap}$ So the enthalpy of vaporization for one mole of substance is 50 J. Kinetic energy does not change The kinetic energy of the molecules in the gas and the liquid is the same, since the vaporization process occurs at constant temperature.
However, the added thermal energy is used to overcome the potential energy of the intermolecular forces in the liquid, generating molecules in the gas that are free of potential energy (for an ideal gas). Thus, while $H_{vapor} > H_{liquid}$, the kinetic energies of the molecules are equal. The Enthalpy of Condensation Condensation is the opposite of vaporization, and therefore $\Delta H_{condensation}$ is also the opposite of $\Delta H_{vap}$. Because vaporization is an endothermic process, in which heat is absorbed by the system from the surroundings, condensation is an exothermic process, in which heat is released from the system into the surroundings. \begin{align} ΔH_{condensation} &= H_{liquid} - H_{vapor} \\[4pt] &= -ΔH_{vap} \end{align} Because condensation is exothermic, $ΔH_{condensation}$, also written as $ΔH_{cond}$, is always negative. Moreover, $ΔH_{cond}$ is equal in magnitude to $ΔH_{vap}$, so the only difference between the two values for one given compound or element is the positive or negative sign. Example $2$ 2.055 liters of steam at 100°C was collected and stored in a cooler container. What was the amount of heat involved in this process? The $ΔH_{vap}$ of water = 44.0 kJ/mol. Solution 1. First, convert 100°C to Kelvin. °C + 273.15 = K 100.0 + 273.15 = 373.15 K 2. Find the amount involved (in moles). \begin{align*} n_{water} &= \dfrac{PV}{RT} \\[4pt] &= \dfrac{(1.0\; atm)(2.055\; L)}{(0.08206\; L\; atm\; mol^{-1} K^{-1})(373.15\; K)} \\[4pt] &= 0.0671\; mol \end{align*} 3. Find $ΔH_{cond}$ $ΔH_{cond} = -ΔH_{vap} \nonumber$ so $ΔH_{cond} = -44.0\; kJ/ mol \nonumber$ 4. Using the $ΔH_{cond}$ of water and the amount in moles, calculate the amount of heat involved in the reaction. To find kJ, multiply the $ΔH_{cond}$ by the amount in moles involved.
\begin{align*} (ΔH_{cond})(n_{water}) &= (-44.0\; kJ/mol)(0.0671\; mol) \\[4pt] &= -2.95\; kJ \end{align*} Definitions of Terms • Vaporization (or Evaporation): the transition of molecules from a liquid to a gaseous state; the molecules on a surface are usually the first to undergo a phase change. • Enthalpy: the amount of heat consumed or released in a system at constant pressure • Kinetic Energy: the energy of a moving object, measured in Joules • Endothermic: describes a process in which heat is absorbed by the system from the surroundings • Exothermic: describes a process in which heat is released from the system into the surroundings • Heat of Vaporization: the amount of heat required to evaporate a liquid • System and its Surroundings: the system is the area in which a reaction takes place and the surroundings is the area that interacts with the system • Condensation: the transition of molecules from a gaseous or vapor state to a liquid • State Function (or Function of State): any property, such as temperature, pressure, enthalpy or mass, that has a unique value for the specific state of a system Contributors and Attributions • Nandini Bapat (UCD)
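The ideal-gas mole count and condensation heat in Example 2 above can be checked numerically. This is an illustrative sketch, not part of the original page; the variable names are our own.

```python
# Numeric check of Example 2: moles of steam from the ideal gas law,
# then the heat released on condensation.
R = 0.08206                    # L*atm/(mol*K), gas constant
P, V, T = 1.0, 2.055, 373.15   # atm, L, K (steam at 100 degrees C)

n = P * V / (R * T)            # moles of steam collected
dH_cond = -44.0                # kJ/mol, the negative of dH_vap for water
q = dH_cond * n                # heat "involved" (released) on condensing

print(round(n, 4))   # 0.0671 mol
print(round(q, 2))   # -2.95 kJ (released to the surroundings)
```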
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Energies_and_Potentials/Enthalpy/Heat_of_Vaporization.txt
Discussion Questions • What is the energy of hydration? • How is hydration energy related to lattice energy? • What is enthalpy of solvation? The formation of a solution involves the interaction of solute with solvent molecules. Many different liquids can be used as solvents for liquid solutions, and water is the most commonly used solvent. When water is used as the solvent, the dissolving process is called hydration. The interaction between water molecules and a sodium ion is illustrated in the diagram below. This is a typical ion-dipole interaction. At the molecular level, the ions interact with water molecules from all directions in 3-dimensional space. This diagram depicts the concept of the interaction only. The diagram also displays hydrogen-bonding, dipole-dipole, ion-induced dipole, and dipole-induced dipole interactions. In the absence of these interactions, solvation takes place due to dispersion. Definitions of these terms are apparent from the diagrams, and the wording of each term also hints at the nature of the interaction. What is the Enthalpy of Hydration? The enthalpy of hydration, $\Delta H_{hyd}$, of an ion is the amount of heat released when a mole of the ion dissolves in a large amount of water, forming an infinitely dilute solution in the process, $M^{z+}_{(g)} + mH_2O \rightarrow M^{z+}_{(aq)} \label{1}$ where Mz+(aq) represents ions surrounded by water molecules and dispersed in the solution. The approximate hydration energies of some typical ions are listed in Figure $1$, which illustrates the point that as atomic number increases down a group, so does ionic size, leading to a decrease in the absolute value of the enthalpy of hydration.
Figure $1$: Enthalpy of Hydration ($\Delta H_{hyd}\; kJ/mol$) of Some Typical Ions
Ion $\Delta H_{hyd}$ Ion $\Delta H_{hyd}$ Ion $\Delta H_{hyd}$
H+ -1130 Al3+ -4665 Fe3+ -4430
Li+ -520 Be2+ -2494 F- -505
Na+ -406 Mg2+ -1921 Cl- -363
K+ -322 Ca2+ -1577 Br- -336
Rb+ -297 Sr2+ -1443 I- -295
Cs+ -276 Ba2+ -1305 ClO4- -238
Cr2+ -1904 Mn2+ -1841 Fe2+ -1946
Co2+ -1996 Ni2+ -2105 Cu2+ -2100
Zn2+ -2046 Cd2+ -1807 Hg2+ -1824
From the above table, an estimate can be made for the hydration energy of sodium chloride. The hydration energy of an ionic compound consists of two inseparable parts. The first part is the energy released when the solvent forms a coordination compound with the ions. This energy is called the enthalpy of ligation, $\Delta H_{lig}$. The processes related to these energies are shown below: $M^{z+} + nL \rightarrow ML_n^{z+} \;\;\; \Delta H_{lig} \label{2}$ $ML_n^{z+} + solvent \rightarrow ML^{z+}_{n(sol)} \;\;\; \Delta H_{disp} \label{3}$ The second step is to disperse the ions or hydrated ions into the solvent medium, which has a dielectric constant different from vacuum. This amount of energy is called the energy of dispersion, $\Delta H_{disp}$. Therefore, $\Delta H_{hyd} = \Delta H_{disp} + \Delta H_{lig} \label{4}$ This idea is brought up just to point out that the formation of aqua complex ions is part of the hydration process, even though the two energies are not separable. When stronger coordination is made between the ions and other ligands, they replace the coordinated water molecules if they are present. In the presence of NH3 molecules, they replace the water of Cu(H2O)62+: $Cu(H_2O)_6^{2+} + 6NH_3 \rightarrow Cu(NH_3)_6^{2+} + 6H_2O \label{5}$ Relating Hydration Energy to Lattice Enthalpy In the discussion of lattice energy, we consider the ions separated into a gas form, whereas in the dissolution process the ions are also separated, but this time into ions dispersed in a medium with solvent molecules between them.
The medium or solvent has a dielectric constant. The molar enthalpy of solution, $\Delta H_{sol}$, is the energy change when one mole of solid is dissolved in a solvent. This quantity, the enthalpy of crystallization, and the energy of hydration form a cycle. Taking the salt $\ce{NaCl}$ as an example, the following relationship, $\Delta H_{sol} =\Delta H_{lattice} + \Delta H_{hydration} \label{4A}$ is obvious from the following diagram. The term enthalpy of crystallization is used in this diagram instead of lattice energy so that all the arrows point downward. Note that the enthalpy of crystallization, $H_{cryst}$, and the energy of crystallization, $E_{cryst}$, refer to the same quantity, and they are used interchangeably. The enthalpies of solution for some salts are positive values; in these cases the temperature of the solution decreases as the substance dissolves, and the dissolving is an endothermic process. For such salts, the energy levels of the solid and the solution in the cycle diagram are reversed in height. What is Enthalpy of Solution? The molar enthalpy of solution, $\Delta H_\ce{sol}$, is the energy change when one mole of solid is dissolved in a solvent. Sometimes the enthalpy of hydration is also (mis)understood as $\Delta H_\ce{sol}$. When applying these values, make sure you understand the process involved. Figure $2$: Enthalpy of Solution ($\Delta H_\ce{sol}$ kJ/mol) of Some Common Electrolytes
Substance $\Delta H_{sol}$ Substance $\Delta H_{sol}$
AlCl3(s) -373.63 H2SO4(l) -95.28
LiNO3(s) -2.51 LiCl(s) -37.03
NaNO3(s) 20.50 NaCl(s) 3.88
KNO3(s) 34.89 KCl(s) 17.22
NaOH(s) -44.51 NH4Cl(s) 14.77
These values indicate that when aluminum chloride and sulfuric acid are dissolved in water, much heat is released. Due to the very small values of their enthalpies of solution, the temperature changes are hardly noticed when LiNO3 and NaCl are dissolving. Example $1$: Madelung constant of Sodium Chloride The lattice energy of NaCl calculated using the Madelung constant of the NaCl structure type is +788 kJ/mol.
The estimated enthalpies of hydration for sodium and chloride ions are -406 and -363 kJ/mol respectively. Estimate the enthalpy of solvation for NaCl. Solution The enthalpy of hydration for NaCl is estimated to be $\Delta H_{hyd} = -406 + (-363) = -769\; kJ/mol \nonumber$ Using the cycle in Figure $2$ and Equation \ref{4A}, we have $\Delta H_{sol} = \Delta H_{lattice} + \Delta H_{hyd} \nonumber$ $\Delta H_{sol} = 788 + (-769)\; kJ/mol = 19\, kJ/mol \nonumber$ DISCUSSION A positive value indicates an endothermic reaction. However, the value is small, and depending on the source of data, the estimated value may change. This value of 19 kJ/mol is too high compared to the value given earlier for NaCl of 3.88 kJ/mol, due to a high value of lattice energy used. Example $2$ The enthalpy of crystallization for KCl is -715 kJ/mol. The enthalpies of hydration for potassium and chloride are -322 and -363 kJ/mol respectively. From these values, estimate the enthalpy of solution for KCl. Solution The enthalpy of hydration for KCl is estimated to be $\Delta H_{hyd}= -322 + (-363) = -685 kJ/mol \nonumber$ Thus, the enthalpy of solution is $\Delta H_{sol}= -685 - (-715) = 30 kJ/mol \nonumber$ DISCUSSION The measured enthalpy of solution given above is 17.22 kJ/mol. The two values indicate that dissolving $\ce{KCl}$ into water is an endothermic change. Should the temperature decrease or increase when $\ce{KCl}$ dissolves? Questions 1. What types of interaction are present when CaCl2 dissolves in ethanol? 2. What is the major type of interaction when toluene dissolves in benzene? 3. Which ion has a larger absolute value of enthalpy of hydration, Na+ or Ca2+? 4. Which ion is larger, Na+ or Cl-? 5. When KNO3 is dissolving in water, will the temperature decrease or increase? Solutions 1. Skill - Identify the type of interactions in the solvation process. 2. Discussion - Toluene and benzene form an ideal solution in that the total vapor pressure of the solution is the sum of the partial pressures of benzene and toluene. These two compounds are so much alike that their molecules disperse into each other.
The major driving force for solution is entropy. 3. Skill - The ion-dipole interaction is stronger between Ca2+ and water molecules than between Na+ and water molecules. 4. Discussion - What are their electronic configurations? The sodium ion has the same electronic configuration as Ne, but the chloride ion has the same electronic configuration as Ar, which is larger than Ne. Which one releases more energy when hydrated? 5. Discussion - Among the substances listed in the table above, KNO3 absorbs the most energy per mole when dissolved, so the temperature of the solution decreases. Kirchhoff Law Kirchhoff's Law describes how the enthalpy of a reaction varies with temperature. In general, the enthalpy of any substance increases with temperature, which means the enthalpies of both the products and the reactants increase. The overall enthalpy of the reaction will change if the increase in the enthalpy of products and reactants is different. Introduction At constant pressure, the heat capacity is equal to the change in enthalpy divided by the change in temperature: $c_p = \dfrac{\Delta H}{\Delta T} \label{1}$ Therefore, if the heat capacities do not vary with temperature, then the change in enthalpy is a function of the difference in temperature and heat capacities. The amount by which the enthalpy changes is proportional to the product of the temperature change and the change in heat capacities of products and reactants. A weighted sum is used to calculate the change in heat capacity, to incorporate the ratio of the molecules involved, since all molecules have different heat capacities at different states. $H_{T_f}=H_{T_i}+\int_{T_i}^{T_f} c_{p} dT \label{2}$ If the heat capacity is temperature independent over the temperature range, then Equation \ref{2} can be approximated as $H_{T_f}=H_{T_i}+ c_{p} (T_{f}-T_{i}) \label{3}$ with • $c_{p}$ is the (assumed constant) heat capacity and • $H_{T_{i}}$ and $H_{T_{f}}$ are the enthalpies at the respective temperatures.
Equation \ref{3} can only be applied to small temperature changes (<100 K), because over a larger temperature change the heat capacity is not constant. Kirchhoff's Law has many biochemical applications, because it allows us to predict enthalpy changes at other temperatures by using standard enthalpy data. Contributors and Attributions • Janki Patel (UCD), Kostia Malley (UCD) Simple Measurement of Enthalpy Changes of Reaction This page is just a brief introduction to simple measurements of enthalpy changes of reaction which can be easily carried out in a lab. There are also pointers to other sources of information. Contributors and Attributions Jim Clark (Chemguide.co.uk)
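The constant-heat-capacity form of Kirchhoff's Law lends itself to a quick numerical sketch. The numbers below are hypothetical, chosen only to illustrate how a reaction enthalpy shifts with temperature when $\Delta c_p$ is assumed constant.

```python
# Kirchhoff's law with a constant heat capacity difference:
#   dH(T_new) = dH(T_ref) + dCp * (T_new - T_ref)
# where dCp is the weighted sum Cp(products) - Cp(reactants).

def reaction_enthalpy_at(dH_ref, dCp, T_ref, T_new):
    """Shift a reaction enthalpy from T_ref to T_new (dCp assumed constant)."""
    return dH_ref + dCp * (T_new - T_ref)

# Hypothetical reaction: dH = -92.2 kJ at 298 K, dCp = -0.045 kJ/K
dH_348 = reaction_enthalpy_at(-92.2, -0.045, 298, 348)
print(round(dH_348, 2))   # -94.45 kJ
```

The 50 K shift changes the enthalpy by only a couple of kJ here, which is why the approximation is safe for modest temperature ranges.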
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Energies_and_Potentials/Enthalpy/Hydration.txt
Entropy is a state function that is often erroneously referred to as the 'state of disorder' of a system. Qualitatively, entropy is simply a measure of how much the energy of atoms and molecules becomes more spread out in a process, and it can be defined in terms of the statistical probabilities of a system or in terms of the other thermodynamic quantities. Entropy is also the subject of the Second and Third Laws of thermodynamics, which describe the changes in entropy of the universe with respect to the system and surroundings, and the entropy of substances, respectively. • ‘Disorder’ in Thermodynamic Entropy Boltzmann’s sense of “increased randomness” as a criterion of the final equilibrium state for a system compared to initial conditions was not wrong; what failed him was his surprisingly simplistic conclusion: if the final state is random, the initial system must have been the opposite, i.e., ordered. “Disorder” was the consequence, to Boltzmann, of an initial “order”, not — as is obvious today — of what can only be called a “prior, lesser but still humanly-unimaginable, large number of accessible microstates.” • Microstates Dictionaries define “macro” as large and “micro” as very small, but a macrostate and a microstate in thermodynamics aren't just definitions of big and little sizes of chemical systems. Instead, they are two very different ways of looking at a system. A microstate is one of the huge number of different accessible arrangements of the molecules' motional energy for a particular macrostate. • Simple Entropy Changes - Examples Several examples are given to demonstrate how the statistical definition of entropy and the 2nd law can be applied: phase change, gas expansions, dilution, colligative properties and osmosis. • Statistical Entropy Entropy is a state function that is often erroneously referred to as the 'state of disorder' of a system.
Qualitatively, entropy is simply a measure of how much the energy of atoms and molecules becomes more spread out in a process, and it can be defined in terms of the statistical probabilities of a system or in terms of the other thermodynamic quantities. • Statistical Entropy - Mass, Energy, and Freedom The energy or the mass of a part of the universe may increase or decrease, but only if there is a corresponding decrease or increase somewhere else in the universe. The freedom in that part of the universe may increase with no change in the freedom of the rest of the universe. There might be decreases in freedom in the rest of the universe, but the sum of the increase and decrease must result in a net increase. • The Molecular Basis for Understanding Simple Entropy Change Boltzmann was brilliant, undoubtedly a genius, far ahead of his time in theory. Of course he was not infallible. Most important for us moderns to realize, he was still very limited by the science of his era; the dominant physical chemist and later Nobelist Ostwald named his estate “Energie” but did not believe in the physical reality of molecules nor in Boltzmann’s treatment of them. Some interesting but minor details that are not too widely known: even though Boltzmann died in 1906, there is no evidence that he ever saw, and thus certainly never calculated entropy values via, the equation Planck published in a 1900 article, S = (R/N) ln W. It was first printed in 1906 in a book by Planck as \( S=k_{B} \ln W \), and subsequently carved on Boltzmann’s tombstone. Planck’s nobility in allowing R/N to be called ‘Boltzmann’s constant’, kB, was uncharacteristic of most scientists of that day, as well as now.
The important question is “what are the bases for Boltzmann’s introduction of order to disorder as a key to understanding spontaneous entropy change?” That 1898 idea came from two to three pages of a conceptual description, a common language summary, that follow over 400 pages of detailed theory in Brush’s translation of Boltzmann’s 1896-1898 “Lectures on Gas Theory” (University of California Press, 1964). The key paragraph should be quoted in full. (The preceding and following phrases and sentences, disappointingly, only expand on it or support it without additional meaningful technical details or indications of Boltzmann’s thought processes. I have inserted an explanatory clause from the preceding paragraph in brackets, and put in italics Boltzmann’s surprisingly naïve assumptions about all or most initial states as “ordered”.) “In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.” (Final paragraph of #87, p. 443.) [Pitzer’s calculation of a mole of any substance at near 0 K shows that none can be more ordered than having the possibility of $10^{26,000,000,000,000,000,000}$ different accessible microstates! (Pitzer, Thermodynamics, 3rd edition, 1995, p. 67.)] Thus, today we know that no system above 0 K has any "order" in correct thermodynamic descriptions of systems of energetic molecules.
The common older textbook comparison of orderly crystalline ice to disorderly liquid water is totally deceptive. It is a visual "Boltzmann error", not a proper thermodynamic evaluation. If liquid water at 273 K, with its $10^{1,991,000,000,000,000,000,000,000}$ accessible microstates (quantized molecular arrangements), is considered "disorderly", how can ice at 273 K that has $10^{1,299,000,000,000,000,000,000,000}$ accessible microstates be considered "orderly"? Obviously, using such common words is inappropriate in measuring energetic microstates and thus in discussing entropy change conceptually. That slight, innocent paragraph of a sincere man — but before modern understanding of qrev/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they? Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy.
Then you will understand what entropy ‘really is’.” There is no basis in physical science for interpreting entropy change as involving order and disorder. The original definition of entropy (change) involves a transfer of heat from a thermal reservoir to a system via a virtually reversible energy flow process. Although Clausius described it and his equation of dqrev/T or qrev/T as a “Verwandlung” or “transformation”, he limited it and “disgregation” to discussions of fusion or vaporization where the “disgregation values” changed. Thus, Clausius was observing phase change, but he made no statements about “orderly crystalline substances” being transformed into “disorderly” liquids, an obvious claim for him to make from his observation. Unfortunately, Clausius did not see that his dq, an amount of ‘heat’ energy, initially relatively localized in a thermal reservoir, was transformed in any process that allowed heat to become more spread out in space. That is what happens when a warm metal bar is placed in contact with a similar barely cooler metal bar — or when any system is warmed by its slightly warmer surroundings. The final state of the “universe” in both of these examples is at equilibrium and at a uniform temperature. The internal energy of the atoms or molecules in the reservoir has become less localized, more dispersed in the greater final three-dimensional space than it was in the initial state. (More profoundly, of course, that energy has become more dispersed in phase-space, and spread over more energy levels in the once-cooler object than was its dispersal decreased in the once-hotter surroundings.) That is also what happens when ideal gases A and B with their individually different internal energy content (S0 values) but comparably energetic, constantly moving molecules in adjacent chambers are allowed access to one another’s chambers at 298 K. 
With no change in temperature, they will mix spontaneously because, on the lowest level of interpretation, the translational energy of the A and B molecules can thereby become more spread out in the larger volume. On a more sophisticated level, their energy is more widely distributed in phase-space. From the quantum-mechanical view of the occupancy of energy levels by individual molecules, each type of molecule has additional energy levels in the greater volume, because the energy levels become closer together. But the same causal description of energy spontaneously spreading out can be used as in the naïve view of seeing mobile molecules always moving to occupy newly available 3-D volume: the energy of the molecules is more dispersed, more spread out, now in terms of dispersal over more energy levels. (Of course, this energy dispersal can best be described in terms of additional accessible microstates. The greater the number of possible arrangements of molecular energies over energy levels, the greater the entropy increase — because the system in any one arrangement at one instant has more probability of being in a different arrangement the next instant. The total energy of the system is unchanged over time, but there is a continuing ‘temporal dance’ of the system over a minute fraction of the hyper-astronomic number of accessible arrangements.) The increase of entropy in either A or B can readily be shown to be equal to R ln (VFinal/VInitial), or more fundamentally, −nR Σ (xi ln xi). This result is not specific to gases, of course. What is shown to be significant by the basic equation is that any separation of molecules of one type from its own kind is an entropy increase, due to the spreading out of its intrinsic internal energy in greater space, both 3-D and phase-space.
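The two expressions just quoted can be checked against each other with a few lines of Python (a sketch, not part of the original article; the mole amounts are chosen for illustration):

```python
import math

R = 8.314  # gas constant, J/(K·mol)

# Expansion: one mole of ideal gas doubles its volume, V_Final/V_Initial = 2.
dS_expansion = R * math.log(2)  # ΔS = R ln(V_Final/V_Initial), per mole

# Mixing: one mole each of A and B share the doubled volume, so x_A = x_B = 0.5.
n, x = 2.0, 0.5
dS_mixing = -n * R * (x * math.log(x) + x * math.log(x))  # ΔS = −nR Σ (x_i ln x_i)

# The two routes agree: mixing of the two gases is just each gas expanding
# into twice its original volume, about 5.76 J/K per gas, 11.53 J/K total.
```

The agreement illustrates the point made above: the "entropy of mixing" here is nothing more than the expansion entropy of each constituent.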
Further, this increased dispersal of energy is interpretable in terms of the increased number of accessible arrangements of the system’s energy at any instant and thus a greater number of chances for change in the next instant — a greater ‘temporal dance’ by the system over greater possibilities and a consequent entropy increase. Boltzmann’s sense of “increased randomness” as a criterion of the final equilibrium state for a system compared to initial conditions was not wrong. What failed him (and succeeding generations) was his surprisingly simplistic conclusion: if the final state is random, the initial system must have been the opposite, i.e., ordered. “Disorder” was the consequence, to Boltzmann, of an initial “order”, not — as is obvious today — of what can only be called a “prior, lesser but still humanly-unimaginable, large number of accessible microstates”. Clearly, a great advantage of introducing chemistry students to entropy increase as due to molecular energy spreading out in space, if it is not constrained, begins with the ready parallels to the spontaneous behavior of kinds of energy that are well-known to beginners: the light from a light bulb, the sound from a stereo, the waves from a rock dropped in a swimming pool, the air from a punctured tire. However, its profound “added value” is its continued pertinence at the next level in the theoretical interpretation of energy dispersal in thermal or non-thermal events, i.e., when the quantization of molecular energies on energy levels, their distributions, and accessible microstates become the focus. When a system is heated and its molecules move more rapidly, their probable distributions on energy levels change so that higher levels are more occupied and additional high levels become accessible. The molecular energy of the heated system therefore has become more widely spread out on energy levels.
The dispersal of energy on energy levels is comparable in adiabatic processes that some authors characterize as involving “positional” or “configurational” entropy. When a larger volume is made available to ideal components in a system — by expansion of a gas, by fluids mixing (or even by a solute dissolving) — the energy levels of the final state of each constituent are closer together, denser than in the initial state. This means that more energy levels are occupied in the final state despite no change in the total energy of any constituent. Thus, the initial energy of the system has become more spread out, more widely dispersed on more energy levels in the final state. The ‘Boltzmann’ equation for entropy is S = kB ln W, where W is the number of different ways or microstates in which the energy of the molecules in a system can be arranged on energy levels. Then, ΔS equals kB ln (WFinal/WInitial) for the thermal or expansion or mixing processes just mentioned. A most important entropy value in chemistry is the standard state entropy for a mole of any substance at 298 K, S0, which can be determined by calorimetric measurement of the increments of heat/T added reversibly to the substance from 0 K to 298 K. The q/T of any solid-state transition or phase change is also added. Obviously, therefore, considerable energy is ‘stored’ in any substance on many different energy levels when it is in its standard state. (A. Jungermann, J. Chem. Educ. 2006, 83, 1686-1694.) Further, for example, if energy from 398 K surroundings is spread out to a mole of nitrogen at 298 K, the nitrogen’s molecules become differently arranged on previous energy levels and spread to some higher levels. If, while at any fixed temperature, the nitrogen is allowed to expand into a vacuum or to mix with another gas, its energy levels in the new larger volume will be closer together.
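The calorimetric route to S0 described above, summing increments of heat/T from 0 K plus q/T for each phase change, can be sketched numerically. The function and the Cp table below are hypothetical illustrations, not measured data:

```python
def standard_entropy(temps, cps, phase_changes):
    """Trapezoidal integration of Cp/T dT, plus ΔH/T for each phase change.

    temps: temperatures in K (ascending); cps: Cp in J/(K·mol) at those temps;
    phase_changes: list of (ΔH in J/mol, T in K) tuples.
    """
    s = 0.0
    for i in range(len(temps) - 1):
        t1, t2 = temps[i], temps[i + 1]
        c1, c2 = cps[i], cps[i + 1]
        s += 0.5 * (c1 / t1 + c2 / t2) * (t2 - t1)  # trapezoid of Cp/T
    s += sum(dH / t for dH, t in phase_changes)
    return s

# Hypothetical solid heated from 10 K to 300 K with a fusion step at 250 K:
S = standard_entropy([10, 100, 250, 300], [5.0, 30.0, 45.0, 50.0],
                     [(4000.0, 250.0)])
```

A real determination would use many closely spaced Cp measurements (and a Debye extrapolation below the lowest measured temperature), but the bookkeeping is exactly this: area under Cp/T plus the phase-change terms.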
Even in fixed-volume or steady-temperature situations, the constantly colliding molecules in a mole of any gas are clearly not just in one unique arrangement on energy levels for more than an instant. They are continually changing from one arrangement to another due to those collisions — within the unchanged total energy content at a given temperature and a distribution on energy levels consistent with a Boltzmann distribution. Thus, because WInitial at 0 K is arbitrarily agreed to be 1, nitrogen’s S0 of 191.6 J/K·mol = 1.380 × 10^-23 J/K × ln WFinal. Then, WFinal = 10 to the exponent of 6,027,000,000,000,000,000,000,000 — a number of possible arrangements for the nitrogen molecules at 298.15 K that is humanly beyond comprehension, except in the terms of manipulating or comparing that number mathematically. It should be emphasized that these gigantic numbers are significant guides mathematically, physically, and conceptually — i.e., a greater or smaller such number indeed indicates a difference in real physical systems of molecules, and we should sense that its magnitude is significant. However, conceptually, we should also realize that in real time it is impossible for a system to be in more than a few quadrillion different states in a few seconds, perhaps spending most time in a few billion or million. It is impossible, even in almost-infinite time, for a system to explore all possible microstates; even a tiny fraction of the possible microstates could not be explored in a century. (It is even less probable that a gigantic number would be visited, because the most probable and frequently occupied microstates constitute an extremely narrow peak in a ‘probability spectrum’.) However, the concepts are clear.
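The nitrogen arithmetic just given can be reproduced in a few lines. Rearranging S = kB ln W gives log10 W = S / (kB ln 10); this is only a check of the calculation quoted above:

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
S0_nitrogen = 191.6 # standard molar entropy of N2 at 298.15 K, J/(K·mol)

# S = kB ln W  =>  log10(W) = S / (kB * ln 10); W itself is far too large
# for any floating-point type, so we work with its base-10 exponent.
exponent = S0_nitrogen / (kB * math.log(10))
# exponent ≈ 6.03e24, i.e. W_Final = 10 to the power 6,027,000,000,000,000,000,000,000
```

Working with the exponent rather than W itself is the only practical option: no computer (or universe) can represent W directly.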
At one instant, all of the molecules are only in one energetic arrangement — an instantaneous ‘freeze-frame’ photo of molecular energies on energy levels (an abstraction derived from an equally impossible photo of the actual molecules with their speeds and locations in space at one instant). Then, in the next instant, a collision between even two molecules will change the arrangement into a different microstate. In the next instant, to another. Then to another. (Leff has called this sequence of instantaneous changes “a temporal dance” by the system over some of its possible accessible microstates.) Even though the calculated number of possible microstates is so great that there is no chance that more than a small fraction of that number could ever be explored or “danced in” over finite time, that calculated number determines how many chances there are for the system’s energy arrangement to change at the next moment. The greater the number of microstates, the more chances a system has for its energy next to be in a different microstate. In this sense, the greater the number of microstates possible for a system, the less probable it is that it could return to a previously visited microstate, and thus the more dispersed is its energy over time. (In no way is this evaluation and conclusion a novel/radical introduction of time into thermodynamic considerations! It is simply that in normal thermodynamic measurements lasting seconds, minutes, or hours, the detailed behavior of molecules in maintaining a macrostate is of no interest.) For example, heating a mole of nitrogen only one degree, say from the standard state of 298.15 K to 299.15 K, in view of its heat capacity of 29 J/K, results in an entropy increase of 0.097 J/K, increasing the exponent of the number of microstates from 6.027 × 10^24 to 6.034 × 10^24.
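The one-degree heating example can be checked with ΔS = Cp ln(T2/T1) (valid for an approximately constant Cp over this small interval); a sketch, not from the original article:

```python
import math

Cp = 29.0  # molar heat capacity of N2, J/(K·mol), as quoted above
dS = Cp * math.log(299.15 / 298.15)  # ΔS for heating 298.15 K -> 299.15 K, ≈ 0.097 J/K

# That ΔS raises the base-10 exponent of the microstate count by ΔS / (kB ln 10),
# a change of order 10^21 in an exponent of about 6 × 10^24.
kB = 1.380649e-23
d_exponent = dS / (kB * math.log(10))
```

Even this barely measurable macroscopic change corresponds to multiplying the number of accessible microstates by 10 raised to a twenty-two-digit number.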
Thus, even a slight macro change in a system is signified by a corresponding change in the number of microstates — in the number of chances for the system to be in a different microstate in the next instant than in the previous moment. The greater the number of microstates possible for a system in a given state, the more probable is the dispersal of that system’s energy in the sense of its being in a different microstate in the next instant. A greater dispersal of energy in a system means, in terms of its microstates, a ‘temporal dance’ over a greater number of possible microstates than if there were a smaller number of microstates. The most frequently used example to show entropy increase as greater “disorder” in elementary chemistry texts for many years was that of ice melting to water. Sparkling orderly crystalline ice to disorderly mobile liquid water — that is indeed a striking visual and crystallographic impression, but appearance is not the criterion of entropy change. Entropy increase depends on the dispersal of energy — in three-dimensional space (an easily understandable generality for all beginning students). Then, more capable students can be led to see how entropy increase due to heat transfer involves molecular energies occupying more and higher energy levels, while entropy increase in gas expansion and all mixing is characterized by occupancy of denser energy levels within the original energy span of the system. Finally, advanced students can be shown that any increase in entropy corresponds to a final system or universe that has a larger number of microstates than the initial system/universe — the ultimate correlation of entropy increase with theory, quantitatively derivable from molecular thermodynamics. Crystalline ice at 273 K has an S0 of 41.34 J/K·mol, and thus, via S = kB ln W, there are 10 to the exponent of 1,299,000,000,000,000,000,000,000 possible accessible microstates for ice.
Because the S0 for liquid water at 273 K is 63.34 J/K·mol, there are 10 to an even larger exponent of 1,991,000,000,000,000,000,000,000 accessible microstates for water. Does this clearly show that water is “disorderly” compared to crystalline ice? Of course not. For ice to have fewer accessible microstates than water at the same temperature means primarily — so far as entropy considerations are concerned — that any pathway to change ice to water will result in an increase in entropy in the system and therefore is favored thermodynamically. Gibbs’ use of the phrase “mixed-upness” is totally irrelevant to ‘order-disorder’ in thermodynamics or any other discussion. It comes from a posthumous fragment of writing, unconnected with any detailed argument or logical support for the many fundamental procedures and concepts developed by Gibbs. Finally, the idea that there is any ‘order’ or simplicity in the distribution of energy in an initial state of any real substance under real conditions is destroyed by Pitzer’s calculation of the number of microstates in his “Thermodynamics” (Third edition, 1995, p. 67). As near to 0 K — and thus to as ‘practical’ a zero entropy as can be achieved in a laboratory — Pitzer shows that there must be 10 to the exponent of 26,000,000,000,000,000,000 possible accessible microstates for any substance. Contributors and Attributions • Frank L. Lambert, Professor Emeritus, Occidental College
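The ice and water figures above can be cross-checked two ways: the microstate exponents follow from S0 / (kB ln 10), and the S0 difference should equal ΔHfusion/T. This is a sketch; the enthalpy of fusion of ice (≈ 6010 J/mol) is a standard handbook value, not taken from the article:

```python
import math

kB = 1.380649e-23            # J/K
S_ice, S_water = 41.34, 63.34  # J/(K·mol) at 273 K, from the text

ln10 = math.log(10)
exp_ice = S_ice / (kB * ln10)      # ≈ 1.30e24, the exponent quoted for ice
exp_water = S_water / (kB * ln10)  # ≈ 1.99e24, the exponent quoted for water

# Consistency check: ΔS of fusion from the tables vs. ΔH_fus / T.
dS_fusion = S_water - S_ice        # 22.0 J/(K·mol)
dS_from_enthalpy = 6010.0 / 273.15 # handbook ΔH_fus divided by T, ≈ 22.0 J/(K·mol)
```

Both routes give about 22 J/(K·mol) for melting, which is the quantitative content behind "any pathway to change ice to water results in an increase in entropy".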
Dictionaries define “macro” as large and “micro” as very small, but a macrostate and a microstate in thermodynamics aren't just definitions of big and little sizes of chemical systems. Instead, they are two very different ways of looking at a system. (Admittedly, a macrostate always has to involve an amount of matter large enough for us to measure its volume or pressure or temperature, i.e., in “bulk”. But in thermodynamics, a microstate isn't just about a smaller amount of matter; it is a detailed look at the energy that molecules or other particles have.) A microstate is one of the huge number of different accessible arrangements of the molecules' motional energy* for a particular macrostate. *Motional energy includes the translational, rotational, and vibrational modes of molecular motion. In calculations involving entropy, the ΔH of any phase change in a substance (“phase change energy”) is added to motional energy, but it is unaltered in ordinary entropy change (of heating, expansion, reaction, etc.) unless the phase itself is changed. A macrostate is the thermodynamic state of any system that is exactly characterized by measurement of the system's properties such as P, V, T, H and number of moles of each constituent. Thus, a macrostate does not change over time if its observable properties do not change. In contrast, a microstate for a system is all about time and the energy of the molecules in that system. In a system, energy is constantly being redistributed among its particles. In liquids and gases, the particles themselves are constantly redistributing in location as well as changing in the quanta (the individual amount of energy that each molecule has) due to their incessantly colliding, bouncing off each other with (usually) a different amount of energy for each molecule after the collision. Each specific way, each arrangement of the energy of each molecule in the whole system at one instant, is called a microstate.
One microstate, then, is something like a theoretical "absolutely instantaneous photo" of the location and momentum of each molecule and atom in the whole macrostate. (This is talking in ‘classical mechanics’ language, where molecules are assumed to have location and momentum. In quantum mechanics the behavior of molecules is only described in terms of their energies on particular energy levels. That is the more modern view that we will use.) In the next instant the system immediately changes to another microstate. (A molecule moving at an average speed of around a thousand miles an hour collides with others about seven times in a billionth of a second. Considering a mole of molecules (6 × 10^23) traveling at a very large number of different speeds, the collisions occur — and thus changes in the energy of trillions of molecules occur — in far less than a trillionth of a second. That's why it is wise to talk in terms of “an instant”!) To take a photo like that may seem impossible, and it is. In the next instant — and that really means in an extremely short time — at least a couple of moving molecules out of the 6 × 10^23 will hit one another. But if only one molecule moves a bit slower because it had hit another and made that other one move an exactly equal amount faster — then that would be a different microstate. (The total energy hasn't changed when molecular movement changes one microstate into another. Every microstate for a particular system has exactly the total energy of the macrostate, because a microstate is just an instantaneous quantum energy-photo of the whole system.) That's why, in an instant, for any particular macrostate, its motional energy* has been rearranged as to which molecule has what amount of energy.
In other words, the system — the macrostate — rapidly and successively changes to be in a gigantic number of different microstates out of the “gazillions” of accessible microstates. (In solids, the location of the particles is almost the same from instant to instant, but not exactly, because the particles are vibrating a tiny amount from a fixed point at enormous speeds.) N2 and O2 molecules at 298 K are gases, of course, and have a very wide range of speeds, from zero to more than two thousand miles an hour, with an average of roughly a thousand miles an hour. They go only about 200 times their diameter before colliding violently with another molecule and losing or gaining energy. (Occasionally, two molecules colliding head on at exactly the same speed would stop completely before being hit by another molecule and regaining some speed.) In liquids, the distance between collisions is very small, but the speeds are about the same as in a gas at the same temperature. Now we know what a microstate is, but what good is something that we can just imagine as an impossibly fast camera shot? The answer is loud and clear. We can calculate the numbers for a given macrostate, and we find that microstates give us answers about the relation between molecular motion and entropy — i.e., between molecules (or atoms or ions) constantly energetically speeding, colliding with each other, moving distances in space (or just vibrating rapidly in solids) and what we measure in a macrostate as its entropy. As you have read elsewhere, entropy is a (macro) measure of the spontaneous dispersal of energy, how widely spread out it becomes (at a specific temperature). Then, because the number of microstates that are accessible for a system indicates all the different ways that energy can be arranged in that system, the larger the number of microstates accessible, the greater is a system's entropy at a given temperature.
It is not that the energy of a system is smeared or spread out over a greater number of microstates when it is more dispersed. That can't occur, because all the energy of the macrostate is always in only one microstate at one instant. The macrostate's energy is more "spread out" when there are larger numbers of microstates for a system because at any instant all the energy that is in one microstate can be in any one of the now-larger total of microstates — a greatly increased number of choices, far less chance of being “localized”, i.e., just being able to jump around from one to only a dozen other microstates, or 'only' a few millions or so! More possibilities mean more chances for the system to be in one of MANY more different microstates — that is what is meant by "the system's total energy can be more dispersed or spread out”: more choices/chances. That might be fine, but how can we find out how many microstates are accessible for a macrostate? (Remember, a macrostate is just any system whose thermodynamic qualities of P, V, T, H, etc. have been measured so the system is exactly defined.) Fortunately, Ludwig Boltzmann gives us the answer in S = kB ln W, where S is the value of entropy in J/K per mole at T, kB is Boltzmann's constant of 1.38 × 10^-23 J/K, and W is the number of microstates. Thus, if we look in “Standard State Tables” listing the entropy of a substance that has been determined experimentally by heating it from 0 K to 298 K, we find that ice at 273 K has been calculated to have an S0 of 41.3 J/K·mol. Inserting that value in the Boltzmann equation gives us a result that should boggle one's mind, because it is among the largest numbers in science. (The estimated number of atoms in our entire galaxy is around 10^70, while the number for the whole universe may be about 10^80. A very large number in math is 10^100, called "a googol" — not Google!) Crystalline ice at 273 K has 10 to the exponent of 1,299,000,000,000,000,000,000,000 accessible microstates.
(Writing 5,000 zeroes per page, it would take not just reams of paper, not just reams piled miles high, but light-years-high piles of reams of paper to list all those microstates!) Entropy and entropy change are concerned with the energy dispersed in a system and its temperature, qrev/T. Thus, entropy is measured by the number of accessible microstates, in any one of which the system's total energy might be at one instant — not by the orderly patterns of the molecules aligned in a crystal. Anyone who discusses entropy and calls "orderly" the energy distribution among those humanly incomprehensible numbers of different microstates for a crystalline solid — such as we have just seen for ice — is looking at the wrong thing. Liquid water at the same temperature as ice, 273 K, has an S0 of 63.3 J/K·mol. Therefore, there are 10 to the exponent of 1,991,000,000,000,000,000,000,000 accessible microstates for water. Contributors and Attributions • Frank L. Lambert, Professor Emeritus, Occidental College
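The mind-boggling number quoted for ice can be reproduced directly from S = kB ln W, working with the base-10 exponent since W itself cannot be stored in any numeric type (a sketch, using the rounded constants from this page):

```python
import math

kB = 1.38e-23  # Boltzmann's constant, J/K, as rounded in the text
S_ice = 41.3   # S0 of ice at 273 K, J/(K·mol)

# S = kB ln W  =>  log10(W) = S / (kB ln 10)
log10_W = S_ice / (kB * math.log(10))
# log10_W ≈ 1.3e24: the *exponent* alone dwarfs a googol's exponent of 100.
```

Notice the comparison: a googol is 10^100, but here even the exponent of W is a 25-digit number.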
The Entropy of a Substance at a Temperature, T The entropy of a substance at any temperature T is not complex or mysterious. It is simply a measure of the total amount of energy that had to be dispersed within the substance (from the surroundings) from 0 K to T, incrementally and reversibly, and divided by T for each increment, so the substance could exist as a solid or (with additional reversible energy input for breaking intermolecular bonds in phase changes) as a liquid or as a gas at the designated temperature. Because the heat capacity at a given temperature is the energy dispersed in a substance per unit temperature, integration from 0 K to T of Cp/T dT (+ q/T for any phase change) yields the total entropy. This result, of course, is equivalent to the area under the curve to T in Figure 5. Phase Change: Fusion and Vaporization To change a solid to a liquid at its melting point requires large amounts of energy to be dispersed from the warmer surroundings to the solid for breaking the intermolecular bonds to the degree required for existence of the liquid at the fusion temperature. (“To the degree required” has special significance in the melting of ice. Many, but not all, of the hydrogen bonds in crystalline ice are broken. The rigid tetrahedral structure is no longer present in liquid water, but the continued presence of a large number of hydrogen bonds is shown by the greater density of water than ice, due to the even more compact hydrogen-bonded clusters of H2O.) Quantitatively, the entropy increase in this isothermal dispersal of energy from the surroundings is ΔHFusion/T. Because melting involves bond-breaking, it is an entropy increase in the potential energy of the substance involved. (This potential energy remains unchanged in a substance throughout heating, expansion, mixing, or subsequent phase change to a vapor.
Of course, it is released when the temperature of the system drops below the freezing/melting point.) The process is isothermal, and therefore there is no energy transferred to the system to increase motional energy. However, a change in the motional energy — not an increase in the quantity of energy — occurs from the transfer of vibrational energy in the ice crystal to the liquid. When the liquid forms, there is rapid breaking of hydrogen bonds (in trillionths of a second) and forming of new ones with adjacent molecules. This might be compared to a fantastically huge dance in which the individual participants don't move very far (it takes a water molecule more than 12 hours to move a centimeter at 298 K), but they are holding hands and then releasing to grab new partners far more frequently than billions of times a second. Thus, that previous motional energy of intermolecular vibration that was in the crystal is now distributed among a far greater number of new translational energy levels, and that means that there are many more accessible microstates than in the solid. Similarly, the molecules of a liquid at its vaporization temperature have the same motional energy as its gas molecules. (All of the enthalpy of vaporization is needed to break intermolecular bonds in the liquid.) However, in the case of liquid to vapor, there is a huge expansion (roughly a thousandfold increase) in volume. This means closer energy levels, far more than were available for the motional energy in the liquid — and a greatly increased number of microstates for the vapor. The Expansion of a Gas Into a Vacuum. The Mixing of Ideal Fluids. Dissolving solutes. The entropy effects in gas expansion into a vacuum, as described previously, are qualitatively similar to those in gases mixing. From a macro viewpoint, the initial energy of each constituent becomes more dispersed in the new larger volume provided by the combined volumes of the components.
Then, on a molecular basis, because the density (closeness) of energy levels increases in the larger volume, and therefore there is greater dispersal of the molecular energies on those additional levels, there are more possible arrangements (more microstates) for the mixture than for the individual constituents. Thus, the mixing process for gases is actually a spontaneous process due to an increase in volume. The entropy increases (6). Meyer vigorously made this same point: "the spontaneity of processes generally described as 'mixing', that is, combination of two different gases initially each at pressure, p, and finally at total pressure, p, has absolutely nothing to do with the mixing itself of either" (7). The same cause — volume increase of the system resulting in a greater density of energy levels — obviously cannot apply to liquids, in which there is little or no volume increase when they are mixed. However, as Craig has well said, "The 'entropy of mixing' might better be called the 'entropy of dilution'" (8b). By this one could mean that the molecules of either constituent in the mixture become to some degree separated from one another, and thus their energy levels become closer together by an amount determined by the amount of the other constituent that is added. Whether or not this is true, ‘configurational entropy' is the designation in statistical mechanics for considering entropy change when two or more substances are mixed to form a solution. The model uses combinatorial methods to determine the number of possible “cells” (that are identified as microstates, and thus each must correspond to one accessible arrangement of the energies of the substances involved). This number is shown to depend on the mole fractions of each component in the final solution, and thus the entropy is found to be: ΔS = −R (n1 ln X1 + n2 ln X2), with n1 and n2 the moles of pure solute and solvent, and X1 and X2 the mole fractions of solute and solvent (8a, 9).
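The configurational-entropy formula just quoted can be written as a small helper; the function name and example amounts below are my own illustrations:

```python
import math

R = 8.314  # gas constant, J/(K·mol)

def entropy_of_mixing(n1, n2):
    """ΔS = −R (n1 ln X1 + n2 ln X2) for n1, n2 moles of two components."""
    x1 = n1 / (n1 + n2)
    x2 = n2 / (n1 + n2)
    return -R * (n1 * math.log(x1) + n2 * math.log(x2))

# 0.1 mol of solute dissolved in 1.0 mol of solvent:
dS = entropy_of_mixing(0.1, 1.0)  # ≈ 2.79 J/K
```

Since each mole fraction is less than 1, each logarithm is negative and ΔS is always positive: forming any ideal solution increases entropy, as the text argues.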
In general chemistry texts, configurational entropy is called ‘positional entropy' and is contrasted to the classic entropy of Clausius that is then called ‘thermal entropy'. The definition of Clausius is fundamental; positional entropy is derivative in that its conclusions can be derived from thermal entropy concepts/procedures, but the reverse is not possible. Most important is the fact that positional entropy in texts often is treated as just that: the positions of molecules in space determine the entropy of the system, as though their locations — totally divorced from any motion or any energy considerations — were causal in entropy change. This is misleading. Any count of ‘positions in space' or of ‘cells' implicitly includes the fact that molecules being counted are particles with energy. Although the initial energy of a system is unchanged when it increases in volume or when constituents are mixed to form it, that energy is more dispersed, less localized after the processes of expansion or of mixing. Entropy increase always involves an increase in energy dispersal at a specific temperature. Colligative Properties "Escaping tendency" or chemical potentials or graphs that are complex to a beginner are often used to explain the freezing point depression and boiling point elevation of solutions. These topics can be far more clearly explained by first describing that an entropy increase occurs when a non-volatile solute is added to a solvent — the solvent's motional energy becomes more dispersed compared to the pure solvent, just as it does when any non-identical liquids are mixed. (This is the fundamental basis for a solvent's decreased "escaping tendency" when it is in a solution. If the motional energy of the solvent in a solution is less localized, more spread out, the solvent less tends to “escape” from the liquid state to become a solid when cooled or a vapor when heated.) 
Considering the most common example of aqueous solutions of salts: Because of its greater entropy in a solution (i.e., its energy more ‘spread out' at 273.15 K and less tending to have its molecules ‘line up' and give out that energy in forming bonds of solid ice), liquid water containing a solute that is insoluble in ice is not ready for equilibrium with solid ice at 273.15 K. Somehow, the more-dispersed energy in the water of the solution must be decreased for the water to change to ice. But that is easy, conceptually — all that has to be done is to cool the solution below 273 K because, contrary to molecules moving more rapidly and spreading their energy when heated, cooling a pure liquid or a solution obviously will make its molecules move more slowly, and their motional energy become less spread out, more like the energy in crystalline ice. Interestingly, the heat capacity of the water of an aqueous solution (75 J/K·mol) is about twice that of ice (38 J/K·mol). That means that when the temperature of the surroundings decreases by one degree, the solution disperses much more energy to the surroundings than the ice — and the motional energy of the water and of any possible ice becomes closer. Therefore, when the temperature decreases a degree or so (and finally by 1.86 °C for each mole of solute per kilogram of water!), the solution's higher entropy has rapidly decreased to be in thermodynamic equilibrium with ice and the surroundings, and freezing can begin. As energy (the enthalpy of fusion) continues to be dispersed to the cold surroundings, the liquid water freezes. Of course, the preceding explanation could have been framed in terms of entropy change or numbers of microstates, but keeping the focus on what is happening to molecular motion, on energy and its dispersal, is primary rather than derivative. The elevation of boiling points of solutions having a non-volatile solute is as readily rationalized as is freezing point depression.
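The 1.86 °C figure above is water's molal freezing-point-depression constant, Kf. A minimal sketch of the standard colligative relation ΔTf = Kf·m (assuming an ideal, dilute, non-dissociating solute; the function name is my own):

```python
KF_WATER = 1.86  # molal freezing-point-depression constant of water, K·kg/mol

def freezing_point_depression(moles_solute, kg_solvent, kf=KF_WATER):
    """ΔTf = Kf · m, where m is the molality of the solute."""
    return kf * (moles_solute / kg_solvent)

# 0.5 mol of sucrose in 1.0 kg of water lowers the freezing point by 0.93 K:
dT = freezing_point_depression(0.5, 1.0)
```

For an ionic solute, each mole of dissolved ions counts (the van 't Hoff factor), which is consistent with the entropy picture: more dispersed particles, more dispersed energy.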
The more dispersed energy (and greater entropy) of a solvent in the solution means that the expected thermodynamic equilibrium (the equivalence of the solvent's vapor pressure at the normal boiling point with atmospheric pressure) cannot occur at that boiling point. For example, the energy in water in a solution at 373 K is more widely dispersed due to the increased number of microstates for a solution than for pure water at 373 K. Therefore, water molecules in a solution less tend to leave the solution for the vapor phase than from pure water. Energy must be transferred from the surroundings to an aqueous solution to increase its energy content, thereby compensating for the greater dispersion of the water's energy due to its being in a solution, and to allow water molecules to escape readily to the surroundings. (More academically, “to raise the vapor pressure of the water in the solution”.) As energy is dispersed to the solution from the surroundings and the temperature rises above 373 K, at some point a new equilibrium temperature for phase transition is reached. The water then boils because the greater vapor pressure of the water in the solution with its increased motional energy now equals the atmospheric pressure. Osmosis Although the hardware system of osmosis in the chemistry laboratory and in industry is unique, the process that occurs in it is merely a special case of the mixing of a solvent with a solution — ‘special' because of the existence of such marvels as semi-permeable membranes through which a solvent can pass but a solute cannot. As would be deduced from the discussion about mixing two liquids or mixing a solid solute and a liquid, the solvent in a solution has a greater entropy than a sample of pure solvent. Its energy is more dispersed in the solution. 
Thus, if there is a semi-permeable membrane between a solution made with a particular solvent and some pure solvent, the solvent will spontaneously move through the membrane into the solution, because its energy becomes more dispersed there and thus its entropy increases when it mixes with the solution. Contributors and Attributions • Frank L. Lambert, Professor Emeritus, Occidental College
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Energies_and_Potentials/Entropy/Simple_Entropy_Changes_-_Examples.txt
Entropy is a state function that is often erroneously referred to as the 'state of disorder' of a system. Qualitatively, entropy is simply a measure of how much the energy of atoms and molecules becomes more spread out in a process, and it can be defined in terms of the statistical probabilities of a system or in terms of other thermodynamic quantities. Entropy is also the subject of the Second and Third Laws of thermodynamics, which describe the changes in entropy of the universe with respect to the system and surroundings, and the entropy of substances, respectively. Statistical Definition of Entropy Entropy is a thermodynamic quantity that is generally used to describe the course of a process: that is, whether it is a spontaneous process that has a probability of occurring in a defined direction, or a non-spontaneous process that will not proceed in the defined direction, but only in the reverse direction. To define entropy in a statistical manner, it helps to consider a simple system such as in Figure $1$. Two atoms of hydrogen gas are contained in a volume of $V_1$. Since all the hydrogen atoms are contained within this volume, the probability of finding any one hydrogen atom in $V_1$ is 1. However, if we consider half the volume of this box and call it $V_2$, the probability of finding any one atom in this new volume is $\frac{1}{2}$, since it could be either in $V_2$ or outside it. For the two atoms, the probability of finding both in $V_2$, using the multiplication rule of probabilities, is $\frac{1}{2} \times \frac{1}{2} =\frac{1}{4}.$ For four atoms, the probability of finding all of them in $V_2$ would be $\frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} \times \frac{1}{2}= \frac{1}{16}.$ Therefore, the probability of finding all $N$ atoms in this volume is $\left(\frac{1}{2}\right)^N$. Notice that the probability decreases as we increase the number of atoms.
If we started with volume $V_2$ and expanded the box to volume $V_1$, the atoms would eventually distribute themselves evenly because this is the most probable state. In this way, we can define our direction of spontaneous change as being from the lowest to the highest state of probability. Therefore, entropy $S$ can be expressed as $S=k_B \ln{\Omega} \label{1}$ where $\Omega$ is the number of ways of arranging the system (to which the probability is proportional) and $k_B$ is a proportionality constant. Taking the logarithm makes sense because entropy is an extensive property that depends on the number of molecules: when $\Omega$ increases to $\Omega^2$, $S$ increases to $2S$. Doubling the number of molecules doubles the entropy. So far, we have been considering one system for which to calculate the entropy. If we have a process, however, we wish to calculate the change in entropy of that process from an initial state to a final state. If our initial state 1 is $S_1=k_B \ln{\Omega}_1$ and the final state 2 is $S_2=k_B\ln{\Omega}_2$, then $\Delta S=S_2-S_1=k_B \ln \dfrac{\Omega_2}{\Omega_1} \label{2}$ using the rule for subtracting logarithms. However, we wish to define $\Omega$ in terms of a measurable quantity. Considering the system of expanding a volume of gas molecules from above, we know that the probability is proportional to the volume raised to the number of atoms (or molecules): $\Omega \propto V^{N}$. Therefore, $\Delta S=S_2-S_1=k_B \ln \left(\dfrac{V_2}{V_1} \right)^N=Nk_B \ln \dfrac{V_2}{V_1} \label{3}$ We can define this in terms of moles of gas rather than molecules by setting the Boltzmann constant $k_B$ equal to $\frac{R}{N_A}$, where $R$ is the gas constant and $N_A$ is Avogadro's number. So for an expansion of an ideal gas at constant temperature, $\Delta S=\dfrac{N}{N_A} R \ln \dfrac{V_2}{V_1}=nR\ln \dfrac{V_2}{V_1} \label{4}$ because $\frac{N}{N_A}=n$, the number of moles. This is only defined for constant temperature because entropy can change with temperature.
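Equation \ref{4} is easy to evaluate numerically. This is a minimal sketch (the function name is an illustrative assumption, not from the text) for the entropy change of an isothermal ideal-gas expansion:

```python
import math

# ΔS = n·R·ln(V2/V1) for the isothermal expansion of an ideal gas (Eq. 4).
R = 8.314  # gas constant, J/(mol·K)

def entropy_of_expansion(n_moles, v1, v2):
    """Entropy change (J/K) when n_moles of ideal gas expand from v1 to v2
    at constant temperature (any consistent volume units)."""
    return n_moles * R * math.log(v2 / v1)

# Doubling the volume of 1 mol of gas:
delta_s = entropy_of_expansion(1.0, 1.0, 2.0)
print(round(delta_s, 3))  # 5.763 J/K, i.e. R·ln 2
```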
Furthermore, since $S$ is a state function, we do not need to specify whether this process is reversible or irreversible. Thermodynamic Definition of Entropy Using the statistical definition of entropy is very helpful for visualizing how processes occur. However, calculating probabilities like $\Omega$ can be very difficult. Fortunately, entropy can also be derived from thermodynamic quantities that are easier to measure. Recalling the concept of work from the first law of thermodynamics, the heat ($q$) absorbed by an ideal gas in a reversible, isothermal expansion is $q_{rev}=nRT\ln \dfrac{V_2}{V_1} \; . \label{5}$ If we divide by $T$, we obtain the same equation we derived above for $\Delta S$: $\Delta S=\dfrac{q_{rev}}{T}=nR\ln \dfrac{V_2}{V_1} \;. \label{6}$ We must restrict this to a reversible process because entropy is a state function, whereas the heat absorbed is path dependent. An irreversible expansion would result in less heat being absorbed, but the entropy change would stay the same. We are then left with $\Delta S> \dfrac{q_{irrev}}{T}$ for an irreversible process, because $\Delta S=\Delta S_{rev}=\Delta S_{irrev} .$ This apparent discrepancy in the entropy change between an irreversible and a reversible process becomes clear when considering the changes in entropy of the surroundings and the system, as described in the second law of thermodynamics. It is evident from our experience that ice melts, iron rusts, and gases mix together. The entropic quantity we have defined is very useful in determining whether a given reaction will occur. Remember, however, that the rate of a reaction is independent of spontaneity: a reaction can be spontaneous but its rate so slow that we will effectively never see it happen, such as diamond converting to graphite, which is a spontaneous process.
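The distinction between $\Delta S = q_{rev}/T$ and $\Delta S > q_{irrev}/T$ can be made concrete by comparing a reversible isothermal expansion with a free expansion into a vacuum (an extreme irreversible path with $q = 0$). The free-expansion comparison is a standard illustration assumed here, not an example given in the text:

```python
import math

R = 8.314    # gas constant, J/(mol·K)
T = 298.15   # temperature, K
n, v1, v2 = 1.0, 1.0, 2.0  # one mole of gas doubling its volume

# ΔS is a state function: it is the same for any path between the same states.
delta_S = n * R * math.log(v2 / v1)

# Reversible isothermal path: q_rev = nRT·ln(V2/V1) (Eq. 5), so q_rev/T = ΔS.
q_rev = n * R * T * math.log(v2 / v1)

# Irreversible path (free expansion into vacuum): no work done, no heat absorbed.
q_irrev = 0.0

print(abs(delta_S - q_rev / T) < 1e-12)  # True: ΔS = q_rev/T
print(delta_S > q_irrev / T)             # True: ΔS > q_irrev/T
```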
The Second Law as Energy Dispersion Energy of all types -- in chemistry, most frequently the kinetic energy of molecules (but also including the phase change/potential energy of molecules in fusion and vaporization, as well as radiation) -- changes from being localized to becoming more dispersed in space if that energy is not constrained from doing so. The simplest, stereotypical example is the expansion illustrated in Figure 1. The initial motional/kinetic energy (and potential energy) of the molecules in the first bulb is unchanged in such an isothermal process, but it becomes more widely distributed in the final larger volume. Further, this concept of energy dispersal equally applies to heating a system: a spreading of molecular energy from the volume of greater-motional-energy (“warmer”) molecules in the surroundings to include the additional volume of a system that initially had “cooler” molecules. It is not obvious, but true, that this distribution of energy in greater space is implicit in the Gibbs free energy equation and thus in chemical reactions. “Entropy change is the measure of how much more widely a specific quantity of molecular energy is dispersed in a process, whether isothermal gas expansion, gas or liquid mixing, reversible heating and phase change, or chemical reactions.” There are two requisites for entropy change. 1. It is enabled by the above-described increased distribution of molecular energy. 2. It is actualized if the process makes available a larger number of arrangements for the system’s energy, i.e., a final state that involves the most probable distribution of that energy under the new constraints. Thus, “information probability” is only one of the two requisites for entropy change. Some current approaches regarding “information entropy” are either misleading or truly fallacious, if they do not include explicit statements about the essential inclusion of molecular kinetic energy in their treatment of chemical reactions.
Contributors and Attributions • Frank L. Lambert, Professor Emeritus, Occidental College • Konstantin Malley (UCD)
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Energies_and_Potentials/Entropy/Statistical_Entropy.txt
The universe is made up of bits of matter, and the bits of matter have mass and energy. Neither mass nor energy may be created or destroyed, but in some special cases interconversions between mass and energy may occur. The things that happen in nature occur because matter is trying to gather to itself as much mass as it can (gravity) and to release as much of its energy as it can (the Sun, for example). This gathering of mass and loss of energy requires a price to be paid - the price of freedom. It seems that mass, energy, and freedom are the coins of the universe. Mass and energy must be conserved: if one part of the universe gathers more "wealth" as mass or energy, those must come from somewhere else in the universe, so there is always competition in the trading of these commodities. Freedom, on the other hand, is not conserved. However, there are some strange restraints on freedom. The freedom of the universe can take many forms, and appears to be able to increase without limit. However, the total freedom of the universe is not allowed to decrease. The energy or the mass of a part of the universe may increase or decrease, but only if there is a corresponding decrease or increase somewhere else in the universe. The freedom in that part of the universe may increase with no change in the freedom of the rest of the universe. There might be decreases in freedom in the rest of the universe, but the sum of the increase and decrease must result in a net increase. There can be a decrease in the freedom in one part of the universe, but ONLY if there is an equal or greater increase in the rest of the universe. Most of us have a general idea of what mass and energy are, and we may have a fair understanding of how we can quantify them, that is, how to say how much of them we have. Freedom is a more complicated concept.
The freedom within a part of the universe may take two major forms: the freedom of the mass and the freedom of the energy. The amount of freedom is related to the number of different ways the mass or the energy in that part of the universe may be arranged while not gaining or losing any mass or energy. We will concentrate on a specific part of the universe, perhaps within a closed container. If the mass within the container is distributed into a lot of tiny little balls (atoms) flying blindly about, running into each other and anything else (like walls) that may be in their way, there is a huge number of different ways the atoms could be arranged at any one time. Each atom could at different times occupy any place within the container that was not already occupied by another atom, but on average the atoms will be uniformly distributed throughout the container. If we can mathematically estimate the number of different ways the atoms may be arranged, we can quantify the freedom of the mass. If somehow we increase the size of the container, each atom can move around in a greater amount of space, and the number of ways the mass may be arranged will increase. Now let us turn to the freedom of the energy within the container. If the mass is in the form of atoms flying around, energy is only in the form of the kinetic energy of these atoms, and the energy of the electrons in the atoms moving around their nucleus (and sometimes we have to consider the neutrons, protons, and other stuff moving around in the nucleus). An atom's kinetic energy is related to its mass and its velocity. The energy in the container is the sum of the kinetic energies of all of the atoms, but the velocity is not the same for each atom, and the atoms are continually exchanging this energy through collisions with each other and through collisions with the walls. 
In the same way that the mass may have freedom in the number of ways the atoms may be arranged in space, the energy may have freedom in the number of ways that the velocities and directions of the atoms may be arranged. The velocities of the molecules are closely related to the temperature. The energy and freedom of gaseous atoms appear only in their velocities and directions, and are called translational energy and translational freedom. In the case of molecules (a molecule is a group of two or more atoms held together by chemical bonds) there are additional freedoms and additional forms of energy. Bonds between atoms act as springs allowing the atoms to vibrate within the molecule, so that the molecules may contain different levels of vibrational energy and vibrational freedom. Additionally, the entire molecule may rotate on different axes, allowing different levels of rotational energy and rotational freedom. While the freedom of mass is related to the volume in which the mass is distributed, the freedom of energy is related to the temperature. An increase in the temperature of a gas leads directly to an increase in energy, and this can only occur if there is a decrease in energy somewhere else in the universe. We say that energy is transferred to the gas and the container (we will call that the "system") from somewhere in the remainder of the universe (we will call that the "surroundings"). The increase in energy is accompanied by an increase in the energetic freedom of the system. The thermodynamic term for quantifying freedom is entropy, and it is given the symbol $S$. Like freedom, the entropy of a system increases with the temperature and with volume. The effect of volume is more easily seen in terms of concentration, especially in the case of mixtures. For a certain number of atoms or molecules, an increase in volume results in a decrease in concentration. Therefore, the entropy of a system increases as the concentrations of the components decrease.
The part of entropy which is determined by energetic freedom is called thermal entropy, and the part that is determined by concentration is called configurational entropy. The units of entropy are the same as those of heat capacity and of the gas law constant. The product of entropy (or a change in entropy) and the absolute temperature has the same units as energy (or a change in energy). • The First Law of Thermodynamics states that the energy of the universe is constant. • The Second Law of Thermodynamics states that the entropy of the universe cannot decrease. Example $1$: thermal entropy If the temperature of a gas is increased while the volume remains constant, the energy of the gas is increased and there is an increase in energetic freedom or thermal entropy. There is no change in concentration, and no change in configurational entropy. This process involves: Increase in Temperature, Increase in Energy, and Increase in Thermal Entropy Results: No Change in Volume or Concentration and No Change in Configurational Entropy Example $2$: configurational entropy If the volume of the gas is increased while the temperature remains constant, the energy of the gas does not change and there is no change in thermal entropy. The increase in volume lowers the concentration and there is an increase in configurational entropy. This process involves: No Change in Temperature, No Change in Energy, and No Change in Thermal Entropy Results: Increase in Volume, Decrease in Concentration, and Increase in Configurational Entropy The opening paragraph stated that matter tends to draw more matter to itself and tries to reduce its energy. There is also the tendency to increase its freedom and thus its entropy. There is a natural conflict between these tendencies to lower energy and increase entropy, since a reduction in energy is usually accompanied by a reduction in freedom and entropy. 
For changes in which the initial and final temperatures are the same, these are combined into a net tendency for a system to change, in which the symbol $U$ is used for energy and $T$ is the absolute temperature. $ΔU - TΔS \label{eq10}$ in which $ΔU$ and $ΔS$ represent the changes in energy and entropy that would be measured for the change IF it occurred. If this net quantity is positive, the change cannot occur without some additional help. If this quantity is negative, the change might occur but there is no guarantee that it will occur. However, when the quantity is negative, the reverse change cannot occur without additional help. Changes that actually occur when this quantity is less than zero are said to be spontaneous or irreversible. Definition: Equilibrium If the quantity in Equation \ref{eq10} is equal to zero, the change will not occur, but it could be pushed either forward or backward with very little additional help. This condition is described as equilibrium, and the change is said to be reversible. Physical States Crystal: A crystalline solid has very little movement and it is in a very low energy state. Movement of the atoms or molecules is limited to vibrations around a fixed point, so there is very little thermal entropy. The atoms/molecules are very close together (high concentration) and they are arranged in a very specific configuration (the crystal structure) which is repeated over and over within the crystal. There is very little freedom in the ways that the mass can be arranged, so the crystalline state has very little configurational entropy. Liquid: When a solid melts, the atoms/molecules begin to move around, and perhaps also rotating and vibrating. The liquid has considerably more energy than the solid and thus has more thermal entropy. The volume does not change appreciably when a solid melts. 
Normally there is a small increase in volume on melting (the solid sinks in the liquid), but a few materials (water is one of them) show a decrease in volume on melting (the solid floats in the liquid). Normally there is a small increase in configurational entropy, but for materials like water there is a small decrease. Overall, there is an increase in both energy and entropy when a solid melts. Gas: When a liquid vaporizes, the atoms/molecules receive a huge increase in energy - so large that it seems like they are no longer subject to gravity. There is a large increase in volume and the concentration becomes very small. There is a correspondingly large increase in freedom, so that both the thermal entropy and the configurational entropy are greatly increased. Phase Transitions Melting (fusion): Both energy and entropy increase on melting, so $ΔU$ and $ΔS$ are positive for fusion. At low temperatures (below the melting point) the positive $ΔU$ contributes more than $TΔS$, so the quantity $ΔU - TΔS$ is positive and melting cannot occur. As the temperature is raised, however, both $ΔU$ and $TΔS$ increase, but $TΔS$ increases much more rapidly than $ΔU$. The quantity above eventually becomes equal to zero at some temperature (the melting point), and the solid will spontaneously melt at any higher temperature. The opposite of melting is freezing. Boiling (vaporization): Both $ΔU$ and $ΔS$ are large and positive for vaporization. The configurational entropy of the gas is related to the concentration of the gas molecules, so $ΔS$ is greater for smaller concentrations of molecules in the vapor. This allows a balance between $ΔU$ and $TΔS$ at low temperatures, provided the concentration of gas molecules (and the vapor pressure) is sufficiently low. As the temperature is raised, this balance is maintained by an increase in the concentration of the gas molecules (and an increase in vapor pressure).
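The $ΔU - TΔS$ balance for melting just described can be sketched numerically. The values below are roughly those for ice ($ΔU ≈ 6.0$ kJ/mol, $ΔS ≈ 22$ J/(mol·K)): approximate handbook figures assumed for illustration, not taken from the text.

```python
# Sign of ΔU − TΔS as a test for spontaneous melting. Illustrative values,
# roughly those for the fusion of ice (assumed, not from the text):
dU = 6000.0  # J/mol, energy of fusion
dS = 22.0    # J/(mol·K), entropy of fusion

def net_tendency(T):
    """ΔU − TΔS: positive = cannot occur, negative = spontaneous, zero = equilibrium."""
    return dU - T * dS

T_melt = dU / dS  # the temperature at which ΔU − TΔS = 0
print(round(T_melt, 1))       # ≈ 272.7 K, close to the actual melting point of ice
print(net_tendency(250) > 0)  # True: below T_melt, melting cannot occur unaided
print(net_tendency(300) < 0)  # True: above T_melt, melting is spontaneous
```

For vaporization the same balance applies, but it is maintained over a range of temperatures by adjustment of the vapor concentration, as described above.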
This results in a wide range of temperatures for a liquid and its vapor to be in equilibrium, with the vapor pressure increasing as the temperature is raised. The temperature at which the vapor pressure is exactly 1 atmosphere is defined as the normal boiling point of the liquid. The opposite of vaporization is condensation. Sublimation: In the same way that $ΔU$ and $TΔS$ can balance for a liquid with its vapor at very low temperature, a similar balance can occur for a solid and its vapor below the melting point. For most solids, the vapor pressure is so low that we aren't concerned about it. However, we can see the effect in snow disappearing from a rooftop on a cold day without ever melting. This process is called sublimation. For a few solids (carbon dioxide or "dry ice" is one) the vapor pressure reaches 1 atmosphere at a temperature below the melting point, so that we never see the liquid at atmospheric pressure. The liquid state can be observed, however, at higher pressures and pressurized cylinders of carbon dioxide usually contain a mixture of the liquid and the gas. The opposite of sublimation is deposition. Solutions When pure materials in some physical state are mixed to form a solution in the same physical state at the same temperature and pressure, the process is called mixing. The energy change on mixing may be either positive or negative, but the entropy change is always positive. The entropy change is due mainly to the decrease in the concentrations of the individual components in the change from a limited volume in the pure state to a much larger volume in the mixed state. 
This process involves: No Change in Temperature, Small Change in Energy, and Small Change in Thermal Entropy. Results: Small Change in Volume, Decrease in Concentration of Red Molecules, Decrease in Concentration of Blue Molecules, and Increase in Configurational Entropy. When the mixed components are roughly equal in concentration, they are usually just called components (component A and component B, or components 1 & 2). For these solutions, concentrations are usually expressed in mole fractions, and occasionally in mass fractions or volume fractions. When the concentration of one component in a liquid solution is much greater than the others, it is usually called the solvent, and the less-concentrated components are called solutes. For these solutions, the concentration of the solute may be expressed as mole fraction, molality (moles of solute per kilogram of solvent), or molarity (moles of solute per liter of solution). If needed, the concentration of solvent is usually expressed as mole fraction. Solubility of Solutes Before mixing, the pure solute may have been a gas, a liquid, or a solid. The first bit of solute that dissolves will have an extremely low concentration, so the change in configurational entropy for that dissolution will be very large if the pure solute is a liquid or a solid. If the pure solute is a gas, the change in configurational entropy will depend on the concentration of the gas. In all cases, the change in thermal entropy may be positive or negative, but the configurational entropy usually dominates at very low concentrations. We can get a "feel" for the energy change as something dissolves. If the solution becomes colder as the solute dissolves, the energy change is positive since energy will have to be transferred to the solution in order to bring it back to the original temperature. If the solution becomes hotter as the solute dissolves, the energy change is negative since energy will have to be removed from the solution in order to return to the initial temperature.
If the energy change is positive and unfavorable the positive entropy change may allow a small amount of the solute to dissolve. However, as more of the solute dissolves the concentration increases and the entropy change decreases until the energy change is exactly balanced by $TΔS$ and no more of the solute will dissolve. The concentration at this point determines the solubility of the solute in the solvent at this temperature. When there is a positive energy change for dissolution, an increase in temperature increases the effect of the positive entropy change and the solubility of the solute increases. In the case of gases dissolving in liquids, the solubility of the gas depends on the concentration of the gas molecules. As the concentration in the gas phase increases, the concentration in the liquid phase increases. At low concentrations there is a direct proportionality between the concentrations in the two phases. This relationship is generally known as Henry's Law. Its most common form is for the concentration in the vapor phase to be represented by the partial pressure of the solute and for the concentration in the liquid phase to be represented by the mole fraction of the solute. Henry's Law is normally associated with the solute in dilute solutions, but it also applies to the solvent in these dilute solutions. The application to the solvent is a special case of Henry's Law, called Raoult's Law. Raoult's Law states that the vapor pressure (or partial pressure) of the solvent becomes equal to the mole fraction of the solvent multiplied by the vapor pressure of the pure solvent when the solutes become very dilute. Colligative Properties (Solvent) Colligative properties of solutions are properties which depend only on the concentration of the solvent, and are independent of what the solute might be. 
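Raoult's Law as stated above is a one-line formula: the partial pressure of the solvent equals its mole fraction times the vapor pressure of the pure solvent. In the sketch below, the function name is an illustrative assumption, and the vapor pressure of pure water at 25 °C (≈ 23.76 torr) is a standard handbook value, not from the text.

```python
# Raoult's Law: P_solvent = x_solvent · P°_solvent, the dilute-solution
# special case of Henry's Law described in the text.
def raoult_partial_pressure(x_solvent, p_pure):
    """Partial pressure of the solvent above an ideal (dilute) solution,
    in the same units as p_pure."""
    return x_solvent * p_pure

# Pure water at 25 °C has P° ≈ 23.76 torr (standard handbook value).
# For a solution with solvent mole fraction 0.95:
p = raoult_partial_pressure(0.95, 23.76)
print(round(p, 2))  # 22.57 torr, a vapor-pressure lowering of about 1.19 torr
```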
Raoult's Law is the most basic example of a colligative property, because the vapor pressure (or partial pressure) of the solvent is determined only by the mole fraction of the solvent in a sufficiently dilute solution. This is usually stated as vapor pressure lowering by a non-volatile solute: The difference between the vapor pressure of the solution and that of the pure solvent is proportional to the mole fraction of solute. All colligative properties are based on the fact that the concentration of the solvent is greatest when the solvent is pure, and the concentration of the solvent decreases as solute is added. When applied to freezing and melting, the entropy change is greater for the frozen pure solvent to melt when a solute is present than when the liquid phase is pure, and the solvent can melt at a lower temperature. This freezing point depression depends only on the concentration of the solvent and not on the nature of the solute, and is therefore a colligative property. In a similar fashion, when a solution containing a solvent and a non-volatile solute is heated to near its boiling point at 1 atmosphere pressure, the entropy of the solvent in the solution is greater than in the pure liquid. This makes the entropy change for vaporization from the solution less than from the pure liquid, and the solvent will not boil until the temperature is higher than the normal boiling point. This boiling point elevation is also a colligative property. In some cases, a solvent may be separated from a solution containing a solute by a semi-permeable membrane. The solvent can move through the pores in this membrane, but the solute is unable to move through the pores because of its size or perhaps its charge or polarity. This movement of the solvent is called osmosis. 
Since the concentration of the solvent is lower in the solution than in the pure liquid, the solvent molecules have greater entropy in the solution, and there is a tendency for the molecules to move through the membrane. As the solvent molecules move, the liquid level of the solution becomes higher than that of the pure liquid, and eventually the pressure becomes sufficient to prevent any more solvent molecules from crossing the membrane. This pressure difference between the solution and the pure solvent is called the osmotic pressure. Osmotic pressure is a colligative property. Entropy and ORDER vs. DISORDER This discussion has carefully avoided associating entropy with order and disorder. Instead, the focus has been on different types of freedom. Many textbooks use the order/disorder interpretation, and refer to examples such as the entropy of a shuffled deck of cards, or the entropy of a messy desk (or room) in comparison to a neat one. While these are strong images, the concept is basically incorrect. The example with a deck of cards assumes that there is some order of the cards which represents perfect order. This simply happens to be the order given to a sealed deck by the manufacturer, namely each suit increasing from the Ace sequentially to the King, with alternating colors of the suits. That is no more orderly than starting with the 2 and increasing through the King to the Ace, or the reverse order. The deck could be arranged with all four Aces in some order of suits, followed by the four 2's in the same order of suits, etc. - that is also ordered. A new manufacturer may decide to package the decks in some order that has significance only to that manufacturer. Does that arrangement then become the perfect order for his product? Perhaps we should also be concerned with whether the cards are all facing the same direction. The point is that there are many possible arrangements of the cards in a deck. 
When the cards are randomly and thoroughly shuffled, each of these arrangements has an equal probability of occurring. When an arrangement is established, there is no freedom for the deck to acquire any other arrangement unless someone shuffles or rearranges it in some manner. Since there is no freedom for that arrangement, there is no entropy associated with a single arrangement. On the other hand, if a very large number of decks of cards are thoroughly and randomly shuffled, there are many possibilities for the arrangements of the cards within the different decks. When this idea of different arrangements of the cards is applied to atoms and molecules in solids and liquids, the number of possible arrangements creates freedom of arrangement, a type of configurational entropy - not for any specific deck, but for the group of decks, or the ensemble (technically, the ensemble is the huge imaginary group of all of these decks arranged in all of the possible arrangements) of decks. The number of possible arrangements of the 52 cards in a deck is 52! (fifty-two factorial), which is equal to 52 x 51 x 50 x 49 x...x 3 x 2 x 1, an astronomical number. If these cards were atoms or molecules, the entropy due to this incredible amount of freedom would be related to the number of randomized decks multiplied by the natural logarithm of 52!. We can calculate the entropy per deck by dividing the entropy of the group of decks by the number of decks, but it is important to differentiate between the entropy per deck (which is a property of the group) and the entropy of a single deck (which really has no meaning in chemical systems).
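The size of 52! and its natural logarithm, the quantity an entropy calculation would use, are easy to verify directly:

```python
import math

# The number of possible arrangements of a 52-card deck, and its natural
# logarithm (the factor the text says an ensemble entropy would involve).
arrangements = math.factorial(52)  # 52 x 51 x ... x 2 x 1
print(len(str(arrangements)))            # 68: a 68-digit number
print(round(math.log(arrangements), 1))  # ln(52!) ≈ 156.4
```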
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Energies_and_Potentials/Entropy/Statistical_Entropy_-_Mass%2C_Energy%2C_and_Freedom.txt
Quantization of the motional energy of molecules Early in their discussion of kinetic-molecular theory, most general chemistry texts include a figure showing the greatly broadened distribution of molecular speeds in gases at high temperatures compared to moderate temperatures. When the temperature of a gas is raised (by transfer of energy from the surroundings to the system), there is a great increase in the velocity, $v$, of many of the gas molecules (Figure $1$). From $\frac{1}{2}mv^2$, this means that there has also been a great increase in the translational energies of those faster-moving molecules. Finally, we can see that an input of energy not only causes the gas molecules in the system to move faster — but also to move at very many different fast speeds. (Thus, the energy in a heated system is more dispersed, spread out over many separate speeds rather than localized in fewer moderate speeds.) A symbolic indication of the different distributions of the translational energy of each molecule of a gas on low to high energy levels in a 36-molecule system is in Figure $2$, with the lower temperature gas as Figure $\PageIndex{2A}$ and the higher temperature gas as Figure $\PageIndex{2B}$. These and later Figures in this section are symbolic because, in actuality, this small number of molecules is not enough to exhibit a thermodynamic temperature. For further simplification, rotational energies, which range from zero for monatomic molecules to about half the total translational energy for di- and tri-atomic molecules (and more for most polyatomic molecules) at 300 K, are not shown in the Figures. If those rotational energies were included, they would constitute a set of energy levels (with a spacing of ~$10^{-23}$ J), each with translational energy distributions of the 36 molecules (with a spacing of ~$10^{-37}$ J). These numbers show why translational levels, though quantized, are considered virtually continuous compared to the separation of rotational energies.
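The broadened speed distribution at higher temperature can be reproduced from the Maxwell-Boltzmann distribution, the standard result behind Figure $1$. The choice of nitrogen and the function names below are illustrative assumptions, not from the text.

```python
import math

# Maxwell-Boltzmann speed distribution f(v) for an ideal gas: raising T
# broadens the distribution and shifts it to higher speeds.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def mb_density(v, T, m):
    """Probability density of speed v (m/s) for molecules of mass m (kg) at T (K)."""
    a = m / (2 * k_B * T)
    return 4 * math.pi * (a / math.pi) ** 1.5 * v**2 * math.exp(-a * v**2)

def most_probable_speed(T, m):
    """Peak of the distribution: v_p = sqrt(2·k_B·T/m)."""
    return math.sqrt(2 * k_B * T / m)

m_N2 = 4.652e-26  # kg, mass of one N2 molecule (28 g/mol / Avogadro's number)
print(round(most_probable_speed(300, m_N2)))   # ≈ 422 m/s at 300 K
print(round(most_probable_speed(1000, m_N2)))  # ≈ 770 m/s at 1000 K: faster, broader
```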
The details of vibrational energy levels (only two of which are significantly occupied at moderate temperatures, with the ground state holding almost all of the rotational and translational levels populated by the molecules of a symbolic or real system) can also be postponed until physical chemistry. At this point in the first year course, depending on the instructor's preference, only a verbal description of rotational and vibrational motions and energy level spacing need be introduced. By the time in the beginning course that students reach thermodynamics, five to fifteen chapters later than kinetic theory, they can accept the concept that the total motional energies of molecules include not just translational but also rotational and vibrational movements (which can be sketched simply below). A microstate is one of many arrangements of the molecular energies (i.e., 'the molecules on each particular energy level') for the total energy of a system. Thus, Figure $\PageIndex{2A}$ is one microstate for a system with a given energy and Figure $\PageIndex{2B}$ is a microstate of the same system but with a greater total energy. Figure $\PageIndex{3A}$ (just a repeat of Figure $\PageIndex{2A}$, for convenience) is a different microstate than the microstate for the same system shown in Figure $\PageIndex{3B}$; the total energy is the same in $\PageIndex{3A}$ and $\PageIndex{3B}$, but in Figure $\PageIndex{3B}$ the arrangement of energies has been changed because two molecules have changed their energy levels, as indicated by the arrows. A possible scenario for that different microstate in Figure $3$ is that these two molecules on the second energy level collided at a glancing angle such that one gained enough energy to be on the third energy level, while the other molecule lost the same amount of energy and dropped down to the lowest energy level.
In the light of that result of a single collision and the billions of collisions of molecules per second in any system at room temperature, there can be a very large number of microstates even for this system of just 36 molecules in Figures $2$ and $3$. (This is true despite the fact that not every collision would change the energy of the two molecules involved, and thus not change the numbers on a given energy level. Glancing collisions could occur with no change in the energy of either participant.) For any real system involving $6 \times 10^{23}$ molecules, however, the number of microstates becomes humanly incomprehensible, even though we can express it in numbers, as will now be developed. The quantitative entropy change in a reversible process is given by $ΔS = \dfrac{q_{rev}}{T}$ (Irreversible processes involving temperature or volume change or mixing can be treated by calculations from incremental steps that are reversible.) According to the Boltzmann entropy relationship, $ΔS = k_B \ln \dfrac{\Omega_{Final}}{\Omega_{Initial}}$ where $k_B$ is Boltzmann's constant and $\Omega_{Final}$ or $\Omega_{Initial}$ is the count of how many microstates correspond to the Final or Initial macrostates, respectively. The number of microstates for a system is the number of ways in which the total energy of a macrostate can be arranged at any one instant. Thus, an increase in entropy means a greater number of microstates for the Final state than for the Initial. In turn, this means that there are more choices for the arrangement of a system's total energy at any one instant and far less possibility of localization (such as cycling back and forth between just 2 microstates), i.e., greater dispersal of the total energy of a system because of so many possibilities. An increase in entropy means a greater number of microstates for the Final state than for the Initial.
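The Boltzmann relationship above can be made concrete with a short sketch. As a hedged illustration (the occupancy numbers below are invented for a toy 36-molecule system, not taken from the Figures), the number of microstates for distinguishable molecules distributed over energy levels is the multinomial coefficient $N!/\prod_i n_i!$, and $\Delta S = k_B \ln(\Omega_{Final}/\Omega_{Initial})$:

```python
from math import factorial, log

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def microstates(occupancy):
    """Number of distinct arrangements (microstates) of distinguishable
    molecules over energy levels with the given occupation numbers:
    N! / (n1! n2! ...)."""
    n = sum(occupancy)
    w = factorial(n)
    for ni in occupancy:
        w //= factorial(ni)  # exact integer division at every step
    return w

def delta_s(occ_initial, occ_final):
    """Boltzmann entropy change: dS = k_B ln(W_final / W_initial)."""
    return K_B * log(microstates(occ_final) / microstates(occ_initial))

# Hypothetical occupancies for a 36-molecule toy system: "heating"
# spreads the molecules over more energy levels.
cold = [20, 10, 6, 0, 0]   # most molecules on the lowest levels
hot  = [12, 10, 8, 4, 2]   # energy dispersed over more levels

print(microstates(cold))   # number of microstates of the cold arrangement
print(delta_s(cold, hot))  # positive: more microstates when heated
```

Running this shows that spreading the same 36 molecules over more levels multiplies the microstate count by roughly a factor of $10^6$, so $\Delta S$ is positive, in line with the text's picture of heating as energy dispersal.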
In turn, this means that there are more choices for the arrangement of a system's total energy at any one instant. Delocalization vs. Dispersal Some instructors may prefer "delocalization" to describe the status of the total energy of a system when there are a greater number of microstates rather than fewer, as an exact synonym for "dispersal" of energy as used here in this article for other situations in chemical thermodynamics. The advantage of uniform use of 'dispersal' is its correct common-meaning applicability to examples ranging from motional energy becoming literally spread out in a larger volume to the cases of thermal energy transfer from hot surroundings to a cooler system, as well as to distributions of molecular energies on energy levels for either of those general cases. Students of lesser ability should be able to grasp what 'dispersal' means in three dimensions, even though the next steps of abstraction to what it means in energy levels and numbers of microstates may result in more of a 'feeling' than the preparation for physical chemistry that it can be for the more able. Of course, dispersal of the energy of a system in terms of microstates does not mean that the energy is smeared or spread out over microstates like peanut butter on bread! All the energy of the macrostate is always in only one microstate at one instant. It is the possibility that the total energy of the macrostate can be in any one of so many more different arrangements of that energy at the next instant (an increased probability that it could not be localized by returning to the same microstate) that amounts to a greater dispersal or spreading out of energy when there are a larger number of microstates. (The numbers of microstates for chemical systems above 0 K are astounding. For any substance at a temperature of about 1-4 K, there are $10^{26,000,000,000,000,000,000}$ microstates (5).
For a mole of water at 273.15 K, there are $10^{2,000,000,000,000,000,000,000,000}$ microstates, and when it is heated to be just one degree warmer, that number is increased $10^{22}$ times, to $10^{2,010,000,000,000,000,000,000,000}$ microstates. For comparison, an estimate of the number of atoms in the entire universe is 'only' about $10^{70}$, while a googol, considered a large number in mathematics, is 'only' $10^{100}$.) Summarizing, when a substance is heated, its entropy increases because the energy acquired and that previously within it can be far more dispersed on the previous higher energy levels and on those additional high energy levels that now can be occupied. This in turn means that there are many, many more possible arrangements of the molecular energies on their energy levels than before and thus, there is a great increase in accessible microstates for the system at higher temperatures. A concise statement would be that when a system is heated, there are many more microstates accessible and this amounts to greater delocalization or dispersal of its total energy. (The common comment "heating causes or favors molecular disorder" is an anthropomorphic labeling of molecular behavior that has more flaws than utility. There is virtual chaos, so far as the distribution of energy for a system (its number of microstates) is concerned, before as well as after heating at any temperature above 0 K, and energy distribution is at the heart of the meaning of entropy and entropy change.) (5) Isothermal Expansion When the volume of a gas is increased by isothermal expansion into an evacuated container, an entropy increase occurs but not for the same reason as when a gas or other substance is heated. There is no change in the quantity of energy of the system in such an expansion of the gas; $dq$ is zero. Instead, there is a spontaneous dispersal or spreading out of that energy in space.
This change in entropy can be calculated in macrothermodynamics from the $q_{rev}$ equivalent to the work required to reversibly compress a mole of the gas back to its original volume, i.e., $RT \ln (V_2/V_1)$, and then $ΔS = R \ln \left(\dfrac{V_2}{V_1}\right)$ From the viewpoint of molecular thermodynamics, a few general chemistry texts use quantum mechanics to show that when a gas expands in volume, its energy levels become closer together in any small range of energy. Symbolically in Figure $\PageIndex{4B}$, a doubling of volume doubles the number of energy levels and increases the possibilities for energy dispersal because of these additional levels for the same molecular energies in Figure $\PageIndex{4A}$. Due to this increased possibility for energy dispersal (a spread over twice as many energy levels), the entropy of the system increases. Then, as could be expected from any changes in the population of energy levels for a system, there are also far greater numbers of possible arrangements of the molecular energies on those additional levels, and thus many more microstates for the system. This is the ultimate quantitative measure of an entropy increase in molecular thermodynamics (and of the second law of thermodynamics): any spontaneous process results in a greater $k_B \ln \dfrac{\Omega_{Final}}{\Omega_{Initial}}$, which is just the entropy change of the process, $\Delta S$. When two dissimilar ideal gases mix and the volume increases, or when dissimilar liquids mix with or without a volume change, the number of energy levels that can be occupied by the molecules of each component increases.
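The $\Delta S = R \ln(V_2/V_1)$ result is easy to evaluate numerically. A minimal sketch (the function name is our own) for one mole of ideal gas doubling its volume:

```python
from math import log

R = 8.314  # gas constant, J/(mol K)

def entropy_of_expansion(v1, v2, n=1.0):
    """dS = nR ln(V2/V1) for isothermal expansion of n moles of ideal gas."""
    return n * R * log(v2 / v1)

# Doubling the volume of one mole of ideal gas:
ds = entropy_of_expansion(1.0, 2.0)
print(round(ds, 2))  # 5.76 J/K, i.e. R ln 2
```

The same function gives a negative $\Delta S$ for a compression ($V_2 < V_1$), consistent with the reversible-compression argument in the text.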
Thus, for somewhat different reasons there are similar results in this progression: additional energy levels for population by molecules (or ‘by molecular energies'), increased possibilities for motional energy dispersal on those energy levels, a far greater number of different arrangements of the molecules' energies on the energy levels, and the final result of many more accessible microstates for the system. There is an increase in entropy in the mixture. This is also the case when solutes of any type dissolve (mix) in a solvent. The entropy of the solvent increases (as does that of the solute). This phenomenon is especially important because it is the basis of colligative effects that will be discussed later. Contributors and Attributions • Frank L. Lambert, Professor Emeritus, Occidental College
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Energies_and_Potentials/Entropy/The_Molecular_Basis_for_Understanding_Simple_Entropy_Change.txt
Free energy is a composite function that balances the influence of energy vs. entropy. Free Energy Learning Objectives • To get an overview of Gibbs energy and its general uses in chemistry. • Understand how Gibbs energy pertains to reaction properties • Understand how Gibbs energy pertains to equilibrium properties • Understand how Gibbs energy pertains to electrochemical properties Gibbs free energy, denoted $G$, combines enthalpy and entropy into a single value. The change in free energy, $\Delta G$, is equal to the change in enthalpy minus the product of the temperature and the change in entropy of the system. $\Delta G$ can predict the direction of the chemical reaction under two conditions: 1. constant temperature and 2. constant pressure. If $ΔG$ is positive, then the reaction is nonspontaneous (i.e., the input of external energy is necessary for the reaction to occur) and if it is negative, then it is spontaneous (occurs without external energy input). Introduction Gibbs energy was developed in the 1870s by Josiah Willard Gibbs. He originally termed this energy as the "available energy" in a system. His paper published in 1873, "Graphical Methods in the Thermodynamics of Fluids," outlined how his equation could predict the behavior of systems when they are combined. This quantity is the energy associated with a chemical reaction that can be used to do work, and is its enthalpy (H) minus the product of the temperature and the entropy (S) of the system. This quantity is defined as follows: $G= H-TS \label{1.1}$ or more completely as $G= U+PV-TS \label{1.2}$ where • $U$ is internal energy (SI unit: joule) • $P$ is pressure (SI unit: pascal) • $V$ is volume (SI unit: $m^3$) • $T$ is temperature (SI unit: kelvin) • $S$ is entropy (SI unit: joule/kelvin) • $H$ is the enthalpy (SI unit: joule) Gibbs Energy in Reactions Spontaneous - a reaction that is considered natural because it occurs by itself, without any external action upon it.
Non spontaneous - needs constant external energy applied to it in order for the process to continue; once the external action stops, the process will cease. When solving the equation, if the change in G is negative, then the reaction is spontaneous. If the change in G is positive, then it is non spontaneous. The symbol commonly used for free energy is $G$; $\Delta G^o$ is more properly considered the "standard free energy change". In chemical reactions involving the changes in thermodynamic quantities, a variation on this equation is often encountered: $\underset{\text {change in free energy} }{\Delta G } = \underset{ \text {change in enthalpy}}{ \Delta H } - \underset{\text {(temperature) change in entropy}}{T \Delta S} \label{1.3}$ Example 1.1 Calculate ∆G at 290 K for the following reaction: $\ce{2NO(g) + O2(g) \rightarrow 2NO2(g)} \nonumber$ Given • ∆H = -120 kJ • ∆S = -150 J K$^{-1}$ Solution Now all you have to do is plug the given numbers into Equation 1.3 above. Remember to divide $\Delta S$ by 1000 $J/kJ$ so that after you multiply by temperature, $T$, it will have the same units, $kJ$, as $\Delta H$. $\Delta S = -150 \cancel{J}/K \left( \dfrac{1\; kJ}{1000\;\cancel{J}} \right) = -0.15\; kJ/K \nonumber$ and substituting into Equation 1.3: \begin{align*} ∆G &= -120\; kJ - (290 \;\cancel{K})(-0.150\; kJ/\cancel{K}) \[4pt] &= -120 \;kJ + 43.5 \;kJ \[4pt] &= -76.5\; kJ \end{align*} Exercise 1.1: The Haber Process What is the $\Delta G$ for this formation of ammonia from nitrogen and hydrogen gas? $\ce{N_2 + 3H_2 \rightleftharpoons 2NH_3} \nonumber$ The standard free energies of formation (in kJ/mol): NH3 = -16.45, H2 = 0, N2 = 0. Answer $\Delta G=-32.90\;kJ \;mol^{-1} \nonumber$ Because the entropy changes of chemical reactions are not readily measured, entropy itself is not typically used as a criterion. To obviate this difficulty, we can use $G$. The sign of ΔG indicates the direction of a chemical reaction and determines if a reaction is spontaneous or not.
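The arithmetic of Example 1.1 can be checked with a few lines of code. This is just a sketch of Equation 1.3 (the function name is our own), handling the J-to-kJ conversion for $\Delta S$:

```python
def gibbs_change(dh_kj, ds_j_per_k, t_kelvin):
    """dG = dH - T*dS, with dH in kJ and dS in J/K (converted to kJ/K)."""
    return dh_kj - t_kelvin * (ds_j_per_k / 1000.0)

# Example 1.1: 2NO(g) + O2(g) -> 2NO2(g) at 290 K
dg = gibbs_change(dh_kj=-120.0, ds_j_per_k=-150.0, t_kelvin=290.0)
print(round(dg, 1))  # -76.5 (kJ); negative, so the reaction is spontaneous
```

The same helper answers Exercise-style questions at any temperature, which also makes the sign-of-ΔG table in the next section easy to explore numerically.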
• $\Delta G < 0$: reaction is spontaneous in the direction written (i.e., the reaction is exergonic) • $\Delta G =0$: the system is at equilibrium and there is no net change in either the forward or reverse direction. • $\Delta G > 0$: reaction is not spontaneous and the process proceeds spontaneously in the reverse direction. To drive such a reaction, we need an input of free energy (i.e., the reaction is endergonic) The factors affecting $\Delta G$ of a reaction (assuming $\Delta H$ and $\Delta S$ are independent of temperature): • $\Delta H$ +, $\Delta S$ +: $\Delta G$ is + at low temperature, - at high temperature. Example: 2HgO(s) -> 2Hg(l) + O2(g) • $\Delta H$ +, $\Delta S$ -: $\Delta G$ is + at all temperatures. Example: 3O2(g) -> 2O3(g) • $\Delta H$ -, $\Delta S$ +: $\Delta G$ is - at all temperatures. Example: 2H2O2(l) -> 2H2O(l) + O2(g) • $\Delta H$ -, $\Delta S$ -: $\Delta G$ is - at low temperature, + at high temperature. Example: NH3(g) + HCl(g) -> NH4Cl(s) Note: 1. $\Delta G$ depends only on the difference in free energy of products and reactants (or final state and initial state). $\Delta G$ is independent of the path of the transformation and is unaffected by the mechanism of a reaction. 2. $\Delta G$ cannot tell us anything about the rate of a reaction. The standard Gibbs energy change $\Delta G^o$ (for which reactants are converted to products at 1 bar) for: $aA + bB \rightarrow cC + dD \label{1.4}$ $\Delta_r G^o = c \Delta _fG^o (C) + d \Delta _fG^o (D) - a \Delta _fG^o (A) - b \Delta _fG^o (B) \label{1.5}$ $\Delta _rG^0 = \sum v \Delta _f G^0 (\text {products}) - \sum v \Delta _f G^0 (\text {reactants}) \label{1.6}$ The standard-state free energy of reaction ( $\Delta G^o$) is defined as the free energy of reaction at standard state conditions: $\Delta G^o = \Delta H^o - T \Delta S^o \label{1.7}$ Note • If $\left | \Delta H \right | >> \left | T\Delta S \right |$: the reaction is enthalpy-driven • If $\left | \Delta H \right | << \left | T\Delta S \right |$: the reaction is entropy-driven Standard-State Free Energy of Formation At standard-state conditions: • The partial pressure of any gas involved in the reaction is 0.1 MPa.
• The concentrations of all aqueous solutions are 1 M. • Measurements are generally taken at a temperature of 25 °C (298 K). The standard-state free energy of formation is the change in free energy that occurs when a compound is formed from its elements in their most thermodynamically stable states at standard-state conditions. In other words, it is the difference between the free energy of a substance and the free energies of its constituent elements at standard-state conditions: $\Delta G^o = \sum \Delta G^o_{f_{products}} - \sum \Delta G^o_{f_{reactants}} \label{1.8}$ Example 1.2 Use the information below to determine if $NH_4NO_{3(s)}$ will dissolve in water at room temperature. Compound $\Delta H_f^o$ (kJ/mol) $S^o$ (J/(mol K)) $NH_4NO_{3(s)}$ -365.56 151.08 $NH^+_{4(aq)}$ -132.51 113.4 $NO_{3(aq)}^-$ -205.0 146.4 Solution This question is essentially asking if the following reaction is spontaneous at room temperature. $\ce{NH4NO3(s) \overset{H_2O} \longrightarrow NH4(aq)^{+} + NO3(aq)^{-}} \nonumber$ This would normally only require calculating $\Delta{G^o}$ and evaluating its sign. However, the $\Delta{G^o}$ values are not tabulated, so they must be calculated manually from calculated $\Delta{H^o}$ and $\Delta{S^o}$ values for the reaction.
• Calculate $\Delta{H^o}$: $\Delta H^o = \sum n\Delta H^o_{f_{products}} - \sum m\Delta H^o_{f_{reactants}} \nonumber$ $\Delta H^o= \left[ \left( 1\; mol\; NH_4^+\right)\left(-132.51\;\dfrac{kJ}{mol} \right) + \left( 1\; mol\; NO_3^- \right) \left(-205.0\;\dfrac{kJ}{mol}\right) \right] \nonumber$ $- \left[ \left(1\; mol\; NH_4NO_3 \right)\left(-365.56 \;\dfrac{kJ}{mol}\right) \right] \nonumber$ $\Delta H^o = -337.51 \;kJ + 365.56 \; kJ= 28.05 \;kJ \nonumber$ • Calculate $\Delta{S^o}$: $\Delta S^o = \sum n S^o_{products} - \sum m S^o_{reactants} \nonumber$ $\Delta S^o= \left[ \left( 1\; mol\; NH_4^+\right)\left(113.4 \;\dfrac{J}{mol\;K} \right) + \left( 1\; mol\; NO_3^- \right) \left(146.4\;\dfrac{J}{mol\;K}\right) \right] \nonumber$ $- \left[ \left(1\; mol\; NH_4NO_3 \right)\left(151.08 \;\dfrac{J}{mol\;K}\right) \right] \nonumber$ $\Delta S^o = 259.8 \;J/K - 151.08 \; J/K= 108.7 \;J/K \nonumber$ • Calculate $\Delta{G^o}$: These values can be substituted into the free energy equation $T = 25\;^oC + 273.15 = 298.15\;K \nonumber$ $\Delta{S^o} = 108.7\; \cancel{J}/K \left(\dfrac{1\; kJ}{1000\;\cancel{J}} \right) = 0.1087 \; kJ/K \nonumber$ $\Delta{H^o} = 28.05\;kJ \nonumber$ Plug $\Delta H^o$, $\Delta S^o$ and $T$ into Equation 1.7: $\Delta G^o = \Delta H^o - T \Delta S^o \nonumber$ $\Delta G^o = 28.05\;kJ - (298.15\; \cancel{K})(0.1087\;kJ/ \cancel{K}) \nonumber$ $\Delta G^o= 28.05\;kJ - 32.41\; kJ \nonumber$ $\Delta G^o = -4.4 \;kJ \nonumber$ This reaction is spontaneous at room temperature since $\Delta G^o$ is negative. Therefore $NH_4NO_{3(s)}$ will dissolve in water at room temperature. Example 1.3 Calculate $\Delta{G}$ for the following reaction at $25\; ^oC$. Will the reaction occur spontaneously?
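The products-minus-reactants bookkeeping of Example 1.2 is mechanical and easily scripted. A sketch (the helper name is our own; the data are the table values from the example):

```python
def reaction_quantity(products, reactants):
    """Sum of n*X over products minus sum of n*X over reactants,
    for any molar quantity X (enthalpy, entropy, free energy)."""
    return sum(n * x for n, x in products) - sum(n * x for n, x in reactants)

# Example 1.2 data (kJ/mol and J/(mol K)): dissolution of NH4NO3(s)
dh = reaction_quantity(products=[(1, -132.51), (1, -205.0)],
                       reactants=[(1, -365.56)])            # kJ
ds = reaction_quantity(products=[(1, 113.4), (1, 146.4)],
                       reactants=[(1, 151.08)])             # J/K
dg = dh - 298.15 * ds / 1000.0                              # kJ
print(round(dh, 2), round(ds, 2), round(dg, 2))  # 28.05 108.72 -4.36
```

The negative $\Delta G^o$ confirms the conclusion of the example: the salt dissolves spontaneously at room temperature.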
$NH_{3(g)} + HCl_{(g)} \rightarrow NH_4Cl_{(s)} \nonumber$ given for the reaction • $\Delta{H} = -176.0 \;kJ$ • $\Delta{S} = -284.8\;J/K$ Solution Calculate $\Delta{G}$ from the formula $\Delta{G} = \Delta{H} - T\Delta{S} \nonumber$ but first we need to convert the units for $\Delta{S}$ into kJ/K (or convert $\Delta{H}$ into J) and the temperature into kelvin • $\Delta{S} = -284.8 \cancel{J}/K \left( \dfrac{1\, kJ}{1000\; \cancel{J}}\right) = -0.2848\; kJ/K$ • $T=273.15\; K + 25\; ^oC = 298\;K$ The definition of Gibbs energy can then be used directly $\Delta{G} = \Delta{H} - T\Delta{S} \nonumber$ $\Delta{G} = -176.0 \;kJ - (298 \cancel{K}) (-0.2848\; kJ/\cancel{K}) \nonumber$ $\Delta{G} = -176.0 \;kJ - (-84.9\; kJ) \nonumber$ $\Delta{G} = -91.1 \;kJ \nonumber$ Yes, this reaction is spontaneous at room temperature since $\Delta{G}$ is negative. Gibbs Energy in Equilibria Let's consider the following reversible reaction: $A + B \leftrightharpoons C + D \label{1.9}$ The following equation relates the standard-state free energy of reaction with the free energy at any point in a given reaction (not necessarily at standard-state conditions): $\Delta G = \Delta G^o + RT \ln Q \label{1.10}$ • $\Delta G$ = free energy at any moment • $\Delta G^o$ = standard-state free energy • R is the ideal gas constant = 8.314 J/mol-K • T is the absolute temperature (Kelvin) • $\ln Q$ is the natural logarithm of the reaction quotient For the reaction above, $Q = \dfrac{[C][D]}{[A][B]}$, so the equation can be written as: $\Delta{G} = \Delta{G}^o + RT \ln \dfrac{[C][D]}{[A][B]} \label{1.11}$ with • $\Delta{G}^o$ = standard free energy change • $R$ = gas constant = $1.98 \times 10^{-3}$ kcal mol$^{-1}$ K$^{-1}$ • $T$ = usually room temperature = 298 K • $K = \dfrac{[C][D]}{[A][B]}$ at equilibrium The Gibbs free energy $\Delta{G}$ depends primarily on the reactants' nature and concentrations (expressed in the $\Delta{G}^o$ term and the logarithmic term of Equation 1.11, respectively).
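Equation 1.10 lets us move between standard and actual conditions. A sketch (the function name and the $\Delta G^o$ value are ours, chosen only for illustration):

```python
from math import log

R = 8.314  # gas constant, J/(mol K)

def gibbs_at_q(dg_standard_kj, q, t=298.15):
    """dG = dG_standard + RT ln Q (Equation 1.10), with dG values in kJ/mol."""
    return dg_standard_kj + R * t * log(q) / 1000.0

# With a hypothetical dG_standard = -33 kJ/mol, a large product excess
# (Q = 1e6) makes the forward reaction less favorable than at Q = 1:
print(round(gibbs_at_q(-33.0, 1e6), 1))
print(round(gibbs_at_q(-33.0, 1.0), 1))  # at Q = 1, dG = dG_standard
```

Note that at $Q = 1$ the logarithmic term vanishes and $\Delta G = \Delta G^o$, while increasing $Q$ (more products) raises $\Delta G$, pushing the system toward equilibrium.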
At equilibrium, $\Delta{G} = 0$: no driving force remains $0 = \Delta{G}^{o} + RT \ln \dfrac{[C][D]}{[A][B]} \label{1.12}$ $\Delta{G}^{o} = -RT \ln\dfrac{[C][D]}{[A][B]} \label{1.13}$ The equilibrium constant is defined as $K_{eq} = \dfrac{[C][D]}{[A][B]} \label{1.14}$ When $K_{eq}$ is large, almost all reactants are converted to products. Substituting $K_{eq}$ into Equation 1.13, we have: $\Delta{G}^{o} = -RT \ln K_{eq} \label{1.15}$ or $\Delta{G}^{o} = -2.303RT \log_{10} K_{eq} \label{1.16}$ Rearranging, $K_{eq} = 10^{-\Delta{G}^{o}/(2.303RT)} \label{1.17}$ This equation is particularly interesting as it relates the free energy difference under standard conditions to the properties of a system at equilibrium (which is rarely at standard conditions). Table 1.1: Converting $K_{eq}$ to $\Delta{G}^o$ $K_{eq}$ $\Delta{G}^o\; (kcal/mole)$ $10^{-5}$ 6.82 $10^{-4}$ 5.46 $10^{-3}$ 4.09 $10^{-2}$ 2.73 $10^{-1}$ 1.36 1 0 $10^{1}$ -1.36 $10^{2}$ -2.73 $10^{3}$ -4.09 $10^{4}$ -5.46 $10^{5}$ -6.82 Example 1.4 What is $\Delta{G}$ for the isomerization of dihydroxyacetone phosphate to glyceraldehyde 3-phosphate? Given: • At equilibrium, $K_{eq} = 0.0475$ at 298 K and pH 7 • The initial concentration of dihydroxyacetone phosphate = $2 \times 10^{-4}\; M$ • The initial concentration of glyceraldehyde 3-phosphate = $3 \times 10^{-6}\; M$ Solution First calculate $\Delta{G}^{o}$: $\Delta{G}^{o} = -2.303\;RT \log_{10} K_{eq}= (-2.303) \times (1.98 \times 10^{-3}) \times 298 \times (\log_{10} 0.0475) = 1.8 \;kcal/mol \nonumber$ Then, from Equation 1.10: $\Delta{G} = 1.8\; kcal/mol + 2.303\,RT \log_{10}\dfrac{3 \times 10^{-6}\;M}{2 \times 10^{-4}\;M} = -0.7\; kcal/mol \nonumber$ Note Under non-standard conditions (which is essentially all reactions), the spontaneity of a reaction is determined by $\Delta{G}$, not $\Delta{G}^{o}$.
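Equations 1.16 and 1.17 and Table 1.1 can be verified directly. A sketch (function names are ours) using $R = 1.987 \times 10^{-3}$ kcal mol$^{-1}$ K$^{-1}$, slightly more precise than the rounded value quoted above and the one that reproduces the table's entries:

```python
from math import log10

R_KCAL = 1.987e-3  # gas constant in kcal/(mol K)

def dg_from_k(k_eq, t=298.0):
    """dG_standard = -2.303 R T log10(K_eq), in kcal/mol (Equation 1.16)."""
    return -2.303 * R_KCAL * t * log10(k_eq)

def k_from_dg(dg_kcal, t=298.0):
    """K_eq = 10^(-dG_standard / 2.303RT) (Equation 1.17)."""
    return 10 ** (-dg_kcal / (2.303 * R_KCAL * t))

print(round(dg_from_k(1e-5), 2))    # 6.82, matching the first row of Table 1.1
print(round(dg_from_k(0.0475), 1))  # 1.8 kcal/mol, as in Example 1.4
```

The two functions are inverses of each other, which makes it easy to move in either direction between an equilibrium constant and a standard free energy change.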
Gibbs Energy in Electrochemistry The Nernst equation relates the standard-state cell potential with the cell potential of the cell at any moment in time: $E = E^o - \dfrac {RT}{nF} \ln Q \label{1.18}$ with • $E$ = cell potential in volts (joules per coulomb) • $n$ = moles of electrons • $F$ = Faraday's constant: 96,485 coulombs per mole of electrons Multiplying the entire equation by $-nF$ gives $-nFE = -nFE^o + RT \ln Q \label{1.19}$ which has the same form as $\Delta G = \Delta G^o + RT \ln Q \label{1.20}$ Comparing these two equations term by term, it can be concluded that: $\Delta G = -nFE \label{1.21}$ Therefore, $\Delta G^o = -nFE^o \label{1.22}$ Some remarks on the Gibbs "Free" Energy • Free Energy is not necessarily "free": The appellation "free energy" for G has led to so much confusion that many scientists now refer to it simply as the Gibbs energy. The "free" part of the older name reflects the steam-engine origins of thermodynamics with its interest in converting heat into work: ΔG is the maximum amount of energy which can be "freed" from the system to perform useful work. By "useful", we mean work other than that which is associated with the expansion of the system. This is most commonly in the form of electrical work (moving electric charge through a potential difference), but other forms of work (osmotic work, increase in surface area) are also possible. • Free Energy is not energy: A much more serious difficulty with the Gibbs function, particularly in the context of chemistry, is that although G has the units of energy (joules, or in its intensive form, J mol–1), it lacks one of the most important attributes of energy in that it is not conserved.
Thus although the free energy always falls when a gas expands or a chemical reaction takes place spontaneously, there need be no compensating increase in energy anywhere else. Referring to G as an energy also reinforces the false but widespread notion that a fall in energy must accompany any change. But if we accept that energy is conserved, it is apparent that the only necessary condition for change (whether the dropping of a weight, expansion of a gas, or a chemical reaction) is the redistribution of energy. The quantity –ΔG associated with a process represents the quantity of energy that is "shared and spread", which as we have already explained is the meaning of the increase in entropy. The quotient –ΔG/T is in fact identical with $ΔS_{total}$, the entropy change of the world, whose increase is the primary criterion for any kind of change. • Free Energy is not even "real": G differs from the thermodynamic quantities H and S in another significant way: it has no physical reality as a property of matter, whereas H and S can be related to the quantity and distribution of energy in a collection of molecules (e.g., the third law of thermodynamics). The free energy is simply a useful construct that serves as a criterion for change and makes calculations easier.
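The electrochemical relation $\Delta G^o = -nFE^o$ from the section above can be evaluated in one line. A sketch (the Daniell cell value $E^o = +1.10$ V with $n = 2$ is a standard textbook example, not taken from this article):

```python
F = 96485.0  # Faraday's constant, C per mole of electrons

def dg_from_cell_potential(n_electrons, e_volts):
    """dG = -nFE, returned in kJ per mole of reaction."""
    return -n_electrons * F * e_volts / 1000.0

# Daniell cell (Zn/Cu), standard potential E = +1.10 V, n = 2 electrons:
print(round(dg_from_cell_potential(2, 1.10), 1))  # -212.3 kJ/mol
```

A positive cell potential gives a negative $\Delta G$, so a voltaic cell with $E > 0$ runs spontaneously, consistent with the sign conventions above.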
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Energies_and_Potentials/Free_Energy/Gibbs_%28Free%29_Energy.txt
Helmholtz energy function (Hermann Ludwig Ferdinand von Helmholtz) $A$ (from the German Arbeit, "work"): $\left.A\right.=U-TS$ where U is the internal energy, T is the temperature and S is the entropy ($T$ and $S$ form a conjugate pair). The differential of this function is $\left.dA\right.=dU-TdS-SdT$ Substituting $dU = TdS - pdV$ from the second law of thermodynamics (for a reversible change), one obtains $\left.dA\right.=TdS -pdV -TdS-SdT$ thus one arrives at $\left.dA\right.=-pdV-SdT.$ For A(T,V) one has the following total differential $dA=\left(\dfrac{\partial A}{\partial T}\right)_V dT + \left(\dfrac{\partial A}{\partial V}\right)_T dV$ The following equation provides a link between classical thermodynamics and statistical mechanics: $\left.A\right.=-k_B T \ln Q_{NVT}$ where $k_B$ is the Boltzmann constant, T is the temperature, and $Q_{NVT}$ is the canonical ensemble partition function. Quantum correction A quantum correction can be calculated by making use of the Wigner-Kirkwood expansion of the partition function, resulting in (Eq. 3.5 in [1]): $\dfrac{A-A_{ {\mathrm{classical}} }}{N} = \dfrac{\hbar^2}{24m(k_BT)^2} \langle F^2 \rangle$ where $\langle F^2 \rangle$ is the mean squared force on any one atom due to all the other atoms. What are Free Energies Free energy is a composite function that balances the influence of energy vs. entropy. To first define "free" energy, we shall examine the backgrounds of this term, what definitions carry it, and which specific definitions we, as chemists, will choose to refer to: 1. Fossil fuels, global warming, and the usual popular controversies in popular science have led people to cry out for the pursuit of clean and renewable "free" energy (no doubt referring to long-term monetary cost). A most valiant cause; absolutely disregard it here. Forget you even read this definition. We'll even start the listing at 1. again! 2.
If a system is isothermal and closed, with constant pressure, it is describable by the Gibbs Energy, known also by a plethora of nicknames such as "free energy", "Gibbs free energy", "Gibbs function", and "free enthalpy". Because this module is located under "Gibbs Energy", we'll focus on this energy; the Helmholtz will be briefly mentioned. 3. If a system is isothermal and closed, with constant volume, it is describable by the Helmholtz Energy, known also by an unnecessary number of aliases such as "Helmholtz function", "work function", "Helmholtz free energy", and our favorite, "free energy". It will be mentioned in passing. 4. Having two energies both called "free energy" is like having two brothers named Jack. More specifically, they'd be twin brothers; the Gibbs and Helmholtz Energies describe situations with equations easily confused with each other. It's no wonder the IUPAC (the International Union of Pure and Applied Chemistry) officially refers to the two as Gibbs Energy and Helmholtz Energy, respectively. This should not be a surprise, because that's what they were originally named in the first place! Just keep in mind that some outdated or unsophisticated texts might still use the pseudonyms mentioned above (guised as, say, the title of a module). Gibbs Energy The Gibbs Energy is named after Josiah Willard Gibbs, an American physicist in the late 19th century who greatly advanced thermodynamics; his work now serves as a foundation for this branch of science. This energy can be said to be the greatest amount of work (other than expansion work) a system can do on its surroundings, when it operates at a constant pressure and temperature.
First, a modeling of the Gibbs Energy by way of equation: \[G = U + PV - TS\] where \(U\) is the internal energy, \(P\) the pressure, \(V\) the volume, \(T\) the temperature, and \(S\) the entropy. Of course, we know that \(U + PV\) can also be defined as the enthalpy: \[U + PV = H\] Which leads us to a form of how the Gibbs Energy is related to enthalpy: \[G = H - TS\] All of the members on the right side of this equation are state functions, so G is a state function as well. The change in G is simply: \[\Delta{G} = \Delta{H} - T\Delta{S}\] How This Equation Was Reached We will start with an equation for the total entropy change of the universe. Our goal is to whittle it down to a practical form, like a caveman shaping an unwieldy block of stone into a useful hand held tool! \[\Delta{S_{universe}} = \Delta{S_{system}} + \Delta{S_{surroundings}}\] An equation with variables of such scope is difficult to work with. We want to do away with the vagueness, and rewrite a more focused equation. We'll consider the case where temperature and pressure are constant. Here we go: 1. \(\Delta{S_{surroundings}}\) can be rewritten in terms of the system's enthalpy. The heat, \(q_p\), gained by the surroundings is the negative of the \(\Delta{H}\) for the system. Because \(q_{surr} = -\Delta{H}_{sys}\), the change in the entropy of the surroundings is \(\Delta{S}_{surroundings} = -\Delta{H_{sys}}/T\). 2. The equation becomes \[\Delta{S_{univ}} = \Delta{S_{sys}} - \Delta{H_{sys}}/T\] A simple substitution. 3. The equation becomes \[-T\Delta{S_{univ}} = \Delta{H_{sys}} - T\Delta{S_{sys}}\] Multiply both sides by \(-T\). 4. With the mighty powers of whoever discovering stuff getting to name it, we set \(-T\Delta{S_{univ}}\) equal to a great big \(\Delta{G}\) for the almighty Gibbs. Finally, we achieve the equation \[\Delta{G} = \Delta{H} - T\Delta{S}\] Reasoning Behind the Equation As a quick note, let it be said that the name "free energy", other than being confused with another energy exactly so termed, is also somewhat of a misnomer.
The multiple meanings of the word "free" can make it seem as if energy can be transferred at no cost; in fact, the word "free" was used to refer to what cost the system was free to pay, in the form of turning energy into work. \(\Delta{G}\) is useful because it can tell us how a system, when we're given only information on it, will act. \(\Delta{G} < 0\) indicates a spontaneous* change will occur. \(\Delta{G} > 0\) indicates an absence of spontaneity. \(\Delta{G} = 0\) indicates a system at equilibrium. The Gibbs Energy reaches its minimum value when equilibrium is reached. Here, it is represented as a graph, where x represents the extent of how far the reaction has occurred. The minimum of the function has to be smooth, because \(G\) must be differentiable (its first derivative has to exist at the minimum). It was briefly mentioned that \(\Delta{G}\) is the energy available to be converted to work. The definition is self evident from the equation. Look at \(\Delta{G} = \Delta{H} - T\Delta{S}\). Recall that \(\Delta{H}\) is the total energy that can be made into heat. \(T\Delta{S}\) is the energy not available to be converted to work. By a reordering of the Gibbs Energy equation: \[\Delta{H} = \Delta{G} + T\Delta{S}\] Expressed in words: the energy available to be turned into heat = \(\Delta{G}\) + the energy that is not free to do work. This lets us see that \(\Delta{G}\) MUST be the energy free to do work. Why constant temperature and pressure? It just so happens that these are regularly occurring factors in the laboratory, making this equation practical to use, and useful as well, for chemists. An example of Gibbs Energy in the real world is the oxidation of glucose; \(\Delta{G}\) in this case is equal to −2870 kJ (686 kcal of free energy released per mole of glucose). For living cells, this is the primary energy reaction.
Helmholtz Free Energy

The Helmholtz free energy is a thermodynamic potential which measures the "useful" work retrievable from a closed thermodynamic system at constant temperature and volume. For such a system, the negative of the change in the Helmholtz energy equals the maximum amount of work extractable from a thermodynamic process in which both temperature and volume are kept constant. Under these conditions, the Helmholtz energy is minimized at equilibrium. The Helmholtz free energy was originally developed by Hermann von Helmholtz and is generally denoted by the letter A, or the letter F. In physics, the letter F is mostly used, and the quantity is often called the Helmholtz function or simply the "free energy." Introduced by the German physicist Hermann von Helmholtz in 1882, the Helmholtz free energy for a system of fixed composition at constant temperature and constant volume changes according to the formula:

\[ΔA = ΔE – TΔS\]

• A = Helmholtz Free Energy in Joules
• E = Energy of the System in Joules
• T = Absolute Temperature in Kelvin
• S = Entropy in Joules/Kelvin

In summary, the Helmholtz free energy is the measure of an isothermal-isochoric closed system's ability to do work. If any external field is absent, the Helmholtz free energy formula becomes:

\[ΔA = ΔU – TΔS\]

• A = Helmholtz Free Energy in Joules
• U = Internal Energy in Joules
• T = Absolute Temperature in Kelvin
• S = Entropy in Joules/Kelvin

The internal energy (U) can be thought of as the amount of energy required to create a system in the absence of changes in temperature (T) or volume (V). However, if the system is created in an environment at temperature T, then some of that energy can be captured by spontaneous heat transfer between the environment and the system. The amount of this spontaneous energy transfer is TΔS, where S is the final entropy of the system. In that case, you do not have to put in as much energy.
Note that if a more disordered final state (one of higher entropy) is created, less work is required to create the system. The Helmholtz free energy is thus a measure of the amount of energy you have to put in to create a system once the spontaneous energy transfer from the environment is taken into account. Helmholtz Free Energy is generally used in physics, denoted with the letter F, while chemistry generally uses G, the Gibbs Free Energy.

Relating Helmholtz Energy to Gibbs Energy

The Helmholtz Energy is given by the equation: \[A = U - TS\] It is related to the Gibbs Energy in this way: \[G = A + PV\] The Helmholtz Energy is used when holding the pressure constant is not feasible. Along with internal energy and enthalpy, the Helmholtz Energy and Gibbs Energy make up the group of four quantities called the thermodynamic potentials; these potentials are useful for describing various thermodynamic processes. TS represents energy obtainable from the surroundings, and PV represents the work of expansion. If needed, refer to the links above to refresh your memory on enthalpy and internal energy.

Contributors and Attributions

• Alexander Shei
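The relations above (A = U − TS, H = U + PV, G = H − TS = A + PV) can be checked numerically. A quick sketch; the values for U and S are arbitrary illustrative numbers, and one mole of an ideal gas is assumed so that PV = RT:

```python
R = 8.314       # J/(mol K), gas constant
T = 298.15      # K
U = 3718.0      # J/mol, internal energy (illustrative value, not measured data)
S = 191.6       # J/(mol K), entropy (illustrative value)
PV = R * T      # one mole of an ideal gas: PV = RT

A = U - T * S   # Helmholtz energy
H = U + PV      # enthalpy
G = H - T * S   # Gibbs energy

# The two routes to G agree: G = H - TS and G = A + PV
print(abs(G - (A + PV)) < 1e-9)  # True
```

Whatever numbers are substituted, the two expressions for G remain equal, because G = A + PV is an algebraic identity once H = U + PV is accepted.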
The internal energy of a system is identified with the random, disordered motion of molecules; the total (internal) energy in a system includes potential and kinetic energy. This is in contrast to external energy, which is a function of the sample with respect to the outside environment (e.g., kinetic energy if the sample is moving, or potential energy if the sample is at a height above the ground, etc.). The symbol for the internal energy change is \(ΔU\).

Energy on a smaller scale

• Internal energy includes energy on a microscopic scale
• It is the sum of all the microscopic energies such as:
1. translational kinetic energy
2. vibrational and rotational kinetic energy
3. potential energy from intermolecular forces

Example

One gram of water at zero °Celsius compared with one gram of copper at zero °Celsius do NOT have the same internal energy: even though their kinetic energies are equal, water has a much higher potential energy, causing its internal energy to be much greater than the copper's internal energy.

Internal Energy Change Equations

ΔU = q + w, where q is heat and w is work.

An isolated system cannot exchange heat or work with its surroundings, making the change in internal energy equal to zero: ΔUisolated system = 0

Energy is Conserved: ΔUsystem = -ΔUsurroundings

The signs of internal energy

• Energy entering the system is POSITIVE (+), meaning heat is absorbed, q>0, and work is done on the system, w>0
• Energy leaving the system is NEGATIVE (-), meaning heat is given off by the system, q<0, and work is done by the system, w<0
• Since ΔUisolated system = 0, ΔUsystem = -ΔUsurroundings, and energy is conserved.

Quick Notes

• A system contains ONLY internal energy
• A system does NOT contain energy in the form of heat or work
• Heat and work only exist during a change in the system
• Internal energy is a state function

Outside Links

• Levine, Ira N. "Thermodynamic internal energy of an ideal gas of rigid rotors." J. Chem. Educ. 1985: 62, 53.
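The sign convention from the section above can be captured in a couple of lines; the numbers here are made-up illustrations:

```python
def delta_U(q, w):
    """First law with the convention above: energy entering the system is positive."""
    return q + w

# A gas absorbs 500 J of heat (q > 0) while 200 J of work is done on it (w > 0):
print(delta_U(500, 200))    # 700 J: internal energy rises

# The gas gives off 300 J of heat and does 100 J of work on the surroundings:
print(delta_U(-300, -100))  # -400 J: internal energy falls

# Energy conservation: the surroundings change by the opposite amount
assert delta_U(500, 200) == -delta_U(-500, -200)
```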
Contributors and Attributions

• Lorraine Alborzfar (UCD)

Potential Energy

Potential Energy is the energy due to position, composition, or arrangement. It is also the energy associated with forces of attraction and repulsion between objects. Any object that is lifted from its resting position has stored energy; it is called potential energy because it has the potential to do work when released.

Introduction

For example, when a ball is released from a certain height, it is pulled by gravity and the potential energy is converted to kinetic energy as the ball accelerates towards the ground. As this energy converts from potential to kinetic, it is important to take into consideration that energy cannot be created nor destroyed (the law of conservation of energy). The object's total energy can be found through the sum of these two energies.

In an exothermic chemical reaction, potential energy is the source of energy. During an exothermic reaction, bonds break and new bonds form, and protons and electrons go from a structure of higher potential energy to one of lower potential energy. During this change, potential energy is converted to kinetic energy, which is the heat released in reactions. In an endothermic reaction the opposite occurs: the protons and electrons move from an area of low potential energy to an area of high potential energy, which takes in energy.

Potential energy on a molecular level is stored in bonds and static interactions:
• Covalent bonds
• Electrostatic forces
• Nuclear forces

Gravitational Potential Energy

$PE= Fx$ where $F$ is the opposing force and $x$ is the distance moved. To calculate the potential energy of an object on Earth or within any other force field, use the formula $PE=mgh \label{pe1}$ with
• $m$ is the mass of the object in kilograms
• $g$ is the acceleration due to gravity; on Earth this is 9.8 m/s2
• $h$ is the object's height in meters
If the units above are used for $m$, $g$, and $h$, then the final answer will be in Joules.

Example $1$

A 15 gram ball sits on top of a 2 m high refrigerator. What is the potential energy of the ball at the top of the refrigerator?

Solution

Use Equation \ref{pe1} with $m =15\, grams$. This mass, however, has to be in kilograms; the conversion from grams to kilograms is 1,000 grams per 1 kg.
• $\text{height}=2\, m$
• $g=9.8 \, m/s^2$

$PE=(0.015 \, kg)(9.8 \, m/s^2)(2\,m)=0.294\, J \nonumber$

Example $2$

What is the mass of a cart full of groceries that is sitting on top of a 2 m hill if its gravitational potential energy is 0.3 J?

Solution

Use Equation \ref{pe1}

$0.3\,J=(m)(9.8\, m/s^2)(2\,m) \nonumber$

and solve for mass

$m=0.015 \,kg=15\, g. \nonumber$

Example $3$

A 200 gram weight is placed on top of a shelf with a potential energy of 5 J. How high is the weight resting?

Solution

$5\,J=\left(\dfrac{200\,g}{1000\,g/kg}\right)(9.8 m/s^2)(h) \nonumber$

and solve for height

$h=2.55\, m \nonumber$

Coulombic Potential Energy

The potential energy of two charged particles at a distance $r$ can be found through the equation:

$E= \dfrac{q_1 q_2}{4π \epsilon_o r} \label{Coulomb}$

where
• $r$ is the distance between the charges
• $q_1$ and $q_2$ are the charges
• $ε_0= 8.85 \times 10^{-12} C^2/J\,m$

For charges with the same sign, $E$ is positive and gets smaller as $r$ increases. This explains why like charges repel each other: systems prefer a low potential energy, so the charges move apart, which increases the distance between them and lowers the potential energy. For charges of opposite sign, the opposite is true: $E$ is negative and becomes even more negative as the oppositely charged particles attract and come closer together.
Example $4$

Calculate the potential energy associated with two particles with charges of $3 \times 10^{-6}\, C$ and $3.9 \times 10^{-6}\, C$ separated by a distance of $1\, m$.

Solution

Using Equation \ref{Coulomb}

\begin{align*} E &=\dfrac{(3\times 10^{-6}\,C)(3.9 \times 10^{-6}\,C)}{4π (8.85 \times 10^{-12} \,C^2/Jm)(1\,m)} \\[4pt] &=0.105 \,J \end{align*}

Example $5$

Find the distance between two particles that have a potential energy of $0.2\, J$ and charges of $2.5 \times 10^{-6}\, C$ and $3.1 \times 10^{-6}\, C$.

Solution

\begin{align*} 0.2\,J &=\dfrac{(2.5 \times 10^{-6}\,C)(3.1 \times 10^{-6} \,C)}{4\pi (8.85 \times 10^{-12} \,C^2/Jm)\, r} \\[4pt] &=\dfrac{(8.99 \times 10^9)(7.75 \times 10^{-12})}{r} \\[4pt] &=\dfrac{0.0697}{r} \end{align*}

cross multiply and solve for $r$:

$r=0.35\, m \nonumber$

Potential energy includes all interactions in the system, such as those in the nucleus of atoms; in atoms; between atoms in a molecule (intra-molecular forces); and between different molecules (inter-molecular forces).

Contributors and Attributions

• Brittanie Harbick (UCD), Laura Suh (UCD), Amrit Paul Bains (UCD)
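The worked examples above can be checked with a few lines of code (a quick sketch; the function names are mine, and the constants are the same ones used in the text):

```python
import math

g = 9.8            # m/s^2, acceleration due to gravity
EPS0 = 8.85e-12    # C^2/(J m), vacuum permittivity (value used above)

def gravitational_pe(m_kg, h_m):
    """PE = m*g*h, with mass in kg and height in m, giving joules."""
    return m_kg * g * h_m

def coulomb_energy(q1, q2, r):
    """Potential energy of two point charges separated by r meters, in joules."""
    return q1 * q2 / (4 * math.pi * EPS0 * r)

# Example 1: a 15 g ball on a 2 m refrigerator
print(round(gravitational_pe(0.015, 2.0), 3))        # 0.294 J

# Example 3: solve PE = m*g*h for the height of a 200 g weight with PE = 5 J
print(round(5.0 / (0.200 * g), 2))                   # 2.55 m

# Example 4: charges of 3e-6 C and 3.9e-6 C separated by 1 m
print(round(coulomb_energy(3e-6, 3.9e-6, 1.0), 3))   # 0.105 J

# Example 5: solve the Coulomb expression for the separation r at E = 0.2 J
r = (2.5e-6 * 3.1e-6) / (4 * math.pi * EPS0 * 0.2)
print(round(r, 2))  # separation in meters
```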
Thermal Energy, also known as random or internal kinetic energy, is the energy due to the random motion of molecules in a system. Kinetic energy is seen in three forms: vibrational, rotational, and translational. Vibrational energy is caused by an object or molecule moving in a vibrating motion, rotational energy is caused by rotating motion, and translational energy is caused by the movement of a molecule from one location to another.

Thermal Energy and Temperature

Thermal energy is directly proportional to the temperature within a given system (recall that a system is the subject of interest, while the surroundings are located outside the system; the two interact via energy and matter exchange). As a result of this relationship between thermal energy and the temperature of the system, the following applies: the more molecules present, the greater the movement of molecules within a given system, the greater the temperature, and the greater the thermal energy.

more molecules = more movement = higher temperature = more thermal energy

As previously demonstrated, the thermal energy of a system depends on the temperature of the system, which depends on the motion of its molecules. Because of this, at absolute zero (0 K) the thermal energy within a given system is also zero. Since thermal energy is a total rather than an average, a relatively small sample at a somewhat high temperature, such as a cup of tea at its boiling temperature, can have less thermal energy than a larger sample, such as a pool, that is at a lower temperature. If the cup of boiling tea is placed next to the freezing pool, the cup of tea will freeze first because it has less thermal energy than the pool.
To keep definitions straight, remember the following:

• Temperature: the average kinetic energy within a given object, measured on three scales (Fahrenheit, Celsius, Kelvin)
• Thermal Energy: the total of all kinetic energies within a given system
• Heat: the flow of thermal energy due to differences in temperature (heat flows from the object at higher temperature to the object at lower temperature), transferred through conduction, convection, or radiation; thermal energy always flows from warmer areas to cooler areas

Thermal Energy and States of Matter

Matter exists in three states: solid, liquid, or gas. When a given piece of matter undergoes a state change, thermal energy is either added or removed but the temperature remains constant. When a solid is melted, for example, thermal energy is what causes the bonds within the solid to break apart.

Heat: the Transfer of Thermal Energy

Heat can be transferred by three different processes: conduction, convection, or radiation. Conduction occurs when thermal energy is transferred through the interaction of solid particles. This process often occurs when cooking: the boiling of water in a metal pan causes the metal pan to warm as well. Convection usually takes place in gases or liquids (whereas conduction most often takes place in solids), in which the transfer of thermal energy is based on differences in heat. Using the example of the boiling pot of water, convection occurs as the bubbles rise to the surface and, in doing so, transfer heat from the bottom to the top. Radiation is the transfer of thermal energy through space and is responsible for the sunlight that fuels the earth. Thermal energy is a concept applicable in everyday life. For example, engines, such as those in cars or trains, do work by converting thermal energy into mechanical energy.
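The three temperature scales mentioned above are related by simple linear conversions; a quick sketch (the function names are mine):

```python
def celsius_to_kelvin(c):
    """Kelvin is the Celsius value shifted by 273.15; 0 K is absolute zero."""
    return c + 273.15

def celsius_to_fahrenheit(c):
    """Fahrenheit uses a 9/5 scale factor and a 32-degree offset."""
    return c * 9 / 5 + 32

print(celsius_to_kelvin(0.0))        # 273.15 (freezing point of water)
print(celsius_to_fahrenheit(100.0))  # 212.0 (boiling point of water)
```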
Also, refrigerators remove thermal energy from a cool region into a warm region. On a larger scale, recent scientific research has aimed to convert solar energy to thermal energy in order to create heat and electricity. For example, scientific research centers such as NASA explore the uses and applications of thermal energy in order to provide for more efficient energy production. In 1990, for example, NASA extensively researched and explored the potential of a hybrid power system which made use of Thermal Energy Storage (TES) devices. This power system would convert solar energy into thermal energy which would then be used to produce electrical power and heat. However, converting solar energy to thermal energy has been found to be much easier and much more feasible when systems are not in a state of thermodynamic equilibrium. Rather, scientists have proposed, a moving object or a running fluid can allow the energy to be converted into thermal energy.

Thermal Energy and the 2nd Law of Thermodynamics

The 2nd Law of Thermodynamics states that whenever work is performed, the entropy of the universe increases. Thus the flow of thermal energy is constantly increasing entropy.

• Leah Hughes
In thermodynamics, it is imperative to define a system and its surroundings because that concept becomes the basis for many types of descriptions and calculations.

Contributors and Attributions

• Hana Yokoyama-Hatch, Rajdev Purewal (UCD), Angad Oberoi (UCD), Manasa Suresh (UCD)

Bond Energies

Bond energies are limited in their application for the reasons discussed earlier. They were:
• values are for gases only
• many values are averages

These are serious drawbacks if you want energy information about most reactions. Fortunately there is another way. Remember in our definition of enthalpy ($H$) we said it was a "state function". The net enthalpy change ($ΔH$ -- which is the only kind of enthalpy quantity we can measure) is independent of path. What does this mean? If a process can be carried out in a single step, the enthalpy change for that step will be the same as for a series of steps which add up to give the same overall change.

Example $1$: Additivity of Heats

The diagram shows two possible pathways for one reaction: $N_2 + 2 O_2 \rightarrow 2 NO_2$ The direct reaction of the elements nitrogen and oxygen in a 1:2 molar ratio will produce 2 moles of nitrogen dioxide and absorb 68 kJ. But it is also possible to begin with the same elements under different conditions (like a 1:1 molar ratio) and produce 2 moles of nitrogen monoxide. This process absorbs 180 kJ. The nitrogen monoxide can then react with additional oxygen to form 2 moles of nitrogen dioxide. This process releases 112 kJ.
The sum of the two reactions involving nitrogen monoxide gives the production of nitrogen dioxide:

$180 \;kJ + N_2 + O_2 \rightarrow 2 NO$

$\underline{2 NO + O_2 \rightarrow 2 NO_2 + 112\; kJ}$

$68 \;kJ + N_2 + 2 O_2 \rightarrow 2 NO_2$

And because that is true, the enthalpy changes are also additive: $180 \;kJ + (-112\; kJ) = 68\; kJ$ So with a sufficiently ambitious catalog of reaction enthalpy changes it is sometimes possible to calculate--rather than measure--the enthalpy change for a new reaction.

Example $2$: Heat of Reaction

Given the following data:

$H_2 + ½ O_2 \rightarrow H_2O + 286\; kJ$

$N_2O_5 + H_2O \rightarrow 2HNO_3 + 77 \; kJ$

$N_2 + 3O_2 + H_2 \rightarrow 2HNO_3 + 348 \; kJ$

Calculate $\Delta{H_{rxn}}$ for $2N_2 + 5O_2 \rightarrow 2N_2O_5$

Solution

To begin these problems, concentrate on the items in the balanced equation you want. Where are they found in the data available?

$H_2 + ½ O_2 \rightarrow H_2O + 286\; kJ$

$\color{red} N_2O_5 \color{black} + H_2O \rightarrow 2HNO_3 + 77 \; kJ$

$\color{red} N_2 \color{black} + \color{red} 3O_2 \color{black} + H_2 \rightarrow 2HNO_3 + 348 \; kJ$

Once these are located, the equations need to be adjusted so that the substances appear in the same amounts as in the desired reaction and on the same sides.

$H_2 + ½ O_2 \rightarrow H_2O + 286\; kJ$

$2 \times (2 HNO_3 + 77\; kJ \rightarrow \color{red} N_2O_5 \color{black} + H_2O)$

$2 \times (\color{red}N_2 \color{black} + \color{red} 3 O_2 \color{black} + H_2 \rightarrow 2 HNO_3 + 348 \; kJ)$

Notice that this procedure did not fix the $O_2$ entirely. However, there are also substances ($H_2$, $H_2O$, and $HNO_3$) that we need to get rid of so that the equations will add up to give the desired reaction.
If the first reaction is reversed and doubled, this will happen:

$2 \times ( H_2O + 286\; kJ \rightarrow H_2 + ½ O_2)$

$2 \times (2 HNO_3 + 77\; kJ \rightarrow \color{red} N_2O_5 \color{black} + H_2O)$

$2 \times (\color{red}N_2 \color{black} + \color{red} 3 O_2 \color{black} + H_2 \rightarrow 2 HNO_3 + 348 \; kJ)$

Multiply all this out to get

$2 H_2O + 572\; kJ \rightarrow 2 H_2 + O_2$

$4 HNO_3 + 154\; kJ \rightarrow 2 \color{red} N_2O_5 \color{black} + 2 H_2O$

$2 \color{red} N_2 \color{black} + 6\, \color{red} O_2 \color{black} + 2 H_2 \rightarrow 4 HNO_3 + 696 \; kJ$

Then we add and simplify, much like we do with a series of half reactions

$\cancel{2 H_2O} + 572\; kJ \rightarrow \cancel{2 H_2} + \cancel{O_2}$

$\cancel{4 HNO_3} + 154\; kJ \rightarrow 2 \color{red} N_2O_5 \color{black} + \cancel{2 H_2O}$

$\underline {2 \color{red} N_2 \color{black} + 5\, \color{red} O_2 \color{black} + \cancel{O_2} + \cancel{2 H_2} \rightarrow \cancel{4 HNO_3} + 696 \; kJ}$

$30 \;kJ + 2 N_2 + 5 O_2 \rightarrow 2N_2O_5$

This additivity of heats of reaction (or reaction enthalpies) is generally known as Hess's Law. The net heat here is $572\; kJ + 154\; kJ - 696\; kJ = 30\; kJ$ absorbed, so the reaction is endothermic. But it is also possible to state the equivalent of Hess's Law in purely mathematical terms with the introduction of an additional concept: standard enthalpy of formation.
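The heat bookkeeping in the worked example above can be sketched in a few lines; heat written on the product side is treated as a negative ΔH, reversing an equation flips the sign, and scaling an equation multiplies it:

```python
# Delta H values (kJ) for the three data equations; negative = exothermic:
dH1 = -286.0  # H2 + 1/2 O2 -> H2O
dH2 = -77.0   # N2O5 + H2O -> 2 HNO3
dH3 = -348.0  # N2 + 3 O2 + H2 -> 2 HNO3

# Target: 2 N2 + 5 O2 -> 2 N2O5
#       = 2*(eq 1 reversed) + 2*(eq 2 reversed) + 2*(eq 3)
dH_target = 2 * (-dH1) + 2 * (-dH2) + 2 * dH3
print(dH_target)  # 30.0 kJ: positive, so the reaction is endothermic
```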
If elements in their standard states (at normal atmospheric pressure and $25^oC$) are defined as having no enthalpy of formation (i.e., it takes no energy to get an element the way it would normally be), then all compounds will have some enthalpy change associated with their formation from those elements. For example, when liquid water is formed from gaseous hydrogen and oxygen, we can write the following thermochemical equation: $H_2 + ½ O_2 \rightarrow H_2O\; + \;285.8\; kJ$ The 285.8 kJ is the enthalpy of formation for liquid water: the energy released when one mole of liquid water forms from its elements. The fact that the value is on the products side of the reaction shows this is an exothermic process. The value can also be written separately (as in a table). By convention, a negative sign is then applied to show that there is a net loss of enthalpy in the system as the reactants become products. That means heat is released to the surroundings. So we can say $ΔH_f^o = -285.8\; kJ/mol$ of water. All kinds of enthalpies of formation have been tabulated (Table T1). For an endothermic reaction the enthalpy of reaction would be written on the reactant side: $177.8 kJ \;+ \;CaCO_3 \rightarrow CaO + CO_2$ And when written in a table, the value would be $ΔH_{rxn} = +177.8\; kJ/mol$. All "heats of reaction" are molar and therefore proportional. Stoichiometric amounts of heat can be determined for a given amount of starting material just as any other stoichiometric calculation would be done.

Example $2$: Acid/base neutralization

The thermochemical equation for the acid/base neutralization reaction of hydrochloric acid with barium hydroxide solution is $2HCl + Ba(OH)_2 \rightarrow BaCl_2 + 2 H_2O + 118 \; kJ$ How much heat is produced if 34.5 g of HCl reacts with a stoichiometric amount of barium hydroxide?
Solution

Step 1: Balance the equation. It is given and already balanced.

Step 2: Find the number of moles of HCl: $(34.5\; g) \times \dfrac{1 \; mol}{36.5\; g} = 0.945 \; mol \; HCl$

Step 3: Use the ratio to find kJ: $(0.945 \;mol\; HCl) \times \dfrac{118\; kJ}{2 \; mol \;HCl} = 55.8 \; kJ$

So what's the advantage? Theoretically every reaction can be "rewritten" as a series of processes involving elements forming individual compounds. It does not matter whether the reaction actually occurs that way, of course, because the enthalpies are additive (Hess's Law). The heats of formation represent those reactions and therefore can be used in their place to determine the overall enthalpy change in a reaction based on a mathematical statement.

$\sum{ΔH^o_{f\;products}} - \sum{ΔH^o_{f\;reactants}}= ΔH_{rxn}$

That looks fearsome, but it simply says that the overall enthalpy change for a reaction is the difference between the heat content of the products and the heat content of the reactants. If the products end up with more stored energy than the reactants, the enthalpy change will be positive (endothermic). If the products end up with less stored energy than the reactants, the enthalpy change will be negative (exothermic).

Example $3$: Thermite Reaction

Thermite is a generic term for a mixture of metal and metal oxide used to generate tremendous heat. During the reaction one metal is reduced and the other is oxidized. The classic thermite mixture consists of iron(III) oxide and aluminum: $Fe_2O_3 + 2 Al \rightarrow Al_2O_3 + 2 Fe$
• The heat of formation for iron(III) oxide is -826 kJ/mol.
• The heat of formation for aluminum oxide is -1676 kJ/mol.

How much energy is released in this reaction?

Solution

$ΔH_{rxn} = -1676\; kJ - (-826 \;kJ) = -850 \;\text{kJ/mol of either oxide}$

The notion of looking at the enthalpy change as reflective of "stored energy" is important and suggestive. Observation indicates that chemically "stable" compounds tend to have very negative heats of formation.
Carbon dioxide (-393.5 kJ/mol) and water (-285.8 kJ/mol) would be examples, but by no means the most extreme. Oxides of metals like iron(III) oxide and aluminum oxide have very negative heats of formation. In contrast, chemically "unstable" compounds tend to have rather positive heats of formation. These very reactive substances have energy stored within their bonds. Silver fulminate, $Ag_2C_2N_2O_2$, is a good example. The heat of formation is +180 kJ/mol.

Example $4$: Silver fulminate

Silver fulminate is one of a series of compounds containing transition metals and the "fulminate" group. All of the compounds are very unstable. When silver fulminate decomposes it does so according to the following reaction: $Ag_2C_2N_2O_2 \rightarrow 2 Ag + N_2 + 2 CO$
• The heat of formation for silver fulminate is +180 kJ/mol.
• The heat of formation for carbon monoxide is -110.5 kJ/mol.

How much energy is released in this reaction? If 0.0009 g of silver fulminate decomposes (as in the "Whipper Snappers" fireworks), how much energy is released?

Solution

$ΔH_{rxn} = 2(-110.5\; kJ) - 180\; kJ = -401\; kJ/mol$

So $0.0009\; g$ of silver fulminate is about $3 \times 10^{-6}$ moles.

$(3 \times 10^{-6}\; mol) \times (401\; kJ/mol) = 0.0012 \;kJ$

Information like this seems to indicate that there might be a "preference" in Nature for reactions in which the enthalpy change is negative - loss of energy seems to breed chemical "stability". Substances which don't meet that criterion tend to react until they do. Or do they?
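The formation-enthalpy bookkeeping above generalizes easily; a sketch (the helper name is mine):

```python
def delta_H_rxn(products, reactants):
    """Sum(n * dHf) over products minus the same sum over reactants.
    Each side is a list of (coefficient, dHf in kJ/mol) pairs; elements
    in their standard states contribute dHf = 0."""
    side_sum = lambda side: sum(n * dHf for n, dHf in side)
    return side_sum(products) - side_sum(reactants)

# Thermite: Fe2O3 + 2 Al -> Al2O3 + 2 Fe
print(delta_H_rxn([(1, -1676), (2, 0)], [(1, -826), (2, 0)]))  # -850 kJ

# Silver fulminate: Ag2C2N2O2 -> 2 Ag + N2 + 2 CO
dH = delta_H_rxn([(2, 0), (1, 0), (2, -110.5)], [(1, 180)])
print(dH)  # -401.0 kJ/mol

# Stoichiometric scaling for 0.0009 g, about 3e-6 mol, of silver fulminate:
print(round(3e-6 * abs(dH), 4))  # ~0.0012 kJ released
```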
Thermodynamics is the study of the relationship between heat (or energy) and work. Enthalpy is a central factor in thermodynamics. It is the heat content of a system. The heat that passes into or out of the system during a reaction is the enthalpy change. Whether the enthalpy of the system increases (i.e. when energy is added) or decreases (because energy is given off) is a crucial factor that determines whether a reaction can happen. Sometimes, we call the energy of the molecules undergoing change the "internal enthalpy". Sometimes, we call it the "enthalpy of the system." These two phrases refer to the same thing. Similarly, the energy of the molecules that do not take part in the reaction is called the "external enthalpy" or the "enthalpy of the surroundings". Roughly speaking, the energy changes that we looked at in the introduction to thermodynamics were changes in enthalpy. We will see in the next section that there is another energetic factor, entropy, that we also need to consider in reactions. For now, we will just look at enthalpy. • Enthalpy is the heat content of a system. • The enthalpy change of a reaction is roughly equivalent to the amount of energy lost or gained during the reaction. • A reaction is favored if the enthalpy of the system decreases over the reaction. That last statement is a lot like the description of energetics on the previous page. If a system undergoes a reaction and gives off energy, its own energy content decreases. It has less energy left over if it gave some away. Why does the energy of a set of molecules change when a reaction occurs? To answer that, we need to think about what happens in a chemical reaction. In a reaction, there is a change in chemical bonding. Some of the bonds in the reactants are broken, and new bonds are made to form the products. It costs energy to break bonds, but energy is released when new bonds are made. 
Whether a reaction is able to go forward may depend on the balance between these bond-making and bond-breaking steps.
• A reaction is exothermic if more energy is released by formation of new bonds than is consumed by breaking old bonds.
• A reaction is exothermic if weaker bonds are traded for stronger ones.
• A reaction is endothermic if bond-breaking costs more energy than what is provided in bond-making.

Bond energies (the amount of energy that must be added in order to break a bond) are an important factor in determining whether a reaction will occur. Bond strengths are not always easy to predict, because the strength of a bond depends on a number of factors. However, lots of people have done lots of work measuring bond strengths, and they have collected the information in tables, so if you need to know how strong a bond is, you can just look up the information you need.

Bond    Bond Energy (kcal/mol)
H-H     104
C-C     83
O=O     119
N≡N     226
O-H     111
C-H     99
N-H     93
C=O     180

For example, suppose you wanted to know whether the combustion of methane is an exothermic or endothermic reaction. I am going to guess that it's exothermic, because this reaction (and others like it) is used to provide heat for lots of homes by burning natural gas in furnaces. The "combustion" of methane means that it is burned in air, so that it reacts with oxygen. The products of burning hydrocarbons are mostly carbon dioxide and water. The carbon atom in methane (CH4) gets incorporated into a carbon dioxide molecule. The hydrogen atoms get incorporated into water molecules. There are four hydrogen atoms in methane, so that's enough to make two molecules of H2O.
• Four C-H bonds must be broken in the combustion of methane.
• Four new O-H bonds are made when the hydrogens from methane are added into new water molecules.
• Two new C=O bonds are made when the carbon from methane is added into a CO2 molecule.

The other piece of the puzzle is the oxygen source for the reaction.
Oxygen is present in the atmosphere mostly as O2. Because we need two oxygen atoms in the CO2 molecule and two more oxygen atoms for the two water molecules, we need a total of four oxygen atoms for the reaction, which could be provided by two O2 molecules.
• Two O=O bonds must be broken to provide the oxygen atoms for the products.

Altogether, that's four C-H and two O=O bonds broken, plus two C=O and four O-H bonds made. That's 4 x 99 kcal/mol for the C-H bonds and 2 x 119 kcal/mol for the O=O bonds, a total of 634 kcal/mol added. The reaction releases 2 x 180 kcal/mol for the C=O bonds and 4 x 111 kcal/mol for the O-H bonds, totaling 804 kcal/mol. Overall, there is 170 kcal/mol more released than is consumed. That means the reaction is exothermic, so it produces heat. It's probably a good way to heat your home.

Problem TD2.1.

Compare the combustion of ethane to the combustion of methane.
1. Write a reaction for the combustion of ethane, CH3CH3, to carbon dioxide and water.
2. How many carbon dioxide molecules would be produced from one molecule of ethane?
3. How many water molecules would be produced from one molecule of ethane?
4. How many oxygen molecules would be needed to provide oxygen atoms to accomplish the steps in questions (b) and (c)?
5. How much energy is consumed / produced by the reaction? Compare this result to the one for methane.

Problem TD2.2.

The Haber-Bosch process is used to make ammonia for fertilizer. It employs the reaction of hydrogen gas (H2) with atmospheric nitrogen (N2) in a 3:1 ratio to produce ammonia (NH3).
1. Write a reaction for the Haber-Bosch process.
2. How many ammonia molecules would be produced from one molecule of nitrogen?
3. How much energy is consumed / produced by the reaction?
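The bond-energy arithmetic for the combustion of methane can be laid out explicitly, using the table values above:

```python
# Bond energies in kcal/mol, taken from the table above
bond_energy = {"C-H": 99, "O=O": 119, "C=O": 180, "O-H": 111}

# CH4 + 2 O2 -> CO2 + 2 H2O
energy_in = 4 * bond_energy["C-H"] + 2 * bond_energy["O=O"]   # bonds broken
energy_out = 2 * bond_energy["C=O"] + 4 * bond_energy["O-H"]  # bonds formed

print(energy_in)               # 634 kcal/mol consumed
print(energy_out)              # 804 kcal/mol released
print(energy_out - energy_in)  # 170 kcal/mol net released -> exothermic
```

The same dictionary-based bookkeeping (count each bond type broken and formed, then take the difference) works for the ethane and Haber-Bosch problems above.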
This page deals with the basic ideas about energy changes during chemical reactions, including simple energy diagrams and the terms exothermic and endothermic. Energy changes during chemical reactions Obviously, lots of chemical reactions give out energy as heat. Getting heat by burning a fuel is a simple example, but you will probably have come across lots of others in the lab. Other reactions need a continuous supply of heat to make them work. Splitting calcium carbonate into calcium oxide and carbon dioxide is a simple example of this. Any chemical reaction will involve breaking some bonds and making new ones. Energy is needed to break bonds, and is given out when the new bonds are formed. It is very unlikely that these two processes will involve exactly the same amount of energy - and so some energy will either be absorbed or released during a reaction. • A reaction in which heat energy is given off is said to be exothermic. • A reaction in which heat energy is absorbed is said to be endothermic. You can show this on simple energy diagrams. For an exothermic change: Notice that in an exothermic change, the products have a lower energy than the reactants. The energy that the system loses is given out as heat. The surroundings warm up. For an endothermic change: This time the products have a higher energy than the reactants. The system absorbs this extra energy as heat from the surroundings. Expressing exothermic and endothermic changes in numbers Here is an exothermic reaction, showing the amount of heat evolved: $C + O_2 \rightarrow CO_2 \;\;\; \Delta H = -394 \text{kJ} \, mol^{-1}$ This shows that 394 kJ of heat energy are evolved when equation quantities of carbon and oxygen combine to give carbon dioxide. The mol-1 (per mole) refers to the whole equation in mole quantities. How do you know that heat is evolved? That is shown by the negative sign. You always think of the energy change during a reaction from the point of view of the reactants. 
The reactants (carbon and oxygen) have lost energy during the reaction. When you burn carbon in oxygen, that is the energy which is causing the surroundings to get hotter. And here is an endothermic change: $CaCO_3 \rightarrow CaO + CO_2 \;\;\; \Delta H = +178 \text{kJ} \, mol^{-1}$ In this case, 178 kJ of heat are absorbed when 1 mole of calcium carbonate reacts to give 1 mole of calcium oxide and 1 mole of carbon dioxide. You can tell that energy is being absorbed because of the plus sign. A simple energy diagram for the reaction looks like this: The products have a higher energy than the reactants. Energy has been gained by the system - hence the plus sign. Whenever you write values for any energy change, you must always write a plus or a minus sign in front of it. Energetic Stability Chemists often say that something is energetically more stable than something else - for example, that oxygen, O2, is more energetically stable than ozone, O3. What does this mean? If you plot the positions of oxygen and ozone on an energy diagram, it looks like this: The lower down the energy diagram something is, the more energetically stable it is. If ozone converted into ordinary oxygen, heat energy would be released, and the oxygen would be in a more energetically stable form than it was before. So why doesn't ozone immediately convert into the more energetically stable oxygen? Similarly, if you mix gasoline and air at ordinary temperatures (when you are filling up a car, for example), why doesn't it immediately convert into carbon dioxide and water? It would be much more energetically stable if it turned into carbon dioxide and water - you can tell that, because lots of heat is given out when gasoline burns in air. But there is no reaction when you mix the two. For any reaction to happen, bonds have to be broken, and new ones made. Breaking bonds takes energy. There is a minimum amount of energy needed before a reaction can start - activation energy. 
If the molecules don't, for example, hit each other with enough energy, then nothing happens. We say that the mixture is kinetically stable, even though it may be energetically unstable with respect to its possible products. So a gasoline/air mixture at ordinary temperatures does not react, even though a lot of energy would be released if the reaction took place. Gasoline and air are energetically unstable with respect to carbon dioxide and water - they are much higher up the energy diagram. But a gasoline and air mixture is kinetically stable at ordinary temperatures, because the activation energy barrier is too high. If you expose the mixture to a flame or a spark, then you get a major fire or explosion. The initial flame supplies activation energy. The heat given out by the molecules that react first is more than enough to supply the activation energy for the next molecules to react - and so on. The moral of all this is that you should be very careful using the word "stable" in chemistry!
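The sign convention described earlier - a negative ΔH for heat evolved, a positive ΔH for heat absorbed - can be sketched in a few lines of Python. This is a minimal illustration; the function names are invented here, and the ΔH values are the ones quoted in the text for carbon combustion and calcium carbonate decomposition.

```python
def classify(delta_h):
    """Classify a reaction from the sign of its enthalpy change (kJ/mol)."""
    return "exothermic" if delta_h < 0 else "endothermic"

def heat_change(delta_h, moles):
    """Heat absorbed by the system, in kJ, for the given number of moles
    of reaction as written; a negative result means heat is given out."""
    return delta_h * moles

# C + O2 -> CO2, ΔH = -394 kJ/mol: heat is evolved
print(classify(-394))            # exothermic
print(heat_change(-394, 2))      # -788 (kJ given out by burning 2 mol of carbon)

# CaCO3 -> CaO + CO2, ΔH = +178 kJ/mol: heat is absorbed
print(classify(+178))            # endothermic
```

Note that the "per mole" in ΔH refers to the whole equation as written, which is why `heat_change` simply scales ΔH by the number of moles of reaction.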
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Fundamentals_of_Thermodynamics/Enthalpy_Changes_in_Reactions_II.txt
Entropy is another important aspect of thermodynamics. Enthalpy has something to do with the energetic content of a system or a molecule. Entropy has something to do with how that energy is stored. We sometimes speak of the energy in a system as being "partitioned" or divided into various "states". How this energy is divided up is the concern of entropy. By way of analogy, picture a set of mailboxes. You may have a wall of them in your dormitory or your apartment building. The mailboxes are of several different sizes: maybe there are a few rows of small ones, a couple of rows of medium sized ones, and a row of big mailboxes on the bottom. Instead of putting mail in these boxes, we're going to use them to hold little packages of energy. Later on, you might take the energy packages out of your own mailbox and use them to take a trip to the mall or the gym. But how does the mail get to your mailbox in the first place? The energy packages don't arrive in your molecular dormitory with addresses on them. The packages come in different sizes, because they contain different amounts of energy, but other than that there is no identifying information on them. Some of the packages don't fit into some of the mailboxes, because some of the packages are too big and some of the mailboxes are smaller than the others. The energy packages need to go into mailboxes that they will fit into. Still, there are an awful lot of mailboxes that most of the energy packages could still fit into. There needs to be some system of deciding where to put all of these packages. It turns out that, in the molecular world, there is such a system, and it follows a pretty simple rule. When a whole pile of energy packages arrive, the postmaster does her best to put one package into every mailbox. Then, when every mailbox has one, she starts putting a second one into each box, and so on. It didn't have to be that way. 
It could have been the case that all the energy was simply put into the first couple of mailboxes and the rest were left empty. In other words, the rule could have been that all the energy must be sorted into the same place, instead of being spread around. But that's not how it is. • Energy is always partitioned into the maximum number of states possible. Entropy is the sorting of energy into different modes or states. When energy is partitioned or sorted into additional states, entropy is said to increase. When energy is bundled into a smaller number of states, entropy is said to decrease. Nature's bias is towards an increase in entropy. This is a fundamental law of the universe; there is no reason that can be used to explain why nature prefers high entropy to low entropy. Instead, increasing entropy is itself the basic reason for a wide range of things that happen in the universe. Entropy is popularly described in terms of "disorder". That can be a useful idea, although it doesn't really describe what is happening energetically. A better picture of entropy can be built by looking at how a group of molecules might sort some energy that is added to them. In other words, what are some examples of "states" in which energy can be sorted? If you get more energy -- maybe by eating breakfast -- one of the immediate benefits is being able to increase your physical activity. You have more energy to move around, to run, to jump. A similar situation is true with molecules. Molecules have a variety of ways in which they can move, if they are given some energy. They can zip around; this kind of motion is usually called translation. They can tumble and roll; this kind of motion is referred to as rotation. Also, they can wiggle, letting their bonds get longer and shorter by moving individual atoms around a little bit. This type of motion is called vibration. 
When molecules absorb extra energy, they may be able to sort the energy into rotational, vibrational and translational states. This only works with energy packages of a certain size; other packages would be sorted into other kinds of states. However, these are just a few examples of what we mean by states. Okay, so energy is stored in states, and it is sorted into the maximum possible number of states. But how does entropy change in a reaction? We know that enthalpy may change by breaking or forming certain bonds, but how does the energy get sorted again? The changes in internal entropy during a reaction are often very small. In other words, the energy remaining at the end of the reaction gets sorted more or less the way it was before the reaction. However, there are some very common exceptions. The most common case in which internal entropy changes a lot is when the number of molecules involved changes between the start of the reaction and the end of the reaction. Maybe two molecules react together to form one, new molecule. Maybe one molecule splits apart to make two, new molecules. If one molecule splits apart in the reaction, entropy generally increases. Two molecules can rotate, vibrate and translate (or tumble, wiggle and zip around) independently of each other. That means the number of states available for partitioning energy increases when one molecule splits into two. • Entropy generally increases when a reaction produces more molecules than it started with. • Entropy generally decreases when a reaction produces fewer molecules than it started with. Apart from a factor like a change in the number of molecules involved, internal entropy changes are often fairly subtle. They are not as easy to predict as enthalpy changes. Nevertheless, there may sometimes be a trade-off between enthalpy and entropy. 
If a reaction splits a molecule into two, it seems likely that an increase in enthalpy will be involved, so that the bond that held the two pieces together can be broken. That's not favourable. However, when that happens, we've just seen that there will be an increase in entropy, because energy can then be sorted into additional modes in the two, independent molecules. So we have two different factors to balance. There is a tool we often use to decide which factor wins out. It's called free energy, and we will look at it next.
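The mailbox picture above can be made quantitative with a small counting exercise. The number of distinct ways to distribute identical energy packages among a set of mailboxes is given by the standard "stars and bars" formula, C(q + N − 1, q). The numbers below are purely illustrative (they do not come from this passage), but they show why splitting one molecule into two - doubling the number of available states - increases the number of arrangements, and hence the entropy.

```python
from math import comb

def ways(packages, boxes):
    """Ways to distribute identical energy packages among distinguishable
    mailboxes (states): stars and bars, C(packages + boxes - 1, packages)."""
    return comb(packages + boxes - 1, packages)

# Four energy packages sorted into one molecule's six states...
print(ways(4, 6))    # 126
# ...versus the same four packages shared by two independent molecules
# with six states each (twelve in total): far more arrangements.
print(ways(4, 12))   # 1365
```

The jump from 126 to 1365 arrangements is the counting version of the statement that entropy generally increases when a reaction produces more molecules than it started with.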
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Fundamentals_of_Thermodynamics/Entropy_Changes_in_Reactions.txt
Entropy and enthalpy are two of the basic factors of thermodynamics. Enthalpy has something to do with the energetic content of a system or a molecule. Entropy has something to do with how that energy is stored. • A reaction is favored if enthalpy decreases: There is a bias in nature toward decreasing enthalpy in a system. Reactions can happen when enthalpy is transferred to the surroundings. • A reaction is favored if entropy increases: There is also a bias in nature toward increasing entropy in a system. Reactions can happen when entropy increases. Consider the cartoon reaction below. Red squares are being converted to green circles, provided the reaction proceeds from left to right as shown. Whether or not the reaction proceeds to the right depends on the balance between enthalpy and entropy. There are several combinations possible. In one case, maybe entropy increases when the red squares turn into green circles, and the enthalpy decreases. If we think of the balance between these two factors, we come to a simple conclusion. Both factors tilt the balance of the reaction to the right. In this case, the red squares will be converted into green circles. Alternatively, maybe entropy decreases when the red squares turn into green circles, and enthalpy increases. If we think of the balance between these two factors, we come to another simple conclusion. Both factors tilt the balance of the reaction to the left. In this case, the red squares will remain just as they are. Having two factors may lead to complications. For example, what if enthalpy decreases, but so does entropy? Does the reaction happen, or doesn't it? In that case, we may need quantitative information to make a decision. How much does the enthalpy decrease? How much does the entropy decrease? If the effect of the enthalpy decrease is greater than that of the entropy decrease, the reaction may still go forward. The effects of enthalpy and entropy are often combined in a quantity called "free energy." 
Free energy is just a way to keep track of the sum of the two effects. Mathematically, the symbol for the internal enthalpy change is "ΔH" and the symbol for the internal entropy change is "ΔS." Free energy is symbolized by "ΔG," and the relationship is given by the following expression: \[ \Delta G = \Delta H - T \Delta S \] \(T\) in this expression stands for the temperature (in Kelvin, rather than Celsius or Fahrenheit). The temperature acts as a scaling factor in the expression, putting the entropy and enthalpy on equivalent footing so that their effects can be compared directly. How do we use free energy? It works the same way we were using enthalpy earlier (that's why the free energy has the same sign as the enthalpy in the mathematical expression, whereas the entropy has an opposite sign). If free energy decreases, the reaction can proceed. If the free energy increases, the reaction can't proceed. • A reaction is favored if the free energy of the system decreases. • A reaction is not favored if the free energy of the system increases. Because free energy takes into consideration both the enthalpy and entropy changes, we don't have to consider anything else to decide if the reaction occurs. Both factors have already been taken into account. Remember the terms "endothermic" and "exothermic" from our discussion of enthalpy. Exothermic reactions (in which enthalpy decreases) were favored. Endothermic ones were not. In free energy terms, we say that exergonic reactions (in which free energy decreases) are favored. Endergonic ones (in which free energy increases) are not. Problem TD4.1. Imagine a reaction in which the effects of enthalpy and entropy are opposite and almost equally balanced, so that there is no preference for whether the reaction proceeds or not. Looking at the expression for free energy, how do you think the situation will change under the following conditions: 1. the temperature is very cold (0.09 K) 2. 
the temperature is very warm (500 K) Problem TD4.2. Which of the following reaction profiles describe reactions that will proceed? Which ones describe reactions that will not proceed? How Entropy Rules Thermodynamics Sometimes it is said that entropy governs the universe. As it happens, enthalpy and entropy changes in a reaction are partly related to each other. The reason for this relationship is that if energy is added to or released from the system, it has to be partitioned into new states. Thus, an enthalpy change can also have an effect on entropy. Specifically, the internal enthalpy change that we discussed earlier has an effect on the entropy of the surroundings. So far, we have just considered internal entropy changes. • In an exothermic reaction, the external entropy (entropy of the surroundings) increases. • In an endothermic reaction, the external entropy (entropy of the surroundings) decreases. Free energy takes into account both the entropy of the system and the entropy changes that arise because of heat exchange with the surroundings. Together, the system and the surroundings are called "the universe". That's because the system is just everything involved in the reaction, and the surroundings are everything that isn't involved in the reaction. Enthalpy changes in the system lead to additional partitioning of energy in the surroundings. We might visualize that with the mailbox analogy we used for entropy earlier. In this case, each molecule has its own set of mailboxes, into which it sorts incoming energy. Looked at in this way, thermodynamics boils down to one major consideration, and that is the combined entropy of both the system and its surroundings (together known as the universe). • For a reaction to proceed, the entropy of the universe must increase.
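The expression ΔG = ΔH − TΔS translates directly into code. The sketch below uses hypothetical numbers (not from the text), chosen only to show how temperature can tip the balance when enthalpy and entropy pull in opposite directions, as in Problem TD4.1.

```python
def gibbs(delta_h, temperature, delta_s):
    """ΔG = ΔH - TΔS. Units must be consistent: here ΔH is in kcal/mol,
    T in Kelvin, and ΔS in kcal/(mol K)."""
    return delta_h - temperature * delta_s

# Hypothetical reaction: unfavourable enthalpy (+10 kcal/mol) but
# favourable entropy (+0.04 kcal/(mol K)). Temperature decides who wins.
print(gibbs(10, 100, 0.04))   # +6.0 -> positive ΔG: not favored when cold
print(gibbs(10, 500, 0.04))   # -10.0 -> negative ΔG: favored when warm
```

At low temperature the TΔS term is small and the unfavourable enthalpy dominates; at high temperature the entropy term dominates and the same reaction becomes favored.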
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Fundamentals_of_Thermodynamics/Free_Energy_Changes_in_Reactions.txt
The balance between reactants and products in a reaction will be determined by the free energy difference between the two sides of the reaction. The greater the free energy difference, the more the reaction will favor one side or the other. The smaller the free energy difference, the closer the mixture will get to equal parts reactants and products (loosely speaking). Exactly where the balance lies in an equilibrium reaction is described by the equilibrium constant. The equilibrium constant is just the ratio of products to reactants, once the reaction has settled to equilibrium. That's the point at which the forward and reverse reactions are balanced, so that the ratio of products to reactants is unchanging. • A reaction has reached equilibrium when the reaction has stopped progressing (i.e., no change in concentrations although at a microscopic level both forward and reverse reactions occur), so that the amount of reactants that have turned into products remains constant, and the amount of reactants left over stays constant. • The equilibrium constant is the ratio of products to reactants when the reaction has reached equilibrium. The equilibrium constant could be a large number (like a thousand). That means that there are many more products than reactants at equilibrium. It could also be a very small fraction (like one millionth). That would indicate that the reaction does not proceed very far, producing only a tiny amount of products at equilibrium. • Every reaction has an equilibrium constant • A very large equilibrium constant (in the millions or billions) means the reaction goes "to completion", with all reactants essentially converted into products • A tiny equilibrium constant (very close to zero) means the reaction hardly moves forward at all. • A modest equilibrium constant (close to one, or at least as close to one as numbers like 0.01 or 100) is considered to indicate a true equilibrium reaction, in which there is a significant amount of both products and reactants. 
The equilibrium constant is related to the free energy change of the reaction by the expression: $K = e^{-\Delta G/RT}$ or $\ln K = - \dfrac{\Delta G}{RT}$ in which T is the temperature in Kelvin and R is the "gas constant" (1.986 cal/K mol). Remember, e is just a number that occurs frequently in mathematical relationships in nature (sort of like π); it has a value of about 2.718. This expression for K does make some assumptions about the conditions that we won't worry about; we are using a slightly simplified model. Relating Gibbs energy and the equilibrium Let's look at the form of this relationship between free energy and the equilibrium constant. First, we will see how we deal with endergonic versus exergonic reactions. The free energy changes in opposite directions in these two cases, and we usually deal with opposites by giving one quantity a positive sign and one quantity a negative sign. A reaction in which the free energy increases is given a positive value for its free energy. On the other hand, if free energy decreases over the course of the reaction, we show that by using a negative number for the value of the free energy. If ΔG is negative, the exponent in the relationship becomes positive (because it is multiplied by -1 in the expression). Since e to a positive power will usually be a number greater than one, the relationship suggests there are more products than reactants. That's good, because the reaction is exergonic, and we expect the reaction to go forward. What's more, the more negative the value of ΔG, the more product-favored the reaction will be. • 10^(large number) is a large number. • 10^(small number) is a smaller number. However, if ΔG is a positive number, then the exponent in the relationship becomes negative. A number with a negative exponent, by the rules of exponents, is the same as the inverse of the number with a positive exponent of the same size. In other words, 10^-2 = 1 / 10^2. • 10^(negative number) is a fraction. 
That means if ΔG is positive, the equilibrium constant becomes a fraction. That's because that positive value of ΔG is multiplied by -1 in the expression, becoming negative, and then it's placed in the exponent. That's good, because a positive value of ΔG corresponds to an endergonic reaction, which does not favor product formation. Other factors There are other factors in the expression relating ΔG to the equilibrium constant. One of them, R, is just a "fudge factor"; it's the number that, when placed in the expression, makes the relationship agree with reality. Moreover, it is a constant, so it does not change. However, the other factor is temperature, which does change. That means that the equilibrium constant may change with different temperatures. Overall, the effect of temperature is to make the exponent in the expression a smaller number. That's because the free energy is divided by the temperature and the gas constant; the resulting number becomes the exponent in the relationship. At the extreme, a high temperature could make the exponent into a very, very small number, something close to zero. What happens then? • 10^0 = 1 • e^0 = 1 As the exponent gets smaller and smaller, the equilibrium constant could approach 1. That means there would be more or less equal amounts of products and reactants in our simplified approach. However, the fact that there is a temperature factor in the expression for ΔG itself means that there is a limit to how small K will get as the temperature increases. At some point, the two values for temperature cancel out altogether and the expression becomes K = e^(ΔS/R). At that point, the equilibrium constant is independent of temperature and is based only on internal entropy differences between the two sides of the reaction. This relationship is useful because of its predictive value. Qualitatively, it confirms ideas we had already developed about thermodynamics. 
• Highly exergonic reactions (large, negative/decreasing ΔG) favor products. • Highly endergonic reactions (large, positive/increasing ΔG) favor reactants. • Reactions with small free energy changes lead to equilibrium mixtures of both products and reactants. Problems TD6.1. Arrange the following series of numbers from the largest quantity to the smallest, from left to right. 1. 10^5, 10^4, 10^6 2. 2^3, 2^6, 2^2 3. 3^3, 3^0, 3^2 4. e^2, e^1, e^4 5. 10^-1, 10^-5, 10^-3 6. 1/10, 1/25, 1/50 7. 2^0.5, 2^0.1, 2^0.9 Problem TD6.2. Given the following free energy differences, arrange the corresponding equilibrium constants from largest to smallest. 1. 25 kcal/mol, 17 kcal/mol, 9 kcal/mol 2. 16 kcal/mol, 19 kcal/mol, 21 kcal/mol 3. 7 kcal/mol, 22 kcal/mol, 13 kcal/mol 4. -17 kcal/mol, -3 kcal/mol, -8 kcal/mol 5. -17 kcal/mol, 3 kcal/mol, -8 kcal/mol Problem TD6.3: What is the value of the equilibrium constant at 300K in the following cases? (1 kcal = 1000 cal) 1. ΔG = 3 kcal/mol 2. ΔG = -2 kcal/mol 3. ΔG = -5 kcal/mol 4. ΔG = 15 kcal/mol 5. ΔG = -10 kcal/mol 6. The free energy increases by 8 kcal/mol over the reaction. 7. The free energy decreases by 1 kcal/mol over the reaction. Problem TD6.4. In which of the cases in TD6.3. do you think there would be significant amounts of both products and reactants at equilibrium? Problem TD6.5. The mathematical expression for the equilibrium constant says that K will get smaller at higher temperatures. Explain this phenomenon without the mathematical expression in terms of what you know about temperature and energy.
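The relationship K = e^(−ΔG/RT) can be sketched directly in code, using the gas constant value quoted above (1.986 cal/(K mol), i.e. 1.986 × 10⁻³ kcal/(K mol)) and the same simplified model as the text. The function name is invented here; the setup mirrors the conditions of Problem TD6.3 (T = 300 K, ΔG in kcal/mol).

```python
from math import exp

R = 1.986e-3  # gas constant in kcal/(mol K), as quoted in the text

def equilibrium_constant(delta_g, temperature=300):
    """K = e^(-ΔG/RT), with ΔG in kcal/mol and T in Kelvin."""
    return exp(-delta_g / (R * temperature))

print(equilibrium_constant(0))     # 1.0: evenly balanced mixture
print(equilibrium_constant(-2))    # about 29: products favored
print(equilibrium_constant(3))     # about 0.0065: reactants favored
```

A negative ΔG flips to a positive exponent and K > 1; a positive ΔG gives a negative exponent and a fractional K, exactly as argued in the prose above.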
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Fundamentals_of_Thermodynamics/Free_Energy_and_Equilibrium.txt
Thermodynamics is the study of the relationship between heat (or energy) and work. In other words, thermodynamics looks at how we can put energy into a system (whether it is a machine or a molecule) and make it do work. Alternatively, we might be able to do some work on a system and make it produce energy (like spinning the turbines in a power station to produce electricity). In chemistry, we sometimes speak more broadly about "energetics" of reactions (rather than thermodynamics), because energy given off during a reaction may simply be lost to the surroundings without doing useful work. Nevertheless, the ideas are the same: energy can be added to a set of molecules in order to produce a reaction, or a reaction can occur between a set of molecules in order to release energy. A classic example of reaction energetics is the hydrolysis of ATP to ADP in biology. This reaction is used in the cell as a source of energy; the energy released from the reaction is frequently coupled to other processes that could not occur without the added energy. The hydrolysis of ATP, or the addition of water to ATP in order to break ATP into two, smaller molecules, gives off energy. That energy can be used by the cell to carry out other processes that would cost energy. One molecule of ADP and one molecule of inorganic phosphate, sometimes abbreviated as Pi, are also produced. • Energy can be given off by a chemical reaction. • That energy can be used to power other reactions that require energy. In the cell, ATP is produced in high levels in the mitochondria. Because it is a relatively small molecule, it can be transported easily to other areas of the cell where energy may be needed. The ATP can be hydrolysed on site, providing energy for the cell to use for other reactions. Note that the scheme above uses some thermodynamics jargon. The place where the reaction takes place, or the molecules participating in the reaction, are called "the system". 
Energy is supplied to "the surroundings", meaning places or molecules other than those directly involved in this reaction. There are a couple of other ways in which energetics of reactions are commonly depicted. The energetic relationship between ATP plus water and ADP plus phosphate shown above is really a simplified graph of energy versus reaction progress (sometimes called reaction coordinate). This type of graph shows changes in energy over the course of a reaction. The energy of the system at the beginning of the reaction is shown on the left, and the energy at the end of the reaction is shown on the right. This type of graph is sometimes referred to as a reaction profile. Another common way of discussing energetics is to include energy as a reactant or product in an equation describing the reaction. An equation for a reaction shows what the starting materials were for the reaction, and what they turned into after the reaction. The things that reacted together in the reaction are called the "reactants". They are written on the left hand side of the arrow that says a reaction took place. The things that the reactants turned into are called the "products". They show up on the right hand side of the arrow. $ATP + H_2O \rightarrow ADP + P_i + \text {energy}$ For the hydrolysis of ATP, energy is simply included as one of the products of the reaction, since the reaction releases energy. Alternatively, the energetic observation about ATP can be turned around, since there are evidently some reactions that cost energy. Probably the most well-known reaction of this type is the conversion of carbon dioxide to carbohydrates such as glucose. This conversion actually results from a long series of different reactions that happen one after another. Overall, the process requires a lot of energy. This energy is supplied in part by ATP, generated with assistance from photosystem I and II, which are arrays of molecules that interact with sunlight. 
A simplified reaction profile for carbohydrate synthesis is shown below. • Energy can be consumed by a chemical reaction. • Reactions that consume energy need an energy source in order to occur. Again, this energetic relationship can be thought of in the form of a balanced reaction. $\text {energy} + 6CO_2 + 6H_2O \rightarrow C_6H_{12}O_6 + 6O_2$ In this case, energy is a reactant, not a product. It is one of the key ingredients needed to make the reaction happen. Reactions that produce energy, like ATP hydrolysis, are referred to as exothermic reactions (or sometimes exergonic, meaning roughly the same thing). In reaction profiles, these reactions go downhill in energy as the reaction occurs from the left side of the diagram to the right. On the other hand, reactions that cost energy (the ones that go uphill on the reaction profile, like carbohydrate synthesis) are referred to as endothermic (or sometimes endergonic). It is useful to think of reactions as "going downhill" or "going uphill" because one of these situations should seem inherently easier than the other (especially if you've ever been skiing). Exothermic reactions (the downhill ones) occur very easily; endothermic reactions do not (those are the uphill ones). • Systems always go to lower energy if possible. Reactions that are energetically "uphill" cannot happen easily by themselves. Those reactions must be powered by other reactions that are going downhill. The energy traded between these reactions keeps chemical reactions going, in cells and other important places. Sometimes, a process that is used to supply energy for another reaction is thought of as the "driving force" of the reaction. Without the driving force, the desired reaction would not be able to occur. In general, a reaction will occur if more than enough energy is supplied. Excess energy does not hurt on the macroscopic scale. 
However, if not enough energy is supplied to make up for an endothermic reaction, the reaction is not likely to happen. Energy is a lot like money. It can be passed from one set of hands to another. Doing so often helps get things done. There is one problem with the use of chemical reactions as sources of energy. If ATP hydrolysis releases energy, and if the release of energy is always favoured, why doesn't it happen spontaneously? In other words, why don't all the ATP molecules in all the cells in all the organisms in the whole world just slide downhill into ADP right now? What is stopping them? Fortunately, all reactions have barriers that stop them from happening until they are ready to go. A reaction barrier is an initial investment of energy needed to get things started. Reaction barriers occur for a variety of physical reasons: two molecules may need to get oriented in the right direction to react with each other, or a bond may have to be broken to get the reaction going, costing an initial outlay of energy. The reaction barriers of reactions influence how quickly reactions happen. High barriers slow reactions down a lot. Low barriers allow them to happen more easily. The study of reaction barriers, and how quickly reactions can occur, is called chemical kinetics. Thermodynamics, on the other hand, is really concerned with the overall energy change from the beginning of a reaction to the end. It compares the energies of two sets of molecules to each other: the energies of the reactants and the energies of the products.
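The idea of one reaction "driving" another can be made concrete with a little energy bookkeeping. The numbers below are standard biochemistry textbook values (in kcal/mol), not taken from this passage, and the pairing shown - ATP hydrolysis powering glucose phosphorylation - is just one well-known example of the coupling described above.

```python
# Standard free energy changes, in kcal/mol (assumed textbook values):
ATP_HYDROLYSIS = -7.3            # ATP + H2O -> ADP + Pi (downhill)
GLUCOSE_PHOSPHORYLATION = +3.3   # glucose + Pi -> glucose-6-phosphate (uphill)

# The uphill reaction cannot proceed on its own...
print(GLUCOSE_PHOSPHORYLATION < 0)        # False
# ...but coupled to ATP hydrolysis, the combined change is downhill:
coupled = ATP_HYDROLYSIS + GLUCOSE_PHOSPHORYLATION
print(round(coupled, 1))                  # -4.0
print(coupled < 0)                        # True: the coupled pair can proceed
```

This is exactly the money analogy in the text: the downhill reaction pays the energy bill for the uphill one, and the overall transaction still ends up in the black.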
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Fundamentals_of_Thermodynamics/Introduction_to_Thermodynamics.txt
Sometimes, there is not a big difference in energy between reactants and products of a reaction. What happens then? Does the reaction go forward, because it will not cost a lot of energy? Or does it not proceed, because there isn't enough driving force? For example, one simple reaction that occurs all the time is the reaction of water with carbon dioxide. This is a reaction that happens when carbon dioxide dissolves in lakes, rivers and oceans. It even happens in your own bloodstream. Water reacts with carbon dioxide to form carbonic acid. However, carbonic acid also decomposes spontaneously in water. It reacts to form carbon dioxide and water. In other words, this is a reaction that can go either direction. It can go forwards or backwards. It is an example of an equilibrium reaction. An equilibrium reaction is one that is energetically balanced, so that it really isn't favored to go in either direction. Equilibrium reactions are extremely important in nature, partly because of the forward and reverse capabilities that they offer. In essence, they are reactions with an "undo" button. The reaction can proceed in one direction when needed, and it can proceed in the other direction when needed. However, there are some inherent limitations involved. Frequently, equilibrium reactions only proceed "partway". That is, a group of molecules will start to produce products. However, at some point those products will begin reverting to the starting materials again. Eventually the system will settle out as a mixture of reactants and products. What if it's really important that we have the products of the reaction at one point, with none of the reactants? And if later on we need the reactants, but not any of the products? It would be useful if there were a way to control the direction of an equilibrium reaction, so that we could "push" it to one side or the other. Control of equilibrium reactions can be remarkably simple. 
It follows a rule that was observed by Henri Le Châtelier (ah-REE luh shah-tell-YAY), a French industrial chemist, around 1900. Le Chatelier noticed that equilibrium reactions often shift direction if the conditions of the reaction are changed. In general, adding any product of the reaction shifts the balance back toward the reactants. If any product of the reaction is added, the reaction makes more starting materials. Thus, adding more carbonic acid to a carbon dioxide - water - carbonic acid mixture would result in the reverse reaction, producing more water and carbon dioxide. Adding more carbon dioxide, on the other hand, would lead to production of more carbonic acid. Here is a cartoon illustration of "Le Châtelier's Principle" at work. Suppose red squares and blue ellipses can react together to make black circles and green circles. Maybe there is a natural equilibrium in this reaction, so that the two piles of shapes are roughly equal in size. What would happen if something knocked this system off balance? For example, maybe black circles are highly elusive, and they just wander away as soon as they are formed. The system won't be in equilibrium anymore, because without those black circles, the balance will be upset, with not enough things on the right side for the number of things on the left. Le Chatelier noticed that nature automatically corrects for such changes. If some of the black circles disappear, the reaction will kick into action again, using up some red squares and blue ellipses to produce more green and black circles. The numbers of shapes won't return to exactly what they were before, because some of the black circles have still gone missing, but the system will have shifted to use up more reactants on the left and to produce more products on the right, so that the overall ratio between right and left is restored. Alternatively, maybe we found a way to make the black circles stay where they are.
Instead, we have dumped in a bunch of extra blue ellipses. Once again, the system is knocked off balance. This time, there is too much stuff on the left, compared to the amount on the right side. The reaction goes into action again. It uses up some of those extra blue ellipses (and, at the same time, some of the red squares) to produce more black and green circles, bringing the system back to the original ratio of right side shapes to left side shapes. In general, if molecules are added to a system, the reaction will shift to bring the system back into equilibrium. If molecules are removed from the system, the reaction will also shift to bring the system back into equilibrium. Furthermore, because heat can be consumed by (or produced by) reactions, temperature can sometimes be used to shift equilibria. If a reaction is exothermic, heat is a product of the reaction. Adding more heat will result in the reaction shifting to produce more reactants. Cooling the reaction (removing heat) would do the opposite: the reaction would shift to produce more heat, and more products. In the cartoon, we have a shape-shifting reaction again, but this time the reaction releases energy (those are orange flames, symbolic of the heat produced). What happens if that energy is removed? For example, if heat is removed through addition of a pale blue ice cube, what will be the effect on the system? Those orange energy shapes (the "flames") were a part of the system. If they are removed, the system will have to shift in order to restore them. If the reaction pushes to the right again, more energy will be released, bringing the system back into equilibrium.
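The "which way does it shift?" question above can be sketched numerically by comparing the reaction quotient Q to the equilibrium constant K. This is only an illustrative sketch: the value of K and the concentrations below are made-up numbers, not measured data for the carbon dioxide reaction.

```python
# Sketch of Le Chatelier's principle for CO2(aq) + H2O <=> H2CO3, comparing the
# reaction quotient Q to the equilibrium constant K. K and the concentrations
# are illustrative values, not measured ones.
def shift_direction(q, k, tol=1e-12):
    """Predict which way an equilibrium reaction shifts."""
    if abs(q - k) < tol:
        return "at equilibrium"
    # Q > K: too much product, so the reverse reaction is favored.
    return "reverse (toward reactants)" if q > k else "forward (toward products)"

K = 1.0e-3                      # assumed equilibrium constant (illustrative)
co2, h2co3 = 0.010, 1.0e-6      # concentrations in mol/L (water omitted, as the solvent)

q = h2co3 / co2                 # reaction quotient for the forward reaction
print(shift_direction(q, K))    # not enough carbonic acid yet: shifts forward

# Dumping in extra product (carbonic acid) raises Q above K ...
h2co3 = 0.005
q = h2co3 / co2
print(shift_direction(q, K))    # ... so the reaction runs in reverse
```

The same comparison also captures the cartoon: removing black circles (product) lowers Q below K, and the system responds by running forward again.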
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Fundamentals_of_Thermodynamics/Reversibility_and_Le_Chatelier.txt
A state function is a property whose value does not depend on the path taken to reach that specific value. In contrast, functions that depend on the path taken between two values are called path functions. Both path and state functions are often encountered in thermodynamics. Introduction Whenever compounds or chemical reactions are discussed, one of the first things mentioned is the state of the specific molecule or compound. "State" refers to temperature, pressure, and the amount and type of substance present. Once the state has been established, state functions can be defined. State functions are values that depend on the state of the substance, and not on how that state was reached. For example, density is a state function, because a substance's density is not affected by how the substance is obtained. Consider a quantity of H2O: it does not matter whether that H2O is obtained from the tap, from a well, or from a bottle, because as long as all three are in the same state, they have the same density. When deciding whether a certain property is a state function or not, keep this rule in mind: is this property or value affected by the path or way taken to establish it? If the answer is no, then it is a state function, but if the answer is yes, then it is not a state function. Mathematics of State Functions Another way to think of state functions is as integrals. Integrals depend on only three things: the function, the lower limit and the upper limit. Similarly, state functions depend on three things: the property, the initial value, and the final value. In other words, integrals illustrate how state functions depend only on the final and initial value and not on the object's history or the path taken to get from the initial to the final value. Here is an example using the enthalpy, $H$, where $t_0$ represents the initial state and $t_1$ represents the final state.
$\displaystyle \int_{t_0}^{t_1} \; dH = H(t_1)-H(t_0)$ This is equivalent to a familiar definition of enthalpy: $\Delta H = H_{final} - H_{initial}$ As represented by the solution to the integral, enthalpy is a state function because it only depends on the initial and final conditions, and not on the path taken to establish these conditions. Therefore, the integral of a state function can be taken using only two values: the final and initial values. On the other hand, multiple integrals and multiple limits of integration are required to take the integral of a path function. If an integral of a certain property can be calculated using just the property and its initial and final value, the property is a state function. State Functions vs. Path Functions State functions are defined by comparing them to path functions. As stated before, a state function is a property whose value does not depend on the path taken to reach that specific function or value. In essence, if something is not a path function, it is probably a state function. To better understand state functions, first define path functions and then compare path and state functions. Path functions are functions that depend on the path taken to reach that specific value. For example, suppose you have $1000 in your savings account. Suppose you want to deposit some money to this account. The amount you deposit is a path function because it is dependent upon the path taken to obtain that money. In other words, the amount of money you will deposit in your savings account is dependent upon the path or way taken to obtain that money. If you work as a CEO of a company for a week versus working at a gas station for a week, you would receive two different amounts of money at the end of the week. Thus, a path function is a property or value that is dependent on the path taken to establish that value. State functions do not depend on the path taken. Using the same example, suppose you have $1000 in your savings account.
You withdraw $500 from your savings account. It does not matter whether you withdraw the $500 in one shot or whether you do so in $50 increments. When you receive your monthly statement at the end, you will notice a net withdrawal of $500 and will see your resulting balance as $500. Thus, the bank balance is a state function because it does not depend on the path or way taken to withdraw or deposit money. In the end, whether you do so in one lump or in multiple transactions, your bank balance will stay the same. The figure below illustrates state functions in the form of enthalpy: In this figure, two different paths are shown to form $NaCl_{(s)}$. Path one: The first path takes only a single step, with an enthalpy of formation of -411 kJ/mol: $Na_{(s)} + \frac{1}{2}\, Cl_{2(g)} \rightarrow NaCl_{(s)}$ Path two: The second path takes five steps to form $NaCl_{(s)}$ $Na_{(s)} + \frac{1}{2}\, Cl_{2(g)} \rightarrow Na_{(g)} + \frac{1}{2}\, Cl_{2(g)} \tag{1: sublimation}$ $Na_{(g)} + \frac{1}{2}\, Cl_{2(g)} \rightarrow Na_{(g)} + Cl_{(g)} \tag{2: atomization}$ $Na_{(g)} + Cl_{(g)} \rightarrow Na^+_{(g)} + Cl_{(g)} + e^- \tag{3: ionization}$ $Na^+_{(g)} + Cl_{(g)} + e^- \rightarrow Na^+_{(g)} + Cl^-_{(g)} \tag{4: electron affinity}$ $Na^+_{(g)} + Cl^-_{(g)} \rightarrow NaCl_{(s)} \tag{5: lattice formation}$ When the enthalpies of all these steps are added, the enthalpy of formation of $NaCl_{(s)}$ is still -411 kJ/mol. This is a perfect example of a state function: no matter which path is taken to form $NaCl_{(s)}$, the result is the same enthalpy of formation of -411 kJ/mol. Table 1: Summary of differences between state and path functions
State Function | Path Function
Independent of the path taken to establish the property or value. | Dependent on the path taken to establish the property or value.
Can be integrated using only the final and initial values. | Needs multiple integrals and limits of integration in order to integrate.
Multiple steps result in the same value. | Multiple steps result in different values.
Based on the established state of the system (temperature, pressure, amount, and identity of the system). | Based on how the state of the system was established.
Normally represented by an uppercase letter.1 | Normally represented by a lowercase letter.1
1The last comparison made is a generalization that does not necessarily hold for all aspects and calculations involved in chemistry. Analogy The main point to remember when trying to identify a state function is to determine whether the path taken to reach the function affects the value. The analogy below illustrates how to tell whether a certain property is a state function. Every morning, millions of people must decide how to reach their offices. Some opt for taking the stairs, whereas others take the elevator. In this situation, ∆y, the change in vertical position, is the same whether a person takes the stairs or the elevator. The distance from the office lobby to the office stays the same, irrespective of the path taken to get there. As a result, ∆y is a state function because its value is independent of the path taken to establish its value. In the same situation, time, or ∆t, is not a state function. If someone takes the longer way of getting to the office (climbing the stairs), ∆t would be greater, whereas ∆t would be smaller if the elevator is taken. In this analogy, ∆t is not a state function because its value is dependent on the path. Applications State functions are commonly encountered in thermodynamics; many of the quantities involved with thermodynamics, such as $\Delta U$ and $\Delta H$, are state functions. Additionally, state functions are crucial in thermodynamics because they make calculations simple and allow one to calculate data that could otherwise only be obtained through experiments. More specifically, state functions facilitate the use of Hess's Law, which allows the manipulation (addition, subtraction, multiplication, etc.)
of the enthalpies of half reactions when adding multiple half reactions to form a full reaction. Hess's Law depends upon the fact that enthalpy is a state function. If enthalpy were not a state function, Hess's Law would be much more complicated, because the enthalpies of half reactions could not simply be added; instead, several additional calculations would be required. Furthermore, state functions and Hess's Law help one calculate the enthalpy of complex reactions without having to actually replicate these reactions in a laboratory. All that is required is to write out and sum the enthalpies of the half reactions or of the hypothetical steps leading to the chemical reaction. State functions are also encountered in many other quantities involved with thermodynamics, such as internal energy (∆U), Gibbs free energy, enthalpy, and entropy. Problems 1. In terms of what we have discussed in this module, is going from the 1st floor of Sproul Hall to the 9th floor of Sproul Hall the same thing as going from the 1st floor, to the 3rd floor, to the 5th floor, to the 9th floor of Sproul Hall? 2. Is ∆U a state function? 3. Is temperature a state function? 4. Is volume a state function? (Prove with an example.) 5. Although pressure and volume are state functions, why is work (which is often expressed as -P∆V) not a state function? Solutions 1. Yes, because the question describes a state function. Your position is dependent only on the final and initial position, which are respectively the 9th floor and the 1st floor of Sproul Hall, and not on the path or way taken to get there. 2. The formula for ∆U is ∆U = Ufinal - Uinitial. The formula of ∆U itself shows that it is a state function, because ∆U depends only on Ufinal and Uinitial. In other words, ∆U is not affected by the path taken to establish its values. This is the definition of a state function, and as a result, ∆U is a state function. 3.
Temperature is a state function, as it is one of the values used to define the state of an object. Furthermore, temperature is dependent on the final and initial values, not on the path taken to establish the values. 4. Volume is a state function because volume is only dependent on the final and initial values and not on the path taken to establish those values. Any example that shows this statement in action is acceptable. Here's an acceptable answer: imagine a balloon is inflated to a certain volume. Whether it is inflated in multiple steps or in a single step, it will still attain the same volume at the end. As a result, volume is a state function because it is not dependent on the object's path or history. 5. The reason work is not a state function depends on the definition of work rather than the formula for work. The definition of work is moving an object against a force. Thus, in essence, the definition of work states that work depends on its history or the path it takes, because the movement of an object is dependent upon the path taken to execute that movement (e.g., running vs. walking). Therefore, if a quantity is dependent on its history or on the path it takes, the resulting value or property is not a state function. Even though pressure and volume are state functions, the definition of work illustrates why work is not a state function. Contributors and Attributions • Allison Billings (UCD), Rachel Morris (UCD), Ryan Starr (UCD), Angad Oberoi (UCD)
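The Hess's-law argument for the NaCl cycle above can be checked with a short calculation: because enthalpy is a state function, the five step enthalpies must sum to the one-step value. Note that the individual step enthalpies below are standard textbook values assumed for this sketch; the text itself only quotes the -411 kJ/mol total.

```python
# Hess's law check for NaCl(s) formation: because enthalpy is a state function,
# the five-step path must sum to the single-step enthalpy of formation.
# Step enthalpies (kJ/mol) are standard textbook values, assumed here.
steps = {
    "sublimation of Na(s)":        107,
    "atomization of 1/2 Cl2(g)":   122,
    "ionization of Na(g)":         496,
    "electron affinity of Cl(g)": -349,
    "lattice formation of NaCl":  -787,
}

total = sum(steps.values())
print(f"Sum over the five steps: {total} kJ/mol")  # -411 kJ/mol
```

Whichever path is taken, the total is -411 kJ/mol, matching the single-step enthalpy of formation quoted above.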
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Fundamentals_of_Thermodynamics/State_vs._Path_Functions.txt
The temperature of a system in classical thermodynamics is intimately related to the zeroth law of thermodynamics: two systems must have the same temperature if they are to be in thermal equilibrium (i.e. there is no net heat flow between them). However, it is most useful to have a temperature scale. By making use of the ideal gas law one can define an absolute temperature $T = \frac{pV}{Nk_B}$ however, perhaps a better definition of temperature is $\frac{1}{T(E,V,N)} = \left. \frac{\partial S}{\partial E}\right\vert_{V,N}$ where S is the entropy. Temperature has the SI units of kelvin (K) (named in honour of William Thomson [1]). The kelvin is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water[2] [3]. Temperature can also be defined kinetically, from the average translational kinetic energy per particle: $T = \frac{2}{3} \frac{1}{k_B} \overline {\left(\frac{1}{2}m_i v_i^2\right)}$ where kB is the Boltzmann constant. The kinetic temperature so defined is related to the equipartition theorem. • SklogWiki Trouton's Rule Trouton's rule says that for many (but not all) liquids, the entropy of vaporization is approximately the same, at ~85 J mol−1K−1. The (partial) success of the rule is due to the fact that the entropy of a gas is considerably larger than that of any liquid. $S_{gas} \gg S_{liquid}$ Therefore, the entropy of the initial state (e.g. the liquid) is negligible in determining the entropy of vaporization $\Delta S_{vap}= S_{gas} - S_{liquid} \approx S_{gas} \label{approx}$ When a liquid vaporizes its entropy goes from a modest value to a significantly larger one.
This is related to the ratio of the enthalpy of vaporization and the temperature of the transition: $ΔS_{vap}= \dfrac{ΔH_{vap}}{T} \label{Eq1}$ $ΔS_{vap}$ is found to be approximately constant at the boiling point (Figure $1$): $ΔS_{vap} \approx 85 \,J\, mol^{−1}K^{−1} \label{Trule}$ This is Trouton’s rule, which is valid for many liquids (e.g., the entropy of vaporization of toluene is 87.30 J K−1 mol−1, that of benzene is 89.45 J K−1 mol−1, and that of chloroform is 87.92 J K−1 mol−1). Because of its convenience, the rule is used to estimate the enthalpy of vaporization of liquids whose boiling points are known. Experimental values actually vary rather more than this: gases such as neon, nitrogen, oxygen and methane, whose liquids all boil below 150 K, have values in the range 65−75 J mol−1 K−1; benzene, many 'normal' liquids and liquid sodium, lithium and iodine fall in the range 80−90 J mol−1 K−1; and ethanol, water and hydrogen fluoride fall in the range 105−115 J mol−1 K−1. Thus there is nothing special about 150 K itself; the variation reflects the influence of intermolecular interactions. The value of ~85 J mol−1K−1 corresponds to an interaction energy of ~9.5kT per molecule, and so the boiling point gives an indication of the strength of the cohesive energy holding molecules together in the condensed phase. When the cohesive energy exceeds this value, as in water, then the ratio $ΔH_{vap}/T$ (Equation $\ref{Eq1}$) is larger, and conversely the ratio is smaller when the cohesive energy is less, as in neon or methane. The ≈9.5kT minimum energy per molecule is quite a modest energy; if a molecule has six near neighbors this corresponds to about 3kT/2 per interaction between two molecules, roughly the average thermal energy. Melting There is no universal rule for the entropy of melting, since an approximation similar to that used for Trouton's rule (Equation $\ref{approx}$) does not exist.
However, if the nature of the interactions is consistent between a set of solids, then a crude correlation can be identified (Figure $1$; orange symbols). Trouton's Rule does not Apply to Structured Liquids For example, the entropies of vaporization of water, ethanol, formic acid and hydrogen fluoride are far from the predicted values: if the liquid presents hydrogen bonding or any other kind of highly ordered structure, its entropy will be particularly low, and the entropy gain during vaporization will be greater, too. The enthalpy of vaporization is greater for hydrogen-bonding molecules than for plain alkanes. For low-molecular-weight alcohols, this effect is pronounced. The longer the alkane chain becomes, the more the compound behaves like a pure alkane. Table $1$: Enthalpy of vaporization in kJ/mol
Liquid | $\Delta H_{vap}$ (kJ/mol) | $\Delta H_{vap}$ (kJ/mol per carbon)
Methanol | 38 | 38
Ethanol | 42 | 21
n-propanol | 47 | 16
n-butanol | 52 | 13
n-pentanol | 57 | 11
n-hexanol | 61 | 10
n-heptanol | 67 | 10
n-octanol | 71 | 9
n-nonanol | 77 | 9
n-decanol | 82 | 8
Data obtained from the NIST WebBook. Keeping in mind the relative molecular weights of the compounds, you can see a decreasing influence of hydrogen bonding (and other) effects on the n-alcohol series as we move to larger chains and the compounds become less alcohol-like (structured liquid) and more alkane-like (unstructured liquid). This is much more obvious when $\mathrm{\Delta}H$ is normalized to a per-carbon basis. Table $1$ shows that two different processes control the enthalpy of vaporization, and similarly the saturation vapor concentration (also known as vapor pressure) or boiling point. At the low-molecular-weight end, hydrogen bonding dominates, so we see the behavior common to polar, hydrogen-bonding compounds. At the high-molecular-weight end, we see the pattern observed for alkanes. Trouton's rule hardly works for highly ordered substances exhibiting hydrogen bonding.
Other factors like the enthalpy of vaporization for a long chained organic molecule {strength of Van der Waals forces} may also play some significance role. Example $1$: Entropy of Waporization for Water The experimentally determined enthalpy of vaporization if water is $40.7\, kJ\,mol^{-1}$. Does water follow Trouton's rule in predicting the enthalpy of vaporization? Solution From Equations $\ref{Eq1}$ and $\ref{Trule}$, we get $ΔS_{vap}= \dfrac{\Delta H_{vap}}{T} \approx 85 \, J \,K^{-1} mol^{-1}\nonumber$ This predicts that (since water boils at 373.15 K under atmospheric pressure) $ΔH_{vap} \approx (85 \, J \,K^{-1} mol^{-1} )( 373.15\, K) = 31.7\, kJ \,K^{-1} mol^{-1} \nonumber$ This deviates from the observed enthalpy change of $40.7\, J\, K^{-1} mol^{-1}$ by 23%, which is a sizable deviation from experiment. So, Trouton's law does not apply to water. This deviation is similarly observed by comparing the determined entropy of vaporization (Equation $\ref{Eq1}$) to the estimate from Trouton's law $ΔS_{vap} = \dfrac{40.7\, kJ\, mol^{-1}}{373.15 K} = 109.1 J\, K^{-1} mol^{-1} > 85 \, J \,K^{-1} mol^{-1} \nonumber$ Example $1$ demostrates the general observation that the $S_{liquid}$ for structured liquids is lower than for unstructured liquids, so the entropy gain during vaporization (i.e., $\Delta_{vap}$ in Equation $\ref{approx}$) will be greater. Trouton Rule's does not Apply to Ordered Gasses In contrast to water, the entropy of vaporization of formic acid has negative deviance from Trouton's rule. This fact indicates the existence of an orderly structure in the gas phase; it is known that formic acid forms a dimer structure even in the gas phase (Figure $2$). As with water, where hydrogen bonding results in structured phase and reduced entropy of the liquid (positive deviation from Trouton's law), the dimerization in formic acid reduces the entropy of the gas (negative deviation from Trouton's law). 
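The Trouton's-rule arithmetic in Example 1 is easy to reproduce numerically. A minimal sketch follows; the water figures come from the example above, while the benzene boiling point and enthalpy are standard literature values assumed here for comparison.

```python
# Trouton's-rule estimate: Hvap ~ (85 J mol^-1 K^-1) * Tb, compared with
# measured values. Water data are from the example above; the benzene numbers
# (Tb = 353.2 K, Hvap ~ 30.7 kJ/mol) are assumed literature values.
TROUTON = 85.0  # J mol^-1 K^-1

def hvap_estimate_kj(boiling_point_k):
    """Estimate the enthalpy of vaporization in kJ/mol from the boiling point."""
    return TROUTON * boiling_point_k / 1000.0

# Water: a structured (hydrogen-bonded) liquid, so the rule underestimates Hvap.
est_water = hvap_estimate_kj(373.15)
deviation = (40.7 - est_water) / 40.7 * 100
print(f"water: estimate {est_water:.1f} kJ/mol, measured 40.7, off by {deviation:.0f}%")

# Benzene: a 'normal' liquid, so the rule works well.
est_benzene = hvap_estimate_kj(353.2)
print(f"benzene: estimate {est_benzene:.1f} kJ/mol, measured ~30.7")
```

The contrast between the two outputs is the whole point of the rule: it succeeds for unstructured liquids like benzene and fails for hydrogen-bonded liquids like water.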
Negative deviations can also occur as a result of a reduced gas-phase entropy owing to a low population of excited rotational states in the gas phase, particularly in small molecules such as methane. The accuracy of Trouton's rule can be improved by using the temperature-dependent form $\Delta {\bar {S}}_{vap}=4.5R+R\ln T$ Contributors and Attributions • Renato at Chemistry StackExchange • PHANUEL ALUOMU at Chemistry StackExchange • porphyrin at Chemistry StackExchange • airhuff at Chemistry StackExchange • Wikipedia
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Fundamentals_of_Thermodynamics/Temperature.txt
A gas will always flow into a newly available volume, and does so because its molecules are rapidly bouncing off one another and hitting the walls of their container, readily moving into a new allowable space. It follows from the second law of thermodynamics that a process will occur in the direction towards a more probable state. In terms of entropy, this can be expressed as a system going from a state of lesser probability (fewer microstates) towards a state of higher probability (more microstates). This corresponds to increasing the $W$ in the equation $S=k_B\ln W$. The Mixing of Ideal Gases For our example, we shall again consider a simple system of two ideal gases, A and B, with numbers of moles $n_A$ and $n_B$, at a certain constant temperature and pressure in volumes of $V_A$ and $V_B$, as shown in Figure $\PageIndex{1A}$. These two gases are separated by a partition so they are each sequestered in their respective volumes. If we now remove the partition (like opening a window in the example above), we expect the two gases to randomly diffuse and form a homogeneous mixture, as we see in Figure $\PageIndex{1B}$. To calculate the entropy change, let us treat this mixing as two separate gas expansions, one for gas A and another for B. For the isothermal expansion of an ideal gas, we know that $\Delta S=nR\ln \dfrac{V_2}{V_1} \;. \nonumber$ Now, for each gas, the volume $V_1$ is the initial volume of the gas, and $V_2$ is the final volume, which is the combined volume of the two gases, $V_A+V_B$. So for the two separate gas expansions, $\Delta S_A=n_A R\ln \dfrac{V_A+V_B}{V_A} \nonumber$ $\Delta S_B=n_B R\ln \dfrac{V_A+V_B}{V_B} \nonumber$ So to find the total entropy change for both these processes, because they are happening at the same time, we simply add the two changes in entropy together.
$\Delta_{mix}S = \Delta S_{A}+\Delta S_{B}=n_{A}R\ln \dfrac{V_{A}+V_{B}}{V_{A}}+n_{B}R\ln \dfrac{V_{A}+V_{B}}{V_{B}} \nonumber$ Recalling the ideal gas law, PV=nRT, we see that the volume is directly proportional to the number of moles (Avogadro's Law), and since we know the number of moles we can substitute this for the volume: $\Delta_{mix}S=n_{A}R\ln \dfrac{n_{A}+n_{B}}{n_{A}}+n_{B}R\ln \dfrac{n_{A}+n_{B}}{n_{B}} \nonumber$ Now we recognize that the inverse of the term $\frac{n_{A}+n_{B}}{n_{A}}$ is the mole fraction $\chi_{A}=\frac{n_{A}}{n_{A}+n_{B}}$, and taking the inverses of these two terms in the above equation, we have: $\Delta_{mix}S=-n_{A}R\ln \dfrac{n_A}{n_A+n_B}-n_BR\ln \dfrac{n_B}{n_A+n_B} = -n_A R\ln \chi_A -n_B R\ln \chi_B \nonumber$ since $\ln x^{-1}=-\ln x$ from the rules for logarithms. If we now factor out R from each term, we obtain the equation for the entropy change of mixing: $\Delta_{mix}S=-R(n_{A}\ln \chi_{A}+n_{B}\ln \chi_{B}) \nonumber$ This equation is also commonly written with the total number of moles: $\Delta_{mix}S=-nR(\chi_A \ln \chi_A+\chi_B\ln \chi_B) \label{Final}$ where the total number of moles is $n=n_A+n_B$ Notice that when the two gases are mixed, each mole fraction will be less than one, making the terms inside the parentheses negative, and thus the entropy of mixing will always be positive. This observation makes sense, because as you add one component to another in a two-component solution, the mole fraction of the other component will decrease, and the log of a number less than 1 is negative; multiplied by the negative sign in front of the equation, this gives a positive quantity. Equation $\ref{Final}$ applies to both ideal solutions and ideal gases. Outside Links • Sattar, Simeen. "Thermodynamics of Mixing Real Gases." J. Chem. Educ. 2000, 77, 1361.
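The final mixing equation is easy to evaluate. Here is a minimal sketch of Equation \ref{Final} for a two-component ideal mixture (the function name and the example amounts are just illustrative choices):

```python
import math

# Entropy of mixing for two ideal gases, dS_mix = -nR(xA ln xA + xB ln xB),
# in J/K. The function name and amounts are illustrative.
R = 8.314  # J mol^-1 K^-1

def entropy_of_mixing(n_a, n_b):
    n = n_a + n_b
    x_a, x_b = n_a / n, n_b / n
    return -n * R * (x_a * math.log(x_a) + x_b * math.log(x_b))

# Equimolar mixture, 1 mol total: dS_mix = R ln 2, about 5.76 J/K -- positive,
# as the discussion above requires.
print(entropy_of_mixing(0.5, 0.5))

# The entropy gain is largest for the 50:50 composition:
print(entropy_of_mixing(0.9, 0.1) < entropy_of_mixing(0.5, 0.5))  # True
```

Note that the result is positive for any composition, confirming that mixing ideal gases is always entropically favorable.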
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Ideal_Systems/Entropy_of_Mixing.txt
In this section we will talk about the relationship between ideal gases and thermodynamics. We will see how, by using thermodynamics, we can get a better understanding of ideal gases. Introduction In chemistry we often see many connections to physics, and by utilizing both chemistry and physics we can get a better understanding of each. We will use what we know about ideal gases and thermodynamics to try to understand specific processes that occur in a system. A Quick Recap on Thermodynamics Before we discuss any further, let's do a very quick recap of the aspects of thermodynamics that are important to know for ideal gas processes. Some of this will be a quick review, and some will be relatively new unless you have seen it in your physics class. So first off let's state the first law of thermodynamics: $\Delta{U} = Q + W$ The whole point of stating this equation is to remind us that energy within any given system is conserved. What that means is that no energy is EVER created or destroyed; it is simply converted from one form to another, such as heat to work and vice versa. In case you may be confused with some of these symbols, here is a short explanation:
ΔU — the total change in the internal energy of the gas.
Q — the total heat flow of the gas. When Q is negative (-), heat is being removed from the system; when Q is positive (+), heat is being added to the system.
W — the total work done on or by the gas. When W is negative (-), work is being done by the system; when W is positive (+), work is being done on the system.
Thermodynamics and Ideal Gases Below are two equations that describe the relationship between the internal energy of the system for a monatomic gas and a diatomic gas.
In a monatomic (mono-: one) gas, since each molecule is a single atom, there are fewer ways for it to have energy than for a diatomic (di-: two) gas, which has more ways to store energy (hence, a diatomic gas has a 5/2 factor while a monatomic gas has a 3/2 factor). Looking at these two equations, we can also conclude that the internal energy (ΔU) reflects only the kinetic energy of the gas molecules (movement); nowhere in these two equations does the potential energy appear. A Monatomic Ideal Gas A Monatomic Ideal Gas Equation: $\Delta{U} = \frac{3}{2}nR\Delta{T}$ A monatomic gas has a total of three translational kinetic energy modes (hence, the 3/2). A Diatomic Ideal Gas A Diatomic Ideal Gas Equation: $\Delta{U} = \frac{5}{2}nR\Delta{T}$ A diatomic gas has a total of three translational kinetic energy modes and two rotational energy modes (hence, the 5/2). Work in Ideal Gases In relation to the first law of thermodynamics, we can see that by adding heat (Q) or work (W) the internal energy of the gaseous system can be increased. During compression of the system, the volume of the gas will decrease and, in response, its temperature will increase, and thus the internal energy of the system will also increase, since temperature is related to energy. This is true except in an isothermal system (which we'll talk more about later). That is why when a gas is compressed the work is positive, and when it expands the work is negative. It may also be good to know that the area under the curve is the work. If you have taken calculus, you may remember the integral, as it is used to find the area under a curve (or graph) as shown below. $W = -\displaystyle \int P dV$ OR W = - (area under curve) In this case, you can literally take the area of the triangle or work with integrals.
Work = Area = (1/2)base x height or Work = ∫F(x) dx The Heat Capacity and State Functions When certain state functions (P, V, T) are held constant, the specific heat of the gas is affected. Below is the relation between the heat capacity of an ideal gas at constant pressure and at constant volume: $c_p = c_v + R$ When this formula is rearranged, we get the heat capacity of the gas when its volume is held constant: $c_v = c_p - R$ Types of Ideal Gas Processes There are four types of thermodynamic processes. What this basically means is that in a system, one or more variables is held constant. To keep things simple, below are examples of what keeping a certain variable in a system constant can lead to. Isobaric •This is a process where the pressure of the system is kept constant. •ΔP = 0 •An example of this would be when water is boiling in a pot over a burner. In this case, heat is being exchanged between the burner and pot but the pressure stays constant. To derive this process we start off by using what we know, and that is the first law of thermodynamics: $\Delta{U} = Q + W$ Rearranging this equation a bit we get: $Q = \Delta{U} + W$ Next, since the work done by the gas at constant pressure is $p\Delta V$, this can be written as: $Q = \Delta{U} + p\Delta{V}$ Now, the ideal gas law can be applied ($p\Delta V=nR\Delta{T}$) and, since pressure is constant: $Q = \Delta U + nR\Delta{T}$ For the next step, we will assume that the number of moles of gas stays constant throughout this process: $Q = nc_V \Delta{T} + nR\Delta{T}$ Simplifying the equation some more by factoring $n\Delta T$ out of both terms, we get: $Q = n(c_V + R) \Delta{T}$ Knowing that $c_P = c_V + R$, we can substitute $c_P$: $Q = nc_P \Delta{T}$ Now we have the equation for an isobaric process! Isochoric •This is a process where the volume of the system is kept constant. •ΔV = 0 •An example of this would be when you have helium gas sealed up in a container and there is an object (like a piston) pushing down on the container (exerting pressure).
However, gas molecules are neither entering nor exiting the system. Let's find the equation for this process; as before, let's start off with the first law of thermodynamics: $\Delta{U} = Q + W$ In this case, since the volume is constant, ΔV = 0 and no work is done ($W = -p\Delta{V} = 0$): $Q = \Delta{U}$ Since the heat transferred equals the change in internal energy of the system, we can replace ΔU with the constant-volume expression for heat: $Q = nC_V\Delta{T}$ Above is the ideal gas equation for an isochoric process!

Figure: Isochoric Process in Graphical Form

Isothermal

•This is a process where the temperature of the system is kept constant. ΔU = 0, ΔT = 0
•When volume increases, the pressure will decrease, and vice versa. If ΔT = 0 then: V ↑ and P ↓ OR V ↓ and P ↑ (inverse relationship)
•As an example, gas molecules are sealed up in a container, but an object on top of the container (such as a piston) pushes down on it so slowly that the temperature does not change.

To derive the equation for an isothermal process we must first write out the first law of thermodynamics: $\Delta{U} = Q + W$ Since ΔT = 0, we have ΔU = 0, and we are left with only heat and work: $Q = -W$ For a pressure that changes during the process, the work is given by the integral $W = -\displaystyle \int p\, dV$ Substituting the ideal gas law, $p = \frac{nRT}{V}$, and integrating (this is where some calculus comes in) we get: $Q = nRT \ln \frac{V_f}{V_i}$ And there you go! The equation for an isothermal process.

Adiabatic

•This is a process where no heat is being added to or removed from the system.
•Or it can be simply stated as: no heat transfer (or heat flow) happens in the system.
•In freshman chemistry, only the basic idea of this process is needed, and that is when there is no heat transfer, Q = 0.

Problems for Practice

Problem 1

The volume of a gas in a container expanded from 1 L to 3 L upon releasing the piston upward.
From the following graph, find the amount of work associated with the expansion of the gas. $f(x) = x + 3$ from [1, 3]

Problem 2

Calvin is observing an unknown monatomic gas (sealed inside a container) in his freshman chemistry lab. He has been told by his lab instructor that there are four moles of this unidentified gas in the container. The laboratory room's temperature was initially set at room temperature when he started the lab, but by the time he was almost finished the temperature had gone up 10°C. What is the change in internal energy of this unknown gaseous substance by the time Calvin's lab session ended?

Problem 3

In an isochoric system, three moles of hydrogen gas are trapped inside an enclosed container with a piston on top of it. The heat transferred by the gaseous system is 65 Joules, and the temperature of the system changed from 25°C to 19°C. What is the molar heat capacity of the gas?

Problem 4

While looking over some of her lab data, a chemistry student notices she forgot to record the number of moles of the gas she was studying. The experiment she did that day kept the pressure constant, the temperature went down two degrees over the course of the experiment, and 110 J of heat was released. The gas's constant-pressure heat capacity is $c_P = 14\frac{J}{K\, mol}$. Assuming all variables are also ideal, how many moles was she dealing with?

Problem 5

A chemistry student is looking at 5.00 grams of monatomic helium gas put into a container that expands from 10 L to 13 L. The container is kept at a constant temperature of 30°C in an enclosed system. (a) How much heat is transferred to the system? (b) Is the pressure of the system increasing, decreasing, or not affected?

Solution to Problem 1

As stated above, the magnitude of the work is the same as the area under the curve. In this case we can look for shapes whose areas are easy to find. Here, the region under the line from 1 to 3 is a trapezoid, with parallel sides f(1) = 4 and f(3) = 6 and width 3 − 1 = 2.

Work = Area = (1/2)(f(1) + f(3))(width) = (1/2)(4 + 6)(2) = 10

The magnitude of the work is 10 Joules; since the gas expands, this is work done by the gas on the surroundings (W = −10 J in the sign convention above).

Solution to Problem 2

From the problem, we know that the unknown gaseous substance is monatomic, so we use the equation for a monatomic gas: $\Delta{U} = \frac{3}{2}nR\Delta{T}$ Also from the problem, we know that the substance started at room temperature, which is 25°C, but that was just extra information; we didn't need it to solve the problem. Since the temperature change from start to finish was +10°C, and a temperature change of 1°C is the same as a change of 1 K: $\Delta{T} = +10K$ The problem also gives us the number of moles in the container: $n = 4 mol$ For the R value, we can choose any R constant, but to make this problem a little easier we will choose an R constant that cancels out all units except joules: $R = 8.3145 \frac{J}{mol K}$ Plugging all these values into the equation we have: $\Delta{U} = \frac{3}{2}(4 mol)(8.3145 \frac{J}{mol K})(10K)$ $= 499 J$

Solution to Problem 3

Since the system is isochoric, there is zero change in volume, therefore: $\Delta{V} = 0$ And the equation for an isochoric system is: $Q = nC_V\Delta{T}$ From the problem we know the following: $n=3mol$ $\Delta{T}=(25-19)K=6K$ (since we only need the difference between the starting point and the final point, we do not need to convert to kelvin) $Q=65J$ Before plugging the values in, let's rearrange the isochoric equation and solve for what we are trying to find: $C_V = \frac{Q}{n\Delta{T}}$ Now, we can plug the values in: $C_V = \frac{65J}{(3mol)(6K)}$ $= 3.6\frac{J}{mol\, K}$

Solution to Problem 4

From the problem we are given the following (working with magnitudes): $\Delta{T} = 2K$ $Q = 110 J$ Since we are given that the pressure of the gas was kept constant, with $C_p = 14\frac{J}{K mol}$, we have enough evidence to conclude that this is an isobaric process: $Q = n c_P \Delta{T}$ Rearranging the equation to solve for what we are trying to find we get: $n = \frac{Q}{c_P \Delta{T}}$ Now we can plug in our values: $n = \frac{110 J}{(14\frac{J}{K mol})(2K)}$ $\approx 4\ mol$ of gas

Solution to Problem 5

A) First, let's list what is given in the problem: $V_i = 10 L$ $V_f = 13 L$ Converting the temperature from Celsius to Kelvin we have: $T = (30+273.15)K = 303.15K$ And since the temperature is constant, this is an isothermal process: $Q = nRT \ln \frac{V_f}{V_i}$ In order to use this equation we need to convert the mass of helium gas to moles: $n = \frac{grams}{\frac{grams}{mole}}$ $= \frac{5.00 g\, He}{\frac{4.00 g\, He}{mol\, He}}$ $= 1.25 mol\, He$ We will choose a universal R constant that easily cancels out all the units, so we get the answer without any other conversions: $R = 8.31451\frac{J}{K mol}$ Plugging all these values into the isothermal equation we get: $Q = (1.25 mol)(8.31451 \frac{J}{K mol})(303.15 K) \ln \frac{13 L}{10 L}$ $= 827 J$

B) In an isothermal process, volume and pressure have an inverse relationship: if one goes up, the other must go down. In this case, the volume went up, so the pressure must have decreased.

Contributors and Attributions

• Joanne Chan (UC Davis); all images on this page were made/taken by Joanne Chan.
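The process equations derived above, and the arithmetic in Solutions 2, 3, and 5, can be double-checked with a short script. This is only a sketch; the helper names are our own.

```python
import math

R = 8.3145  # J/(mol K)

def q_isobaric(n, c_p, dT):        # Q = n c_p ΔT
    return n * c_p * dT

def q_isochoric(n, c_v, dT):       # Q = n c_v ΔT  (w = 0)
    return n * c_v * dT

def q_isothermal(n, T, Vi, Vf):    # Q = nRT ln(Vf/Vi)  (ΔU = 0)
    return n * R * T * math.log(Vf / Vi)

# Solution 2: ΔU = (3/2) n R ΔT for a monatomic gas
print(round(1.5 * 4 * R * 10, 1))        # 498.9 J

# Solution 3: C_v = Q / (n ΔT) in an isochoric process
print(round(65 / (3 * 6), 1))            # 3.6 J/(mol K)

# Solution 5: isothermal expansion of 1.25 mol He at 303.15 K
print(round(q_isothermal(1.25, 303.15, 10, 13)))  # 827 J
```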
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Ideal_Systems/Ideal_Gas_Processes.txt
When solids, liquids or gases are combined, the thermodynamic quantities of the system experience a change as a result of the mixing. This module will discuss the effect that mixing has on a solution’s Gibbs energy, enthalpy, and entropy, with a specific focus on the mixing of two gases. Introduction A solution is created when two or more components mix homogeneously to form a single phase. Studying solutions is important because most chemical and biological life processes occur in systems with multiple components. Understanding the thermodynamic behavior of mixtures is integral to the study of any system involving either ideal or non-ideal solutions because it provides valuable information on the molecular properties of the system. Most real gases behave like ideal gases at standard temperature and pressure. This allows us to combine our knowledge of ideal systems and solutions with standard state thermodynamics in order to derive a set of equations that quantitatively describe the effect that mixing has on a given gas-phase solution’s thermodynamic quantities. Gibbs Free Energy of Mixing Unlike the extensive properties of a one-component system, which rely only on the amount of the system present, the extensive properties of a solution depend on its temperature, pressure and composition. This means that a mixture must be described in terms of the partial molar quantities of its components. The total Gibbs free energy of a two-component solution is given by the expression $G=n_1\overline{G}_1+n_2\overline{G} _2 \label{1}$ where • $G$ is the total Gibbs energy of the system, • $n_i$ is the number of moles of component i,and • $\overline{G}_i$ is the partial molar Gibbs energy of component i. The molar Gibbs energy of an ideal gas can be found using the equation $\overline{G}=\overline{G}^\circ+RT\ln \frac{P}{1 bar} \label{2}$ where $\overline{G}^\circ$ is the standard molar Gibbs energy of the gas at 1 bar, and P is the pressure of the system. 
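As a quick numerical illustration of Equation 2 (hypothetical numbers; only differences in $\overline{G}$ are meaningful here, so the standard-state term cancels):

```python
import math

# Sketch of Eq. 2: pressure dependence of the molar Gibbs energy of an
# ideal gas, G = G° + RT ln(P / 1 bar). Doubling the pressure at 298 K
# raises the molar Gibbs energy by RT ln 2.
R = 8.314  # J/(mol K)

def delta_G_molar(T, P1, P2):
    """Change in molar Gibbs energy on going from P1 to P2 (both in bar)."""
    return R * T * math.log(P2 / P1)

print(round(delta_G_molar(298, 1.0, 2.0)))  # 1717 J/mol
```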
In a mixture of ideal gases, we find that the system's partial molar Gibbs energy is equivalent to its chemical potential, or that $\overline{G}_i=\mu_i \label{3}$ This means that for a solution of ideal gases, Equation $\ref{2}$ can become $\overline{G}_i=\mu_i=\mu^\circ_i+RT \ln \frac{P_i}{1 bar} \label{4}$ where • µi is the chemical potential of the ith component, • µi° is the standard chemical potential of component i at 1 bar, and • Pi is the partial pressure of component i. Now suppose we have two gases at the same temperature and pressure, gas 1 and gas 2. The Gibbs energy of the system before the gases are mixed is given by Equation $\ref{1}$, which can be combined with Equation $\ref{4}$ to give the expression $G_{initial}=n_1(\mu^\circ_1+RT \ln P)+n_2(\mu^\circ_2+RT \ln P) \label{5}$ If gas 1 and gas 2 are then mixed together, they will each exert a partial pressure on the total system, $P_1$ and $P_2$, so that $P_1+ P_2= P$. This means that the Gibbs energy of the final solution can be found using the equation $G_{final}=n_1(\mu^\circ_1+RT \ln P_1)+n_2(\mu^\circ_2+RT \ln P_2) \label{6}$ The Gibbs energy of mixing, $Δ_{mix}G$, can then be found by subtracting $G_{initial}$ from $G_{final}$. \begin{align} Δ_{mix}G &= G_{final} - G_{initial}\\[4pt] &=n_1RT \ln \frac{P_1}{P}+n_2RT \ln \frac{P_2}{P} \\[4pt] &=n_1 RT \ln \chi_1+n_2 RT \ln \chi_2 \label{7} \end{align} where $P_i = \chi_iP$ and $\chi_i$ is the mole fraction of gas $i$. This equation can be simplified further by knowing that the mole fraction of a component is equal to the number of moles of that component over the total moles of the system, or $\chi_i = \dfrac{n_i}{n}.$ Equation \ref{7} then becomes $\Delta_{mix} G=nRT(\chi_1 \ln \chi_1 + \chi_2 \ln \chi_2) \label{8}$ This expression gives us the effect that mixing has on the Gibbs free energy of a solution. Since $\chi_1$ and $\chi_2$ are mole fractions that range from 0 to 1, we can conclude that $Δ_{mix}G$ will be a negative number.
This is consistent with the idea that gases mix spontaneously at constant pressure and temperature. Entropy of mixing Figure $1$ shows that when two gases mix, it can really be seen as two gases expanding into twice their original volume. This greatly increases the number of available microstates, and so we would therefore expect the entropy of the system to increase as well. Thermodynamic studies of the dependence of an ideal gas's Gibbs free energy on temperature have shown that $\left( \dfrac {d G} {d T} \right )_P=-S \label{9}$ This means that differentiating Equation $\ref{8}$ at constant pressure with respect to temperature will give an expression for the effect that mixing has on the entropy of a solution. We see that \begin{align} \left( \dfrac {d\, \Delta_{mix}G} {d T} \right)_P &=nR(x_1 \ln x_1+x_2 \ln x_2) \\[4pt] &=-\Delta_{mix} S \end{align} $\Delta_{mix} S=-nR(x_1 \ln x_1+x_2 \ln x_2) \label{10}$ Since the mole fractions again lead to negative values for ln x1 and ln x2, the negative sign in front of the equation makes ΔmixS positive, as expected. This agrees with the idea that mixing is a spontaneous process. Enthalpy of mixing We know that in an ideal system $\Delta G= \Delta H-T \Delta S$, but this equation can also be applied to the thermodynamics of mixing and solved for the enthalpy of mixing so that it reads $\Delta_{mix} H=\Delta_{mix} G+T\Delta_{mix} S \label{11}$ Plugging in our expressions for $Δ_{mix}G$ (Equation $\ref{8}$) and $Δ_{mix}S$ (Equation $\ref{10}$), we get $\Delta_{mix} H=nRT(x_1 \ln x_1+x_2 \ln x_2)+T \left[-nR(x_1 \ln x_1+x_2 \ln x_2) \right] = 0$ This result makes sense when considering the system. The molecules of ideal gas are spread out enough that they do not interact with one another when mixed, which implies that no heat is absorbed or produced and results in a $Δ_{mix}H$ of zero.
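The three mixing quantities can be computed together in a short sketch (the function name is our own). With 2 mol of N2 and 3 mol of O2 at 298 K it reproduces the entropy value given for Problem 2 below and confirms that the enthalpy of mixing vanishes.

```python
import math

R = 8.314  # J/(mol K)

def mixing_quantities(n1, n2, T):
    """Δ_mix G, Δ_mix S, Δ_mix H for two ideal gases (Eqs. 8, 10, 11)."""
    n = n1 + n2
    x1, x2 = n1 / n, n2 / n
    s = x1 * math.log(x1) + x2 * math.log(x2)   # always negative
    dG = n * R * T * s          # Eq. 8: always negative (spontaneous)
    dS = -n * R * s             # Eq. 10: always positive
    dH = dG + T * dS            # Eq. 11: identically zero
    return dG, dS, dH

# 2 mol N2 + 3 mol O2 at 298 K:
dG, dS, dH = mixing_quantities(2, 3, 298)
print(round(dS, 2))       # 27.98 J/K
print(abs(dH) < 1e-8)     # True: no heat of mixing for ideal gases
```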
Figure $2$ illustrates how $TΔ_{mix}S$ and $Δ_{mix}G$ change as a function of the mole fraction, so that $Δ_{mix}H$ of a solution will always be equal to zero (this is for the mixing of two ideal gases). Outside Links • Satter, S. (2000). Thermodynamics of Mixing Real Gases. J. Chem. Educ. 77, 1361-1365. • Brandani, V., Evangelista, F. (1987). Correlation and prediction of enthalpies of mixing for systems containing alcohols with UNIQUAC associated-solution theory. Ind. Eng. Chem. Res. 26 (12), 2423–2430. Problems 1. Use Figure 2 to find the x1 that has the largest impact on the thermodynamic quantities of the final solution. Explain why this is true. 2. Calculate the effect that mixing 2 moles of nitrogen and 3 moles of oxygen has on the entropy of the final solution. 3. Another way to find the entropy of a system is using the equation ΔS = nRln(V2/V1). Use this equation and the fact that volume is directly proportional to the number of moles of gas at constant temperature and pressure to derive the final expression for $T\Delta_{mix}S$. (Hint: Use the derivation of $T\Delta_{mix}G$ as a guide). Answers 1. x1 = 0.5 2. Mixing increases the entropy of the system by 27.98 J/K Contributors and Attributions • Elizabeth Billquist (Hope College)
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Ideal_Systems/Thermodynamics_of_Mixing.txt
• Real Gases - Joule-Thomson Expansion The Joule-Thomson effect is also known as the Joule-Kelvin effect. This effect is present in non-ideal gases, where a change in temperature occurs upon expansion. • Salting Out Salting out is a purification method that utilizes the reduced solubility of certain molecules in a solution of very high ionic strength. Salting out is typically, but not limited to, the precipitation of large biomolecules such as proteins. Real (Non-Ideal) Systems The Joule-Thomson effect is also known as the Joule-Kelvin effect. This effect is present in non-ideal gases, where a change in temperature occurs upon expansion. Introduction The Joule-Thomson coefficient is given by $\mu_{\mathrm JT} = \left. \dfrac{\partial T}{\partial p} \right\vert_H$ where • T is the temperature, • p is the pressure and • H is the enthalpy. In terms of heat capacities one has $\mu_{\mathrm J} C_V = -\left. \dfrac{\partial E}{\partial V} \right \vert_T$ (where $\mu_{\mathrm J} = \left. \dfrac{\partial T}{\partial V} \right\vert_E$ is the closely related Joule coefficient) and $\mu_{\mathrm JT} C_p = -\left. \dfrac{\partial H}{\partial p} \right \vert_T$ In terms of the second virial coefficient, in the zero-pressure limit one has $\mu_{\mathrm JT}\vert_{p=0} = \dfrac{1}{C_p}\left( T \dfrac{dB_2(T)}{dT} - B_2(T) \right)$ Salting Out Salting out is a purification method that utilizes the reduced solubility of certain molecules in a solution of very high ionic strength. Salting out is typically, but not limited to, the precipitation of large biomolecules such as proteins. In contrast to salting in, salting out occurs in aqueous solutions of high ionic strength that reduce the molecule's solubility, causing certain proteins to precipitate. Ideally, the type of salt being used and the concentration of the salt can be varied to selectively precipitate the molecule of interest. In reality, salting out is an effective means of initial molecule purification, but lacks the ability to precisely isolate a specific protein.
The Mechanism Behind Salting Out The conformation of large biomolecules in vivo is typically controlled by hydrophobic and hydrophilic interactions with the cellular environment. These interactions largely govern the molecule's final conformation: it folds in such a way that most hydrophobic functional groups are shielded from the polar cellular environment. To achieve this conformation, the molecule folds so that the hydrophobic parts of the molecule aggregate together and the hydrophilic groups are left to interact with the water. In the case of proteins, it is the charged amino acids that allow selective salting out to occur. Charged and polar amino acids such as glutamate, lysine, and tyrosine require water molecules to surround them to remain dissolved. In an aqueous environment with a high ionic strength, the water molecules surround the charges of the ions and proteins. At a certain ionic strength, the water molecules are no longer able to support the charges of both the ions and the proteins. The result is the precipitation of the least soluble solutes, such as proteins and large organic molecules. The Hofmeister Series Salting out can be a powerful tool to separate classes of proteins that vary in size, charge, and surface area, among other characteristics. One method of controlling the precipitation is to utilize the different effects of various salts and their respective concentrations. A salt's ability to induce selective precipitation depends on many interactions with the water and solutes. Research by Franz Hofmeister in the late 19th century organized various anions and cations by their ability to salt out. The ordering of cations and anions is called the Hofmeister series (1). The cations are arranged as follows $\ce{NH4^{+}> K^{+}> Na^{+} >Li^{+} >Mg^{2+} >Ca^{2+}} \nonumber$ where ammonium has the highest ability to precipitate proteinaceous solutes.
Likewise, the order for anions is $\ce{F^{-} ≥ SO4^{2-}> H2PO4^{-}> H3CCOO^{-}> Cl^{-}> NO3^{-}> Br^{-}> ClO3^{-}> I^{-}>ClO^{-}} \nonumber$ Of the cations and anions in solution, the concentration of the anion typically has the greatest effect on protein precipitation. One of the most commonly used salts is ammonium sulfate, which is typically used because the ions it produces in an aqueous solution are very high on the Hofmeister series and their interaction with the protein itself is relatively low. Other ions such as iodide are very good at precipitating proteins, but are not used due to their propensity to denature or modify the protein. Salting out relies on changes in solubility based on ionic strength. The ionic strength of a solution, I, is defined as $I =\dfrac{1}{2} \sum_i \ m_i {z_i}^{2} \label{1}$ where • $m_i$ is the concentration of the ion and • $z_i$ is the charge of the ion. The total ionic strength of a solution of multiple ions is the sum of the ionic strengths of all of the ions. Using the Debye-Hückel limiting law given by $\log \gamma_\pm = -\dfrac{1.824 \times 10^6} { \left( \epsilon T \right)^{3/2}} | z_+ z_- | \sqrt I \label{2}$ where • $I$ is the ionic strength • $z_+$ is the cationic charge of the electrolyte for $\gamma_\pm$ • $z_-$ is the anionic charge of the electrolyte for $\gamma_\pm$ • $\gamma$ is the mean ionic activity coefficient • $T$ is the temperature of the electrolyte solution • $\epsilon$ is the relative dielectric constant for the solution which can be adapted for an aqueous solution at 298 K, $\log \gamma_\pm = - 0.509 | z_+ z_- | \sqrt I \label{3}$ the solubility, $S$, of a particular aqueous solute can be defined as $\log \dfrac{S}{S_o}\ = 0.509 | z_+ z_- | \sqrt I -K' I \label{4}$ where • $K'$ is a constant dependent on the size of the solute and the ions present, • $S$ is the solubility and • $S_0$ is the solubility in pure solvent. The first (Debye-Hückel) term raises the solubility at low ionic strength (salting in), while the $-K'I$ term dominates at high ionic strength (salting out).
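Equations 1 and 3 are easy to sketch in code. The concentrations below are illustrative assumptions, not data from the text.

```python
import math

def ionic_strength(ions):
    """Eq. 1: I = (1/2) Σ m_i z_i², with ions given as (concentration, charge) pairs."""
    return 0.5 * sum(m * z ** 2 for m, z in ions)

# Hypothetical mixture: 0.1 M NaCl plus 0.05 M ammonium sulfate
# → Na+ 0.1 M, Cl- 0.1 M, NH4+ 0.1 M, SO4^2- 0.05 M
I = ionic_strength([(0.1, +1), (0.1, -1), (0.1, +1), (0.05, -2)])
print(round(I, 3))  # 0.25

def log_gamma_pm(z_plus, z_minus, I):
    """Eq. 3: aqueous Debye-Hückel limiting law at 298 K (log base 10)."""
    return -0.509 * abs(z_plus * z_minus) * math.sqrt(I)

print(log_gamma_pm(1, -1, 0.25))  # -0.2545 for a 1:1 electrolyte
```

Note how the doubly charged sulfate ion contributes four times its concentration to the ionic strength, which is why sulfate salts are so effective per mole.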
Example $1$ A common salt used in protein precipitation is ammonium sulfate. Calculate the ionic strength when 4 g of ammonium sulfate is added to 1230 mL of 0.1 M NaCl. Solution 4 g of (NH4)2SO4 (132.14 g/mol) is 0.0303 mol; in 1.230 L this is 0.0246 M. The salt contributes $I = \frac{1}{2}[2(0.0246)(1)^2 + (0.0246)(2)^2] = 0.074\, M$, and the NaCl contributes 0.1 M, for a total ionic strength of about 0.17 M. Example $2$ Which of the following polypeptides would likely precipitate first at a pH of 4: AAVKI or DDEKVK? Solution Although ionic strength matters, one cannot forget that normal solubility rules still hold, and the polypeptide AAVKI would likely precipitate first given its almost completely non-polar nature. Example $3$ Since protein precipitation depends on which salt is used, which of the following salts would precipitate protein at the lowest concentration of the salt solution? 1. LiI 2. NaBr 3. K2SO4 Solution (c) K2SO4: of the anions listed, sulfate is highest on the Hofmeister series. Example $4$ Protein Y was just discovered by scientists at a national lab. The scientists managed to purify the protein, with some precipitate in their flask. However, their yields were very low, so to solve this problem they tried to extract the rest of the protein from the solution. They added 58 g of NaCl to 1 L of their protein solution in order to salt out protein Y. After the addition of the NaCl, they noticed that the solution no longer contained some of the previously precipitated protein. What is the reason for the disappearance of the precipitate? Solution The ionic strength of the solution after the addition of 58 g of NaCl was about 1. In this case the ionic strength was in the region where salting in occurs. Hence the disappearance of the precipitate.
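Equation 4 and the ionic-strength estimate used in Example 4 can be sketched as follows. The value of $K'$ here is an illustrative assumption (it depends on the particular protein and salt), so the curve is qualitative only.

```python
import math

def rel_solubility(I, z_plus=1, z_minus=-1, K=0.30):
    """Eq. 4: S/S0 = 10^(0.509|z+z-|sqrt(I) - K'I). K' = 0.30 is a made-up value."""
    log_ratio = 0.509 * abs(z_plus * z_minus) * math.sqrt(I) - K * I
    return 10 ** log_ratio

# Ionic strength after Example 4's salt addition: 58 g NaCl in 1 L
c = 58 / 58.44                  # mol/L of NaCl
I_nacl = 0.5 * (c * 1 + c * 1)  # equals c for a 1:1 salt
print(round(I_nacl, 2))  # 0.99, i.e. about 1, as used in the solution above

print(rel_solubility(0.05) > 1)  # True: salting in at low ionic strength
print(rel_solubility(10) < 1)    # True: salting out at high ionic strength
```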
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Real_(Non-Ideal)_Systems/Real_Gases_-_Joule-Thomson_Expansion.txt
• 0th Law of Thermodynamics The Zeroth Law of Thermodynamics states that if two systems are in thermodynamic equilibrium with a third system, the two original systems are in thermal equilibrium with each other. Basically, if system A is in thermal equilibrium with system C and system B is also in thermal equilibrium with system C, system A and system B are in thermal equilibrium with each other. • 1st Law of Thermodynamics The First Law of Thermodynamics states that energy can be converted from one form to another with the interaction of heat, work and internal energy, but it cannot be created nor destroyed, under any circumstances. • 2nd Law of Thermodynamics The Second Law of Thermodynamics states that the state of entropy of the entire universe, as an isolated system, will always increase over time. The second law also states that the changes in the entropy in the universe can never be negative. • 3rd Law of Thermodynamics The 3rd law of thermodynamics will essentially allow us to quantify the absolute magnitude of entropies. It says that when we are considering a totally perfect (100% pure) crystalline structure, at absolute zero (0 Kelvin), it will have no entropy (S). Note that if the structure in question were not totally crystalline, then although it would only have an extremely small disorder (entropy) in space, we could not precisely say it had no entropy. The Four Laws of Thermodynamics The Zeroth Law of Thermodynamics states that if two systems are in thermodynamic equilibrium with a third system, the two original systems are in thermal equilibrium with each other. Basically, if system A is in thermal equilibrium with system C and system B is also in thermal equilibrium with system C, system A and system B are in thermal equilibrium with each other. Introduction Essentially, two systems that are in thermodynamic equilibrium will not exchange any heat. Systems in thermodynamic equilibrium will have the same temperature.
• In 1872 James Clerk Maxwell wrote: "If when two bodies are placed in thermal communication, one of the two bodies loses heat, and the other gains heat, that body which gives out heat is said to have a higher temperature than that which receives heat from it." And, "If when two bodies are placed in thermal communication neither of them loses or gains heat, the two bodies are said to have equal temperature or the same temperature. The two bodies are then said to be in thermal equilibrium." Maxwell also stated, "Bodies whose temperatures are equal to that of the same body have themselves equal temperatures." • In 1897 Max Planck said, "If a body, A, be in thermal equilibrium with two other bodies, B and C, then B and C are in thermal equilibrium with one another." Exercise \(1\) 1 kg of water at 10°C is added to 10 kg of water at 50°C. What is the temperature of the water when it reaches thermal equilibrium? Answer 46.36°C
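The Exercise above reduces to a mass-weighted average of the initial temperatures, since both samples are water and share the same specific heat (which therefore cancels). A one-line sketch (the function name is our own):

```python
# At thermal equilibrium, heat lost by the hot water equals heat gained by
# the cold water: m1*c*(Tf - T1) + m2*c*(Tf - T2) = 0, and c cancels.
def equilibrium_T(m1, T1, m2, T2):
    return (m1 * T1 + m2 * T2) / (m1 + m2)

print(round(equilibrium_T(1, 10, 10, 50), 2))  # 46.36 °C
```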
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/The_Four_Laws_of_Thermodynamics/0th_Law_of_Thermodynamics.txt
To understand and perform any sort of thermodynamic calculation, we must first understand the fundamental laws and concepts of thermodynamics. For example, work and heat are interrelated concepts. Heat is the transfer of thermal energy between two bodies that are at different temperatures, and is not the same thing as thermal energy itself. Work is energy transferred by a force acting between a system and its surroundings. Work and heat together are the means by which a system exchanges energy with its surroundings. The relationship between the two concepts can be analyzed through the topic of thermodynamics, which is the scientific study of the interaction of heat and other types of energy. Introduction To understand the relationship between work and heat, we need to understand a third, linking factor: the change in internal energy. Energy cannot be created nor destroyed, but it can be converted or transferred. Internal energy refers to all the energy within a given system, including the kinetic energy of molecules and the energy stored in all of the chemical bonds between molecules. With the interactions of heat, work and internal energy, there are energy transfers and conversions every time a change is made upon a system. However, no net energy is created or lost during these transfers. The First Law of Thermodynamics The First Law of Thermodynamics states that energy can be converted from one form to another with the interaction of heat, work and internal energy, but it cannot be created nor destroyed, under any circumstances. Mathematically, this is represented as $\Delta U=q + w \label{1}$ where • $ΔU$ is the total change in internal energy of a system, • $q$ is the heat exchanged between a system and its surroundings, and • $w$ is the work done by or on the system.
Work is also equal to the negative of the external pressure on the system multiplied by the change in volume: $w=-p \Delta V \label{2}$ where $p$ is the external pressure on the system, and $ΔV$ is the change in volume. This is specifically called "pressure-volume" work. The internal energy of a system decreases if the system gives off heat or does work. Therefore, the internal energy of a system increases when the heat increases (this would be done by adding heat into the system). The internal energy also increases if work is done on the system. Any work or heat that goes into or out of a system changes the internal energy. However, since energy is never created nor destroyed (thus, the first law of thermodynamics), the combined change in internal energy of the system and its surroundings always equals zero. If energy is lost by the system, then it is absorbed by the surroundings. If energy is absorbed into a system, then that energy was released by the surroundings: $\Delta U_{system} = -\Delta U_{surroundings}$ where ΔUsystem is the total internal energy change of the system, and ΔUsurroundings is the total energy change of the surroundings.

Table 1
Process | Sign of heat (q) | Sign of work (w)
Work done by the system | N/A | -
Work done onto the system | N/A | +
Heat released from the system - exothermic (absorbed by surroundings) | - | N/A
Heat absorbed by the system - endothermic (released by surroundings) | + | N/A

The above figure is a visual example of the First Law of Thermodynamics. The blue cubes represent the system and the yellow circles represent the surroundings around the system. If energy is lost by the cube system then it is gained by the surroundings. Energy is never created nor destroyed. Since the area of the blue cube decreased, the visual area of the yellow circle increased. This symbolizes how energy lost by a system is gained by the surroundings. The effects of different surroundings and changes on a system help determine the increase or decrease of internal energy, heat and work.
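The sign conventions in Equation 1 and Table 1 can be made concrete with a tiny sketch (illustrative numbers only):

```python
# ΔU = q + w, with q > 0 when heat flows into the system and
# w > 0 when work is done ON the system.
def delta_U(q, w):
    return q + w

# A system absorbs 100 J of heat while doing 40 J of work on the
# surroundings (so w = -40 J from the system's point of view):
dU_system = delta_U(100, -40)
print(dU_system)  # 60

# Whatever the system gains, the surroundings lose:
dU_surroundings = -dU_system
print(dU_system + dU_surroundings)  # 0
```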
Table 2
Process | Internal Energy Change (ΔU) | Heat Transfer of Thermal Energy (q) | Work (w = -PΔV) | Example
Adiabatic (q = 0) | + | 0 | + | Insulated system in which heat does not enter or leave, similar to styrofoam
Constant volume (ΔV = 0) | + | + | 0 | A hard, pressure-isolated system like a bomb calorimeter
Constant pressure | + or - | enthalpy (ΔH) | -PΔV | Most processes occur at constant external pressure
Isothermal (ΔT = 0) | 0 | + | - | There is no change in temperature, as in a temperature bath

Example $1$ A gas in a system has constant pressure. The surroundings around the system lose 62 J of heat and do 474 J of work on the system. What is the change in internal energy of the system? Solution To find the change in internal energy, ΔU, we must consider the relationship between the system and the surroundings. Since the First Law of Thermodynamics states that energy is not created nor destroyed, we know that anything lost by the surroundings is gained by the system. The surroundings lose heat and do work on the system. Therefore, q and w are positive in the equation ΔU=q+w because the system gains heat and has work done on it. \begin{align} ΔU &= (62\,J) + (474\,J) \\[4pt] &= 536\,J \end{align} Example $2$ A system has constant volume (ΔV=0) and the heat around the system increases by 45 J. 1. What is the sign of the heat (q) for the system? 2. What is ΔU equal to? 3. What is the value of the internal energy change of the system in Joules? Solution Since the system has constant volume (ΔV=0), the term -PΔV=0 and the work is equal to zero. Thus, in the equation ΔU=q+w, w=0 and ΔU=q. The internal energy change is equal to the heat transferred. The heat of the surroundings increases, so the heat of the system decreases, because heat is not created nor destroyed. Therefore, heat is taken away from the system, making the process exothermic and q negative. The value of the internal energy change will be the negative of the heat absorbed by the surroundings. 1. negative (q<0) 2. ΔU=q + (-PΔV) = q+ 0 = q 3. ΔU = -45J

Outside Links

• Hamby, Marcy. 
"Understanding the language: Problem solving and the first law of thermodynamics." J. Chem. Educ. 1990, 67, 923. Contributors and Attributions • Lauren Boyle (UCD) First Law of Thermodynamics The law of Conservation of Energy refers to an isolated system in which there is no net change in energy and where energy is neither created nor destroyed. Although there is no change in total energy, energy can change forms, for example from potential to kinetic energy. In other words, potential energy (V) and kinetic energy (T) sum to a constant total energy (E) for a specific isolated system. $E = T + V$ Another way that energy can change forms is through heat (q) and work (w). As heat is applied to a closed system, the system can do work by increasing its volume: $w = P_{ext}\Delta{V}$ where $P_{ext}$ is the external pressure, and $\Delta{V}$ is the change in volume. (Here $w$ counts the work done by the system, the opposite sign convention to the $w$ in $\Delta U = q + w$ above.) A classic example of this is a piston. As heat is added to the cylinder, the pressure inside the cylinder increases. The piston then rises to relieve the pressure difference between the pressure inside the cylinder and the external pressure. By increasing the volume of the cylinder, the gas has done work. Reference the picture below. The sum of heat and work is the change in internal energy, $\Delta{U}$. In an isolated system, $q = -w$. Therefore, $\Delta{U} = 0$. In quantum mechanics, the analogous statement is the Schrödinger equation $\hat{H}\psi_n = E_n\psi_n$ where • $E_n$ is the energy corresponding to the wave function $\psi_n$ • $V$ is the potential energy • $\hat{H}$ is the Hamiltonian operator, the sum of the kinetic and potential energy operators. The equation is analogous to the classical relation $E = T + V$. • Vanessa Chan
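The piston's pressure-volume work can be sketched with the common L·atm to joule conversion (1 L·atm ≈ 101.325 J). The numbers are illustrative assumptions:

```python
# Work done BY an expanding gas against a constant external pressure,
# w_by = P_ext * ΔV, converted from L·atm to joules.
L_ATM_TO_J = 101.325  # 1 L·atm in joules

P_ext = 1.0   # atm
dV = 2.5      # L (expansion)
w_by_system = P_ext * dV * L_ATM_TO_J
print(round(w_by_system, 1))  # 253.3 J

# In the ΔU = q + w convention used earlier, the work done ON the system is
w_on_system = -w_by_system
```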
textbooks/chem/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/The_Four_Laws_of_Thermodynamics/First_Law_of_Thermodynamics/Conservation_of_Energy.txt