• 21.1: Introduction to the Study of Surfaces
Thus far we have considered methods for analyzing the bulk properties of samples, such as determining the identity or concentration of an ion in a solution, of a molecule in a gas, or of several elements in a solid. In doing so, we did not concern ourselves with the sample's homogeneity or heterogeneity, which may vary along any of the x-, y-, or z-axes. In this chapter we give consideration to the composition of a sample's surface.
• 21.2: Spectroscopic Surface Methods
In this section we consider three representative surface analytical methods: X-ray photoelectron spectroscopy, in which the input is a beam of X-ray photons and the output is electrons; Auger electron spectroscopy, in which the input is either a beam of electrons or of X-ray photons and the output is electrons; and secondary-ion mass spectrometry, in which the input is a beam of ions and the output is ions.
• 21.3: Scanning Electron Microscopy
In scanning electron microscopy we raster a beam of high-energy electrons over a surface using a two-dimensional grid, achieving a resolution limit of approximately 0.2 nm, or approximately 1000× better than an optical microscope.
• 21.4: Scanning Probe Microscopes
In the last section we considered how we can image a surface using an electron beam. In this section we consider a very different approach to developing an image of a surface, one in which we bring a probe close to the surface and examine how the probe interacts with the surface. One advantage of this approach is that the interaction between the probe and the surface can include attraction and repulsion, which opens up vertical movement as a third dimension to the image.

21: Surface Characterization by Spectroscopy and Microscopy

Thus far we have considered methods for analyzing the bulk properties of samples, such as determining the identity or concentration of an ion in a solution, of a molecule in a gas, or of several elements in a solid. In doing so, we did not concern ourselves with the sample's homogeneity or heterogeneity. In this chapter we give consideration to how we can gather information about the composition of a sample's surface and how it differs from the sample's bulk composition. But first, let's consider several important questions.

What Is A Surface?

A surface is a boundary, or interface, between two phases, such as a solid and a gas (the type of interface of particular interest to us in this chapter). This is a helpful, but not sufficient, description. Also of interest is the question of depth. Is a surface just the outermost layer of atoms, ions, or molecules, or does it extend several layers into the sample? In what ways might the composition of a sample at the surface differ from its composition in the sample's bulk interior? And what about variations in composition across a surface? Is the surface itself homogeneous or heterogeneous in its composition? Different analytical methods sample the surface to different depths and over different surface areas, which means the volume of sample analyzed varies from method to method. For this reason, we usually define a sample's surface as what is analyzed by the analytical method we are using.

Why Are Surfaces of Interest?

Figure $1$ shows the crystal structure of AgCl(s), which consists of a repeating pattern of Ag+ ions and Cl− ions. If you look at the ions in the interior of the structure, you will see that each Ag+ ion is surrounded by six Cl− ions, and each Cl− ion is surrounded by six Ag+ ions.
On the surface, however, we see that the Cl− ions and Ag+ ions no longer are surrounded by six ions of opposite charge. As a result, the Ag+ ions and Cl− ions on the surface are more chemically reactive than those in the interior and can serve as sites for interesting chemistry. The chemical and physical properties of a sample's surface are likely to be very different from the sample's bulk properties.

What Challenges Does a Surface Present?

Suppose we are interested in studying the surface of a piece of zinc metal using a probe that samples just the outermost layer of atoms and that samples a circular surface area of 1 µm². How many atoms of Zn might we expect our probe to encounter? Here is some useful information about zinc: it has a molar mass of 65.38 g/mol, it has a density of 7.14 g/cm³, and it has an atomic radius of approximately 0.13 nm. From this we calculate the atoms per unit of sampled volume as

$\frac{7.14 \text{ g}}{\text{cm}^{3}} \times \frac{100 \text{ cm}}{\text{m}} \times \frac{1 \text{ m}}{10^9 \text{ nm}} \times \frac{1 \text{ mol}}{65.38 \text{ g}} \times \frac{6.022 \times 10^{23} \text{ atoms}}{\text{mol}} = \frac{6.6 \times 10^{15} \text{ atoms}}{\text{cm}^2 \text{ nm}} \nonumber$

The units in the denominator may look odd to you, but writing them this way emphasizes that we are interested both in the depth from which information is received (given here in nanometers, nm) and in the surface area from which information is received (given here in square centimeters, cm²). Multiplying this value by the thickness of an atomic layer of zinc, which is twice its atomic radius, suggests we are analyzing approximately

$\frac{6.6 \times 10^{15} \text{ atoms}}{\text{cm}^2 \text{ nm}} \times 0.26 \text{ nm} = 1.7 \times 10^{15} \frac{\text{atoms}}{\text{cm}^2} \nonumber$

If we multiply this value by the surface area that we are sampling, then we are interacting with approximately

$1.7 \times 10^{15} \frac{\text{atoms}}{\text{cm}^2} \times \left(\frac{100 \text{ cm}}{\text{m}}\right)^2 \times \left(\frac{1 \text{ m}}{10^6 \text{ µm}} \right)^2 \times 1 \text{ µm}^2 = 1.7 \times 10^7 \text{ atoms of Zn} \nonumber$

Although 17 million may seem like a large number, it is not a particularly large number of atoms on which to carry out an analysis. Now, suppose the surface has a 10 ppm impurity of copper atoms; that is, there are 10 copper atoms for every $10^6$ zinc atoms. In this case, our probe of the sample involves just

$1.7 \times 10^7 \text{ atoms of Zn} \times \frac{10 \text{ atoms of Cu}}{10^6 \text{ atoms of Zn}} = 170 \text{ atoms of Cu} \nonumber$

As a comparison, if we analyze a sample in which the analyte is present at a concentration of $1 \times 10^{-6} \text{ mol/L}$ using an analytical method that gathers information from a volume of just 1 mm³, then we are sampling

$\frac{1 \times 10^{-6} \text{ mol}}{\text{L}} \times \frac{1 \text{ L}}{1000 \text{ cm}^3} \times \left( \frac{1 \text{ cm}}{10 \text{ mm}}\right)^3 \times 1 \text{ mm}^3 \times 6.022 \times 10^{23} \text{ mol}^{-1} = 6.0 \times 10^{11} \text{ particles of analyte} \nonumber$

An additional challenge when we attempt to analyze a surface is that a freshly exposed surface becomes contaminated with an adsorbed layer of gas molecules almost instantly when sitting on a laboratory bench, and in a few seconds to a few minutes at pressures in the range of $10^{-6}$ torr to $10^{-8}$ torr. Analysis of a surface requires careful attention to how the surface is prepared.
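This arithmetic is easy to script. The following Python sketch is a minimal check of the numbers above, using only the density, molar mass, and atomic radius just given.

# Estimate the number of Zn atoms sampled by a probe that sees the
# outermost atomic layer over a 1 µm^2 area (values quoted above).
N_A = 6.022e23          # Avogadro's number, atoms/mol
density = 7.14          # g/cm^3
molar_mass = 65.38      # g/mol
radius_nm = 0.13        # atomic radius of Zn, nm

# atoms per cm^2 of area per nm of depth (1 cm = 1e7 nm)
atoms_per_cm2_nm = density / 1e7 / molar_mass * N_A

# one atomic layer is two atomic radii thick
atoms_per_cm2 = atoms_per_cm2_nm * 2 * radius_nm

# convert to a 1 µm^2 sampling area (1 cm^2 = 1e8 µm^2)
atoms_sampled = atoms_per_cm2 / 1e8

print(f"Zn atoms sampled: {atoms_sampled:.1e}")            # ~1.7e7
print(f"Cu atoms at 10 ppm: {atoms_sampled * 10e-6:.0f}")  # ~170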
What Opportunities Does a Surface Present?

Compared to many of the methods in Chapters 6–20 and in Chapters 22–34, the use of a probe that samples from a small area allows us to move the probe across the surface—a process called rastering—and develop a two-dimensional image of the surface. When using an energetic beam that can etch a hole in the sample, we can obtain information at depth—a process called depth profiling—that provides information in a third dimension. These are particularly important strengths of surface analytical methods.

How Can We Probe the Surface?

To study a surface, we put energy into it in the form of a beam of photons, electrons, or ions and then we measure the energy that exits the surface in the form of a beam of photons, electrons, or ions. Table $1$ shows some of the possibilities. Also included in this table are methods in which an applied field generates a response from the surface. Entries in bold receive attention in this chapter. Surface enhanced Raman spectroscopy received a brief mention in Chapter 18. Note that Auger electron spectroscopy appears twice, as the emission of electrons can follow the input of either X-ray photons or electrons.

Table $1$. Classifying surface analysis methods based on the input energy and the output energy.

photon in, photon out: surface enhanced Raman spectroscopy (SERS); extended X-ray absorption fine structure (EXAFS)
photon in, electron out: X-ray photoelectron spectroscopy (XPS); Auger electron spectroscopy (AES); UV-photoelectron spectroscopy (UPS)
photon in, ion out: laser-microprobe mass spectrometry (LAMMA)
electron in, photon out: energy dispersive X-ray spectroscopy (EDS); electron microprobe (EM)
electron in, electron out: Auger electron spectroscopy (AES); scanning electron microscopy (SEM); low energy electron diffraction (LEED)
ion in, ion out: Rutherford backscattering (RBS); secondary ion mass spectrometry (SIMS)
field in: scanning tunneling microscopy (STM); atomic force microscopy (AFM)

Note

There are other ways to probe a surface by putting energy into it, including the application of thermal energy and the use of neutral species. See the text Methods of Surface Analysis, Czanderna, A., Editor, Elsevier: Amsterdam (1975) and the article "Analytical Chemistry of Surfaces" by D. M. Hercules and S. H. Hercules, J. Chem. Educ. 1984, 61, 402–409 for detailed reviews. Although neither is a recent publication, both provide an excellent introduction to surface analysis.
21.2: Spectroscopic Surface Methods
In this section we consider three representative surface analytical methods: X-ray photoelectron spectroscopy, in which the input is a beam of X-ray photons and the output is electrons; Auger electron spectroscopy, in which the input is either a beam of electrons or of X-ray photons and the output is electrons; and secondary-ion mass spectrometry, in which the input is a beam of ions and the output is ions.

X-Ray Photoelectron Spectroscopy (XPS)

In X-ray photoelectron spectroscopy, which is also known as electron spectroscopy for chemical analysis (ESCA), we measure the kinetic energy of electrons ejected from a sample following the absorption of X-ray photons. The resulting spectrum is a count of these emitted electrons as a function of their energy.

Principles of XPS

We can explain the origin of X-ray photoelectron spectroscopy using the photoelectric effect. Figure $1a$ shows the energy level diagram for an element's 1s, 2s, and 2p core-level electrons along with their KLM designations (see Chapter 12.1 for a previous discussion of this way of designating electrons). A nearly monoenergetic X-ray beam of known energy is focused on the sample, which results in the ejection of a core-level photoelectron, as shown in Figure $1b$. The kinetic energy of this emitted electron, $E_{KE}$, is related to its binding energy to the nucleus, $E_{BE}$, by the equation

$E_{KE} = h \nu - E_{BE} - \Phi_w \label{xps1}$

where $h \nu$ is the energy of the X-ray photon and $\Phi_w$ is the spectrometer's work function (the energy needed to remove the electron from the surface and into the vacuum). The most common sources of X-rays are the Mg $K_{\alpha}$ line with an energy of 1253.6 eV and the Al $K_{\alpha}$ line with an energy of 1486.6 eV.

The core-level vacancy created by the photoelectron leaves the atom with an unstable electron configuration. Relaxation to the ground state occurs when this vacancy is filled by an electron from a higher energy shell, with the excess energy released as either the emission of a second electron or the fluorescent emission of a characteristic X-ray, as seen in Figure $1c$. The secondary electron in Figure $1c$ is called an Auger electron.

Figure $2$ provides an example of an XPS survey spectrum for aluminum oxide, $\ce{Al2O3}$, using the $K_{\alpha}$ line for aluminum as the source of X-rays. The peak table gives the binding energies of the peaks for aluminum and for oxygen using the $K_{\alpha}$ line for Al and, for comparison, the $K_{\alpha}$ line for Mg. Note the difference in how the major peaks are labeled. The photoelectrons ejected in the process shown in Figure $1b$ are designated by the element and the orbital from which the electron was ejected, as in O1s or Al2p. Auger electrons are designated using the KLM notation, specifying the initial vacancy created by the absorbed photon, the source of the electron that fills that vacancy, and the source of the ejected Auger electron. The OKLL peak in this spectrum is consistent with the scheme shown in Figure $1c$. When the second and third electrons come from the valence shell, the notation is sometimes written as KVV; the OKLL peak here could be designated as OKVV.

There are a few additional interesting features to note in the survey spectrum for Al2O3. One is the presence of a peak for carbon even though the sample, Al2O3, does not contain carbon. The prevalence of carbon in the atmosphere means that trace levels of carbon appear in almost all XPS spectra.
A second feature is the increase in the signal on the high binding energy (low kinetic energy) side of peaks, which is particularly visible here for the O1s peak, the C1s peak, and the Al2s peak. The source of this background is electrons that undergo inelastic collisions before escaping the sample; these electrons lose kinetic energy and, given Equation \ref{xps1}, are recorded as if they have a larger than expected binding energy. Because X-rays penetrate more deeply into the sample than the depth from which electrons can travel without undergoing an inelastic collision, this background is unavoidable. Note that the background is more significant at higher binding energies (smaller kinetic energies).

A third feature is that the binding energy of an XPS peak is independent of the X-ray source, but the binding energy for an Auger peak varies with the X-ray source's energy (see table in Figure $2$). The kinetic energy of the O1s, Al2s, and Al2p photoelectrons is the difference between the energy of the X-ray photon, $h \nu$, and each electron's binding energy, BE; if we change the X-ray source, then $h \nu$ and KE increase in value, but the BE remains fixed. For the OKLL Auger electron, the kinetic energy depends on the difference in the binding energies of the three electrons involved

$\text{KE} \approx \text{BE}_K - \text{BE}_L - \text{BE}_L \label{xps2}$

and is, therefore, independent of the energy of the X-ray source. Given Equation \ref{xps1}, if the KE remains constant, then an increase in the energy of the X-ray photon, $h \nu$, means that the apparent BE must increase. This shift in binding energy when using a different source is one way to identify a peak as resulting from Auger electrons.
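Equations \ref{xps1} and \ref{xps2} make this behavior easy to verify numerically. The following Python sketch ignores the work function and uses illustrative values for the O1s binding energy (~531 eV) and the OKLL kinetic energy (~508 eV); these are assumed, typical magnitudes, not values read from Figure $2$.

# Why an Auger peak's apparent binding energy depends on the X-ray
# source while an XPS peak's does not; work function ignored.
# BE_O1S and KE_OKLL are assumed, illustrative values.
MG_KALPHA = 1253.6   # eV
AL_KALPHA = 1486.6   # eV

BE_O1S = 531.0       # eV, a fixed property of the sample
KE_OKLL = 508.0      # eV, also fixed -- set by three binding energies

for hv in (MG_KALPHA, AL_KALPHA):
    ke_o1s = hv - BE_O1S              # photoelectron KE shifts with hv
    apparent_be_okll = hv - KE_OKLL   # so the apparent Auger BE shifts
    print(f"hv = {hv} eV: O1s KE = {ke_o1s:.1f} eV, "
          f"apparent OKLL BE = {apparent_be_okll:.1f} eV")

With either source the O1s peak stays at 531 eV on the binding energy scale, while the apparent binding energy of the OKLL peak shifts by 233 eV, the difference between the two source energies.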
Instrumentation

The basic instrumentation for XPS is shown in Figure $3$. The most common X-ray sources, as noted above, are Mg (1253.6 eV) and Al (1486.6 eV), which have the advantage of relatively narrow line-widths (0.7 and 0.9 eV, respectively) and, therefore, a relatively narrow range of energies. Higher energy sources are available, such as Ag (2984.4 eV), but at the cost of a wider line-width (2.6 eV). A system of electron lenses collects and focuses the ejected electrons onto the entrance slit of a hemispherical analyzer. The path of an electron through the analyzer depends upon its kinetic energy. By varying the potentials applied to the hemispherical analyzer's inner and outer plates, electrons of different kinetic energies reach the detector. A sputtering gun is an optional feature that can be used to clean the surface of the sample or to remove successive layers of the sample, allowing for the gathering of spectra at various depths within the sample. Calibration of the spectrometer's binding energy scale, which accounts for the spectrometer's work function, is made using specific lines for one or more conductive metals; examples include Au 4f7/2 at 83.95 eV and Cu 2p3/2 at 932.63 eV. The peak for carbon that appears in almost all XPS spectra provides an additional way to check the calibration of the binding energy scale.

Applications

X-ray photoelectron spectroscopy is a particularly useful tool for determining the composition and structure of a sample. XPS also can provide information about how the composition of a sample varies with depth and quantitative information about a sample's components.

Qualitative Analysis. One of the strengths of X-ray photoelectron spectroscopy is the ability to determine the elements that make up a sample's surface. Figure $4$ shows a survey scan from 0 eV to 1100 eV of a ceramic material. In addition to the O1s peak, we see strong peaks for Si and Al—probably an aluminosilicate ceramic—and small peaks for a variety of elements: La, Ba, Mn, Sn, Ca, Cl, P, and Mg. NIST maintains an extensive, and searchable, database of XPS peaks that helps in identifying the elements in a sample.

Chemical Shifts. An element's binding energy is sensitive to its chemical environment, particularly with respect to oxidation state and structure. For example, Table $1$ provides the binding energy for chlorine's 2p line in three potassium salts drawn from the NIST database, with all values from the same literature source. Using KCl, in which chlorine has an oxidation state of –1, as a baseline, the Cl 2p peak for KClO3 is shifted by +8.4 eV and the Cl 2p peak for KClO4 is shifted by +10.3 eV. The direction of the shift makes sense, as we expect that it will require more energy to remove an electron from an element that has a more positive oxidation state.

Table $1$. Binding energies for the Cl 2p XPS peak as a function of chlorine's oxidation state.

KCl: oxidation state for chlorine –1; binding energy for Cl 2p line 198.1 eV
KClO3: oxidation state for chlorine +5; binding energy for Cl 2p line 206.5 eV; ΔBE relative to KCl +8.4 eV
KClO4: oxidation state for chlorine +7; binding energy for Cl 2p line 208.4 eV; ΔBE relative to KCl +10.3 eV

Chemical shifts also reflect changes in chemical structure. Figure $5$, for example, shows a high resolution scan of the oxygen 1s peak for the same sample of aluminum oxide, Al2O3, shown in Figure $2$. The surface of a metal oxide often has three distinct sources of oxygen: oxides, which make up the bulk of the sample; hydroxides that form at the surface following the chemisorption of water; and water that is physically adsorbed to the surface. Curve-fitting shows the contribution of each type of oxygen to the raw data and, through the peak areas, their relative abundance.

Wagner Plots. Both the binding energy of an X-ray photoelectron and the kinetic energy of an Auger electron convey information about the element from which the electrons were emitted. A Wagner plot shows both the binding energy for a photoelectron that leaves a particular core-level vacancy and the kinetic energy of the Auger electron whose origin arises from the filling of this core-level vacancy. Figure $6$ shows an example of a Wagner plot for copper based on its 2p3/2 X-ray photoelectron and its LMM Auger electron. The diagonal lines are called the modified Auger parameter, which is defined as the sum of the XPS binding energy and the AES kinetic energy. Values for 20 compounds are included in this plot. Of interest here is the clustering of the individual compounds. For example, all of the samples for which copper has an oxidation state of +1 (shown as magenta squares) have similar binding energies between 932 eV and 933 eV, but with more variable kinetic energies, which range from 914 eV to 917 eV. Most of the compounds in which copper has an oxidation state of +2 (shown as blue diamonds) have modified Auger parameters between approximately 1850 eV and 1851 eV, although there is significant variation in their individual binding energies and kinetic energies. The two metals (shown as green circles) and the compounds CuS and CuSe occupy a similar space within the Wagner plot. Interestingly, both CuS and CuSe are transition metal chalcogenides and have semiconducting properties.

Information at Depth.
X-rays penetrate into a sample to a depth that is greater than the distance a photoelectron can travel without losing energy to inelastic collisions. We can take advantage of this to vary the depth from which we gather information. Figure $7$ shows how this is accomplished by changing the angle at which we collect and analyze photoelectrons. The length of the solid black line is the distance an electron can travel without losing energy in an inelastic collision. When the detector is placed at 90° to the sample's surface, the sampling depth is at its greatest. Adjusting the detector so that it is at 30° to the surface results in its detecting electrons from a depth that is just half of that when the detector is at 90°, because the sampling depth scales with the sine of the collection angle.

Quantitative Analysis. The intensity of an XPS peak—either its peak height or its peak area—is proportional to the number of atoms of the element responsible for the peak. This allows for determining the relative concentration, $C_x$, of an element in a sample; thus

$C_x = \frac {I_x / S_x} {\sum{(I_i / S_i)} } \label{xps3}$

where $I_x$ is the peak intensity for the element, $S_x$ is the sensitivity factor for the element, and $I_i$ and $S_i$ are the peak intensities and sensitivity factors for all elements in the sample. Sensitivity factors account for differences in the ease with which photoelectrons are produced and escape from the sample. Published tables of sensitivity factors are available, although they may vary somewhat from instrument to instrument. Sensitivity factors are referenced to a standard line, typically C1s, which is assigned a sensitivity factor of 1.00. In many cases we are interested in the relative concentration of just two elements. In this case, we write Equation \ref{xps3} as

$\frac{C_x}{C_y} = \frac{I_x / S_x}{I_y / S_y} \label{xps4}$

where $x$ and $y$ are the two elements. For example, using the data in Figure $4$, the Si2p peak has a height of 30 mm and the Al2p peak has a height of 20 mm. Their sensitivity factors are, respectively, 0.817 and 0.737. Using these values, we find that

$\frac{C_\text{Si}}{C_\text{Al}} = \frac{30/0.817}{20/0.737} = 1.4 \nonumber$

that is, there are approximately $1.4 \times$ as many atoms of silicon as there are atoms of aluminum.
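Equations \ref{xps3} and \ref{xps4} translate directly into a few lines of Python. The sketch below is a minimal implementation using the Si2p and Al2p peak heights and sensitivity factors from the example above; the function name is ours, not part of any instrument's software.

# Relative atomic concentrations from XPS peak intensities and
# sensitivity factors (Equation xps3). Intensities may be peak
# heights or peak areas, as long as they are used consistently.
def relative_concentrations(intensities, sensitivities):
    """Return the fraction of each element, C_x = (I_x/S_x) / sum(I_i/S_i)."""
    corrected = [i / s for i, s in zip(intensities, sensitivities)]
    total = sum(corrected)
    return [c / total for c in corrected]

# values from the Si/Al example above
I = [30, 20]          # peak heights (mm) for Si2p and Al2p
S = [0.817, 0.737]    # sensitivity factors for Si2p and Al2p

c_si, c_al = relative_concentrations(I, S)
print(f"C_Si/C_Al = {c_si / c_al:.1f}")   # ~1.4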
Auger Electron Spectroscopy (AES)

In Figure $1$ we saw that following the ejection of a photoelectron from an atom's core, the now unstable atom releases energy by either emitting a secondary electron from a higher energy orbital or by releasing a photon. In XPS we measure the intensity of the ejected photoelectrons as a function of their binding energy; in Auger spectroscopy we measure the intensity of these secondary electrons as a function of their kinetic energies. The instrumentation for AES can be coupled with an XPS spectrometer or be a stand-alone instrument. In either case, the basic instrumentation is similar to that shown in Figure $3$, although AES spectra are usually initiated using an electron gun as a source instead of an X-ray source. One advantage to using an electron gun is that it can be focused into a smaller beam and can then be easily rastered across a surface, allowing for imaging of the sample's surface. Depth profiling, using an ion beam to remove layers of the sample, is another common use of AES.

Figure $8$ provides an example of a typical AES spectrum, in this case for the mineral calcite, CaCO3. The raw data, on the left, shows the intensity of the signal as a count of electrons with a particular kinetic energy. The large background signal is from electrons that lose kinetic energy as the result of inelastic collisions. The two broad features between approximately 100 eV and 600 eV are the Auger peaks for calcium and for oxygen. The peaks in this case are easy to see because the two analytes are present in bulk. For a trace-level analyte, the Auger peaks in a normal plot may be difficult to see. For this reason, Auger spectra usually are presented by plotting the derivative of the raw data, giving the spectrum on the right.

Figure $9$ provides an example of how AES is used to study changes in the composition of a single crystal of CaCO3 that was allowed to equilibrate with a solution containing Mg2+ ions. The sample was mounted on a sample probe and the AES spectrum recorded. An Ar+ ion beam was used to remove layers of the sample while the spectrometer recorded spectra of the sample. As expected, the surface is enriched in Mg2+ ions that diffused into the crystal, with the relative concentration of Mg2+ decreasing with depth. The relative abundance of Ca2+ increases with depth; the relative concentration of oxygen remains more or less constant with depth.

Secondary-Ion Mass Spectrometry (SIMS)

In SIMS we bombard the surface of a sample with an energetic primary beam—typically 5-20 keV—of ions, such as Ar+, O2+, or Cs+. The primary ion penetrates the sample's surface, ejecting a variety of particles, including neutral atoms, electrons, and, more importantly, secondary ions (singly or multiply charged cations and anions, and clusters of ions). These secondary ions are characterized by their mass-to-charge ratios and by their kinetic energies. The use of a primary ion beam of Cs+ favors the formation of secondary ions that are anions, and a primary beam of O2+ favors the formation of secondary ions that are cations.

The instrumentation for SIMS includes an ion gun for generating the primary beam and a mass spectrometer for analyzing the secondary ions. In static mode, the primary ion beam is run using a low current density that minimizes the extent to which the sample's outermost layers are removed. In dynamic mode, the primary beam is run at a higher current density that removes more of the sample's surface. Dynamic SIMS is well suited to depth profiling. High mass resolution is obtained using a time-of-flight mass analyzer or a double-focusing mass analyzer; see Chapter 20 for a discussion of different types of mass analyzers. SIMS also is well suited for imaging, as the positively charged primary ion beam can be rastered across the sample's surface. Figure $10$ provides an example of imaging a surface using SIMS by measuring the yield of $\ce{^{14}N^{12}C}$ ions while rastering the primary ion beam across a 10 µm $\times$ 10 µm section of the sample.
21.3: Scanning Electron Microscopy
In optical microscopy we use photons to provide images of a sample. Although extraordinarily useful and powerful, the ability to resolve features in optical microscopy is limited by the source of light; in general, we can distinguish between two objects if they are separated by a distance that is greater than the wavelength of the photons being used. The maximum resolution for an optical microscope is about 0.2 µm (200 nm), which means we can use an optical microscope to view a human hair (20-200 µm), a eukaryotic cell (10-100 µm), a chloroplast (5-8 µm), and a mitochondrion (1-3 µm), but not a ribosome (0.01-0.02 µm). In this section we will consider the electron microscope, which has a resolution limit of approximately 0.2 nm, or approximately $1000 \times$ better than an optical microscope. In Section 21.4, we will examine two additional types of non-optical microscopy.

Instrumentation

In scanning electron microscopy we raster a beam of high-energy electrons over a surface using a two-dimensional grid. Figure $1$ shows the basic instrumentation. The electron gun usually is just a simple tungsten wire that releases electrons when it is heated resistively. Other sources include solid-state crystals of lanthanum hexaboride (LaB6) or cerium hexaboride (CeB6) and the field emission gun, which uses a tungsten wire with a tip that has a radius of about 100 nm. Regardless of their source, these electrons are accelerated to an energy of 1-40 keV and passed through a series of lenses that narrow and focus them into a beam with a diameter that falls within a range of 1 nm to 1000 nm (0.001 µm to 1 µm). A set of scan coils deflects the electron beam in a raster pattern across the sample's surface (see inset at the bottom left of Figure $1$). An electron detector monitors the electrons that scatter back from the sample; the type of detector varies with the type of emission from the sample that we choose to monitor—see the next sub-heading for types of emission—but typically is a scintillation device when monitoring electrons and an energy-dispersive detector when monitoring X-rays.

Interaction of Electron Beams With Solids

Figure $1$ suggests that the only type of signal is the measurement of electrons that are scattered back toward the detector. The interaction between the electron beam and the sample, however, creates a variety of signals, including both electrons and X-rays. Figure $2$ illustrates the types of emission that follow from the interaction of the electron beam with the sample. The electron beam penetrates approximately 1-2 µm into the sample. As you might expect from the previous section on electron spectroscopy, the interaction of an electron beam with a sample results in the emission of some Auger electrons; these electrons come from a volume near the vacuum-sample interface. Of more importance are secondary electrons and backscattered electrons. As the electron beam penetrates into the sample, the electrons undergo collisions with the sample's atoms. Some of these collisions are elastic: the electron changes its direction but retains its kinetic energy. Some of these electrons eventually undergo a collision in which they cross the sample-vacuum interface and exit the solid. These backscattered electrons are collected and passed along to the detector. Other electrons undergo inelastic collisions, losing kinetic energy and, eventually, becoming embedded in the sample.
Backscattered electrons come from a depth as great as 50% of the depth to which the electron beam penetrates. Another source of electrons is a process in which the electron beam induces the ejection of electrons from the sample's conduction band. These secondary electrons are less numerous than backscattered electrons and they also come from a much shallower depth, typically 5-50 nm. The electron beam also stimulates the release of X-rays, including the characteristic X-rays of the sample's elements, a broad continuum, and fluorescent X-ray emission. See Chapter 12 for more details about atomic X-ray emission.

As the electron beam is rastered across the sample, the intensity of the backscattered electrons that reach the detector from a specific position on the sample is stored in the corresponding pixel on the instrument's monitor. The image created in this way is not an optical picture, but a digitized electronic reproduction of the sample's surface. The extent of magnification depends on the length of the monitor relative to the length of a single scan across the sample; scanning a shorter distance results in a greater magnification. An optical microscope usually provides a maximum magnification of $1000 \times$; an SEM can achieve a magnification of $1,000,000 \times$.
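Because magnification is simply the ratio of the display size to the scanned size, a few lines of Python illustrate the range; the 300 mm monitor width is an assumed, illustrative value, not one given in the text.

# SEM magnification = (width of the displayed image) / (width scanned
# on the sample). Monitor width of 300 mm is an assumed value.
monitor_width_mm = 300.0

for scan_width_um in (3000, 300, 3, 0.3):   # scanned width on the sample
    magnification = monitor_width_mm * 1000 / scan_width_um
    print(f"scan width {scan_width_um} um -> {magnification:,.0f}x")
# scanning a 0.3 µm line to fill a 300 mm display gives 1,000,000x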
Applications

Figure $3$ shows four examples of applications of scanning electron microscopy: for the measurement of particle size (upper left), for the evaluation of nanowires (upper right), for characterizing the channels in a microfluidic device (lower left), and for examining the cantilever and tip used for atomic force microscopy (lower right). Other applications include biological samples, films and coatings, fibers, and powders, to name a few.

21.4: Scanning Probe Microscopes
In the last section we considered how we can image a surface using an electron beam. In this section we consider a very different approach to developing an image of a surface, one in which we bring a probe close to the surface and examine how the probe interacts with the surface. One advantage of this approach is that the interaction between the probe and the surface can include attraction and repulsion, which opens up a third dimension to the image.

Scanning Tunneling Microscope (STM)

In the scanning tunneling microscope we take advantage of the ability of a current to pass through the gap between the tip of a conducting probe and a conducting sample when the probe and the sample are held at different potentials. Figure $1$ shows the basic arrangement in which the probe has, ideally, a single atom at its tip. The tunneling current, $I_t$, is given by

$I_t = V e^{-Cd} \label{stm1}$

where $V$ is the applied voltage, $d$ is the distance between the probe's tip and the sample, and $C$ is a constant whose value depends upon the composition of the probe and the sample. The exponential decrease in the tunneling current with distance means that a small change in the position of the probe's tip relative to the sample results in a significant change in the signal, providing vertical resolution on the order of 0.1 nm. Probes are fashioned using tungsten wires or platinum-iridium wires.

Scanning tunneling microscopy images are created by moving the probe back-and-forth across the sample while measuring the current. The signal is acquired in one of two modes: constant current or constant height. In constant current mode, the probe's tip is brought near the surface and the current measured, which establishes a setpoint. As the probe moves across the sample, it is raised or lowered to maintain the setpoint current. The result is a measure of the distance, $d$, between the probe's tip and the sample along the z-axis as a function of the xy position of the probe's tip. In constant height mode, the distance, $d$, between the probe's tip and the sample is held constant, and the current, $I_t$, is measured as a function of the xy position of the probe's tip. Constant height mode allows for faster data acquisition, but is limited to samples that have flat surfaces.

Positioning of the sample and the probe's tip relative to each other is accomplished by either moving the probe or moving the sample. In either case, the control of movement is accomplished using a piezoelectric scanner. A piezoelectric material, as shown in Figure $2$, experiences a change in its length when a dc potential is applied across its sides, either extending or contracting its length. Figure $3$ shows a configuration of a cylindrical piezoelectric scanner in which the cylinder's upper half controls movement along the z-axis, and the cylinder's lower half is used to control movement along the x-axis, the y-axis, or both.

One limitation of STM is that the sample must be conductive. It is possible to image a non-conducting sample if it is first coated with a conductive material, such as gold, although such coatings can mask surface features.
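Equation \ref{stm1} explains the STM's exceptional vertical resolution. The sketch below assumes a decay constant of $C = 20 \text{ nm}^{-1}$—an order-of-magnitude value chosen only for illustration, since, as noted above, $C$ depends on the probe and the sample.

import math

# Relative tunneling current I_t = V * exp(-C * d) (Equation stm1).
# C = 20 nm^-1 is an assumed, order-of-magnitude value used only to
# illustrate the sensitivity of I_t to the tip-sample distance d.
C = 20.0     # nm^-1 (assumed)
V = 1.0      # arbitrary units

for d in (0.5, 0.6, 0.7):        # tip-sample gap in nm
    I = V * math.exp(-C * d)
    print(f"d = {d:.1f} nm -> I_t = {I:.2e} (arb. units)")
# each 0.1 nm increase in d reduces I_t by a factor of ~7.4, which is
# why the vertical resolution is on the order of 0.1 nm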
Atomic Force Microscope (AFM)

Unlike the scanning tunneling microscope, the atomic force microscope does not require a conducting sample, and imaging is achieved without a current flowing between the sample and the tip of the probe. Instead, as shown in Figure $4$, the probe is attached to the end of a flexible cantilever. The tip of the probe (see photograph in Figure $3$) is pyramidal in shape and extends about 10 µm from its base on the cantilever. The tip of the probe has a diameter on the order of 10 nm and is made of silicon, Si, or silicon nitride, Si3N4. The cantilever typically is 100-500 µm in length. The probe is scanned across the sample's surface, and the position of the probe relative to the surface is determined by reflecting the beam from a diode laser off the probe-end of the cantilever to a detector.

The force in atomic force is the interaction between the probe's tip and the sample, which may be a force of attraction or a force of repulsion. When the probe's tip is in contact with the sample—known as contact mode—there is a force of repulsion between them. Because the cantilever has a smaller force constant than the atoms in the probe's tip, the cantilever bends. Moving the sample stage to maintain a constant deflection of the laser off of the cantilever provides an image of the sample's surface. Contact mode allows for rapid scanning and works well for samples with rough surfaces, although it may damage samples with softer surfaces.

In non-contact mode, the probe's tip is brought close to the sample's surface, but not allowed to come into contact with it. The cantilever is placed into an oscillatory motion. The amplitude of this oscillation is proportional to the force of attraction between the probe's tip and the sample, which varies with the distance between the probe's tip and the sample. Moving the sample stage to maintain a constant oscillation provides an image of the sample's surface. Non-contact mode AFM generally provides lower resolution images, but is less damaging to the sample.

A third mode for collecting data is called intermittent or tapping mode. In this mode the cantilever is set to oscillate at its resonant frequency, with the probe's tip coming into contact with the sample's surface when it reaches the bottom of the cantilever's oscillation. The frequency of the oscillation is sensitive to the distance between the probe's tip and the sample. Moving the sample stage to maintain the resonant frequency provides an image of the sample's surface.

Galleries of scanning tunneling microscopy and atomic force microscopy images are available online.
In Chapters 6–21 we examined a wide range of spectroscopic techniques that take advantage of the interaction between electromagnetic radiation and matter. In this chapter we turn our attention to electrochemical techniques in which the potential, current, or charge in an electrochemical cell serves as the analytical signal. Although there are only three fundamental electrochemical signals, there are many possible experimental designs—too many, in fact, to cover adequately in an introductory textbook. The simplest division of electrochemical techniques is between bulk techniques, in which we measure a property of the solution in the electrochemical cell, and interfacial techniques, in which the potential, current, or charge depends on the species present at the interface between an electrode and the solution in which it sits. The measurement of a solution's conductivity, which is proportional to the total concentration of dissolved ions, is one example of a bulk electrochemical technique. A determination of pH using a pH electrode is an example of an interfacial electrochemical technique. Only interfacial electrochemical methods receive further consideration in this textbook. In this chapter we provide an introduction to electrochemistry, developing ideas relevant to understanding the specific electroanalytical methods covered in Chapters 23–25.

• 22.1: Electrochemical Cells
The electrochemical cell consists of two half-cells, each of which contains an electrode immersed in a solution of ions whose activities determine the electrode's potential. A salt bridge that contains an inert electrolyte, such as KCl, connects the two half-cells. The ends of the salt bridge are fixed with porous frits, which allow the electrolyte's ions to move freely between the half-cells and the salt bridge. This movement of ions in the salt bridge completes the electrical circuit.
• 22.2: Potentials in Electroanalytical Cells
If an electrochemical cell is at equilibrium, there is no current and the potential is fixed. If we change the potential, current flows as the system moves to its new equilibrium position. Alternatively, we can pass a current through the cell and effect a change in potential. If we choose to control the potential, then we must accept the resulting current, and we must accept the resulting potential if we choose to control the current.
• 22.3: Electrode Potentials
The potential of an electrochemical cell is the difference between the potential at the cathode and the potential at the anode, where both potentials are defined in terms of a reduction reaction (and are called reduction potentials).
• 22.4: Calculation of Cell Potentials from Electrode Potentials
The potential of an electrochemical cell is the difference between the electrode potentials of the cathode and the anode.
• 22.5: Currents in Electrochemical Cells
Most electrochemical techniques rely on either controlling the current and measuring the resulting potential, or controlling the potential and measuring the resulting current; only potentiometry measures a potential under conditions where there is essentially no current. Understanding the relationship between current and potential is important, although the experimentally measured potentials may differ from their thermodynamic values for a variety of reasons that we outline here.
• 22.6: Types of Electroanalytical Methods
We divide electrochemical techniques into static techniques and dynamic techniques.
In a static technique we do not allow current to pass through the electrochemical cell and, as a result, the concentrations of all species remain constant. Dynamic techniques, in which we allow current to flow and force a change in the concentration of species in the electrochemical cell, comprise the largest group of interfacial electrochemical techniques.

22: An Introduction to Electroanalytical Chemistry

A schematic diagram of a typical electrochemical cell is shown in Figure $1$. The electrochemical cell consists of two half-cells, each of which contains an electrode immersed in a solution of ions whose activities determine the electrode's potential. A salt bridge that contains an inert electrolyte, such as KCl, connects the two half-cells. The ends of the salt bridge are fixed with porous frits, which allow the electrolyte's ions to move freely between the half-cells and the salt bridge. This movement of ions in the salt bridge completes the electrical circuit, allowing us to measure the potential using a potentiometer.

The reason for separating the electrodes is to prevent the oxidation reaction and the reduction reaction from occurring at the same electrode. For example, if we place a strip of Zn metal in a solution of AgNO3, the reduction of Ag+ to Ag occurs on the surface of the Zn at the same time as a portion of the Zn metal oxidizes to Zn2+. Because the transfer of electrons from Zn to Ag+ occurs directly at the electrode's surface, we cannot pass them through the potentiometer.

Conduction in a Cell

Current moves through the cell in Figure $1$ as a result of the movement of two types of charged particles: electrons and ions. First, when zinc, Zn(s), undergoes an oxidation reaction

$\mathrm{Zn}(s) \rightleftharpoons \text{ Zn}^{2+}(a q)+2 e^{-} \label{ox_rxn}$

it releases two electrons. These electrons move through the circuit that connects the metallic Zn electrode in the left half-cell to the metallic Ag electrode in the right half-cell, where they effect the reduction of Ag+(aq).

$\mathrm{Ag}^{+}(a q)+e^{-} \rightleftharpoons \mathrm{Ag}(s) \label{red_rxn}$

If this is all that happens, then the half-cell on the left will develop an excess of positive charge as Zn2+(aq) ions accumulate, and the half-cell on the right will develop an excess of negative charge due to the loss of Ag+(aq). The salt bridge provides a way to continue the movement of charge, and thus the current, with the K+ ions moving toward the right half-cell and the Cl− ions moving toward the left half-cell.

Galvanic and Electrolytic Cells

The net reaction for the electrochemical cell in Figure $1$ is

$\mathrm{Zn}(s)+2 \mathrm{Ag}^{+}(a q) \rightleftharpoons 2 \mathrm{Ag}(s)+\mathrm{Zn}^{2+}(\mathrm{aq}) \label{net_rxn}$

which simply is the result of adding together the reactions in the two half-cells after adjusting for the difference in electrons. As shown by the arrows in the figure, when we connect the electrodes to the potentiometer, current spontaneously flows from the left half-cell to the right half-cell. We call this a galvanic cell. If we apply a potential sufficient to reverse the direction of the current flow, resulting in a net reaction of

$2 \mathrm{Ag}(s)+\mathrm{Zn}^{2+}(\mathrm{aq}) \rightleftharpoons \mathrm{Zn}(s)+2 \mathrm{Ag}^{+}(a q) \nonumber$

then we call the system an electrolytic cell. A galvanic cell produces electrical energy and an electrolytic cell consumes electrical energy.
Anodes and Cathodes

The half-cell where oxidation takes place is called the anode and, by convention, it is shown on the left for a galvanic cell. The half-cell where reduction takes place is called the cathode and, by convention, it is shown on the right for a galvanic cell.

Faradaic and Non-Faradaic Currents

When we oxidize or reduce an analyte at the electrode in one half-cell, the electrons pass through the potentiometer to the electrode in the other half-cell where a corresponding reduction or oxidation reaction takes place. In either case, the current from the redox reactions at the two electrodes is called a faradaic current. A faradaic current due to the reduction of an analyte is called a cathodic current and carries a positive sign. An anodic current results from the analyte's oxidation and carries a negative sign.

In addition to the faradaic current from a redox reaction, the current in an electrochemical cell includes non-faradaic sources. Suppose the charge on an electrode is zero and we suddenly change its potential so that the electrode's surface acquires a positive charge. Cations near the electrode's surface will respond to this positive charge by migrating away from the electrode; anions, on the other hand, will migrate toward the electrode. This migration of ions occurs until the electrode's positive surface charge and the excess negative charge of the solution near the electrode's surface are equal. Because the movement of ions and the movement of electrons are indistinguishable, the result is a small, short-lived non-faradaic current that we call the charging current. Every time we change the electrode's potential, a short-lived charging current flows.

Even in the absence of analyte, a small, measurable current flows through an electrochemical cell. This residual current has two components: a faradaic current due to the oxidation or reduction of trace impurities and a non-faradaic charging current. Methods for discriminating between the analyte's faradaic current and the residual current are discussed later in this chapter.

The Electrical Double Layer

As noted in the previous section, when we apply a potential to an electrode it develops a positive or negative surface charge, the magnitude of which is a function of the metal and the applied potential. Because the surface carries a charge, the composition of the layer of solution immediately adjacent to the electrode changes: for example, the concentration of cations increases and the concentration of anions decreases if the electrode's surface carries a negative charge. As we move away from the electrode's surface, the net potential first decreases in a linear manner, due to the imbalance of the cations and anions, and then in an exponential manner until it reaches zero. This structured surface is called the electrical double layer and consists of an inner layer and a diffuse layer. Anytime we change the potential applied to the electrode, the structure of the electrical double layer changes and a small charging current flows.

Mass Transfer in Cells with the Passage of Current

The magnitude of a faradaic current is determined by the rate at which the analyte is oxidized at the anode or reduced at the cathode. Two factors contribute to the rate of an electrochemical reaction: the rate at which the reactants and products are transported to and from the electrode—what we call mass transport—and the rate at which electrons pass between the electrode and the reactants and products in solution.
There are three modes of mass transport that affect the rate at which reactants and products move toward or away from the electrode surface: diffusion, migration, and convection. Diffusion occurs whenever the concentration of an ion or a molecule at the surface of the electrode is different from that in bulk solution. For example, if we apply a potential sufficient to completely reduce $\text{Ag}^+$ at the electrode surface, the result is a concentration gradient similar to that shown in Figure $3$. The region of solution over which diffusion occurs is the diffusion layer. In the absence of other modes of mass transport, the width of the diffusion layer, $\delta$, increases with time as the $\text{Ag}^+$ must diffuse from an increasingly greater distance.

Convection occurs when we mix the solution, which carries reactants toward the electrode and removes products from the electrode. The most common form of convection is stirring the solution with a stir bar; other methods include rotating the electrode and incorporating the electrode into a flow-cell.

The final mode of mass transport is migration, which occurs when a charged particle in solution is attracted to or repelled from an electrode that carries a surface charge. If the electrode carries a positive charge, for example, an anion will move toward the electrode and a cation will move toward the bulk solution. Unlike diffusion and convection, migration affects only the mass transport of charged particles.

Schematic Representations of Cells

Although Figure $1$ provides a useful picture of an electrochemical cell, it is not a convenient way to represent it. Imagine having to draw a picture of each electrochemical cell you are using! A more useful way to describe an electrochemical cell is a shorthand notation that uses symbols to identify different phases and that lists the composition of each phase. We use a vertical slash (|) to identify a boundary between two phases where a potential develops, and a comma (,) to separate species in the same phase or to identify a boundary between two phases where no potential develops. Shorthand cell notations begin with the anode and continue to the cathode. For example, we describe the electrochemical cell in Figure $1$ using the following shorthand notation.

$\text{Zn}(s) | \text{ZnCl}_2(aq, a_{\text{Zn}^{2+}} = 0.0167) || \text{AgNO}_3(aq, a_{\text{Ag}^+} = 0.100) | \text{Ag} (s) \nonumber$

The double vertical slash (||) represents the salt bridge, the contents of which we usually do not list. Note that a double vertical slash implies that there is a potential difference between the salt bridge and each half-cell.

Example $1$

What are the anodic, the cathodic, and the overall reactions responsible for the potential of the electrochemical cell in Figure $4$? Write the shorthand notation for the electrochemical cell.

Solution

The oxidation of Ag to Ag+ occurs at the anode, which is the left half-cell. Because the solution contains a source of Cl−, the anodic reaction is

$\mathrm{Ag}(s)+\mathrm{Cl}^{-}(aq) \rightleftharpoons\text{ AgCl}(s)+e^{-} \nonumber$

The cathodic reaction, which is the right half-cell, is the reduction of Fe3+ to Fe2+.
$\mathrm{Fe}^{3+}(a q)+e^{-}\rightleftharpoons \text{ Fe}^{2+}(a q) \nonumber$ The overall cell reaction, therefore, is $\mathrm{Ag}(s)+\text{ Fe}^{3+}(a q)+\text{ Cl}^{-}(a q) \rightleftharpoons \mathrm{AgCl}(s)+\text{ Fe}^{2+}(a q) \nonumber$ The electrochemical cell’s shorthand notation is $\text{Ag}(s) | \text{HCl} (aq, a_{\text{Cl}^{-}} = 0.100), \text{AgCl} (\text{sat’d}) || \text{FeCl}_2(aq, a_{\text{Fe}^{2+}} = 0.0100), \text{ Fe}^{3+}(aq,a_{\text{Fe}^{3+}} = 0.0500) | \text{Pt} (s) \nonumber$ Note that the Pt cathode is an inert electrode that carries electrons to the reduction half-reaction. The electrode itself does not undergo reduction.
22.2: Potentials in Electroanalytical Cells
If a Zn(s) electrode in a solution of Zn2+(aq) in an electrochemical cell is at equilibrium, the current is zero and the potential is fixed in value. If we change the potential from its equilibrium value, current will flow as the system moves to its new equilibrium position. Although the initial current is quite large, it decreases over time, reaching zero when the reaction reaches equilibrium. The current, therefore, changes in response to the applied potential. Alternatively, we can pass a fixed current through the electrochemical cell, forcing the oxidation of Zn(s) to Zn2+(aq) and effecting a change in the potential. In short, if we choose to control the potential, then we must accept the resulting current, and we must accept the resulting potential if we choose to control the current.

The Thermodynamics of Cell Potentials

Because a redox reaction involves a transfer of electrons from a reducing agent to an oxidizing agent, it is convenient to consider the reaction's thermodynamics in terms of the electron. For a reaction in which one mole of a reactant undergoes oxidation or reduction, the net transfer of charge, q, in coulombs is

$q=n F \label{q}$

where n is the moles of electrons per mole of reactant, and F is Faraday's constant (96485 C/mol). The free energy, ∆G, to move this charge given an applied potential, E, is

$\Delta G=E q \label{deltaG1}$

The change in free energy for a redox reaction, therefore, is

$\Delta G=-n F E \label{deltaG2}$

where ∆G has units of J/mol. The minus sign in Equation \ref{deltaG2} is the result of a different convention for assigning a reaction's favorable direction. In thermodynamics, a reaction is favored when ∆G is negative, but a redox reaction is favored when E is positive. Substituting Equation \ref{deltaG2} into the thermodynamic equation that relates the free energy to its standard state value

$\Delta G = \Delta G^{\circ} + RT\ln Q_r \label{deltaG3}$

gives

$-n F E = -n F E^{\circ}+R T \ln Q_r \label{nfe}$

Dividing by –nF leads to the Nernst equation

$E=E^{\circ}-\frac{R T}{n F} \ln Q_r \label{nernst1}$

where Eo is the potential under standard-state conditions (more on this in Section 22.3). Substituting appropriate values for R and F, assuming a temperature of 25 °C (298 K), and switching from the natural logarithm (ln) to the base 10 logarithm (log) gives the potential in volts as

$E=E^{\mathrm{o}}-\frac{0.05916}{n} \log Q_r \label{nernst2}$

The term $Q_r$ in the previous equations is the reaction quotient, which has the same mathematical form as the reaction's equilibrium constant expression, but uses the instantaneous amounts of reactants and products in place of their equilibrium values. For the cell in Figure 22.1.1, for example, the overall reaction is

$\mathrm{Zn}(s)+2 \mathrm{Ag}^{+}(a q) \rightleftharpoons 2 \mathrm{Ag}(s)+\mathrm{Zn}^{2+}(\mathrm{aq}) \label{net_rxn}$

and Equation \ref{nernst2} becomes

$E = E^{\circ} - \frac{0.05916}{2} \log \frac {\left[ \ce{Zn^{2+}} \right]} {\left[ \ce{Ag+} \right]^2} \label{zn_ag}$

Equation \ref{zn_ag} shows us how the potential changes as the concentrations of Zn2+ and Ag+ change. As we will see in Section 22.3, Equation \ref{zn_ag} is more correctly expressed in terms of activities instead of concentrations. The appendix in Chapter 35.7 explains what activity is, why it is important to make a distinction between activity and concentration, and when it is reasonable to use concentrations in place of activities.
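A short Python sketch of Equation \ref{zn_ag} makes it easy to explore how the potential responds to concentration. The standard-state cell potential of +1.562 V is taken from Section 22.3; the second set of values mirrors the activities in the shorthand notation of Section 22.1, and we use concentrations in place of activities, as discussed above.

import math

# Nernst equation for Zn(s) + 2Ag+ <=> 2Ag(s) + Zn2+ (Equation zn_ag),
# written with concentrations in place of activities. E° = 1.562 V is
# the standard-state cell potential given in Section 22.3.
def cell_potential(c_zn, c_ag, e_std=1.562, n=2):
    return e_std - (0.05916 / n) * math.log10(c_zn / c_ag**2)

print(f"{cell_potential(1.0, 1.0):.3f} V")       # standard state: 1.562 V
print(f"{cell_potential(0.0167, 0.100):.3f} V")  # values from Section 22.1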
Liquid Junction Potentials

A junction potential develops at the interface between two ionic solutions if there is a difference in the concentration and the mobility of the ions. Consider, for example, a porous membrane that separates a solution of 0.1 M HCl from a solution of 0.01 M HCl (Figure $1$a). Because the concentration of HCl on the membrane's left side is greater than that on the right side of the membrane, H+ and Cl− will diffuse in the direction of the arrows. The mobility of H+, however, is greater than that for Cl−, as shown by the difference in the lengths of their respective arrows. Because of this difference in mobility, the solution on the right side of the membrane develops an excess concentration of H+ and a positive charge (Figure $1$b). Simultaneously, the solution on the membrane's left side develops a negative charge because there is an excess concentration of Cl−. We call this difference in potential across the membrane a junction potential and represent it as Ej.

The magnitude of a junction potential depends upon the difference in the concentration of ions on the two sides of the interface, and may be as large as 30–40 mV. For example, a junction potential of 33.09 mV has been measured at the interface between solutions of 0.1 M HCl and 0.1 M NaCl [Sawyer, D. T.; Roberts, J. L., Jr. Experimental Electrochemistry for Chemists, Wiley-Interscience: New York, 1974, p. 22]. A salt bridge's junction potential is minimized by using a salt, such as KCl, for which the mobilities of the cation and anion are approximately equal. We also can minimize the junction potential by incorporating a high concentration of the salt in the salt bridge. For this reason salt bridges frequently are constructed using solutions that are saturated with KCl. Nevertheless, a small junction potential, generally of unknown magnitude, is always present.
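Although this text does not derive an expression for Ej, a standard approximation for a junction between two concentrations of the same 1:1 electrolyte is $E_j \approx (t_+ - t_-)(RT/F) \ln (a_1/a_2)$, where $t_+$ and $t_-$ are the transference numbers of the cation and anion. The sketch below applies it to the 0.1 M | 0.01 M HCl junction in Figure $1$; the transference number of 0.83 for H+ is an assumed, typical value, not one given in this text.

import math

# Approximate junction potential between two concentrations of the
# same 1:1 electrolyte: E_j ~ (t+ - t-)(RT/F) ln(c1/c2). This is a
# textbook approximation that ignores activity coefficients; the
# transference number t+ = 0.83 for H+ in HCl is an assumed value.
R, T, F = 8.314, 298.15, 96485.0
t_plus = 0.83
t_minus = 1 - t_plus

c1, c2 = 0.1, 0.01    # M HCl on either side of the membrane
E_j = (t_plus - t_minus) * (R * T / F) * math.log(c1 / c2)
print(f"E_j = {E_j * 1000:.0f} mV")   # ~39 mV, consistent with 30-40 mV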
We began this chapter by examining the electrochemical cell in Figure 22.1.1 where $\ce{Zn(s)}$ is oxidized to $\ce{Zn^{2+}(aq)}$ and $\ce{Ag^{+}(aq)}$ is reduced to $\ce{Ag(s)}$, as shown by the following reaction. $\mathrm{Zn}(s)+2 \mathrm{Ag}^{+}(a q) \rightleftharpoons 2 \mathrm{Ag}(s)+\mathrm{Zn}^{2+}(\mathrm{aq}) \nonumber$ The reaction proceeds as written because the reduction of Ag+(aq) to Ag(s) $\mathrm{Ag}^{+}(a q)+e^{-} \rightleftharpoons \mathrm{Ag}(s) \label{red_ag}$ is more thermodynamically favorable than the reduction of $\ce{Zn^{2+}(aq)}$ to $\ce{Zn(s)}$ $\text{ Zn}^{2+}(aq)+2 e^{-} \rightleftharpoons \mathrm{Zn}(s) \label{red_zn}$ But, how do we know this is true? In this section we answer this question by taking a close look at electrode potentials.

Nature of Electrode Potentials

The potential of an electrochemical cell is the difference between the potential at the cathode, $E_\text{cathode}$, and the potential at the anode, $E_\text{anode}$, where both potentials are defined in terms of a reduction reaction (and are called reduction potentials); thus $E_\text{cell} = E_\text{cathode} - E_\text{anode} \label{cell_pot}$ Because we cannot measure the potential of an individual electrode, we choose a common reference half-reaction against which all other half-reactions are measured and assign it, by convention, a potential of zero $\mathrm{H}^{+}(a q)+e^{-} \rightleftharpoons \frac{1}{2} \mathrm{H}_{2}(g) \label{she}$ which is the reaction that defines the standard hydrogen electrode, or SHE.

The Standard Hydrogen Electrode (SHE)

The SHE consists of a Pt electrode immersed in a solution in which the activity of hydrogen ion is 1.00 and in which the partial pressure of H2(g) is 1.00 atm (Figure $1$). A conventional salt bridge connects the SHE to the indicator half-cell. The shorthand notation for the standard hydrogen electrode is $\text{Pt}(s), \text{ H}_{2}\left(g, f_{\mathrm{H}_{2}}=1.00\right) | \text{ H}^{+}\left(a q, a_{\mathrm{H}^{+}}=1.00\right) \| \label{she_cell}$ and the standard-state potential for the reaction \ref{she} is, by definition, 0.000 V at all temperatures.

Practical Reference Electrodes

Although the standard hydrogen electrode is the standard against which all other potentials are referenced, it is not practical for routine use as it is difficult to prepare and maintain. Instead, we use one of several other reference electrodes. The two most common of these alternative reference electrodes are the calomel, or Hg/Hg2Cl2 electrode, which is based on the following redox couple between Hg2Cl2 and Hg (calomel is the common name for Hg2Cl2) $\mathrm{Hg}_{2} \mathrm{Cl}_{2}(s)+2 e^{-}\rightleftharpoons2 \mathrm{Hg}(l)+2 \mathrm{Cl}^{-}(a q) \nonumber$ and the Ag/AgCl reference electrode, which is based on the reduction of AgCl to Ag $\operatorname{AgCl}(s)+e^{-} \rightleftharpoons \mathrm{Ag}(s)+\mathrm{Cl}^{-}(a q) \nonumber$ A more detailed examination of these two reference electrodes is found in Chapter 23.1.

Definition of Electrode Potential

To determine the potential for the reduction of Zn2+(aq) to Zn(s) we make it the cathode in the following electrochemical cell $\text{Pt}(s), \text{ H}_{2}\left(g, P_{\mathrm{H}_{2}}=1.00\right) | \text{ H}^{+}\left(a q, a_{\mathrm{H}^{+}}=1.00\right) \| \ce{Zn^{2+}}\left(a q, a_{\mathrm{Zn}^{2+}}=x\right) | \ce{Zn}(s) \label{she_zn}$ where x is the activity of Zn2+ in its half-cell. For example, when $a_{\mathrm{Zn}^{2+}} = 1.00$, the potential of the electrochemical cell is $-0.763 \text{V}$.
If we find that the potential for the electrochemical cell $\ce{Zn}(s) | \ce{Zn^{2+}} (aq, a_{\mathrm{Zn}^{2+}} = 1.00) \| \ce{Ag+} (aq, a_{\mathrm{Ag}^{+}} = 1.00) | \ce{Ag}(s) \nonumber$ is +1.562 V, then knowing that $E_{cell} = E_{\ce{Ag+} / \ce{Ag}} - E_{\ce{Zn^{2+}} / \ce{Zn}} = E_{\ce{Ag+} / \ce{Ag}} - (-0.763 \text{V}) \nonumber$ gives $E_{\ce{Ag+} / \ce{Ag}} = +0.799 \text{ V}$. In this way, we can build tables of potentials for individual half-reactions.

Sign Convention for Electrode Potentials

In Section 22.2 we noted the following relationship between an electrochemical potential, $E$, and the Gibbs free energy, $\Delta G$ $\Delta G = - n F E \label{dg}$ which tells us that a positive potential corresponds to a thermodynamically favorable reaction. Knowing that the potential for the electrochemical cell in Equation \ref{she_zn} is $-0.763 \text{V}$ tells us that the reduction of Zn2+(aq) to Zn(s) is not thermodynamically favorable relative to the reduction of H+(aq) to H2(g); that is, we do not expect the reaction $\ce{Zn^{2+}}(aq) + \ce{H2}(g) \rightleftharpoons 2 \ce{H+}(aq) + \ce{Zn}(s) \label{zn_h}$ to occur; however, with a potential of +0.799 V, we do expect the reaction $2 \ce{Ag+}(aq) + \ce{H2}(g) \rightleftharpoons 2 \ce{H+}(aq) + 2 \ce{Ag}(s) \label{ag_h}$ to occur. Or, looking at this another way, we expect that Zn(s), but not Ag(s), will dissolve in acid.

Effect of Activity on Electrode Potentials

In Chapter 22.2 we wrote the Nernst equation for the reaction $\mathrm{Zn}(s)+2 \mathrm{Ag}^{+}(a q) \rightleftharpoons 2 \mathrm{Ag}(s)+\mathrm{Zn}^{2+}(\mathrm{aq}) \label{net_rxn}$ in terms of the concentrations of Zn2+(aq) and Ag+(aq) $E = E^{\circ} - \frac{0.05916}{2} \log \frac {\left[ \ce{Zn^{2+}} \right]} {\left[ \ce{Ag+} \right]^2} \label{zn_ag_conc}$ Although there are times when we will write the Nernst equation in terms of concentrations, thermodynamic functions are more correctly written in terms of the activities of ions. Under ideal conditions, individual ions and molecules of gases behave as independent particles. When this is true, an ion's activity and concentration are equal and we can write the Nernst equation using concentrations; under other conditions, the Nernst equation is more correctly written in terms of activities $E = E^{\circ} - \frac{0.05916}{2} \log \frac {a_{\ce{Zn^{2+}}}} {\left( a_{\ce{Ag+}} \right)^2} \label{zn_ag_activity}$ where $a_{\ce{Zn^{2+}}}$ and $a_{\ce{Ag+}}$ are the activities of Zn2+ and Ag+. Equation \ref{zn_ag_activity} shows us how the potential changes as the activities of Zn2+ and Ag+ change. If you are not familiar with activity, or need a reminder on the relationship between activity and concentration, then see the appendix in Chapter 35.7, which explains what activity is, why it is important to make a distinction between activity and concentration, and when it is reasonable to use concentrations in place of activities.

The Standard Electrode Potential $E^{\circ}$

The standard electrode potential, $E^{\circ}$, for a half-reaction is the potential when all species are present at unit activity or, for gases, unit fugacity.
Its value is independent of how we choose to write the half-reaction; that is, the standard state potential for the reduction of Ag+(aq) to Ag(s), which is the cathode in the electrochemical cell in Figure 22.1.1, is +0.799 V whether we write the half-reaction as $\ce{Ag+}(aq) + e^{-} \rightleftharpoons \ce{Ag}(s) \label{ag1}$ or as $2 \ce{Ag+}(aq) + 2 e^{-} \rightleftharpoons 2 \ce{Ag}(s) \label{ag2}$ At first glance, this seems counterintuitive; however, if we calculate the potential when the activity of Ag+ is 0.50 we get $E = E^{\circ} - \frac{0.05916}{1} \log \frac{1}{a_{\ce{Ag+}}} = 0.799 - \frac{0.05916}{1} \log \frac{1}{0.50} = 0.781 \text{V} \nonumber$ when using reaction \ref{ag1}, and $E = E^{\circ} - \frac{0.05916}{2} \log \frac{1}{(a_{\ce{Ag+}})^2} = 0.799 - \frac{0.05916}{2} \log \frac{1}{0.50^2} = 0.781 \text{V} \nonumber$ when using reaction \ref{ag2}. The appendix in Chapter 35.8 provides a table of standard state reduction potentials for a wide variety of half-reactions at 298 K.

Some Limitations to the Use of Standard Electrode Potentials

Although standard electrode potentials are valuable, there are several important limitations to their use, which we outline here.

Substitution of Concentration for Activities

One important limitation is that the Nernst equation is defined in terms of the activity of ions instead of their concentrations. Although it is easy to prepare a solution for which the concentration of Na+ is 0.100 M using NaCl—just weigh out 5.844 g of NaCl and dissolve in 1.00 L of water—it is much more challenging to prepare a solution for which the activity of Na+ is 0.100. For this reason, in calculations we usually substitute concentrations for activities when using the Nernst equation. This simplification generally is okay for dilute solutions where the difference between activities and concentrations is small.

Effect of Other Equilibrium Reactions

A standard state potential tells us about the equilibrium position of a redox half-reaction under standard state conditions. If one or more of the species in the half-reaction are involved in other equilibrium reactions, then these reactions will affect the value of the potential we measure. For example, Fe2+ and Fe3+ form a variety of metal-ligand complexes with Cl–, which explains why the potential for the Fe3+/Fe2+ couple is 0.771 V in the absence of chloride ion, but is 0.70 V in 1 M HCl.

Formal Potentials

One way to compensate for using concentrations and partial pressures in place of activities and fugacities, and to compensate for other equilibrium reactions, is to replace the standard state potential, $E^{\circ}$, with a formal potential, $E^{\circ \prime}$, that is measured using concentrations of 1.00 M for ions, partial pressures of 1.00 atm for gases, and a specified concentration for other reagents. The table below, which is adapted from the appendix in Chapter 35.8, provides formal potentials for the Fe3+/Fe2+ half-reaction in several different acidic media.

$\ce{Fe^{3+}} + e^{-} \rightleftharpoons \ce{Fe^{2+}}$: $E^{\circ}$ = 0.771 V
$E^{\circ \prime}$ = 0.70 V in 1 M $\ce{HCl}$
$E^{\circ \prime}$ = 0.767 V in 1 M $\ce{HClO4}$
$E^{\circ \prime}$ = 0.746 V in 1 M $\ce{HNO3}$
$E^{\circ \prime}$ = 0.68 V in 1 M $\ce{H2SO4}$
$E^{\circ \prime}$ = 0.44 V in 0.3 M $\ce{H3PO4}$

Reaction Rates

The reduction of Fe3+ to Fe2+ consumes an electron, which is drawn from the electrode. The oxidation of another species, perhaps the solvent, at a second electrode is the source of this electron.
Because the reduction of Fe3+ to Fe2+ consumes one electron, the flow of electrons between the electrodes—in other words, the current—is a measure of the rate at which Fe3+ is reduced. One important consequence of this observation is that the current is zero when the reaction $\text{Fe}^{3+}(aq) + e^- \rightleftharpoons \text{ Fe}^{2+}(aq)$ is at equilibrium. If a redox half-reaction cannot maintain an equilibrium because the reaction in one direction is too slow, then we cannot measure a meaningful standard state potential.
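To see how much a formal potential can change a calculated half-cell potential, here is a minimal Python sketch comparing $E^{\circ}$ and $E^{\circ \prime}$ in 1 M HCl for the Fe3+/Fe2+ couple; the 0.771 V and 0.70 V values come from the table above, while the concentration ratio is hypothetical.

```python
import math

def half_cell_E(E0, ratio, n=1, T=298.15):
    """E = E0 - (RT/nF) ln([Fe2+]/[Fe3+]) for the Fe3+/Fe2+ couple."""
    R, F = 8.314, 96485
    return E0 - (R * T / (n * F)) * math.log(ratio)

ratio = 0.10 / 0.050   # hypothetical [Fe2+]/[Fe3+]
print(f"with E0  = 0.771 V          : E = {half_cell_E(0.771, ratio):.3f} V")
print(f"with E0' = 0.70 V (1 M HCl) : E = {half_cell_E(0.70, ratio):.3f} V")
```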
The potential of an electrochemical cell is the difference between the electrode potentials of the cathode and the anode $E_\text{cell} = E_\text{cathode} - E_\text{anode} \label{ecell}$ where $E_\text{cathode}$ and $E_\text{anode}$ are both reduction potentials. Given a set of conditions, we can use the Nernst equation to calculate the cell potential, as shown by the following example.

Example $1$

Calculate (a) the standard state potential and (b) the potential when [Ag+] = 0.0020 M and [Cd2+] = 0.0050 M, for the following reaction at 25oC. $\mathrm{Cd}(s)+2 \mathrm{Ag}^{+}(a q)\rightleftharpoons2 \mathrm{Ag}(s)+\mathrm{Cd}^{2+}(a q) \nonumber$ For part (b), calculate the potential twice, once using concentrations and once using activities assuming that the solution's ionic strength is 0.100.

Solution

(a) In this reaction Cd is oxidized at the anode and Ag+ is reduced at the cathode. Using standard state electrode potentials from the appendix in Chapter 35.8, we find that the standard state potential is $E^{\circ} = E^{\circ}_{\text{Ag}^+/ \text{Ag}} - E^{\circ}_{\text{Cd}^{2+}/ \text{Cd}} = 0.7996 - (-0.4030) = 1.2026 \ \text{V} \nonumber$

(b) To calculate the potential when [Ag+] is 0.0020 M and [Cd2+] is 0.0050 M, we use the appropriate relationship for the reaction quotient, Qr, when writing the Nernst equation $E = E^{\circ} - \frac{0.05916 \ \mathrm{V}}{n} \log \frac{\left[\mathrm{Cd}^{2+}\right]}{\left[\mathrm{Ag}^{+}\right]^{2}} \nonumber$ $E=1.2026 \ \mathrm{V}-\frac{0.05916 \ \mathrm{V}}{2} \log \frac{0.0050}{(0.0020)^{2}}=1.111 \ \mathrm{V} \nonumber$ To calculate the potential using activities, we first calculate the activity coefficients for Cd2+ and Ag+. Following the approach outlined in the appendix in Chapter 35.7 gives $\log \gamma_{\ce{Cd^{2+}}} = \frac {-0.51 \times (+2)^2 \times \sqrt{0.100}} {1 + 3.3 \times 0.50 \times \sqrt{0.100}} = -0.4239 \nonumber$ $\log \gamma_{\ce{Ag^{+}}} = \frac {-0.51 \times (+1)^2 \times \sqrt{0.100}} {1 + 3.3 \times 0.25 \times \sqrt{0.100}} = -0.1279 \nonumber$ $a_{\ce{Cd^{2+}}} = \gamma_{\ce{Cd^{2+}}} \times [\ce{Cd}^{2+}] = 0.3767 \times 0.0050 = 0.001884 \nonumber$ $a_{\ce{Ag^{+}}} = \gamma_{\ce{Ag^{+}}} \times [\ce{Ag}^{+}] = 0.7449 \times 0.0020 = 0.001490 \nonumber$ Finally, we substitute activities for concentrations in the Nernst equation to arrive at a potential of $E=1.2026 \ \mathrm{V}-\frac{0.05916 \ \mathrm{V}}{2} \log \frac{0.001884}{(0.001490)^{2}}=1.116 \ \mathrm{V} \nonumber$ At this ionic strength, substituting concentrations for activities introduces an error of approximately 5 mV.

22.05: Currents in Electrochemical Cells

Most electrochemical techniques rely on either controlling the current and measuring the resulting potential, or controlling the potential and measuring the resulting current; only potentiometry (see Chapter 23) measures a potential under conditions where there is essentially no current. Understanding the relationship between current, i, and potential, E, is important. Although we learned in Sections 22.3 and 22.4 how to calculate electrode potentials and cell potentials using the Nernst equation, the experimentally measured potentials may differ from their thermodynamic values for a variety of reasons that we outline here.

iR Drop

The movement of an electrical charge in an electrochemical cell generates a potential, $E_{ir}$, defined by Ohm's law $E_{ir} = iR \label{ohm}$ where i is the current and R is the solution's resistance.
To account for this, we can add an additional term to the equation for the electrochemical cell's potential $E_\text{cell} = E_\text{cathode} - E_\text{anode} - E_{ir} = E_\text{Nernst} - iR \label{cellpot}$ where $E_\text{Nernst}$ is the potential from the Nernst equation. The resulting decrease in the potential from its idealized value is called the iR drop.

Polarization

Equation \ref{cellpot} indicates that we expect a linear relationship between an electrochemical cell's potential, $E_\text{cell}$, and the current, i. When this is not the case, the electrochemical cell is said to be polarized. There are several sources that contribute to polarization, which we consider in this section; first, however, we define ideal polarized and nonpolarized electrodes.

Ideal Polarized and Nonpolarized Electrodes and Electrochemical Cells

An ideal polarized electrode is one in which a change in potential over a fairly wide range has no effect on the current that flows through the electrode, as we see in Figure $1a$ for the range of potentials defined by the solid green line. Such electrodes are useful because they do not themselves undergo oxidation or reduction—they are electrochemically inert—which makes them a good choice for studying the electrochemical behavior of other species. An ideal nonpolarized electrode is one in which a change in current has no effect on the electrode's potential, as we see in Figure $1b$ between the limits defined by the solid red line with deviations shown by the dashed red line. Such electrodes are useful because they provide a stable potential against which we can reference the redox potential of other species.

Overpotential

The magnitude of polarization when drawing a current is called the overpotential, $\eta$, which is expressed as the difference between the applied potential, E, and the potential from the Nernst equation. $\eta = E - E_\text{Nernst} \label{overpot}$ The overpotential can be subdivided into a variety of sources, a few of which are discussed below.

Concentration Polarization

The reduction of Fe3+ to Fe2+ in an electrochemical cell consumes an electron, which is drawn from the electrode. The oxidation of another species, perhaps the solvent, at a second electrode is the source of this electron. Because the reduction of Fe3+ to Fe2+ consumes one electron, the flow of electrons between the electrodes—in other words, the current—is a measure of the rate at which Fe3+ is reduced. The rate of the reaction $\text{Fe}^{3+}(aq) + e^- \rightleftharpoons \text{ Fe}^{2+}(aq)$ is the change in the concentration of Fe3+ as a function of time. In order for the reduction of Fe3+ to Fe2+ to take place, Fe3+ must move from the bulk solution into the layer of solution immediately adjacent to the electrode, called the diffusion layer, and then diffuse to the electrode's surface. Once the reduction takes place, the Fe2+ produced must diffuse away from the electrode's surface and enter into the bulk solution. These two processes are called mass transfer, and if we try to change the electrode's potential too quickly, mass transfer may result in concentrations of Fe3+ and Fe2+ at the electrode's surface that are different from those in bulk solution, resulting in concentration polarization. Let's use the reduction of Fe3+ to Fe2+ at the cathode of a galvanic cell to think through how concentration polarization affects the potential we measure.
From the Nernst equation we know that $E = E_{\ce{Fe^{3+}}/\ce{Fe^{2+}}}^{\circ} - \frac {0.05916} {1} \log \frac {[\ce{Fe^{2+}}]}{[\ce{Fe^{3+}}]} = +0.771 - \frac {0.05916} {1} \log \frac {[\ce{Fe^{2+}}]}{[\ce{Fe^{3+}}]} \label{iron1}$ If the mass transfer of Fe3+ from bulk solution to the electrode's surface is slow and if mass transfer of Fe2+ from the electrode's surface to bulk solution is slow, then the concentration of Fe3+ at the electrode's surface is smaller than in bulk solution and the concentration of Fe2+ at the electrode's surface is greater than in the bulk solution. As a result, the ratio $\frac {[\ce{Fe^{2+}}]}{[\ce{Fe^{3+}}]}$ at the electrode's surface is greater than that predicted by the bulk concentrations of Fe3+ and Fe2+, and the potential of the cathode is smaller (less positive) than the value predicted by the bulk concentrations of Fe3+ and Fe2+. The resulting potential of the electrochemical cell $E_\text{cell} = E_\text{cathode} - E_\text{anode} \label{iron2}$ is less positive than that predicted by the bulk concentrations of Fe3+ and Fe2+ due to this concentration polarization. Other kinetic processes can contribute to polarization, including the rate of chemical reactions that take place within the layer of solution near the electrode's surface, the kinetics of reactions in which the electroactive species adsorb or desorb from the electrode's surface, and the kinetics of the electron transfer process itself. More details on these are included in later chapters covering specific electrochemical techniques.

22.06: Types of Electroanalytical Methods

In the next three chapters we will consider a variety of different interfacial electrochemical experiments; that is, experiments in which the redox reaction takes place at the surface of an electrode. Because electrochemistry is such a broad field, let’s use Figure $1$ to organize these techniques by the experimental conditions we choose to use (Do we control the potential or the current? How do we change the applied potential or applied current? Do we stir the solution?) and the analytical signal we decide to measure (Current? Potential?). At the first level, we divide electrochemical techniques into static techniques and dynamic techniques. In a static technique we do not allow current to pass through the electrochemical cell and, as a result, the concentrations of all species remain constant. Potentiometry, in which we measure the potential of an electrochemical cell under static conditions, is one of the most important quantitative electrochemical methods and is discussed in Chapter 23. Dynamic techniques, in which we allow current to flow and force a change in the concentration of species in the electrochemical cell, comprise the largest group of interfacial electrochemical techniques. Coulometry, in which we determine the total charge passed by measuring current as a function of time, is covered in Chapter 24. Voltammetry and amperometry, in which we measure current as a function of a fixed or variable potential, are the subjects of Chapter 25.
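Returning briefly to the concentration polarization discussed in Section 22.5, the sketch below evaluates Equation \ref{iron1} for a hypothetical surface ratio of [Fe2+]/[Fe3+] that exceeds the bulk ratio; both ratios are illustrative values, not measurements.

```python
import math

def E_cathode(ratio_fe2_fe3):
    """Fe3+/Fe2+ cathode potential from the Nernst expression (Equation iron1)."""
    return 0.771 - 0.05916 * math.log10(ratio_fe2_fe3)

print(f"bulk ratio = 1.0   : E = {E_cathode(1.0):.3f} V")
print(f"surface ratio = 10 : E = {E_cathode(10.0):.3f} V")  # Fe3+ depleted at the surface
```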
In potentiometry we measure the potential of an electrochemical cell under static conditions. Because no current—or only a negligible current—flows through the electrochemical cell, its composition remains unchanged. For this reason, potentiometry is a useful quantitative method of analysis. The first quantitative potentiometric applications appeared soon after the formulation, in 1889, of the Nernst equation, which relates an electrochemical cell’s potential to the concentration of electroactive species in the cell [Stock, J. T. Anal. Chem. 1993, 65, 344A–351A]. Potentiometry initially was restricted to redox equilibria at metallic electrodes, which limited its application to a few ions. In 1906, Cremer discovered that the potential difference across a thin glass membrane is a function of pH when opposite sides of the membrane are in contact with solutions that have different concentrations of H3O+. This discovery led to the development of the glass pH electrode in 1909. Other types of membranes also yield useful potentials. For example, in 1937 Kolthoff and Sanders showed that a pellet of AgCl can be used to determine the concentration of Ag+. Electrodes based on membrane potentials are called ion-selective electrodes, and their continued development extends potentiometry to a diverse array of analytes.
• 23.1: Reference Electrodes In potentiometry we measure the difference between the potential of two electrodes. The potential of one electrode—the working or indicator electrode—responds to the analyte’s activity and the other electrode—the counter or reference electrode—has a known, fixed potential. By convention, the reference electrode is the anode.
• 23.2: Metallic Indicator Electrodes In potentiometry, the potential of the indicator electrode is proportional to the analyte’s activity. Two classes of indicator electrodes are used to make potentiometric measurements: metallic electrodes, which are the subject of this section, and ion-selective electrodes, which are covered in the next section.
• 23.3: Membrane Ion-Selective Electrodes If metals were the only useful materials for constructing indicator electrodes, then there would be few useful applications of potentiometry. In 1906, Cremer discovered that the potential difference across a thin glass membrane is a function of pH when opposite sides of the membrane are in contact with solutions that have different concentrations of H+. The existence of this membrane potential led to the development of a new class of indicator electrodes, which we call ion-selective electrodes.
• 23.4: Molecular-Selective Electrode Systems In this section we consider how we can incorporate an ion-selective electrode into an electrode that responds to neutral species, including volatile analytes, such as CO2 and NH3, and biochemically important compounds, such as amino acids and urea.
• 23.5: Instruments for Measuring Cell Potentials A potentiometer measures the potential of an electrochemical cell. To help us understand how it works, we describe the instrument as if the analyst operates it manually. The analyst observes a change in the current or the potential and adjusts the instrument’s settings to maintain the desired values. Modern electrochemical instruments provide an automated, electronic means for controlling and measuring current and potential.
• 23.6: Quantitative Potentiometry The most important application of potentiometry is determining the concentration of an analyte in solution.
Most potentiometric electrodes are selective toward the free, uncomplexed form of the analyte, and do not respond to any of the analyte’s complexed forms. This selectivity provides potentiometric electrodes with a significant advantage over other quantitative methods of analysis if we need to determine the concentration of free ions.

23: Potentiometry

In potentiometry we measure the difference between the potential of two electrodes. The potential of one electrode—the working or indicator electrode—responds to the analyte’s activity and the other electrode—the counter or reference electrode—has a known, fixed potential. By convention, the reference electrode is the anode; thus, the shorthand notation for a potentiometric electrochemical cell is reference electrode || indicator electrode and the cell potential is $E_{\mathrm{cell}}=E_{\mathrm{ind}}-E_{\mathrm{ref}} \nonumber$ The ideal reference electrode provides a stable, known potential so that we can attribute any change in Ecell to the analyte’s effect on the indicator electrode’s potential. In addition, the reference electrode should be easy to make and easy to use. Although the standard hydrogen electrode is the reference electrode used to define electrode potentials, its use is not common. Instead, the two reference electrodes discussed in this section find the most applications.

Calomel Electrodes

A calomel reference electrode is based on the following redox couple between Hg2Cl2 and Hg (calomel is the common name for Hg2Cl2) $\mathrm{Hg}_{2} \mathrm{Cl}_{2}(s)+2 e^{-}\rightleftharpoons2 \mathrm{Hg}(l)+2 \mathrm{Cl}^{-}(a q) \nonumber$ for which the potential is $E=E_{\mathrm{Hg}_{2} \mathrm{Cl}_{2} / \mathrm{Hg}}^{\mathrm{o}}-\frac{0.05916}{2} \log \left(a_{\text{Cl}^-}\right)^{2}=+0.2682 \mathrm{V}-\frac{0.05916}{2} \log \left(a_{\text{Cl}^-}\right)^{2} \nonumber$ The potential of a calomel electrode, therefore, depends on the activity of Cl– in equilibrium with Hg and Hg2Cl2. As shown in Figure $1$, in a saturated calomel electrode (SCE) the concentration of Cl– is determined by the solubility of KCl. The electrode consists of an inner tube packed with a paste of Hg, Hg2Cl2, and KCl, situated within a second tube that contains a saturated solution of KCl. A small hole connects the two tubes and a porous wick serves as a salt bridge to the solution in which the SCE is immersed. A stopper in the outer tube provides an opening for adding additional saturated KCl. The shorthand notation for this cell is $\mathrm{Hg}(l) | \mathrm{Hg}_{2} \mathrm{Cl}_{2}(s), \mathrm{KCl}(a q, \text { sat'd }) \| \nonumber$ Because the concentration of Cl– is fixed by the solubility of KCl, the potential of an SCE remains constant even if we lose some of the inner solution to evaporation. A significant disadvantage of the SCE is that the solubility of KCl is sensitive to a change in temperature. At higher temperatures the solubility of KCl increases and the electrode’s potential decreases. For example, the potential of the SCE is +0.2444 V at 25oC and +0.2376 V at 35oC. The potential of a calomel electrode that contains an unsaturated solution of KCl is less dependent on the temperature, but its potential changes if the concentration, and thus the activity of Cl–, increases due to evaporation. For example, the potential of a calomel electrode is +0.280 V when the concentration of KCl is 1.00 M and +0.336 V when the concentration of KCl is 0.100 M. If the activity of Cl– is 1.00, the potential is +0.2682 V.
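A minimal sketch of the calomel potential expression above. The chloride activity of 1.00 reproduces the +0.2682 V value quoted in the text; the activity of about 2.5 used for saturated KCl is back-calculated from the quoted SCE potential and should be treated as an assumption.

```python
import math

def E_calomel(a_Cl):
    """E = +0.2682 V - (0.05916/2) log((a_Cl)^2) = +0.2682 V - 0.05916 log(a_Cl)."""
    return 0.2682 - 0.05916 * math.log10(a_Cl)

print(f"a_Cl = 1.00           : E = {E_calomel(1.00):+.4f} V")  # +0.2682 V, as quoted
print(f"a_Cl = 2.5 (sat'd KCl): E = {E_calomel(2.5):+.4f} V")   # close to the +0.2444 V SCE value
```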
Silver/Silver Chloride Electrodes

Another common reference electrode is the silver/silver chloride electrode, which is based on the reduction of AgCl to Ag. $\operatorname{AgCl}(s)+e^{-} \rightleftharpoons \mathrm{Ag}(s)+\mathrm{Cl}^{-}(a q) \nonumber$ As is the case for the calomel electrode, the activity of Cl– determines the potential of the Ag/AgCl electrode; thus $E = E_\text{AgCl/Ag}^{\circ}-0.05916 \log a_{\text{Cl}^-} = 0.2223 \text{ V} - 0.05916 \log a_{\text{Cl}^-} \nonumber$ When prepared using a saturated solution of KCl, the electrode's potential is +0.197 V at 25oC. Another common Ag/AgCl electrode uses a solution of 3.5 M KCl and has a potential of +0.205 V at 25oC. As you might expect, the potential of a Ag/AgCl electrode using a saturated solution of KCl is more sensitive to a change in temperature than an electrode that uses an unsaturated solution of KCl. A typical Ag/AgCl electrode is shown in Figure $2$ and consists of a silver wire, the end of which is coated with a thin film of AgCl, immersed in a solution that contains the desired concentration of KCl. A porous plug serves as the salt bridge. The electrode’s shorthand notation is $\operatorname{Ag}(s) | \operatorname{Ag} \mathrm{Cl}(s), \mathrm{KCl}\left(a q, a_{\mathrm{Cl}^{-}}=x\right) \| \nonumber$

Converting Potentials Between Reference Electrodes

The standard state reduction potentials in most tables are reported relative to the standard hydrogen electrode’s potential of +0.000 V. Because we rarely use the SHE as a reference electrode, we need to convert an indicator electrode’s potential to its equivalent value when using a different reference electrode. As shown in the following example, this is easy to do.

Example $1$

The potential for an Fe3+/Fe2+ half-cell is +0.750 V relative to the standard hydrogen electrode. What is its potential if we use a saturated calomel electrode or a saturated silver/silver chloride electrode?

Solution

When we use a standard hydrogen electrode the potential of the electrochemical cell is $E_\text{cell} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}} - E_\text{SHE} = 0.750 \text{ V} -0.000 \text{ V} = 0.750 \text{ V} \nonumber$ We can use the same equation to calculate the potential if we use a saturated calomel electrode $E_\text{cell} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}} - E_\text{SCE} = 0.750 \text{ V} -0.2444 \text{ V} = 0.506 \text{ V} \nonumber$ or a saturated silver/silver chloride electrode $E_\text{cell} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}} - E_\text{Ag/AgCl} = 0.750 \text{ V} -0.197 \text{ V} = 0.553 \text{ V} \nonumber$ Figure $3$ provides a pictorial representation of the relationship between these different potentials.
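The conversion in Example 1 amounts to subtracting the new reference electrode's potential (versus the SHE) from the potential measured versus the SHE. A minimal sketch, using the reference potentials quoted above:

```python
# Reference electrode potentials vs the SHE, in volts (values from the text)
REFS = {"SHE": 0.000, "SCE (sat'd)": 0.2444, "Ag/AgCl (sat'd KCl)": 0.197}

def convert(E_vs_SHE, ref):
    """Re-express a potential measured vs the SHE against another reference."""
    return E_vs_SHE - REFS[ref]

for ref in REFS:
    print(f"vs {ref:20s}: {convert(0.750, ref):+.3f} V")
```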
In potentiometry, the potential of the indicator electrode is proportional to the analyte’s activity. Two classes of indicator electrodes are used to make potentiometric measurements: metallic electrodes, which are the subject of this section, and ion-selective electrodes, which are covered in the next section.

Electrodes of the First Kind

If we place a copper electrode in a solution that contains Cu2+, the electrode’s potential due to the reaction $\mathrm{Cu}^{2+}(a q)+2 e^{-} \rightleftharpoons \mathrm{Cu}(s) \nonumber$ is determined by the activity of Cu2+. $E=E_{\mathrm{Cu}^{2+} / \mathrm{Cu}}^{\mathrm{o}}-\frac{0.05916}{2} \log \frac{1}{a_{\mathrm{Cu}^{2+}}}=+0.3419 \mathrm{V}-\frac{0.05916}{2} \log \frac{1}{a_{\mathrm{Cu}^{2+}}} \nonumber$ If copper is the indicator electrode in a potentiometric electrochemical cell that also includes a saturated calomel reference electrode $\mathrm{SCE} \| \mathrm{Cu}^{2+}\left(a q, a_{\mathrm{Cu^{2+}}}=x\right) | \text{Cu}(s) \nonumber$ then we can use the cell potential to determine an unknown activity of Cu2+ in the indicator electrode’s half-cell $E_{\text{cell}}= E_{\text { ind }}-E_{\text {SCE }}= +0.3419 \mathrm{V}-\frac{0.05916}{2} \log \frac{1}{a_{\mathrm{Cu}^{2+}}}-0.2444 \mathrm{V} \nonumber$ An indicator electrode in which the metal is in contact with a solution containing its ion is called an electrode of the first kind. In general, if a metal, M, is in a solution of Mn+, the cell potential is $E_{\mathrm{cell}}=K-\frac{0.05916}{n} \log \frac{1}{a_{M^{n+}}}=K+\frac{0.05916}{n} \log a_{M^{n+}} \nonumber$ where K is a constant that includes the standard-state potential for the Mn+/M redox couple and the potential of the reference electrode. For a variety of reasons—including the slow kinetics of electron transfer at the metal–solution interface, the formation of metal oxides on the electrode’s surface, and interfering reactions—electrodes of the first kind are limited to the following metals: Ag, Bi, Cd, Cu, Hg, Pb, Sn, Tl, and Zn. Many of these electrodes, such as Zn, cannot be used in acidic solutions because they are easily oxidized by H+. $\mathrm{Zn}(s)+2 \mathrm{H}^{+}(a q)\rightleftharpoons \text{ H}_{2}(g)+\mathrm{Zn}^{2+}(a q) \nonumber$

Electrodes of the Second Kind

The potential of an electrode of the first kind responds to the activity of Mn+. We also can use this electrode to determine the activity of another species if it is in equilibrium with Mn+. For example, the potential of a Ag electrode in a solution of Ag+ is $E=0.7996 \mathrm{V}+0.05916 \log a_{\mathrm{Ag}^{+}} \label{second1}$ If we saturate the indicator electrode’s half-cell with AgI, the solubility reaction $\operatorname{AgI}(s)\rightleftharpoons\operatorname{Ag}^{+}(a q)+\mathrm{I}^{-}(a q) \label{second2}$ determines the concentration of Ag+; thus $a_{\mathrm{Ag}^{+}}=\frac{K_{\mathrm{sp}, \mathrm{AgI}}}{a_{\text{I}^-}} \label{second3}$ where Ksp,AgI is the solubility product for AgI. Substituting Equation \ref{second3} into Equation \ref{second1} $E=0.7996 \text{ V}+0.05916 \log \frac{K_{\text{sp, AgI}}}{a_{\text{I}^-}} \label{second4}$ shows that the potential of the silver electrode is a function of the activity of I–.
If we incorporate this electrode into a potentiometric electrochemical cell with a saturated calomel electrode $\mathrm{SCE} \| \mathrm{AgI}(s), \text{ I}^-\left(a q, a_{\text{I}^-}=x\right) | \mathrm{Ag}(\mathrm{s}) \label{second5}$ then the cell potential is $E_{\mathrm{cell}}=K-0.05916 \log a_{\text{I}^-} \label{second6}$ where K is a constant that includes the standard-state potential for the Ag+/Ag redox couple, the solubility product for AgI, and the reference electrode’s potential. If an electrode of the first kind responds to the activity of an ion in equilibrium with Mn+, we call it an electrode of the second kind. Two common electrodes of the second kind are the calomel and the silver/silver chloride reference electrodes. In an electrode of the second kind we link together a redox reaction and another reaction, such as a solubility reaction. You might wonder if we can link together more than two reactions. The short answer is yes. An electrode of the third kind, for example, links together a redox reaction and two other reactions. Such electrodes are less common and we will not consider them in this text.

Metallic Redox Electrodes

An electrode of the first kind or the second kind develops a potential as the result of a redox reaction that involves the metallic electrode. An electrode also can serve as a source of electrons or as a sink for electrons in an unrelated redox reaction, in which case we call it a redox electrode. The Pt cathode in Figure $1$ is a redox electrode because its potential is determined by the activity of Fe2+ and Fe3+ in the indicator half-cell. Note that a redox electrode’s potential often responds to the activity of more than one ion, which limits its usefulness for direct potentiometry.

Figure $1$. Potentiometric electrochemical cell in which the anode is a metallic electrode of the first kind (Ag) and the cathode is a metallic redox electrode (Pt).
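A minimal sketch of Equation \ref{second4}, the iodide response of a silver electrode of the second kind. The solubility product for AgI (about $8.3 \times 10^{-17}$) is a commonly tabulated value rather than one given in this section, so treat it as an assumption.

```python
import math

KSP_AGI = 8.3e-17   # assumed solubility product for AgI (commonly tabulated value)

def E_Ag_second_kind(a_I):
    """E = 0.7996 V + 0.05916 log(Ksp / a_I); the electrode responds to iodide."""
    return 0.7996 + 0.05916 * math.log10(KSP_AGI / a_I)

for a_I in (1.0, 1.0e-3):
    print(f"a_I = {a_I:.0e}: E = {E_Ag_second_kind(a_I):+.3f} V")
# at unit iodide activity this reduces to the AgI/Ag standard potential, about -0.152 V
```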
If metals were the only useful materials for constructing indicator electrodes, then there would be few useful applications of potentiometry. In 1906, Cremer discovered that the potential difference across a thin glass membrane is a function of pH when opposite sides of the membrane are in contact with solutions that have different concentrations of H3O+. The existence of this membrane potential led to the development of a new class of indicator electrodes, which we call ion-selective electrodes (ISEs). In addition to the glass pH electrode, ion-selective electrodes are available for a wide range of ions. It also is possible to construct a membrane electrode for a neutral analyte by using a chemical reaction to generate an ion that is monitored with an ion-selective electrode. The development of new ion-selective membrane electrodes continues to be an active area of research.

Classification of Ion-Selective Membranes

There are two broad classes of materials that are used as membranes: crystalline solid-state membranes and non-crystalline membranes. Examples of crystalline solid-state membranes are single crystals of LaF3 and polycrystalline Ag2S. Examples of non-crystalline membranes are glass and hydrophobic membranes that hold a liquid ion-exchanger. Each of these is considered below.

Properties of Ion-Selective Membranes

To be useful, an ion-selective membrane must be structurally stable (a crystalline membrane that is soluble, for example, is not structurally stable), capable of being machined to a suitable size and shape that can be incorporated into the indicator electrode in a potentiometric electrochemical cell, electrically conductive so it is possible to measure the electrochemical cell's potential, and selective toward the analyte.

The Membrane's Boundary Potential

Figure $1$ shows a typical potentiometric electrochemical cell equipped with an ion-selective electrode. The shorthand notation for this cell is $\text { ref (sample) }\left\|\mathrm{A}_{\text { samp }}\left(a q, a_{\mathrm{A_{\text{samp}}}}=x\right) | \text{membrane} | \mathrm{A}_{\text { int }}\left(a q, a_{\mathrm{A_{\text{int}}}}=y\right)\right\| \text { ref (internal) } \nonumber$ where the ion-selective membrane separates the two solutions that contain analyte with activities of x and y: the sample solution and the ion-selective electrode’s internal solution. The potential of this electrochemical cell $E_\text{cell} = E_\text{ref(int)} - E_\text{ref(samp)} + E_\text{mem} \label{membrane1}$ includes the potential of each reference electrode and the difference in potential across the membrane, Emem, which is the membrane's boundary potential. The notations ref(sample) and ref(internal) represent a reference electrode immersed in the sample and a reference electrode immersed in the ion-selective electrode's internal solution. Because the potentials of the two reference electrodes are constant, any change in Ecell reflects a change in the membrane’s boundary potential. The analyte’s interaction with the membrane generates a boundary potential if there is a difference in its activity on the membrane’s two sides. Current is carried through the membrane by the movement of either the analyte or an ion already present in the membrane’s matrix.
The membrane potential is given by the following Nernst-like equation $E_{\mathrm{mem}}=E_{\mathrm{asym}}-\frac{R T}{z F} \ln \frac{\left(a_{A}\right)_{\mathrm{int}}}{\left(a_{A}\right)_{\mathrm{samp}}} \label{membrane2}$ where (aA)samp is the analyte’s activity in the sample, (aA)int is the analyte’s activity in the ion-selective electrode’s internal solution, and z is the analyte’s charge. Ideally, Emem is zero when (aA)int = (aA)samp. The term Easym, which is an asymmetry potential, accounts for the fact that Emem usually is not zero under these conditions. For now we simply note that a difference in the analyte’s activity on the membrane's two sides results in a boundary potential. As we consider different types of ion-selective electrodes, we will explore more specifically the source of the membrane potential. Substituting Equation \ref{membrane2} into Equation \ref{membrane1}, assuming a temperature of 25oC, and rearranging gives $E_{\mathrm{cell}}=K+\frac{0.05916}{z} \log \left(a_{A}\right)_{\mathrm{samp}} \label{membrane3}$ where K is a constant that includes the potentials of the two reference electrodes, the asymmetry potential, and the analyte's activity in the internal solution. Equation \ref{membrane3} is a general equation and applies to all types of ion-selective electrodes.

Membrane Selectivity

The membrane's boundary potential results from a chemical interaction between the analyte and active sites on the membrane’s surface. Because the signal depends on a chemical process, most membranes are not selective toward a single analyte. Instead, the membrane's boundary potential is proportional to the concentration of each ion that interacts with the membrane’s active sites. We can rewrite Equation \ref{membrane3} to include the contribution to the potential of an interferent, I $E_\text{cell} = K + \frac {0.05916} {z_A} \log \left\{ a_A + K_{A,I}(a_I)^{z_A/z_I} \right\} \label{membrane4}$ where zA and zI are the charges of the analyte and the interferent, and KA,I is a selectivity coefficient that accounts for the relative response of the interferent. The selectivity coefficient is defined as $K_{A,I} = \frac {(a_A)_e} {(a_I)_e^{z_A/z_I}} \label{membrane5}$ where (aA)e and (aI)e are the activities of analyte and the interferent that yield identical cell potentials. When the selectivity coefficient is 1.00, the membrane responds equally to the analyte and the interferent. A membrane shows good selectivity for the analyte when KA,I is significantly less than 1.00. Selectivity coefficients for most commercially available ion-selective electrodes are provided by the manufacturer. If the selectivity coefficient is not known, it is easy to determine its value experimentally by preparing a series of solutions, each of which contains the same activity of interferent, (aI)add, but a different activity of analyte. As shown in Figure $2$, a plot of cell potential versus the log of the analyte’s activity has two distinct linear regions. When the analyte’s activity is significantly larger than KA,I $\times$ (aI)add, the potential is a linear function of log(aA), as given by Equation \ref{membrane3}. If KA,I $\times$ (aI)add is significantly larger than the analyte’s activity, however, the cell’s potential remains constant. The activity of analyte and interferent at the intersection of these two linear regions is used to calculate KA,I.
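A minimal sketch of Equation \ref{membrane4}, showing how an interferent shifts the measured potential. The intercept K, the activities, and the selectivity coefficient are hypothetical values chosen for illustration.

```python
import math

def E_cell(K, a_A, z_A, a_I, z_I, K_AI):
    """Cell potential with one interferent: E = K + (0.05916/zA) log(aA + K_AI * aI^(zA/zI))."""
    return K + (0.05916 / z_A) * math.log10(a_A + K_AI * a_I ** (z_A / z_I))

# hypothetical monovalent analyte (a_A = 1e-4) and interferent with K_A,I = 0.01
print(f"no interferent: {E_cell(0.100, 1e-4, 1, 0.0,  1, 0.01):+.4f} V")
print(f"a_I = 0.010   : {E_cell(0.100, 1e-4, 1, 1e-2, 1, 0.01):+.4f} V")
# here the interferent doubles the apparent analyte activity, shifting E by ~18 mV
```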
Example $1$

Sokalski and co-workers described a method for preparing ion-selective electrodes with significantly improved selectivities [Sokalski, T.; Ceresa, A.; Zwickl, T.; Pretsch, E. J. Am. Chem. Soc. 1997, 119, 11347–11348]. For example, a conventional Pb2+ ISE has a $\log K_{\text{Pb}^{2+}/\text{Mg}^{2+}}$ of –3.6. If the potential for a solution in which the activity of Pb2+ is $4.1 \times 10^{-12}$ is identical to that for a solution in which the activity of Mg2+ is 0.01025, what is the value of $\log K_{\text{Pb}^{2+}/\text{Mg}^{2+}}$ for their ISE?

Solution

Making appropriate substitutions into Equation \ref{membrane5}, we find that $K_{\text{Pb}^{2+}/\text{Mg}^{2+}} = \frac {(a_{\text{Pb}^{2+}})_e} {(a_{\text{Mg}^{2+}})_e^{z_{\text{Pb}^{2+}}/z_{\text{Mg}^{2+}}}} = \frac {4.1 \times 10^{-12}} {(0.01025)^{+2/+2}} = 4.0 \times 10^{-10} \nonumber$ The value of $\log K_{\text{Pb}^{2+}/\text{Mg}^{2+}}$, therefore, is –9.40.

The Glass Electrode for pH Measurements

The earliest ion-selective electrodes were based on the observation that a thin glass membrane separating two solutions with different levels of acidity develops a measurable difference in potential on opposite sides of the membrane. Incorporating the glass electrode into a potentiometer along with a reference electrode provides a way to measure the potential. Commercial glass membrane pH electrodes often are available in a combination form that includes both the indicator electrode and the reference electrode. The use of a single electrode greatly simplifies the measurement of pH. An example of a typical combination electrode is shown in Figure $3$.

The Composition and Structure of Glass Membranes

The first commercial glass electrodes were manufactured using Corning 015, a glass with a composition that is approximately 22% Na2O, 6% CaO, and 72% SiO2. Membranes fashioned from Corning 015 have an excellent selectivity for hydrogen ions, H+, below a pH of 9; above this pH the membrane becomes more selective for other cations and the measured pH value deviates from its actual value. Replacing Na2O and CaO with Li2O and BaO extends the useful pH range of glass membranes to pH levels greater than 12.

Origin of the Boundary Potential for a Glass Membrane

When immersed in an aqueous solution for several hours, the outer approximately 10 nm of the glass membrane’s surface becomes hydrated, resulting in the formation of negatively charged sites, —SiO–. Sodium ions, Na+, serve as counter ions. Because H+ binds more strongly to —SiO– than does Na+, they displace the sodium ions on both sides of the membrane $\mathrm{H}^{+}+-\mathrm{SiO}^{-} \mathrm{Na}^{+}\rightleftharpoons-\mathrm{SiO}^{-} \mathrm{H}^{+}+\mathrm{Na}^{+} \label{glass1}$ which explains the membrane’s selectivity for H+. The transport of charge across the membrane is carried by the Na+ ions within the glass membrane. The potential of a glass electrode obeys the equation $E_{\mathrm{cell}}=K+0.05916 \log a_{\mathrm{H}^{+}} \label{glass2}$

Alkaline and Acid Errors

As noted above, at sufficiently basic pH values a glass electrode no longer provides an accurate measure of a sample's pH as the membrane becomes more selective for other monovalent cations, such as Na+ and K+.

Example $2$

For a Corning 015 glass membrane, the selectivity coefficient KH+/Na+ is $\approx 10^{-11}$. What is the expected error if we measure the pH of a solution in which the activity of H+ is $2 \times 10^{-13}$ and the activity of Na+ is 0.05?
Solution

A solution in which the activity of H+, (aH+)act, is $2 \times 10^{-13}$ has a pH of 12.7. Because the electrode responds to both H+ and Na+, the apparent activity of H+, (aH+)app, is $(a_{\text{H}^+})_\text{app} = (a_{\text{H}^+})_\text{act} + (K_{\text{H}^+ / \text{Na}^+} \times a_{\text{Na}^+}) = 2 \times 10^{-13} + (10^{-11} \times 0.05) = 7 \times 10^{-13} \nonumber$ The apparent activity of H+ is equivalent to a pH of 12.2, an error of –0.5 pH units.

Glass pH electrodes also show deviations from ideal behavior at pH levels less than 0.5, although the reasons for this are not clear. Still, a glass electrode has a wide dynamic range for measuring pH.

Other Limitations to Glass Electrodes

Because an ion-selective electrode’s glass membrane is very thin—it is only about 50 μm thick—they must be handled with care to avoid cracks or breakage. Glass electrodes usually are stored in a storage buffer recommended by the manufacturer, which ensures that the membrane’s outer surface remains hydrated. If a glass electrode dries out, it is reconditioned by soaking for several hours in a solution that contains the analyte. The composition of a glass membrane will change over time, which affects the electrode’s performance. The average lifetime for a typical glass electrode is several years.

Glass Electrodes for Other Cations

The observation that the Corning 015 glass membrane responds to ions other than H+ led to the development of glass membranes with a greater selectivity for other cations. For example, a glass membrane with a composition of 11% Na2O, 18% Al2O3, and 71% SiO2 is used as an ion-selective electrode for Na+. Other glass ion-selective electrodes have been developed for the analysis of Li+, K+, Rb+, Cs+, $\text{NH}_4^+$, Ag+, and Tl+. Table $1$ provides several examples.

Table $1$. Representative Examples of Glass Membrane Ion-Selective Electrodes for Analytes Other Than H+
Na+: membrane of 11% Na2O, 18% Al2O3, 71% SiO2; $K_{\mathrm{Na}^{+} / \mathrm{H}^{+}}=1000$, $K_{\mathrm{Na}^{+} / \mathrm{K}^{+}}=0.001$, $K_{\mathrm{Na}^{+} / \mathrm{Li}^{+}}=0.001$
Li+: membrane of 15% Li2O, 25% Al2O3, 60% SiO2; $K_{\mathrm{Li}^{+} / \mathrm{Na}^{+}}=0.3$, $K_{\mathrm{Li}^{+} / \mathrm{K}^{+}}=0.001$
K+: membrane of 27% Na2O, 5% Al2O3, 68% SiO2; $K_{\mathrm{K}^{+} / \mathrm{Na}^{+}}=0.05$
Selectivity coefficients are approximate; values found experimentally may vary substantially from the listed values. See Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977.

Crystalline Membrane Electrodes

A solid-state ion-selective electrode has a membrane that consists of either a polycrystalline inorganic salt or a single crystal of an inorganic salt. We can fashion a polycrystalline solid-state ion-selective electrode by sealing a 1–2 mm thick pellet of Ag2S—or a mixture of Ag2S and a second silver salt or another metal sulfide—into the end of a nonconducting plastic cylinder, filling the cylinder with an internal solution that contains the analyte, and placing a reference electrode into the internal solution. Figure $4$ shows a typical design. The NaCl in a salt shaker is an example of polycrystalline material because it consists of many small crystals of sodium chloride. The NaCl salt plates used in IR spectroscopy, on the other hand, are an example of a single crystal of sodium chloride.
The membrane potential for a Ag2S pellet develops as the result of a difference in the extent of the solubility reaction $\mathrm{Ag}_{2} \mathrm{S}(s)\rightleftharpoons2 \mathrm{Ag}^{+}(a q)+\mathrm{S}^{2-}(a q) \label{ss1}$ on the membrane’s two sides, with charge carried across the membrane by Ag+ ions. When we use the electrode to monitor the activity of Ag+, the cell potential is $E_{\text {cell }}=K+0.05916 \log a_{\mathrm{Ag}^{+}} \label{ss2}$ The membrane also responds to the activity of $\text{S}^{2-}$, with a cell potential of $E_{\mathrm{cell}}=K-\frac{0.05916}{2} \log a_{\text{S}^{2-}} \label{ss3}$ If we combine an insoluble silver salt, such as AgCl, with the Ag2S, then the membrane potential also responds to the activity of Cl–, with a cell potential of $E_{\text {cell }}=K-0.05916 \log a_{\mathrm{Cl}^{-}} \label{ss4}$ By mixing Ag2S with CdS, CuS, or PbS, we can make an ion-selective electrode that responds to the activity of Cd2+, Cu2+, or Pb2+. In this case the cell potential is $E_{\mathrm{cell}}=K+\frac{0.05916}{2} \log a_{M^{2+}} \label{ss5}$ where aM2+ is the activity of the metal ion. Table $2$ provides examples of polycrystalline, Ag2S-based solid-state ion-selective electrodes. The selectivity of these ion-selective electrodes depends on the relative solubility of the compounds. A Cl– ISE using a Ag2S/AgCl membrane is more selective for Br– ($K_{\text{Cl}^-/\text{Br}^-} = 10^2$) and for I– ($K_{\text{Cl}^-/\text{I}^-} = 10^6$) because AgBr and AgI are less soluble than AgCl. If the activity of Br– is sufficiently high, AgCl at the membrane/solution interface is replaced by AgBr and the electrode’s response to Cl– decreases substantially. Most of the polycrystalline ion-selective electrodes listed in Table $2$ operate over an extended range of pH levels. The equilibrium between S2– and HS– limits the analysis for S2– to a pH range of 13–14.

Table $2$. Representative Examples of Polycrystalline Solid-State Ion-Selective Electrodes
Ag+: Ag2S membrane; $K_{\text{Ag}^+/\text{Cu}^{2+}} = 10^{-6}$, $K_{\text{Ag}^+/\text{Pb}^{2+}} = 10^{-10}$; Hg2+ interferes
Cd2+: CdS/Ag2S membrane; $K_{\text{Cd}^{2+}/\text{Fe}^{2+}} = 200$, $K_{\text{Cd}^{2+}/\text{Pb}^{2+}} = 6$; Ag+, Hg2+, and Cu2+ must be absent
Cu2+: CuS/Ag2S membrane; $K_{\text{Cu}^{2+}/\text{Fe}^{3+}} = 10$, $K_{\text{Cu}^{2+}/\text{Cu}^{+}} = 10^{-6}$; Ag+ and Hg2+ must be absent
Pb2+: PbS/Ag2S membrane; $K_{\text{Pb}^{2+}/\text{Fe}^{3+}} = 1$, $K_{\text{Pb}^{2+}/\text{Cd}^{2+}} = 1$; Ag+, Hg2+, and Cu2+ must be absent
Br–: AgBr/Ag2S membrane; $K_{\text{Br}^-/\text{I}^{-}} = 5000$, $K_{\text{Br}^-/\text{Cl}^{-}} = 0.005$, $K_{\text{Br}^-/\text{OH}^{-}} = 10^{-5}$; S2– must be absent
Cl–: AgCl/Ag2S membrane; $K_{\text{Cl}^-/\text{I}^{-}} = 10^{6}$, $K_{\text{Cl}^-/\text{Br}^{-}} = 100$, $K_{\text{Cl}^-/\text{OH}^{-}} = 0.01$; S2– must be absent
I–: AgI/Ag2S membrane; $K_{\text{I}^-/\text{S}^{2-}} = 30$, $K_{\text{I}^-/\text{Br}^{-}} = 10^{-4}$, $K_{\text{I}^-/\text{Cl}^{-}} = 10^{-6}$, $K_{\text{I}^-/\text{OH}^{-}} = 10^{-7}$
SCN–: AgSCN/Ag2S membrane; $K_{\text{SCN}^-/\text{I}^{-}} = 10^{3}$, $K_{\text{SCN}^-/\text{Br}^{-}} = 100$, $K_{\text{SCN}^-/\text{Cl}^{-}} = 0.1$, $K_{\text{SCN}^-/\text{OH}^{-}} = 0.01$; S2– must be absent
S2–: Ag2S membrane; Hg2+ must be absent
Selectivity coefficients are approximate; values found experimentally may vary substantially from the listed values. See Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977.
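A minimal sketch of the Cl– response in Equation \ref{ss4}; the cell constant K is hypothetical, since its value depends on the particular reference electrodes and membrane.

```python
import math

K = 0.250   # hypothetical cell constant, in volts

def E_cell_Cl(a_Cl):
    """E = K - 0.05916 log(a_Cl) for a Ag2S/AgCl solid-state Cl- ISE."""
    return K - 0.05916 * math.log10(a_Cl)

for a_Cl in (1e-1, 1e-2, 1e-3):
    print(f"a_Cl = {a_Cl:.0e}: E = {E_cell_Cl(a_Cl):+.4f} V")
# the potential increases by 59.16 mV for each ten-fold decrease in a_Cl
```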
The membrane of a F– ion-selective electrode is fashioned from a single crystal of LaF3, which usually is doped with a small amount of EuF2 to enhance the membrane’s conductivity. Because EuF2 provides only two F– ions—compared to the three F– ions in LaF3—each EuF2 produces a vacancy in the crystal’s lattice. Fluoride ions pass through the membrane by moving into adjacent vacancies. As shown in Figure $4$, the LaF3 membrane is sealed into the end of a non-conducting plastic cylinder, which contains a standard solution of F–, typically 0.1 M NaF, and a Ag/AgCl reference electrode. The membrane potential for a F– ISE results from a difference in the solubility of LaF3 on opposite sides of the membrane, with the potential given by $E_{\mathrm{cell}}=K-0.05916 \log a_{\mathrm{F}^-} \label{ss6}$ One advantage of the F– ion-selective electrode is its freedom from interference. The only significant exception is OH– ($K_{\text{F}^-/\text{OH}^-} = 0.1$), which imposes a maximum pH limit for a successful analysis. Below a pH of 4 the predominant form of fluoride in solution is HF, which does not contribute to the membrane potential. For this reason, an analysis for fluoride is carried out at a pH greater than 4.

Example $3$

What is the maximum pH that we can tolerate if we need to analyze a solution in which the activity of F– is $1 \times 10^{-5}$ with an error of less than 1%?

Solution

In the presence of OH– the cell potential is $E_{\mathrm{cell}}=K-0.05916 \log \left\{a_{\mathrm{F}^-}+K_{\mathrm{F}^- / \mathrm{OH}^{-}} \times a_{\mathrm{OH}^-}\right\} \nonumber$ To achieve an error of less than 1%, the term $K_{\mathrm{F}^- / \mathrm{OH}^{-}} \times a_{\mathrm{OH}^-}$ must be less than 1% of aF–; thus $K_{\mathrm{F}^- / \mathrm{OH}^-} \times a_{\mathrm{OH}^{-}} \leq 0.01 \times a_{\mathrm{F}^-} \nonumber$ $0.10 \times a_{\mathrm{OH}^{-}} \leq 0.01 \times\left(1.0 \times 10^{-5}\right) \nonumber$ Solving for aOH– gives the maximum allowable activity for OH– as $1 \times 10^{-6}$, which corresponds to a pH of less than 8.

Unlike a glass membrane ion-selective electrode, a solid-state ISE does not need to be conditioned before it is used, and it may be stored dry. The surface of the electrode is subject to poisoning, as described above for a Cl– ISE in contact with an excessive concentration of Br–. If an electrode is poisoned, it can be returned to its original condition by sanding and polishing the crystalline membrane. Poisoning simply means that the surface has been chemically modified, such as AgBr forming on the surface of a AgCl membrane.

Liquid Membrane Electrodes

Another class of ion-selective electrodes uses a hydrophobic membrane that contains a liquid organic complexing agent that reacts selectively with the analyte. Three types of organic complexing agents have been used: cation exchangers, anion exchangers, and neutral ionophores. A membrane potential exists if the analyte’s activity is different on the two sides of the membrane. Current is carried through the membrane by the analyte. An ionophore is a ligand whose exterior is hydrophobic and whose interior is hydrophilic; a crown ether is one example of a neutral ionophore. One example of a liquid-based ion-selective electrode is that for Ca2+, which uses a porous plastic membrane saturated with the cation exchanger di-(n-decyl) phosphate. As shown in Figure $5$, the membrane is placed at the end of a non-conducting cylindrical tube and is in contact with two reservoirs.
The outer reservoir contains di-(n-decyl) phosphate in di-n-octylphenylphosphonate, which soaks into the porous membrane. The inner reservoir contains a standard aqueous solution of Ca2+ and a Ag/AgCl reference electrode. Calcium ion-selective electrodes also are available in which the di-(n-decyl) phosphate is immobilized in a polyvinyl chloride (PVC) membrane that eliminates the need for the outer reservoir. The membrane potential for the Ca2+ ISE develops as the result of a difference in the extent of the complexation reaction $\mathrm{Ca}^{2+}(a q)+2\left(\mathrm{C}_{10} \mathrm{H}_{21} \mathrm{O}\right)_{2} \mathrm{PO}_{2}^{-}(mem) \rightleftharpoons \mathrm{Ca}\left[\left(\mathrm{C}_{10} \mathrm{H}_{21} \mathrm{O}\right)_{2} \mathrm{PO}_{2}\right]_2 (mem) \label{liq1}$ on the two sides of the membrane, where (mem) indicates a species that is present in the membrane. The cell potential for the Ca2+ ion-selective electrode is $E_{\mathrm{cell}}=K+\frac{0.05916}{2} \log a_{\mathrm{Ca}^{2+}} \label{liq2}$ The selectivity of this electrode for Ca2+ is very good, with only Zn2+ showing greater selectivity. Table $3$ lists the properties of several liquid-based ion-selective electrodes. An electrode using a liquid reservoir can be stored in a dilute solution of analyte and needs no additional conditioning before use. The lifetime of an electrode with a PVC membrane, however, decreases with its exposure to aqueous solutions. For this reason these electrodes are best stored by covering the membrane with a cap along with a small amount of wetted gauze to maintain a humid environment. Before using the electrode it is conditioned in a solution of analyte for 30–60 minutes.

Table $3$. Representative Examples of Liquid-Based Ion-Selective Electrodes
Ca2+: di-(n-decyl) phosphate in PVC; $K_{\text{Ca}^{2+}/\text{Zn}^{2+}} = 1-5$, $K_{\text{Ca}^{2+}/\text{Al}^{3+}} = 0.90$, $K_{\text{Ca}^{2+}/\text{Mn}^{2+}} = 0.38$, $K_{\text{Ca}^{2+}/\text{Cu}^{2+}} = 0.070$, $K_{\text{Ca}^{2+}/\text{Mg}^{2+}} = 0.032$
K+: valinomycin in PVC; $K_{\text{K}^{+}/\text{Rb}^{+}} = 1.9$, $K_{\text{K}^{+}/\text{Cs}^{+}} = 0.38$, $K_{\text{K}^{+}/\text{Li}^{+}} = 10^{-4}$
Li+: ETH 149 in PVC; $K_{\text{Li}^{+}/\text{H}^{+}} = 1$, $K_{\text{Li}^{+}/\text{Na}^{+}} = 0.03$, $K_{\text{Li}^{+}/\text{K}^{+}} = 0.007$
$\text{NH}_4^+$: nonactin and monactin in PVC; $K_{\text{NH}_4^{+}/\text{K}^{+}} = 0.12$, $K_{\text{NH}_4^{+}/\text{H}^{+}} = 0.016$, $K_{\text{NH}_4^{+}/\text{Li}^{+}} = 0.0042$, $K_{\text{NH}_4^{+}/\text{Na}^{+}} = 0.002$
$\text{ClO}_4^-$: $\text{Fe}(o\text{-phen})_3^{3+}$ in p-nitrocymene with porous membrane; $K_{\text{ClO}_4^{-}/\text{OH}^{-}} = 1$, $K_{\text{ClO}_4^{-}/\text{I}^{-}} = 0.012$, $K_{\text{ClO}_4^{-}/\text{NO}_3^{-}} = 0.0015$, $K_{\text{ClO}_4^{-}/\text{Br}^{-}} = 5.6 \times 10^{-4}$, $K_{\text{ClO}_4^{-}/\text{Cl}^{-}} = 2.2 \times 10^{-4}$
$\text{NO}_3^-$: tetradodecyl ammonium nitrate in PVC; $K_{\text{NO}_3^{-}/\text{Cl}^{-}} = 0.006$, $K_{\text{NO}_3^{-}/\text{F}^{-}} = 9 \times 10^{-4}$
Selectivity coefficients are approximate; values found experimentally may vary substantially from the listed values. See Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977.
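A minimal sketch of Equation \ref{liq2}, showing why a divalent analyte such as Ca2+ gives a slope of about 29.6 mV per decade rather than 59.2 mV; the cell constant K is hypothetical.

```python
import math

K = 0.050   # hypothetical cell constant, in volts

def E_cell_Ca(a_Ca):
    """E = K + (0.05916/2) log(a_Ca) for a Ca2+ ISE (z = +2)."""
    return K + (0.05916 / 2) * math.log10(a_Ca)

slope = (E_cell_Ca(1e-3) - E_cell_Ca(1e-4)) * 1000
print(f"slope = {slope:.2f} mV per ten-fold change in a_Ca")   # 29.58 mV
```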
The electrodes in Chapter 23.3 are selective toward ions. In this section we consider how we can incorporate an ion-selective electrode into an electrode that responds to neutral species, including volatile analytes, such as CO2 and NH3, and biochemically important compounds, such as amino acids and urea. Gas-Sensing Membrane Electrodes A number of membrane electrodes respond to the concentration of a dissolved gas. The basic design of a gas-sensing electrode, as shown in Figure $1$, consists of a thin membrane that separates the sample from an inner solution that contains an ion-selective electrode. The membrane is permeable to the gaseous analyte, but impermeable to nonvolatile components in the sample’s matrix. The gaseous analyte passes through the membrane where it reacts with the inner solution, producing a species whose concentration is monitored by the ion-selective electrode. For example, in a CO2 electrode, CO2 diffuses across the membrane where it reacts in the inner solution to produce H3O+. $\mathrm{CO}_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\text{ HCO}_{3}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q) \label{gas1}$ The change in the activity of H3O+ in the inner solution is monitored with a pH electrode, for which the cell potential, from Chapter 23.3, is $E_\text{cell} = K + 0.05916 \log a_{\ce{H+}} \label{gas2}$ To find the relationship between the activity of H3O+ in the inner solution and the activity of CO2 in the inner solution we rearrange the equilibrium constant expression for reaction \ref{gas1}; thus $a_{\mathrm{H}_{3} \mathrm{O}^{+}}=K_{\mathrm{a}} \times \frac{a_{\mathrm{CO}_{2}}}{a_{\mathrm{HCO}_{3}^{-}}} \label{gas3}$ where Ka is the equilibrium constant. If the activity of $\text{HCO}_3^-$ in the internal solution is sufficiently large, then its activity is not affected by the small amount of CO2 that passes through the membrane. Substituting Equation \ref{gas3} into Equation \ref{gas2} gives $E_{\mathrm{cell}}=K^{\prime}+0.05916 \log a_{\mathrm{CO}_{2}} \label{gas4}$ where K′ is a constant that includes the constant for the pH electrode, the equilibrium constant for reaction \ref{gas1}, and the activity of $\text{HCO}_3^-$ in the inner solution. Table $1$ lists the properties of several gas-sensing electrodes. The composition of the inner solution changes with use, and both the inner solution and the membrane must be replaced periodically. Gas-sensing electrodes are stored in a solution similar to the internal solution to minimize their exposure to atmospheric gases. Table $1$.
Representative Examples of Gas-Sensing Electrodes (each entry gives the analyte: inner solution; reaction in inner solution; ion-selective electrode)

CO2: 10 mM NaHCO3, 10 mM NaCl; $\mathrm{CO}_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l ) \rightleftharpoons \text{ HCO}_{3}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$; glass pH ISE

HCN: 10 mM KAg(CN)2; $\mathrm{HCN}(a q)+\mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons \mathrm{CN}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$; Ag2S solid-state ISE

HF: 1 M H3O+; $\mathrm{HF}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{F}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$; F− solid-state ISE

H2S: pH 5 citrate buffer; $\mathrm{H}_{2} \mathrm{S}(a q)+\text{ H}_{2} \mathrm{O}(l )\rightleftharpoons \mathrm{HS}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$; Ag2S solid-state ISE

NH3: 10 mM NH4Cl, 0.1 M KNO3; $\mathrm{NH}_{3}(a q)+\text{ H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{NH}_{4}^{+}(a q)+\text{ OH}^{-}(a q)$; glass pH ISE

NO2: 20 mM NaNO2, 0.1 M KNO3; $2 \mathrm{NO}_{2}(a q)+3 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{NO}_{3}^{-}(a q)+\text{ NO}_{2}^{-}(a q)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q)$; glass pH ISE

SO2: 1 mM NaHSO3, pH 5; $\mathrm{SO}_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons \mathrm{HSO}_{3}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$; glass pH ISE

Source: Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977.

Biocatalytic Membrane Electrodes The approach for developing gas-sensing electrodes can be modified to create potentiometric electrodes that respond to a biochemically important species. The most common class of potentiometric biosensors are enzyme electrodes, in which we trap or immobilize an enzyme at the surface of a potentiometric electrode. The analyte’s reaction with the enzyme produces a product whose concentration is monitored by the potentiometric electrode. Potentiometric biosensors also have been designed around other biologically active species, including antibodies, bacterial particles, tissues, and hormone receptors. One example of an enzyme electrode is the urea electrode, which is based on the catalytic hydrolysis of urea by urease $\mathrm{CO}\left(\mathrm{NH}_{2}\right)_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons 2 \mathrm{NH}_{4}^{+}(a q)+\text{ CO}_{3}^{2-}(a q) \label{bio1}$ Figure $2$ shows one version of the urea electrode, which modifies a gas-sensing NH3 electrode by adding a dialysis membrane that traps a pH 7.0 buffered solution of urease between the dialysis membrane and the gas-permeable membrane [(a) Papastathopoulos, D. S.; Rechnitz, G. A. Anal. Chim. Acta 1975, 79, 17–26; (b) Riechel, T. L. J. Chem. Educ. 1984, 61, 640–642]. An NH3 electrode, as shown in Table $1$, uses a gas-permeable membrane and a glass pH electrode. The NH3 diffuses across the membrane where it changes the pH of the internal solution. When immersed in the sample, urea diffuses through the dialysis membrane where it reacts with the enzyme urease to form the ammonium ion, $\text{NH}_4^+$, which is in equilibrium with NH3. $\mathrm{NH}_{4}^{+}(a q)+\mathrm{H}_{2} \mathrm{O}(l ) \rightleftharpoons \text{ H}_{3} \mathrm{O}^{+}(a q)+\text{ NH}_{3}(a q) \label{bio2}$ The NH3, in turn, diffuses through the gas-permeable membrane where a pH electrode measures the resulting change in pH.
The electrode’s response to the concentration of urea is $E_{\text {cell }}=K-0.05916 \log a_{\text {urea }} \label{bio3}$ Another version of the urea electrode (Figure $3$) immobilizes the enzyme urease in a polymer membrane formed directly on the tip of a glass pH electrode [Tor, R.; Freeman, A. Anal. Chem. 1986, 58, 1042–1046]. In this case the response of the electrode is $\mathrm{pH}=K a_{\mathrm{urea}} \label{bio4}$ Few potentiometric biosensors are available commercially. As shown in Figure $2$ and Figure $3$, however, it is possible to convert an ion-selective electrode or a gas-sensing electrode into a biosensor. Several representative examples are described in Table $2$, and additional examples can be found in this chapter’s additional resources. Table $2$. Representative Examples of Potentiometric Biosensors (each entry gives the analyte: biologically active phase; substance determined)

$5^{\prime}$-AMP: AMP-deaminase (E); NH3
L-arginine: arginase and urease (E); NH3
asparagine: asparaginase (E); $\text{NH}_4^+$
L-cysteine: Proteus morganii (B); H2S
L-glutamate: yellow squash (T); CO2
L-glutamine: Sarcina flava (B); NH3
oxalate: oxalate decarboxylase (E); CO2
penicillin: penicillinase (E); H3O+
L-phenylalanine: L-amino acid oxidase/horseradish peroxidase (E); I−
sugars: bacteria from dental plaque (B); H3O+
urea: urease (E); NH3 or H3O+

Source: Compiled from Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977 and Lunte, C. E.; Heineman, W. R. “Electrochemical Techniques in Bioanalysis,” in Steckham, E. ed. Topics in Current Chemistry, Vol. 143, Springer-Verlag: Berlin, 1988, p. 8. Abbreviations for the biologically active phase: E = enzyme; B = bacterial particle; T = tissue.
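Because the urea electrode in Figure $2$ follows Equation \ref{bio3}, a measured cell potential converts directly into an estimated urea activity once K is fixed by measuring a standard. The short Python sketch below is an illustration only; the value of K is hypothetical.

```python
# A minimal sketch, assuming a hypothetical calibration constant K, of how
# the urea electrode's response E_cell = K - 0.05916 log(a_urea) is inverted.

def urea_activity(E_cell, K):
    """Invert E_cell = K - 0.05916*log10(a_urea) for a_urea."""
    return 10 ** ((K - E_cell) / 0.05916)

K = 0.120   # hypothetical; in practice fixed by measuring a urea standard
print(f"a_urea = {urea_activity(0.260, K):.2e}")   # about 4.3e-3
```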
To measure the potential of an electrochemical cell in a way that draws essentially no current we use a potentiometer. To help us understand how a potentiometer accomplishes this, we will describe the instrument as if the analyst is operating it manually. To do so the analyst observes a change in the current or the potential and manually adjusts the instrument’s settings to maintain the desired experimental conditions. It is important to understand that modern electrochemical instruments provide an automated, electronic means for controlling and measuring current and potential, and that they do so by using very different electronic circuitry than that described here. Figure $1$ shows a schematic diagram for a manual potentiometer that consists of a power supply, an electrochemical cell with a working electrode and a counter electrode, an ammeter to measure the current that passes through the electrochemical cell, an adjustable, slide-wire resistor, and a tap key for closing the circuit through the electrochemical cell. Using Ohm’s law, the current in the upper half of the circuit is $i_{\text {upper}}=\frac{E_{\mathrm{PS}}}{R_{a b}} \label{pot1}$ where EPS is the power supply’s potential, and Rab is the resistance between points a and b of the slide-wire resistor. In a similar manner, the current in the lower half of the circuit is $i_{\text {lower}}=\frac{E_{\text {cell}}}{R_{c b}} \label{pot2}$ where Ecell is the potential difference between the working electrode and the counter electrode, and Rcb is the resistance between the points c and b of the slide-wire resistor. When $i_{\text{upper}} = i_{\text{lower}}$, no current flows through the ammeter and the potential of the electrochemical cell is $E_{\mathrm{cell}}=\frac{R_{c b}}{R_{a b}} \times E_{\mathrm{PS}} \label{pot3}$ To determine Ecell we briefly press the tap key and observe the current at the ammeter. If the current is not zero, then we adjust the slide-wire resistor and remeasure the current, continuing this process until the current is zero. When the current is zero, we use Equation \ref{pot3} to calculate Ecell. Using the tap key to briefly close the circuit through the electrochemical cell minimizes the current that passes through the cell and limits the change in the electrochemical cell’s composition. For example, passing a current of $10^{-9}$ A through the electrochemical cell for 1 s changes the concentrations of species in the cell by approximately $10^{-14}$ mol. $10^{-9} \text{ A} = 10^{-9} \text{ C/s} \label{pot4}$ $10^{-9} \text{ C/s} \times 1 \text{ s} \times \frac {1 \text{ mol}} {96485 \text{ C}} = 1.0 \times 10^{-14} \text{ mol} \label{pot5}$ Of course, trying to measure a potential in this way is tedious. Modern potentiometers use operational amplifiers to create a high-impedance voltmeter that measures the potential while drawing a current of less than $10^{-9}$ A. The relative error, $E_r$, in the measured potential is $E_r = - \frac {R_\text{cell}} {R_\text{meter} + R_\text{cell}} \label{pot6}$ where $R_\text{cell}$ is the resistance of the solution in the electrochemical cell and $R_\text{meter}$ is the resistance of the meter. For a solution with a resistance of $10 \text{ M}\Omega$, achieving a relative error of $-0.1\%$, or $-0.001$, requires an $R_\text{meter}$ of $-0.001 = \frac {-10 \text{ M}\Omega} {R_\text{meter} + 10 \text{ M}\Omega} \nonumber$ $-0.001 \times R_\text{meter} - 0.010 \text{ M}\Omega = -10 \text{ M}\Omega \nonumber$ $-0.001 \times R_\text{meter} = -9.99 \text{ M}\Omega \nonumber$ $R_\text{meter} = 9990 \text{ M}\Omega \nonumber$
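The last calculation generalizes to any target error. The sketch below solves Equation \ref{pot6} for the required meter resistance; it is an illustration, not part of the original instrument description.

```python
# A minimal sketch of the loading-error relationship E_r = -R_cell/(R_meter + R_cell),
# solved for the meter resistance needed to reach a target relative error.

def required_meter_resistance(R_cell, rel_error):
    """R_meter such that -R_cell/(R_meter + R_cell) equals rel_error (< 0)."""
    return -R_cell / rel_error - R_cell

R_cell = 10e6                                      # 10 Mohm cell resistance
print(required_meter_resistance(R_cell, -0.001))   # 9.99e9 ohm = 9990 Mohm
```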
The most important application of potentiometry is determining the concentration of an analyte in solution. Most potentiometric electrodes are selective toward the free, uncomplexed form of the analyte, and do not respond to any of the analyte’s complexed forms. This selectivity provides potentiometric electrodes with a significant advantage over other quantitative methods of analysis if we need to determine the concentration of free ions. For example, calcium is present in urine both as free Ca2+ ions and as protein-bound Ca2+ ions. If we analyze a urine sample using atomic absorption spectroscopy, the signal is proportional to the total concentration of Ca2+ because both free and bound calcium are atomized. Analyzing urine with a Ca2+ ISE, however, gives a signal that is a function of only free Ca2+ ions because the protein-bound Ca2+ cannot interact with the electrode’s membrane. In this section, we consider several important aspects of quantitative potentiometry. The Relationship Between Concentration and Potential In Chapter 23.3, we showed that the potential of an ion-selective electrode for an ion with a charge of z is $E_{\mathrm{cell}}=K+\frac{0.05916}{z} \log \left(a_{A}\right)_{\mathrm{samp}} \label{quant1}$ where K is a constant that includes the potentials of the ion-selective electrode's internal and external reference electrodes, any asymmetry potential associated with the ion-selective electrode's membrane, and the analyte's activity in the ion-selective electrode's internal solution. Equation \ref{quant1} is a general equation and applies to all types of ion-selective electrodes. Note that when the analyte is a cation, an increase in the analyte's activity results in an increase in the potential; when the analyte is an anion, which makes z a negative number, an increase in the analyte's activity results in a decrease in the potential. As the concentrations of ions in solution often are reported as pX values, where $\text{pX} = - \log a_\text{X} \label{quant2}$ it is convenient to substitute Equation \ref{quant2} into Equation \ref{quant1} $E_{\mathrm{cell}}=K - \frac{0.05916}{z} \text{ pA} \label{quant3}$ Note that for a cation, an increase in pA results in a decrease in the potential; when the analyte is an anion, an increase in pA results in an increase in the potential. Calibrating Potentiometric Electrodes To use Equation \ref{quant3} we need to determine the value of K, which we can do using one or more external standards or by the method of standard addition, both of which were covered in Chapter 1.5. One complication, of course, is that potential is a function of the analyte's activity instead of its concentration. Activity and Concentration Equation \ref{quant1} is written in terms of the analyte's activity. When we use a potentiometric electrode, however, our goal is to determine the analyte’s concentration. As we learned in Chapter 22, an ion’s activity is the product of its concentration, $[M^{n+}]$, and a matrix-dependent activity coefficient, $\gamma_{M^{n+}}$. $a_{M^{n+}}=\left[M^{n+}\right] \gamma_{M^{n+}} \label{quant4}$ Substituting Equation \ref{quant4} into Equation \ref{quant1} and rearranging, gives $E_{\mathrm{cell}}=K+\frac{0.05916}{n} \log \gamma_{M^{n+}}+\frac{0.05916}{n} \log \left[M^{n+}\right] \label{quant5}$ We can solve Equation \ref{quant5} for the metal ion’s concentration if we know the value for its activity coefficient.
Unfortunately, if we do not know the exact ionic composition of the sample’s matrix—which is the usual situation—then we cannot calculate the value of $\gamma_{M^{n+}}$. There is a solution to this dilemma. If we design our system so that the standards and the samples have an identical matrix, then the value of $\gamma_{M^{n+}}$ remains constant and Equation \ref{quant5} simplifies to $E_{\mathrm{cell}}=K^{\prime}+\frac{0.05916}{n} \log \left[M^{n+}\right] \label{quant6}$ where $K^{\prime}$ includes the activity coefficient. Calibration Using External Standards In the absence of interferents, a calibration curve of Ecell versus log aA, where A is the analyte, is a straight line. A plot of Ecell versus log[A], however, may show curvature at higher concentrations of analyte as a result of a matrix-dependent change in the analyte’s activity coefficient. To maintain a consistent matrix we add a high concentration of an inert electrolyte to all samples and standards. If the concentration of added electrolyte is sufficient, then any difference between the sample’s matrix and the matrix of the standards has little effect on the ionic strength, and the activity coefficient remains essentially constant. The inert electrolyte added to the sample and the standards is called a total ionic strength adjustment buffer (TISAB). Example $1$ The concentration of Ca2+ in a water sample is determined using the method of external standards. The ionic strength of the samples and the standards is maintained at a nearly constant level by making each solution 0.5 M in KNO3. The measured cell potentials for the external standards are shown in the following table.

[Ca2+] (M): Ecell (V)
$1.00 \times 10^{-5}$: –0.125
$5.00 \times 10^{-5}$: –0.103
$1.00 \times 10^{-4}$: –0.093
$5.00 \times 10^{-4}$: –0.072
$1.00 \times 10^{-3}$: –0.063
$5.00 \times 10^{-3}$: –0.043
$1.00 \times 10^{-2}$: –0.033

What is the concentration of Ca2+ in a water sample if its cell potential is found to be –0.084 V? Solution Linear regression gives the calibration curve in Figure $1$, with an equation of $E_{\mathrm{cell}}=0.027+0.0303 \log \left[\mathrm{Ca}^{2+}\right] \nonumber$ Substituting the sample’s cell potential gives the concentration of Ca2+ as $2.17 \times 10^{-4}$ M. Note that the slope of the calibration curve, which is 0.0303, is slightly larger than its ideal value of 0.05916/2 = 0.02958; this is not unusual and is one reason for using multiple standards. One reason that it is not unusual to find that the experimental slope deviates from its ideal value of 0.05916/n is that this ideal value assumes that the temperature is 25°C.
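The regression in Example $1$ is easy to reproduce. The sketch below, assuming NumPy is available, fits the standards and inverts the fit for the unknown; the numbers are those from the example.

```python
import numpy as np

# A minimal sketch of the external-standards calibration in Example 1:
# fit E_cell versus log[Ca2+] and invert the fit for an unknown.

conc = np.array([1.00e-5, 5.00e-5, 1.00e-4, 5.00e-4, 1.00e-3, 5.00e-3, 1.00e-2])
E    = np.array([-0.125, -0.103, -0.093, -0.072, -0.063, -0.043, -0.033])

slope, intercept = np.polyfit(np.log10(conc), E, 1)   # E = intercept + slope*log[Ca2+]
print(f"E_cell = {intercept:.3f} + {slope:.4f} log[Ca2+]")

E_samp = -0.084
conc_samp = 10 ** ((E_samp - intercept) / slope)
print(f"[Ca2+] = {conc_samp:.2e} M")                  # about 2.2e-4 M
```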
Calibration Using Standard Additions Another approach to calibrating a potentiometric electrode is the method of standard additions, which was introduced in Chapter 1.5. First, we transfer a sample with a volume of Vsamp and an analyte concentration of Csamp into a beaker and measure the potential, (Ecell)samp. Next, we make a standard addition by adding to the sample a small volume, Vstd, of a standard that contains a known concentration of analyte, Cstd, and measure the potential, (Ecell)std. If Vstd is significantly smaller than Vsamp, then we can safely ignore the change in the sample’s matrix and assume that the analyte’s activity coefficient is constant. Example $2$ demonstrates how we can use a one-point standard addition to determine the concentration of analyte in a sample. Example $2$ The concentration of Ca2+ in a sample of sea water is determined using a Ca2+ ion-selective electrode and a one-point standard addition. A 10.00-mL sample is transferred to a 100-mL volumetric flask and diluted to volume. A 50.00-mL aliquot of the sample is placed in a beaker with the Ca2+ ISE and a reference electrode, and the potential is measured as –0.05290 V. After adding a 1.00-mL aliquot of a $5.00 \times 10^{-2}$ M standard solution of Ca2+ the potential is –0.04417 V. What is the concentration of Ca2+ in the sample of sea water? Solution To begin, we write the Nernst equation before and after adding the standard addition. The cell potential for the sample is $\left(E_{\mathrm{cell}}\right)_{\mathrm{samp}}=K+\frac{0.05916}{2} \log C_{\mathrm{samp}} \nonumber$ and that following the standard addition is $\left(E_{\mathrm{cell}}\right)_{\mathrm{std}}=K+\frac{0.05916}{2} \log \left\{ \frac {V_\text{samp}} {V_\text{tot}}C_\text{samp} + \frac {V_\text{std}} {V_\text{tot}}C_\text{std} \right\} \nonumber$ where Vtot is the total volume (Vsamp + Vstd) after the standard addition. Subtracting the first equation from the second equation gives $\Delta E = \left(E_{\mathrm{cell}}\right)_{\mathrm{std}} - \left(E_{\mathrm{cell}}\right)_{\mathrm{samp}} = \frac{0.05916}{2} \log \left\{ \frac {V_\text{samp}} {V_\text{tot}}C_\text{samp} + \frac {V_\text{std}} {V_\text{tot}}C_\text{std} \right\} - \frac{0.05916}{2}\log C_\text{samp} \nonumber$ Rearranging this equation leaves us with $\frac{2 \Delta E}{0.05916} = \log \left\{ \frac {V_\text{samp}} {V_\text{tot}} + \frac {V_\text{std}C_\text{std}} {V_\text{tot}C_\text{samp}} \right\} \nonumber$ Substituting known values for $\Delta E$, Vsamp, Vstd, Vtot and Cstd gives $\frac{2 \times\{-0.04417-(-0.05290)\}}{0.05916}=\log \left\{\frac{50.00 \text{ mL}}{51.00 \text{ mL}}+\frac{(1.00 \text{ mL})\left(5.00 \times 10^{-2} \mathrm{M}\right)}{(51.00 \text{ mL}) C_{\mathrm{samp}}}\right\} \nonumber$ $0.2951=\log \left\{0.9804+\frac{9.804 \times 10^{-4}}{C_{\mathrm{samp}}}\right\} \nonumber$ and taking the inverse log of both sides gives $1.973=0.9804+\frac{9.804 \times 10^{-4}}{C_{\text {samp }}} \nonumber$ Finally, solving for Csamp gives the concentration of Ca2+ as $9.88 \times 10^{-4}$ M. Because we diluted the original sample of seawater by a factor of 10, the concentration of Ca2+ in the seawater sample is $9.88 \times 10^{-3}$ M.
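The algebra in Example $2$ can be wrapped in a small function. The sketch below solves the two Nernst equations for Csamp; it assumes the same volumes, potentials, and charge (z = 2) as in the example.

```python
# A minimal sketch of the one-point standard addition in Example 2.

def standard_addition(E1, E2, V_samp, V_std, C_std, z=2):
    """Solve the before/after Nernst equations for C_samp (volumes in mL, M)."""
    V_tot = V_samp + V_std
    lhs = 10 ** (z * (E2 - E1) / 0.05916)
    return (V_std * C_std / V_tot) / (lhs - V_samp / V_tot)

C = standard_addition(-0.05290, -0.04417, 50.00, 1.00, 5.00e-2)
print(f"C_samp = {C:.2e} M")     # 9.88e-4 M; x10 dilution gives 9.88e-3 M
```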
The Operational Definition of pH With the availability of inexpensive glass pH electrodes and pH meters, the determination of pH is one of the most common quantitative analytical measurements. The potentiometric determination of pH, however, is not without complications, several of which we discuss in this section. One complication is confusion over the meaning of pH [Kristensen, H. B.; Saloman, A.; Kokholm, G. Anal. Chem. 1991, 63, 885A–891A]. The conventional definition of pH in most general chemistry textbooks is given in terms of the concentration of H+ $\mathrm{pH}=-\log \left[\mathrm{H}^{+}\right] \label{quant7}$ As we now know, when we measure pH it actually is a measure of the activity of H+. $\mathrm{pH}=-\log a_{\mathrm{H}^{+}} \label{quant8}$ Try this experiment—find several general chemistry textbooks and look up pH in each textbook’s index. Turn to the appropriate pages and see how it is defined. Next, look up activity or activity coefficient in each textbook’s index and see if these terms are indexed. Equation \ref{quant7} only approximates the true pH. If we calculate the pH of 0.1 M HCl using Equation \ref{quant7}, we obtain a value of 1.00; the solution’s actual pH, as defined by Equation \ref{quant8}, is 1.1 [Hawkes, S. J. J. Chem. Educ. 1994, 71, 747–749]. The activity and the concentration of H+ are not the same in 0.1 M HCl because the activity coefficient for H+ is not 1.00 in this matrix. Figure $2$ shows a more colorful demonstration of the difference between activity and concentration. A second complication in measuring pH is the uncertainty in the relationship between potential and activity. For a glass membrane electrode, the cell potential, (Ecell)samp, for a sample of unknown pH is $(E_{\text{cell}})_\text {samp} = K-\frac{R T}{F} \ln \frac{1}{a_{\mathrm{H}^{+}}}=K-\frac{2.303 R T}{F} \mathrm{pH}_{\mathrm{samp}} \label{quant9}$ where K includes the potential of the reference electrode, the asymmetry potential of the glass membrane, and any junction potentials in the electrochemical cell. All the contributions to K are subject to uncertainty, and may change from day-to-day, as well as from electrode-to-electrode. For this reason, before using a pH electrode we calibrate it using a standard buffer of known pH. The cell potential for the standard, (Ecell)std, is $\left(E_{\text {cell}}\right)_{\text {std}}=K-\frac{2.303 R T}{F} \mathrm{pH}_{\mathrm{std}} \label{quant10}$ where pHstd is the standard’s pH. Subtracting Equation \ref{quant10} from Equation \ref{quant9} and solving for pHsamp gives $\text{pH}_\text{samp} = \text{pH}_\text{std} - \frac{\left\{\left(E_{\text {cell}}\right)_{\text {samp}}-\left(E_{\text {cell}}\right)_{\text {std}}\right\} F}{2.303 R T} \label{quant11}$ which is the operational definition of pH adopted by the International Union of Pure and Applied Chemistry [Covington, A. K.; Bates, R. B.; Durst, R. A. Pure & Appl. Chem. 1985, 57, 531–542]. Calibrating a pH electrode presents a third complication because we need a standard with an accurately known activity for H+. Table $1$ provides pH values for several primary standard buffer solutions accepted by the National Institute of Standards and Technology. Table $1$. pH Values for Selected NIST Primary Standard Buffers (columns, left to right: saturated (at 25°C) KHC4H4O7, tartrate; 0.05 m KH2C6H5O7, citrate; 0.05 m KHC8H4O4, phthalate; 0.025 m KH2PO4 + 0.025 m Na2HPO4; 0.008695 m KH2PO4 + 0.03043 m Na2HPO4; 0.01 m Na2B4O7, borate; 0.025 m NaHCO3 + 0.025 m Na2CO3, carbonate; n/a marks temperatures at which the tartrate buffer's pH is not reported)

temp (°C): tartrate, citrate, phthalate, phosphate (1:1), phosphate (1:3.5), borate, carbonate
0: n/a, 3.863, 4.003, 6.984, 7.534, 9.464, 10.317
5: n/a, 3.840, 3.999, 6.951, 7.500, 9.395, 10.245
10: n/a, 3.820, 3.998, 6.923, 7.472, 9.332, 10.179
15: n/a, 3.802, 3.999, 6.900, 7.448, 9.276, 10.118
20: n/a, 3.788, 4.002, 6.881, 7.429, 9.225, 10.062
25: 3.557, 3.776, 4.008, 6.865, 7.413, 9.180, 10.012
30: 3.552, 3.766, 4.015, 6.854, 7.400, 9.139, 9.966
35: 3.549, 3.759, 4.024, 6.844, 7.389, 9.102, 9.925
40: 3.547, 3.753, 4.035, 6.838, 7.380, 9.068, 9.889
45: 3.547, 3.750, 4.047, 6.834, 7.373, 9.038, 9.856
50: 3.549, 3.749, 4.060, 6.833, 7.367, 9.011, 9.828

Source: Values taken from Bates, R. G. Determination of pH: Theory and Practice, 2nd ed. Wiley: New York, 1973. See also Buck, R. P., et al. “Measurement of pH. Definition, Standards, and Procedures,” Pure Appl. Chem. 2002, 74, 2169–2200. All concentrations are molal (m). To standardize a pH electrode using two buffers, choose one near a pH of 7 and one that is more acidic or basic depending on your sample’s expected pH. Rinse your pH electrode in deionized water, blot it dry with a laboratory wipe, and place it in the buffer with the pH closest to 7. Swirl the pH electrode and allow it to equilibrate until you obtain a stable reading.
Adjust the “Standardize” or “Calibrate” knob until the meter displays the correct pH. Rinse and dry the electrode, and place it in the second buffer. After the electrode equilibrates, adjust the “Slope” or “Temperature” knob until the meter displays the correct pH. Some pH meters can compensate for a change in temperature. To use this feature, place a temperature probe in the sample and connect it to the pH meter. Adjust the “Temperature” knob to the solution’s temperature and calibrate the pH meter using the “Calibrate” and “Slope” controls. As you are using the pH electrode, the pH meter compensates for any change in the sample’s temperature by adjusting the slope of the calibration curve using a Nernstian response of 2.303RT/F.
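Equation \ref{quant11} is straightforward to apply numerically. In the sketch below the buffer and sample potentials are hypothetical readings, and the temperature defaults to 25°C; this is an illustration, not part of the original procedure.

```python
# A minimal sketch of the operational definition of pH (Equation quant11):
# pH_samp = pH_std - (E_samp - E_std) * F / (2.303 * R * T)

F = 96485.0    # Faraday's constant, C/mol
R = 8.314      # gas constant, J/(mol K)

def pH_sample(E_samp, E_std, pH_std, T=298.15):
    return pH_std - (E_samp - E_std) * F / (2.303 * R * T)

# hypothetical readings: a pH 7.00 buffer gives +0.010 V, the sample -0.050 V
print(f"pH = {pH_sample(-0.050, 0.010, 7.00):.2f}")   # about 8.01
```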
In a potentiometric method of analysis we determine an analyte’s concentration by measuring the potential of an electrochemical cell under static conditions in which no current flows and the concentrations of species in the electrochemical cell remain fixed. Dynamic techniques, in which current passes through the electrochemical cell and concentrations change, also are important electrochemical methods of analysis. In this chapter we consider coulometry. Voltammetry and amperometry are covered in Chapter 25. • 24.1: Introduction to Coulometry Coulometry is based on an exhaustive electrolysis of the analyte. By exhaustive we mean that the analyte is oxidized or reduced completely at the working electrode, or that it reacts completely with a reagent generated at the working electrode. There are two forms of coulometry: controlled-potential coulometry, in which we apply a constant potential to the electrochemical cell, and controlled-current coulometry, in which we pass a constant current through the electrochemical cell. • 24.2: Controlled-Potential Coulometry In this section we consider the experimental parameters and instrumentation needed to develop a controlled-potential coulometric method of analysis and its applications. • 24.3: Controlled-Current Coulometry Controlled-current coulometry has two advantages over controlled-potential coulometry. First, the analysis time is shorter because the current does not decrease over time. A typical analysis time for controlled-current coulometry is less than 10 min, compared to approximately 30–60 min for controlled-potential coulometry. Second, because the total charge is simply the product of current and time, there is no need to integrate the current-time curve. 24: Coulometry Coulometry is based on an exhaustive electrolysis of the analyte. By exhaustive we mean the analyte is oxidized or reduced completely at the working electrode, or reacts completely with a reagent generated at the working electrode. There are two forms of coulometry: controlled-potential coulometry, in which we apply a constant potential to the electrochemical cell, and controlled-current coulometry, in which we pass a constant current through the electrochemical cell. During an electrolysis, the total charge, Q, in coulombs, that passes through the electrochemical cell is proportional to the absolute amount of analyte by Faraday’s law $Q=n F N_{A} \label{intro1}$ where n is the number of electrons per mole of analyte, F is Faraday’s constant (96,485 C/mol), and NA is the moles of analyte. A coulomb is equivalent to an A·s; thus, for a constant current, i, the total charge is $Q=i t_{e} \label{intro2}$ where te is the electrolysis time. If the current varies with time, as it does in controlled-potential coulometry, then the total charge is $Q=\int_{0}^{t_e} i(t) d t \label{intro3}$ In coulometry, we monitor current as a function of time and use either Equation \ref{intro2} or Equation \ref{intro3} to calculate Q. Knowing the total charge, we then use Equation \ref{intro1} to determine the moles of analyte. To obtain an accurate value for NA, all the current must oxidize or reduce the analyte; that is, coulometry requires 100% current efficiency or an accurate measurement of the current efficiency using a standard. Current efficiency is the percentage of current that actually leads to the analyte’s oxidation or reduction.
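The charge-to-moles bookkeeping in Equations \ref{intro1}–\ref{intro3} translates directly into code. The following sketch is an illustration only; the constant-current case uses Equation \ref{intro2}, and the time-varying case approximates Equation \ref{intro3} with a trapezoidal sum.

```python
import numpy as np

F = 96485.0   # Faraday's constant, C/mol

def moles_constant_current(i, t_e, n):
    """Moles of analyte from a constant-current electrolysis: Q = i*t_e."""
    return i * t_e / (n * F)

def moles_from_current_trace(t, i, n):
    """Approximate Q = integral of i(t) dt with a trapezoidal sum."""
    Q = np.sum((i[1:] + i[:-1]) / 2 * np.diff(t))
    return Q / (n * F)

print(moles_constant_current(0.010, 120.0, 2))   # 6.2e-6 mol
```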
24.02: An Introduction to Coulometric Methods of Analysis The easiest way to ensure 100% current efficiency is to hold the working electrode at a constant potential where the analyte is oxidized or reduced completely and where no potentially interfering species are oxidized or reduced. As electrolysis progresses, the analyte’s concentration and the current decrease. The resulting current-versus-time profile for controlled-potential coulometry is shown in Figure $1$. Integrating the area under the curve from t = 0 to t = te gives the total charge. In this section we consider the experimental parameters and instrumentation needed to develop a controlled-potential coulometric method of analysis and its applications. Selecting a Constant Potential To understand how an appropriate potential for the working electrode is selected, let’s develop a constant-potential coulometric method for Cu2+ based on its reduction to copper metal at a Pt working electrode. $\mathrm{Cu}^{2+}(a q)+2 e^{-} \rightleftharpoons \mathrm{Cu}(s) \label{cp1}$ Figure $2$ shows the three reduction reactions that can take place in an aqueous solution of Cu2+ and their standard state reduction potentials: the reduction of O2 to H2O, the reduction of Cu2+ to Cu, and the reduction of H3O+ to H2. From the diagram we know that reaction \ref{cp1} is favored when the working electrode’s potential is more negative than +0.342 V versus the standard hydrogen electrode. To ensure a 100% current efficiency, however, the potential must be sufficiently more positive than +0.000 V so that the reduction of H3O+ to H2 does not contribute significantly to the total current flowing through the electrochemical cell. We can use the Nernst equation for reaction \ref{cp1} to estimate the minimum potential for quantitatively reducing Cu2+. $E=E_{\mathrm{Cu}^{2+} / \mathrm{Cu}}^{\mathrm{o}}-\frac{0.05916}{2} \log \frac{1}{\left[\mathrm{Cu}^{2+}\right]} \label{cp2}$ So why are we using the concentration of Cu2+ in Equation \ref{cp2} instead of its activity as we did in Chapter 23 when we considered potentiometry? In potentiometry we used activity because we used Ecell to determine the analyte’s concentration. Here we use the Nernst equation to help us select an appropriate potential. Once we identify a potential, we can adjust its value as needed to ensure a quantitative reduction of Cu2+. In addition, in coulometry the analyte’s concentration is given by the total charge, not the applied potential. If we define a quantitative electrolysis as one in which we reduce 99.99% of Cu2+ to Cu, then the concentration of Cu2+ at te is $\left[\mathrm{Cu}^{2+}\right]_{t_{e}}=0.0001 \times\left[\mathrm{Cu}^{2+}\right]_{0} \label{cp3}$ where [Cu2+]0 is the initial concentration of Cu2+ in the sample. Substituting Equation \ref{cp3} into Equation \ref{cp2} allows us to calculate the desired potential. $E=E_{\mathrm{Cu}^{2+} / \mathrm{Cu}}^{\circ}-\frac{0.05916}{2} \log \frac{1}{0.0001 \times\left[\mathrm{Cu}^{2+}\right]_0} \label{cp4}$ If the initial concentration of Cu2+ is $1.00 \times 10^{-4}$ M, for example, then the working electrode’s potential must be more negative than +0.105 V to quantitatively reduce Cu2+ to Cu. Note that at this potential H3O+ is not reduced to H2, maintaining 100% current efficiency.
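Equation \ref{cp4} is simple to evaluate numerically. The sketch below reproduces the worked value for an initial Cu2+ concentration of $1.00 \times 10^{-4}$ M; the 99.99% completeness criterion follows the text.

```python
import math

# A minimal sketch of Equation cp4: the least-negative potential that still
# reduces 99.99% of the analyte, using the Nernst equation.

def quantitative_potential(E0, n, C0, completeness=0.9999):
    """Potential needed to leave (1 - completeness) of the initial C0."""
    C_final = (1 - completeness) * C0
    return E0 - (0.05916 / n) * math.log10(1 / C_final)

print(f"{quantitative_potential(0.342, 2, 1.00e-4):+.3f} V")   # +0.105 V
```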
Many controlled-potential coulometric methods for Cu2+ use a potential that is negative relative to the standard hydrogen electrode—see, for example, Rechnitz, G. A. Controlled-Potential Analysis, Macmillan: New York, 1963, p. 49. Based on Figure $2$ you might expect that applying a potential <0.000 V will partially reduce H3O+ to H2, resulting in a current efficiency that is less than 100%. The reason we can use such a negative potential is that the reaction rate for the reduction of H3O+ to H2 is very slow at a Pt electrode. This results in a significant overpotential—the need to apply a potential more positive or more negative than that predicted by thermodynamics—which, in effect, shifts the potential at which H3O+ is reduced to a value more negative than the Eo for the H3O+/H2 redox couple. Minimizing Electrolysis Time In controlled-potential coulometry, as shown in Figure $1$, the current decreases over time. As a result, the rate of electrolysis—recall from Chapter 22 that current is a measure of rate—becomes slower and an exhaustive electrolysis of the analyte may require a long time. Because time is an important consideration when designing an analytical method, we need to consider the factors that affect the analysis time. We can approximate how the current changes as a function of time (Figure $1$) as an exponential decay; thus, the current at time t is $i_{t}=i_{0} e^{-k t} \label{cp5}$ where i0 is the current at t = 0 and k is a rate constant that is directly proportional to the area of the working electrode and the rate of stirring, and that is inversely proportional to the volume of solution. For an exhaustive electrolysis in which we oxidize or reduce 99.99% of the analyte, the current at the end of the analysis, te, is $i_{t_{e}} \leq 0.0001 \times i_{0} \label{cp6}$ Substituting Equation \ref{cp6} into Equation \ref{cp5} and solving for te gives the minimum time for an exhaustive electrolysis as $t_{e}=-\frac{1}{k} \times \ln (0.0001)=\frac{9.21}{k} \label{cp7}$ From this equation we see that a larger value for k reduces the analysis time. For this reason we usually carry out a controlled-potential coulometric analysis in a small volume electrochemical cell, using an electrode with a large surface area, and with a high stirring rate. A quantitative electrolysis typically requires approximately 30–60 min, although shorter or longer times are possible. Instrumentation We can use the three-electrode potentiostat in Figure $3$ to set and control the potential in controlled-potential coulometry. The potential of the working electrode is measured relative to a constant-potential reference electrode that is connected to the working electrode through a high-impedance potentiometer. To set the working electrode’s potential we adjust the slide-wire resistor that is connected to the auxiliary electrode. If the working electrode’s potential begins to drift, we adjust the slide-wire resistor to return the potential to its initial value. The current flowing between the auxiliary electrode and the working electrode is measured with an ammeter. Of course, a modern potentiostat uses operational amplifiers to maintain the constant potential without our intervention. The working electrode is usually one of two types: a cylindrical Pt electrode manufactured from platinum-gauze (Figure $4$), or a Hg pool electrode. The large overpotential for the reduction of H3O+ at Hg makes it the electrode of choice for an analyte that requires a negative potential. For example, a potential more negative than –1 V versus the SHE is feasible at a Hg electrode—but not at a Pt electrode—even in a very acidic solution. Because mercury is easy to oxidize, it is less useful if we need to maintain a potential that is positive with respect to the SHE.
Platinum is the working electrode of choice when we need to apply a positive potential. The auxiliary electrode, which often is a Pt wire, is separated by a salt bridge from the analytical solution. This is necessary to prevent the electrolysis products generated at the auxiliary electrode from reacting with the analyte and interfering in the analysis. A saturated calomel or Ag/AgCl electrode serves as the reference electrode. The other essential need for controlled-potential coulometry is a means for determining the total charge. One method is to monitor the current as a function of time and determine the area under the curve, as shown in Figure $1$. Modern instruments use electronic integration to monitor charge as a function of time. The total charge at the end of the electrolysis is read directly from a digital readout. Electrogravimetry If the product of controlled-potential coulometry forms a deposit on the working electrode, then we can use the change in the electrode’s mass as the analytical signal. For example, if we apply a potential that reduces Cu2+ to Cu at a Pt working electrode, the difference in the electrode’s mass before and after electrolysis is a direct measurement of the amount of copper in the sample. An analytical technique that uses mass as the analytical signal is a gravimetric technique; thus, we call this application of controlled-potential coulometry electrogravimetry (a short numerical sketch follows Table $1$). Quantitative Applications The majority of controlled-potential coulometric analyses involve the determination of inorganic cations and anions, including trace metals and halide ions. Table $1$ summarizes several of these methods. Table $1$. Representative Controlled-Potential Coulometric Analyses for Inorganic Ions (each entry gives the analyte: electrolytic reaction; electrode)

antimony: $\text{Sb}(\text{III}) + 3 e^{-} \rightleftharpoons \text{Sb}$; Pt
arsenic: $\text{As}(\text{III}) \rightleftharpoons \text{As(V)} + 2 e^{-}$; Pt
cadmium: $\text{Cd(II)} + 2 e^{-} \rightleftharpoons \text{Cd}$; Pt or Hg
cobalt: $\text{Co(II)} + 2 e^{-} \rightleftharpoons \text{Co}$; Pt or Hg
copper: $\text{Cu(II)} + 2 e^{-} \rightleftharpoons \text{Cu}$; Pt or Hg
halides (X−): $\text{Ag} + \text{X}^- \rightleftharpoons \text{AgX} + e^-$; Ag
iron: $\text{Fe(II)} \rightleftharpoons \text{Fe(III)} + e^-$; Pt
lead: $\text{Pb(II)} + 2 e^{-} \rightleftharpoons \text{Pb}$; Pt or Hg
nickel: $\text{Ni(II)} + 2 e^{-} \rightleftharpoons \text{Ni}$; Pt or Hg
plutonium: $\text{Pu(III)} \rightleftharpoons \text{Pu(IV)} + e^-$; Pt
silver: $\text{Ag(I)} + e^{-} \rightleftharpoons \text{Ag}$; Pt
tin: $\text{Sn(II)} + 2 e^{-} \rightleftharpoons \text{Sn}$; Pt
uranium: $\text{U(VI)} + 2 e^{-} \rightleftharpoons \text{U(IV)}$; Pt or Hg
zinc: $\text{Zn(II)} + 2 e^{-} \rightleftharpoons \text{Zn}$; Pt or Hg

Source: Rechnitz, G. A. Controlled-Potential Analysis, Macmillan: New York, 1963. Electrolytic reactions are written in terms of the change in the analyte’s oxidation state. The actual species in solution depends on the analyte.
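For electrogravimetry, as noted above, the signal is a mass difference rather than a charge. The sketch below is an illustration with hypothetical electrode masses for a Cu deposit.

```python
# A minimal sketch of electrogravimetry: converting a deposit's mass change
# into moles of analyte, here for Cu deposited from Cu2+.

M_Cu = 63.546                                 # g/mol
mass_before, mass_after = 10.1034, 10.1317    # hypothetical electrode masses, g

moles_Cu = (mass_after - mass_before) / M_Cu
print(f"{moles_Cu:.3e} mol Cu deposited")     # 4.453e-4 mol
```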
The ability to control selectivity by adjusting the working electrode’s potential makes controlled-potential coulometry particularly useful for the analysis of alloys. For example, we can determine the composition of an alloy that contains Ag, Bi, Cd, and Sb by dissolving the sample and placing it in a matrix of 0.2 M H2SO4 along with a Pt working electrode and a Pt counter electrode. If we apply a constant potential of +0.40 V versus the SCE, Ag(I) deposits on the electrode as Ag and the other metal ions remain in solution. When electrolysis is complete, we use the total charge to determine the amount of silver in the alloy. Next, we shift the working electrode’s potential to –0.08 V versus the SCE, depositing Bi on the working electrode. When the coulometric analysis for bismuth is complete, we determine antimony by shifting the working electrode’s potential to –0.33 V versus the SCE, depositing Sb. Finally, we determine cadmium following its electrodeposition on the working electrode at a potential of –0.80 V versus the SCE. We also can use controlled-potential coulometry for the quantitative analysis of organic compounds, although the number of applications is significantly less than that for inorganic analytes. One example is the six-electron reduction of a nitro group, –NO2, to a primary amine, –NH2, at a mercury electrode. A solution of picric acid—also known as 2,4,6-trinitrophenol, or TNP, a close relative of TNT—is analyzed by reducing it to triaminophenol. Another example is the successive reduction of trichloroacetate to dichloroacetate, and of dichloroacetate to monochloroacetate $\text{Cl}_3\text{CCOO}^-(aq) + \text{H}_3\text{O}^+(aq) + 2 e^- \rightleftharpoons \text{Cl}_2\text{HCCOO}^-(aq) + \text{Cl}^-(aq) + \text{H}_2\text{O}(l) \nonumber$ $\text{Cl}_2\text{HCCOO}^-(aq) + \text{ H}_3\text{O}^+(aq) + 2 e^- \rightleftharpoons \text{ ClH}_2\text{CCOO}^-(aq) + \text{ Cl}^-(aq) + \text{H}_2\text{O}(l) \nonumber$ We can analyze a mixture of trichloroacetate and dichloroacetate by selecting an initial potential where only the more easily reduced trichloroacetate reacts. When its electrolysis is complete, we can reduce dichloroacetate by adjusting the potential to a more negative potential. The total charge for the first electrolysis gives the amount of trichloroacetate, and the difference in total charge between the first electrolysis and the second electrolysis gives the amount of dichloroacetate. Example $1$ One useful application of controlled-potential coulometry is determining the number of electrons involved in a redox reaction. To make the determination, we complete a controlled-potential coulometric analysis using a known amount of a pure compound. The total charge at the end of the electrolysis is used to determine the value of n using Faraday’s law. A 0.3619-g sample of tetrachloropicolinic acid, C6HNO2Cl4, is dissolved in distilled water, transferred to a 1000-mL volumetric flask, and diluted to volume. An exhaustive controlled-potential electrolysis of a 10.00-mL portion of this solution at a spongy silver cathode requires 5.374 C of charge. What is the value of n for this reduction reaction? Solution The 10.00-mL portion of sample contains 3.619 mg, or $1.39 \times 10^{-5}$ mol of tetrachloropicolinic acid. Solving for n gives $n=\frac{Q}{F N_{A}}=\frac{5.374 \text{ C}}{\left(96485 \text{ C/mol } e^{-}\right)\left(1.39 \times 10^{-5} \text{ mol } \mathrm{C}_{6} \mathrm{HNO}_{2} \mathrm{Cl}_{4}\right)} = 4.01 \text{ mol } e^-/\text{mol } \mathrm{C}_{6} \mathrm{HNO}_{2} \mathrm{Cl}_{4} \nonumber$ Thus, reducing a molecule of tetrachloropicolinic acid requires four electrons. The overall reaction results in the selective formation of 3,6-dichloropicolinic acid.
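The same calculation is easy to check numerically. In the sketch below the molar mass of C6HNO2Cl4 (about 260.9 g/mol) is computed from atomic masses; it is an assumption insofar as the example does not state it.

```python
# A minimal sketch of Example 1: using Q = n F N_A to find the number of
# electrons in the reduction of tetrachloropicolinic acid.

F = 96485.0
Q = 5.374                                     # C, total charge
M = 260.89                                    # g/mol, from atomic masses
moles = (0.3619 / M) * (10.00 / 1000.0)       # 10.00 mL of a 1000-mL solution

n = Q / (F * moles)
print(f"n = {n:.2f} electrons per molecule")  # about 4
```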
A second approach to coulometry is to use a constant current in place of a constant potential, which results in the current-versus-time profile shown in Figure $1$. Controlled-current coulometry has two advantages over controlled-potential coulometry. First, the analysis time is shorter because the current does not decrease over time. A typical analysis time for controlled-current coulometry is less than 10 min, compared to approximately 30–60 min for controlled-potential coulometry. Second, because the total charge is simply the product of current and time, there is no need to integrate the current-time curve in Figure $1$. Using a constant current presents us with two important experimental problems. First, during electrolysis the analyte’s concentration—and, therefore, the current that results from its oxidation or reduction—decreases continuously. To maintain a constant current we must allow the potential to change until another oxidation reaction or reduction reaction occurs at the working electrode. Unless we design the system carefully, this secondary reaction results in a current efficiency that is less than 100%. The second problem is that we need a method to determine when the analyte's electrolysis is complete. In a controlled-potential coulometric analysis we know that electrolysis is complete when the current reaches zero, or when it reaches a constant background or residual current. In a controlled-current coulometric analysis, however, current continues to flow even when the analyte’s electrolysis is complete. A suitable method for determining the reaction’s endpoint, te, is needed. Maintaining Current Efficiency To illustrate why a change in the working electrode’s potential may result in a current efficiency of less than 100%, let’s consider the coulometric analysis for Fe2+ based on its oxidation to Fe3+ at a Pt working electrode in 1 M H2SO4. $\mathrm{Fe}^{2+}(a q) \rightleftharpoons \text{ Fe}^{3+}(a q)+e^{-} \label{ci1}$ Figure $2$ shows the relevant potentials for this system. At the beginning of the analysis, the potential of the working electrode remains nearly constant at a level near its initial value. As the concentration of Fe2+ decreases and the concentration of Fe3+ increases, the working electrode’s potential shifts toward more positive values until the oxidation of H2O begins. $2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \text{ O}_{2}(g)+4 \mathrm{H}^{+}(a q)+4 e^{-} \label{ci2}$ Because a portion of the total current comes from the oxidation of H2O, the current efficiency for the analysis is less than 100% and we cannot use the equation $Q = it$ to determine the amount of Fe2+ in the sample. Although we cannot prevent the potential from drifting until another species undergoes oxidation, we can maintain a 100% current efficiency if the product of that secondary oxidation reaction both rapidly and quantitatively reacts with the remaining Fe2+. To accomplish this we add an excess of Ce3+ to the analytical solution. As shown in Figure $3$, when the potential of the working electrode shifts to a more positive potential, Ce3+ begins to oxidize to Ce4+ $\mathrm{Ce}^{3+}(a q) \rightleftharpoons \text{ Ce}^{4+}(a q)+e^{-} \label{ci3}$ The Ce4+ that forms at the working electrode rapidly mixes with the solution where it reacts with any available Fe2+. 
$\mathrm{Ce}^{4+}(a q)+\text{ Fe}^{2+}(a q) \rightleftharpoons \text{ Ce}^{3+}(a q)+\text{ Fe}^{3+}(a q) \label{ci4}$ Combining reaction \ref{ci3} and reaction \ref{ci4} shows that the net reaction is the oxidation of Fe2+ to Fe3+ $\mathrm{Fe}^{2+}(a q) \rightleftharpoons \text{ Fe}^{3+}(a q)+e^{-} \label{ci5}$ which maintains a current efficiency of 100%. A species used to maintain 100% current efficiency is called a mediator. Endpoint Determination Adding a mediator solves the problem of maintaining 100% current efficiency, but it does not solve the problem of determining when the analyte’s electrolysis is complete. Using the analysis for Fe2+ in Figure $3$, when the oxidation of Fe2+ is complete current continues to flow from the oxidation of Ce3+, and, eventually, the oxidation of H2O. What we need is a signal that tells us when no more Fe2+ is present in the solution. For our purposes, it is convenient to treat a controlled-current coulometric analysis as a reaction between the analyte, Fe2+, and the mediator’s electrogenerated form, Ce4+, as shown by reaction \ref{ci4}. This reaction is identical to a redox titration; thus, we can use the end points for a redox titration—visual indicators and potentiometric or conductometric measurements—to signal the end of a controlled-current coulometric analysis. For example, ferroin provides a useful visual endpoint for the Ce3+-mediated coulometric analysis for Fe2+, changing color from red to blue when the electrolysis of Fe2+ is complete. Instrumentation We can carry out controlled-current coulometry using the two-electrode galvanostat shown in Figure $4$, which consists of a working electrode and a counter electrode. The working electrode—often a simple Pt electrode—also is called the generator electrode since it is where the mediator reacts to generate the species that reacts with the analyte. If necessary, the counter electrode is isolated from the analytical solution by a salt bridge or a porous frit to prevent its electrolysis products from reacting with the analyte. The current from the power supply through the working electrode is $i=\frac{E_{\mathrm{PS}}}{R+R_{\mathrm{cell}}} \label{ci6}$ where EPS is the potential of the power supply, R is the resistance of the resistor, and Rcell is the resistance of the electrochemical cell. If R >> Rcell, then the current between the counter electrode and the working electrode $i=\frac{E_{\mathrm{PS}}}{R} \approx \text{constant} \label{ci7}$ maintains a constant value. To monitor the working electrode’s potential, which changes as the composition of the electrochemical cell changes, we can include an optional reference electrode and a high-impedance potentiometer. Alternatively, we can generate the oxidizing agent or the reducing agent externally, and allow it to flow into the analytical solution. Figure $5$ shows one simple method for accomplishing this. A solution that contains the mediator flows into a small-volume electrochemical cell with the products exiting through separate tubes. Depending upon the analyte, the oxidizing agent or the reducing reagent is delivered to the analytical solution. For example, we can generate Ce4+ using an aqueous solution of Ce3+, directing the Ce4+ that forms at the anode to our sample. There are two other crucial needs for controlled-current coulometry: an accurate clock for measuring the electrolysis time, te, and a switch for starting and stopping the electrolysis.
An analog clock can record time to the nearest ±0.01 s, but the need to stop and start the electrolysis as we approach the endpoint may result in an overall uncertainty of ±0.1 s. A digital clock allows for a more accurate measurement of time, with an overall uncertainty of ±1 ms. The switch must control both the current and the clock so that we can make an accurate determination of the electrolysis time. Coulometric Titrations A controlled-current coulometric method sometimes is called a coulometric titration because of its similarity to a conventional titration. For example, in the controlled-current coulometric analysis for Fe2+ using a Ce3+ mediator, the oxidation of Fe2+ by Ce4+ (reaction \ref{ci4}) is identical to the reaction in a redox titration. There are other similarities between controlled-current coulometry and titrimetry. If we combine the equation $Q = nFN_A$ and the equation $Q = it_e$ and solve for the moles of analyte, NA, we obtain the following equation. $N_{A}=\frac{i}{n F} \times t_{e} \label{ci8}$ Compare Equation \ref{ci8} to the relationship between the moles of analyte, NA, and the moles of titrant, NT, in a titration $N_{A}=N_{T}=M_{T} \times V_{T} \label{ci9}$ where MT and VT are the titrant’s molarity and the volume of titrant at the end point. In constant-current coulometry, the current source is equivalent to the titrant and the value of that current is analogous to the titrant’s molarity. Electrolysis time is analogous to the volume of titrant, and te is equivalent to the titration’s end point. Finally, the switch for starting and stopping the electrolysis serves the same function as a buret’s stopcock. For simplicity, we assumed above that the stoichiometry between the analyte and titrant is 1:1. The assumption, however, is not important and does not affect our observation of the similarity between controlled-current coulometry and a titration. Quantitative Applications The use of a mediator makes a coulometric titration a more versatile analytical technique than controlled-potential coulometry. For example, the direct oxidation or reduction of a protein at a working electrode is difficult if the protein’s active redox site lies deep within its structure. A coulometric titration of the protein is possible, however, if we use the oxidation or reduction of a mediator to produce a solution species that reacts with the protein. Table $1$ summarizes several controlled-current coulometric methods based on a redox reaction using a mediator. Table $1$.
Representative Examples of Coulometric Redox Titrations (each entry gives the mediator: electrochemically generated reagent and reaction; representative application)

Ag+: $\mathrm{Ag}^{+} \rightleftharpoons \textbf{Ag}^\textbf{2+}+e^{-}$; $\mathbf{H}_{2} \mathbf{C}_{2} \mathbf{O}_{4}(a q)+2 \mathrm{Ag}^{2+}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons 2\text{CO}_2(g) + 2\text{Ag}^+(aq) + 2\text{H}_3\text{O}^+(aq)$

Br−: $2\mathrm{Br}^{-} \rightleftharpoons \textbf{Br}_\textbf{2}+2 e^{-}$; $\textbf{H}_\textbf{2} \textbf{S}(a q)+\text{ Br}_{2}(\mathrm{aq})+2 \mathrm{H}_{2} \mathrm{O}(\mathrm{l}) \rightleftharpoons \text{S}(s) + 2\text{Br}^-(aq) + 2\text{H}_3\text{O}^+(aq)$

Ce3+: $\mathrm{Ce}^{3+} \rightleftharpoons \textbf{Ce}^\textbf{4+}+e^{-}$; $\textbf{Fe}(\mathbf{C N})_\textbf{6}^\textbf{4–}(a q)+\text{ Ce}^{4+}(a q) \rightleftharpoons \mathrm{Fe}(\mathrm{CN})_{6}^{3-}(a q)+\text{ Ce}^{3+}(a q)$

Cl−: $2\mathrm{Cl}^{-} \rightleftharpoons \textbf{Cl}_\textbf{2}+2 e^{-}$; $\textbf{Tl(I)}(a q)+\text{ Cl}_{2}(a q) \rightleftharpoons \mathrm{Tl}(\mathrm{III})(a q)+2 \mathrm{Cl}^{-}(a q)$

Fe3+: $\mathrm{Fe}^{3+} +e^{-} \rightleftharpoons \textbf{Fe}^\textbf{2+}$; $\mathbf{Cr}_\textbf{2} \mathbf{O}_\textbf{7}^\mathbf{2-}(a q)+6 \mathrm{Fe}^{2+}(a q)+14 \mathrm{H}_{3} \mathrm{O}^{+}(a q) \rightleftharpoons 2\text{Cr}^{3+}(aq) + 6\text{Fe}^{3+}(aq) + 21\text{H}_2\text{O}(l)$

I−: $3\mathrm{I}^{-} \rightleftharpoons \textbf{I}_\textbf{3}^\textbf{–}+2 e^{-}$; $2 \mathbf{S}_\mathbf{2} \mathbf{O}_\mathbf{3}^\mathbf{2-}(a q)+\mathrm{I}_{3}^{-}(a q) \rightleftharpoons \text{S}_{4} \mathrm{O}_{6}^{2-}(a q)+3 \mathrm{I}^{-}(a q)$

Mn2+: $\mathrm{Mn}^{2+} \rightleftharpoons \textbf{Mn}^\textbf{3+}+e^{-}$; $\textbf{As(III)}(a q)+2 \text{Mn}^{3+}(aq) \rightleftharpoons \text{As(V)}(a q)+2 \text{Mn}^{2+}(a q)$

Note: The electrochemically generated reagent and the analyte are shown in bold. For an analyte that is not easy to oxidize or reduce, we can complete a coulometric titration by coupling a mediator’s oxidation or reduction to an acid–base, precipitation, or complexation reaction that involves the analyte. For example, if we use H2O as a mediator, we can generate H3O+ at the anode $6 \mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons 4 \mathrm{H}_{3} \text{O}^{+}(a q)+\text{ O}_{2}(g)+4 e^{-} \nonumber$ and generate OH− at the cathode. $2 \mathrm{H}_{2} \mathrm{O}(l)+2 e^{-} \rightleftharpoons 2 \mathrm{OH}^{-}(a q)+\text{ H}_{2}(g) \nonumber$ If we carry out the oxidation or reduction of H2O using the generator cell in Figure $5$, then we can selectively dispense H3O+ or OH− into a solution that contains the analyte. The resulting reaction is identical to that in an acid–base titration. Coulometric acid–base titrations have been used for the analysis of strong and weak acids and bases, in both aqueous and non-aqueous matrices. Table $2$ summarizes several examples of coulometric titrations that involve acid–base, complexation, and precipitation reactions. Table $2$.
Representative Coulometric Titrations Using Acid–Base, Complexation, and Precipitation Reactions (each entry gives the type of reaction and mediator: electrochemically generated reagent and reaction; representative application)

acid–base, H2O: $6 \mathrm{H}_{2} \mathrm{O} \rightleftharpoons 4 \textbf{H}_\mathbf{3} \textbf{O}^\mathbf{+}+\text{ O}_{2}+4 e^{-}$; $\textbf{OH}^\mathbf{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q) \rightleftharpoons 2 \mathrm{H}_{2} \mathrm{O}(l)$

acid–base, H2O: $2 \mathrm{H}_{2} \mathrm{O}+2 e^{-}\rightleftharpoons 2 \textbf{OH}^\mathbf{-}+\text{ H}_{2}$; $\textbf{H}_\mathbf{3} \textbf{O}^\mathbf{+}(a q)+\text{ OH}^{-}(a q) \rightleftharpoons 2 \mathrm{H}_{2} \mathrm{O}(l)$

complexation, HgNH3Y2− (Y = EDTA): $\mathrm{HgNH}_{3} \mathrm{Y}^{2-}+\text{ NH}_{4}^{+} + 2 e^{-} \rightleftharpoons \textbf{HY}^\mathbf{3-}+\text{ Hg}+2 \mathrm{NH}_{3}$; $\mathbf{Ca}^\mathbf{2+}(a q)+ \text{ HY}^{3-}(a q)+ \text{ H}_{2} \text{O}(l)\rightleftharpoons \text{CaY}^{2-}(a q)+ \text{ H}_{3} \text{O}^{+}(a q)$

precipitation, Ag: $\mathrm{Ag} \rightleftharpoons \textbf{ Ag}^\mathbf{+}+e^{-}$; $\mathbf{I}^\mathbf{-}(a q)+\text{ Ag}^{+}(a q) \rightleftharpoons \operatorname{AgI}(s)$

precipitation, Hg: $2 \mathrm{Hg} \rightleftharpoons \textbf{Hg}_\mathbf{2}^\mathbf{2+}+2 e^{-}$; $2 \textbf{Cl}^\mathbf{-}(a q)+\text{ Hg}_{2}^{2+}(a q) \rightleftharpoons \text{ Hg}_{2} \mathrm{Cl}_{2}(s)$

precipitation, $\text{Fe(CN)}_6^{3-}$: $\mathrm{Fe}(\mathrm{CN})_{6}^{3-}+e^{-}\rightleftharpoons \textbf{ Fe(CN)}_\mathbf{6}^\mathbf{4-}$; $3 \mathbf{Zn}^\mathbf{2+}(a q)+ 2\text{K}^{+}(a q) +2 \text{Fe(CN)}_{6}^{4-}(a q) \rightleftharpoons \text{K}_{2} \text{Zn}_{3}\left[\text{Fe(CN)}_{6}\right]_{2}(s)$

Note: The electrochemically generated reagent and the analyte are shown in bold. In comparison to a conventional titration, a coulometric titration has two important advantages. The first advantage is that electrochemically generating a titrant allows us to use a reagent that is unstable. Although we cannot prepare and store a solution of a highly reactive reagent, such as Ag2+ or Mn3+, we can generate them electrochemically and use them in a coulometric titration. Second, because it is relatively easy to measure a small quantity of charge, we can use a coulometric titration to determine an analyte whose concentration is too small for a conventional titration. The following example shows the calculations for a typical coulometric analysis. Example $1$ To determine the purity of a sample of Na2S2O3, a sample is titrated coulometrically using I− as a mediator and $\text{I}_3^-$ as the titrant. A sample weighing 0.1342 g is transferred to a 100-mL volumetric flask and diluted to volume with distilled water. A 10.00-mL portion is transferred to an electrochemical cell along with 25 mL of 1 M KI, 75 mL of a pH 7.0 phosphate buffer, and several drops of a starch indicator solution. Electrolysis at a constant current of 36.45 mA requires 221.8 s to reach the starch indicator endpoint. Determine the sample’s purity. Solution As shown in Table $1$, the coulometric titration of $\text{S}_2 \text{O}_3^{2-}$ with $\text{I}_3^-$ is $2 \mathrm{S}_{2} \mathrm{O}_{3}^{2-}(a q)+\text{ I}_{3}^{-}(a q)\rightleftharpoons \text{ S}_{4} \mathrm{O}_{6}^{2-}(a q)+3 \mathrm{I}^{-}(a q) \nonumber$ The oxidation of $\text{S}_2 \text{O}_3^{2-}$ to $\text{S}_4 \text{O}_6^{2-}$ requires one electron per $\text{S}_2 \text{O}_3^{2-}$ (n = 1).
Combining the equations $Q = nFN_A$ and $Q = it_e$, and solving for the moles and grams of Na2S2O3 gives $N_{A} =\frac{i t_{e}}{n F}=\frac{(0.03645 \text{ A})(221.8 \text{ s})}{\left(\frac{1 \text{ mol } e^{-}}{\text{mol Na}_{2} \mathrm{S}_{2} \mathrm{O}_{3}}\right)\left(\frac{96487 \text{ C}}{\text{mol } e^{-}}\right)} =8.379 \times 10^{-5} \text{ mol Na}_{2} \mathrm{S}_{2} \mathrm{O}_{3} \nonumber$ This is the amount of Na2S2O3 in the 10.00-mL portion of the 100-mL sample; multiplying by 100.0 mL/10.00 mL gives $8.379 \times 10^{-4}$ mol of Na2S2O3 in the original sample, which, using a molar mass of 158.11 g/mol, is 0.1325 g of Na2S2O3. The sample’s purity, therefore, is $\frac{0.1325 \text{ g} \text{ Na}_{2} \mathrm{S}_{2} \mathrm{O}_{3}}{0.1342 \text{ g} \text { sample }} \times 100=98.73 \% \text{ w} / \text{w } \mathrm{Na}_{2} \mathrm{S}_{2} \mathrm{O}_{3} \nonumber$ Note that for this calculation, it does not matter whether $\text{S}_2 \text{O}_3^{2-}$ is oxidized at the working electrode or is oxidized by $\text{I}_3^-$.
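The bookkeeping in Example $1$ is easy to script. The following Python sketch reproduces the calculation using only the values given above; the one quantity not stated in the problem is the molar mass of Na2S2O3 (158.11 g/mol).

```python
# Coulometric titration of Na2S2O3 with electrogenerated I3- (Example 1).
F = 96487          # Faraday's constant, C/mol e-
i = 0.03645        # constant current, A
t_e = 221.8        # electrolysis time, s
n = 1              # mol e- per mol Na2S2O3
MW = 158.11        # molar mass of Na2S2O3, g/mol (not given in the problem)

Q = i * t_e                        # total charge passed, C
mol_titrated = Q / (n * F)         # moles of Na2S2O3 in the 10.00-mL portion
mol_total = mol_titrated * (100.0 / 10.00)   # scale up to the 100-mL flask
grams = mol_total * MW
purity = 100 * grams / 0.1342      # %w/w relative to the 0.1342-g sample

print(f"{mol_titrated:.3e} mol titrated")   # 8.379e-05 mol
print(f"{grams:.4f} g Na2S2O3")             # 0.1325 g
print(f"{purity:.2f} %w/w")                 # ~98.7 %w/w, matching Example 1 within rounding
```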
In voltammetry we apply a time-dependent potential to an electrochemical cell and measure the resulting current as a function of that potential. We call the resulting plot of current versus applied potential a voltammogram, and it is the electrochemical equivalent of a spectrum in spectroscopy, providing quantitative and qualitative information about the species involved in the oxidation or reduction reaction [Maloy, J. T. J. Chem. Educ. 1983, 60, 285–289]. The earliest voltammetric technique is polarography, developed by Jaroslav Heyrovsky in the early 1920s—an achievement for which he was awarded the Nobel Prize in Chemistry in 1959. Since then, many different forms of voltammetry have been developed. Before we examine some of these techniques and their applications in more detail, we must first consider the basic experimental design for voltammetry and the factors influencing the shape of the resulting voltammogram. • 25.1: Potential Excitation Signals and Currents in Voltammetry In voltammetry we apply a time-dependent potential to an electrochemical cell and measure the resulting current as a function of that potential. • 25.2: Voltammetric Instrumentation Although early voltammetric methods used only two electrodes, a modern voltammeter makes use of a three-electrode potentiostat. The potential of the working electrode is measured relative to a constant-potential reference electrode that is connected to the working electrode through a high-impedance potentiometer. The auxiliary electrode generally is a platinum wire and the reference electrode usually is a SCE or a Ag/AgCl electrode. • 25.3: Linear Sweep Voltammetry In the simplest voltammetric experiment we apply a linear potential ramp as an excitation signal and record the current that flows in response to the change in potential. Among the experimental variables under our control are the initial potential, the final potential, the scan rate, and whether we choose to stir the solution or leave it unstirred. We call this linear sweep voltammetry. • 25.4: Cyclic Voltammetry In linear sweep voltammetry we scan the potential in one direction, either to more positive potentials or to more negative potentials. In cyclic voltammetry we complete a scan in both directions. • 25.5: Polarography The first important voltammetric technique to be developed—polarography—uses the dropping mercury electrode (DME) as the working electrode. In polarography, as in linear sweep voltammetry, we vary the potential and measure the current. The change in potential can be in the form of a linear ramp, as was the case for linear sweep voltammetry, or it can involve a series of pulses. • 25.6: Stripping Methods Another important voltammetric technique is stripping voltammetry, which consists of three related techniques: anodic stripping voltammetry, cathodic stripping voltammetry, and adsorptive stripping voltammetry. Because anodic stripping voltammetry is the more widely used of these techniques, we will consider it in greatest detail. • 25.7: Applications of Voltammetry Voltammetry finds use for both quantitative analyses and characterization analyses. Examples of each are highlighted in this section. 25: Voltammetry In voltammetry we apply a time-dependent potential to an electrochemical cell and measure the resulting current as a function of that potential. Potential Excitation Signals As shown in Figure \(1\), the potential may consist of (a) a linear scan or (b) a series of pulses.
For the linear scan in (a), the direction of the scan can be reversed and repeated for additional cycles. The series of pulses in (b) shows just one of several different pulsed potential excitation signals; we will consider other pulse trains in the section on polarography. Current The current responses in Figure \(1\) show the three common types of signals. In (c) and (d) the current is monitored directly as the potential is changed. In (e) a change in current is recorded using the current immediately before and after the application of a potential pulse. The current itself has three components: a faradaic current from the oxidation or reduction of the analyte, a charging current, and a residual current. Faradaic Current Faradaic current is the result of oxidation or reduction of the analyte at the working electrode. The ease with which electrons move between the electrode and the species that reacts at the electrode affects the faradaic current. When electron transfer kinetics are fast, the redox reaction is at equilibrium. Under these conditions the redox reaction is electrochemically reversible and the Nernst equation applies. If the electron transfer kinetics are sufficiently slow, the concentration of reactants and products at the electrode surface—and thus the magnitude of the faradaic current—are not what is predicted by the Nernst equation. In this case the system is electrochemically irreversible. Charging Currents In addition to the faradaic current from a redox reaction, the current in an electrochemical cell includes other, nonfaradaic sources. Suppose the charge on an electrode is zero and we suddenly change its potential so that the electrode’s surface acquires a positive charge. Cations near the electrode’s surface will respond to this positive charge by migrating away from the electrode; anions, on the other hand, will migrate toward the electrode. This migration of ions occurs until the electrode’s positive surface charge and the negative charge of the solution near the electrode are equal. Because the movement of ions and the movement of electrons are indistinguishable, the result is a small, short-lived nonfaradaic current that we call the charging current. Every time we change the electrode’s potential, a transient charging current flows. The migration of ions in response to the electrode’s surface charge leads to the formation of a structured electrode-solution interface that we call the electrical double layer, or EDL. When we change an electrode’s potential, the charging current is the result of a restructuring of the EDL. The exact structure of the electrical double layer is not important in the context of this text, but you can consult this chapter’s additional resources for additional information. See Chapter 22.1 for additional details. Residual Current Even in the absence of analyte, a small, measurable current flows through an electrochemical cell. In addition to the charging current discussed above, the residual current includes a faradaic current from the oxidation or reduction of trace impurities in the sample. Methods for discriminating between the analyte’s faradaic current and the residual current are discussed later in this chapter.
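A common way to picture the charging current's transient nature is to model the electrode-solution interface as a resistor and capacitor in series, so that a potential step of size ΔE produces a current that decays exponentially with time constant RsCd. This is a generic textbook RC model, not a result derived in this chapter, and the values of Rs, Cd, and ΔE in the sketch below are illustrative assumptions.

```python
import math

# Charging current after a potential step, modeled as a series-RC response:
# i_ch(t) = (dE / Rs) * exp(-t / (Rs * Cd)). All values are illustrative only.
Rs = 100.0    # solution resistance, ohms (assumed)
Cd = 20e-6    # double-layer capacitance, F (assumed)
dE = 0.050    # size of the potential step, V (assumed)

tau = Rs * Cd  # time constant, s (2 ms here)
for t_ms in (0.0, 1.0, 2.0, 5.0, 10.0):
    t = t_ms / 1000
    i_ch = (dE / Rs) * math.exp(-t / tau)
    print(f"t = {t_ms:4.1f} ms: i_ch = {1e6 * i_ch:7.2f} uA")
# The charging current decays to ~1% of its initial value within ~5 time
# constants, which is why pulse techniques sample the current late in each pulse.
```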
Although early voltammetric methods used only two electrodes, a modern voltammeter makes use of a three-electrode potentiostat, such as that shown in Figure \(1\). The potential of the working electrode is measured relative to a constant-potential reference electrode that is connected to the working electrode through a high-impedance potentiometer. The auxiliary electrode generally is a platinum wire and the reference electrode usually is a SCE or a Ag/AgCl electrode. We apply a time-dependent potential excitation signal to the working electrode—changing its potential relative to the fixed potential of the reference electrode—and measure the current that flows between the working electrode and the auxiliary electrode. Modern potentiostats include waveform generators that allow us to apply a time-dependent potential profile, such as a series of potential pulses, to the working electrode. Working Electrodes For the working electrode we can choose among several different materials, including mercury, platinum, gold, silver, and carbon. The earliest voltammetric techniques used a mercury working electrode. Because mercury is a liquid, the working electrode usually is a drop suspended from the end of a capillary tube. In the hanging mercury drop electrode, or HMDE, we extrude the drop of Hg by rotating a micrometer screw that pushes the mercury from a reservoir through a narrow capillary tube (Figure \(2\)a). In the dropping mercury electrode, or DME, mercury drops form at the end of the capillary tube as a result of gravity (Figure \(2\)b). Unlike the HMDE, the mercury drop of a DME grows continuously—as mercury flows from the reservoir under the influence of gravity—and has a finite lifetime of several seconds. At the end of its lifetime the mercury drop is dislodged, either manually or on its own, and is replaced by a new drop. The static mercury drop electrode, or SMDE, uses a solenoid-driven plunger to control the flow of mercury (Figure \(2\)c). Activation of the solenoid momentarily lifts the plunger, allowing mercury to flow through the capillary, forming a single, hanging Hg drop. Repeated activation of the solenoid produces a series of Hg drops. In this way the SMDE may be used as either a HMDE or a DME. There is one additional type of mercury electrode: the mercury film electrode. A solid electrode—typically carbon, platinum, or gold—is placed in a solution of Hg2+ and held at a potential where the reduction of Hg2+ to Hg is favorable, depositing a thin film of mercury on the solid electrode’s surface. Mercury has several advantages as a working electrode. Perhaps its most important advantage is its high overpotential for the reduction of H3O+ to H2, which makes accessible potentials as negative as –1 V versus the SCE in acidic solutions and –2 V versus the SCE in basic solutions (Figure \(3\)). A species such as Zn2+, which is difficult to reduce at other electrodes without simultaneously reducing H3O+, is easy to reduce at a mercury working electrode. Other advantages include the ability of metals to dissolve in mercury—which results in the formation of an amalgam—and the ability to renew the surface of the electrode by extruding a new drop. One limitation to mercury as a working electrode is the ease with which it is oxidized. A mercury electrode cannot be used at potentials more positive than a limit that, depending on the solvent, falls between approximately –0.3 V and +0.4 V versus the SCE.
Solid electrodes constructed using platinum, gold, silver, or carbon may be used over a range of potentials, including potentials that are negative and positive with respect to the SCE (Figure \(3\)). For example, the potential window for a Pt electrode extends from approximately +1.2 V to –0.2 V versus the SCE in acidic solutions, and from +0.7 V to –1 V versus the SCE in basic solutions. A solid electrode can replace a mercury electrode for many voltammetric analyses that require negative potentials, and is the electrode of choice at more positive potentials. Except for the carbon paste electrode, a solid electrode is fashioned into a disk and sealed into the end of an inert support with an electrical lead (Figure \(4\)). The carbon paste electrode is made by filling the cavity at the end of the inert support with a paste that consists of carbon particles and a viscous oil. Solid electrodes are not without problems, the most important of which is the ease with which the electrode’s surface is altered by the adsorption of a solution species or by the formation of an oxide layer. For this reason a solid electrode needs frequent reconditioning, either by applying an appropriate potential or by polishing. Electrochemical Cells A typical arrangement for a voltammetric electrochemical cell is shown in Figure \(5\). In addition to the working electrode, the reference electrode, and the auxiliary electrode, the cell also includes a N2-purge line for removing dissolved O2, and an optional stir bar. Electrochemical cells are available in a variety of sizes, allowing the analysis of solution volumes ranging from more than 100 mL to as small as 50 μL.
In the simplest voltammetric experiment we apply a linear potential ramp as an excitation signal and record the current that flows in response to the change in potential. Among the experimental variables under our control are the initial potential, the final potential, the scan rate, and whether we choose to stir the solution or leave it unstirred. We call this linear sweep voltammetry. To illustrate how linear sweep voltammetry works, let's consider what happens when we reduce $\text{Fe(CN)}_6^{3-}$ to $\text{Fe(CN)}_6^{4-}$ at the working electrode. The relationship between the concentration of $\text{Fe(CN)}_6^{3-}$, the concentration of $\text{Fe(CN)}_6^{4-}$, and the potential is given by the Nernst equation $E=+0.356 \text{ V}-0.05916 \log \frac{\left[\mathrm{Fe}(\mathrm{CN})_{6}^{4-}\right]_{x=0}}{\left[\mathrm{Fe}(\mathrm{CN})_{6}^{3-}\right]_{x=0}} \label{lsv1}$ where +0.356 V is the standard-state potential for the $\text{Fe(CN)}_6^{3-}$/$\text{Fe(CN)}_6^{4-}$ redox couple, and x = 0 indicates that the concentrations of $\text{Fe(CN)}_6^{3-}$ and $\text{Fe(CN)}_6^{4-}$ are those at the surface of the working electrode. We use surface concentrations instead of bulk concentrations because the equilibrium position for the redox reaction $\mathrm{Fe}(\mathrm{CN})_{6}^{3-}(a q)+e^{-}\rightleftharpoons\mathrm{Fe}(\mathrm{CN})_{6}^{4-}(a q) \label{lsv2}$ is established at the electrode’s surface. Let’s assume we have a solution for which the initial concentration of $\text{Fe(CN)}_6^{3-}$ is 1.0 mM and in which $\text{Fe(CN)}_6^{4-}$ is absent. Figure $1$ shows the relationship between the applied potential and the species that are stable at the electrode's surface. If we apply a potential of +0.530 V to the working electrode, the concentrations of $\text{Fe(CN)}_6^{3-}$ and $\text{Fe(CN)}_6^{4-}$ at the surface of the electrode are unaffected, and no faradaic current is observed. If we switch the potential to +0.356 V some of the $\text{Fe(CN)}_6^{3-}$ at the electrode’s surface is reduced to $\text{Fe(CN)}_6^{4-}$ until we reach a condition where $\left[\mathrm{Fe}(\mathrm{CN})_{6}^{3-}\right]_{x=0}=\left[\mathrm{Fe}(\mathrm{CN})_{6}^{4-}\right]_{x=0}=0.50 \text{ mM} \label{lsv3}$ If this is all that happens after we apply the potential, then there would be a brief surge of faradaic current that quickly returns to zero, which is not the most interesting of results (although this is the basis for chronoamperometry, an electrochemical method we will not consider in this text). Although the concentrations of $\text{Fe(CN)}_6^{3-}$ and $\text{Fe(CN)}_6^{4-}$ at the electrode surface are 0.50 mM, their concentrations in bulk solution remain unchanged. Because of this difference in concentration, there is a concentration gradient between the electrode’s surface and the bulk solution. This concentration gradient creates a driving force that transports $\text{Fe(CN)}_6^{4-}$ away from the electrode and that transports $\text{Fe(CN)}_6^{3-}$ to the electrode (Figure $2$). As the $\text{Fe(CN)}_6^{3-}$ arrives at the electrode it, too, is reduced to $\text{Fe(CN)}_6^{4-}$. A faradaic current continues to flow until there is no difference between the concentrations of $\text{Fe(CN)}_6^{3-}$ and $\text{Fe(CN)}_6^{4-}$ at the electrode and their concentrations in bulk solution (although this might take a long time!).
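A short calculation makes the connection between the applied potential and the surface concentrations concrete. The sketch below evaluates Equation \ref{lsv1} at several potentials, assuming, as in the example above, that the two surface concentrations always sum to the 1.0 mM bulk concentration.

```python
# Surface concentrations of Fe(CN)6^3-/4- as a function of applied potential,
# from the Nernst equation (Equation lsv1). Assumes the two surface
# concentrations sum to the 1.0 mM bulk concentration, as in the text.
C_total = 1.0   # mM
E_std = 0.356   # standard-state potential, V

for E in (0.530, 0.415, 0.356, 0.297, 0.180):
    ratio = 10 ** ((E_std - E) / 0.05916)  # [Fe(CN)6^4-]/[Fe(CN)6^3-] at x = 0
    ox = C_total / (1 + ratio)             # mM of Fe(CN)6^3- at the surface
    print(f"E = {E:+.3f} V: [Fe(CN)6^3-]_x=0 = {ox:.3f} mM")
# At +0.530 V essentially all of the Fe(CN)6^3- remains; at +0.356 V the two
# concentrations are equal (0.50 mM each); near +0.180 V the surface
# concentration of Fe(CN)6^3- is effectively zero.
```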
Although the potential at the working electrode determines if a faradaic current flows, the magnitude of the current is determined by the rate of the resulting oxidation or reduction reaction. Two factors contribute to the rate of the electrochemical reaction: the rate at which the reactants and products are transported to and from the electrode—what we call mass transport—and the rate at which electrons pass between the electrode and the reactants and products in solution. Concentration Profiles at the Working Electrode There are three modes of mass transport that affect the rate at which reactants and products move toward or away from the electrode surface: diffusion, migration, and convection. Diffusion occurs whenever the concentration of an ion or a molecule at the surface of the electrode is different from that in bulk solution. If we apply a potential sufficient to completely reduce $\text{Fe(CN)}_6^{3-}$ at the electrode surface, the result is a concentration gradient similar to that shown in Figure $3$. The region of solution over which diffusion occurs is the diffusion layer. In the absence of other modes of mass transport, the width of the diffusion layer, $\delta$, increases with time as the $\text{Fe(CN)}_6^{3-}$ must diffuse from an increasingly greater distance. Convection occurs when we mix the solution, which carries reactants toward the electrode and removes products from the electrode. The most common form of convection is stirring the solution with a stir bar; other methods include rotating the electrode and incorporating the electrode into a flow-cell. The final mode of mass transport is migration, which occurs when a charged particle in solution is attracted to or repelled from an electrode that carries a surface charge. If the electrode carries a positive charge, for example, an anion will move toward the electrode and a cation will move toward the bulk solution. Unlike diffusion and convection, migration affects only the mass transport of charged particles. The movement of material to and from the electrode surface is a complex function of all three modes of mass transport. In the limit where diffusion is the only significant form of mass transport, the current, $i$, in a voltammetric cell is proportional to the slope of the concentration profile in Figure $3$ $i \propto \frac {\partial C} {\partial x} \label{lsv4}$ where $C$ is the concentration of $\text{Fe(CN)}_6^{3-}$ and $x$ is distance. For Equation \ref{lsv4} to be valid, convection and migration must not interfere with the formation of a diffusion layer. We can eliminate migration by adding a high concentration of an inert supporting electrolyte. Because ions of similar charge are equally attracted to or repelled from the surface of the electrode, each has an equal probability of undergoing migration. A large excess of an inert electrolyte ensures that few reactants or products experience migration. Although it is easy to eliminate convection by not stirring the solution, there are experimental designs where we cannot avoid convection, either because we must stir the solution or because we are using an electrochemical flow cell. Fortunately, as shown in Figure $4$, the dynamics of a fluid moving past an electrode results in a small diffusion layer—typically 1–10 μm in thickness—in which the rate of mass transport by convection drops to zero. 
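To get a feel for how quickly the diffusion layer grows in an unstirred solution, we can use the common planar-diffusion approximation $\delta \approx \sqrt{\pi D t}$. This approximation is not derived in this text, and the diffusion coefficient below is a typical literature value rather than a measured one.

```python
import math

# Growth of the diffusion layer in an unstirred solution, using the common
# planar-diffusion approximation delta = sqrt(pi * D * t) (an assumption,
# not a result derived in this chapter).
D = 7.6e-6   # diffusion coefficient of Fe(CN)6^3-, cm^2/s (typical value)

for t in (0.1, 1.0, 10.0, 100.0):
    delta_cm = math.sqrt(math.pi * D * t)
    print(f"t = {t:6.1f} s: delta = {1e4 * delta_cm:6.0f} um")
# After only ~1 s the layer is already tens of micrometers thick, far wider
# than the ~1-10 um steady-state layer that convection maintains (Figure 4).
```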
Concentration Profiles in an Unstirred Solution Figure $5$ shows the linear sweep voltammogram (the center image, which shows the current as a function of time) and eight snapshots of the concentration profiles for the reduction of $\text{Fe(CN)}_6^{3-}$ to $\text{Fe(CN)}_6^{4-}$ in an unstirred solution. The initial potential was set to +0.530 V and the final potential was set to +0.182 V with a scan rate of 0.050 V/s. At the initial potential, only $\text{Fe(CN)}_6^{3-}$ is stable at the electrode surface, and no current flows. After 0.696 s the potential is 0.495 V (image to the left of the linear sweep voltammogram) and, because $\text{Fe(CN)}_6^{3-}$ remains stable at the electrode surface, no current flows. Moving clockwise around the linear sweep voltammogram, the applied potential becomes smaller; the concentration of $\text{Fe(CN)}_6^{3-}$ at the electrode surface decreases while the concentration of $\text{Fe(CN)}_6^{4-}$ increases. Initially the slope of the concentration gradient, and therefore the current, increases; as the concentration of $\text{Fe(CN)}_6^{3-}$ at the electrode surface approaches zero, however, the concentration gradient becomes less steep and the current decreases. The result is the linear sweep voltammogram in the center of the diagram. Concentration Profiles in a Stirred Solution If we run the same experiment as in Figure $5$, but stir the solution, the resulting linear sweep voltammogram and concentration profiles are those in Figure $6$. Stirring the solution, as we saw in Figure $4$, creates a diffusion layer whose thickness is independent of time. As a result, instead of the peak current in Figure $5$, the current reaches a steady-state value, which we call the limiting current, $i_l$. The linear sweep voltammogram also has a characteristic half-wave potential, $E_{1/2}$, which is the potential at which the current is 50% of the limiting current. Figure $7$ shows how the limiting current and half-wave potential are measured. Voltammetric Currents Earlier we noted, in Equation \ref{lsv4}, that the current in linear sweep voltammetry is proportional to the slope of the concentration profile. The current is also a function of other variables, as shown here for the reduction of $\text{Fe(CN)}_6^{3-}$ to $\text{Fe(CN)}_6^{4-}$ $i = \frac{ n F A D \left( \left[ \ce{Fe(CN)6^{3-}} \right]_\text{bulk} - \left[ \ce{Fe(CN)6^{3-}} \right]_\text{x = 0} \right)} {\delta} \label{lsv5}$ where n is the number of electrons in the redox reaction, F is Faraday’s constant, A is the area of the electrode, D is the diffusion coefficient for $\text{Fe(CN)}_6^{3-}$, $\delta$ is the thickness of the diffusion layer, and $\left( \left[ \ce{Fe(CN)6^{3-}} \right]_\text{bulk} - \left[ \ce{Fe(CN)6^{3-}} \right]_\text{x = 0} \right)$ is the difference in the concentration of $\ce{Fe(CN)6^{3-}}$ between the bulk solution and the electrode's surface. Because $n$, $F$, $A$, and $D$ are constants, and because $\delta$ is a constant if we stir the solution, we can write Equation \ref{lsv5} as $i = K_{\ce{Fe(CN)6^{3-}}} \left( \left[ \ce{Fe(CN)6^{3-}} \right]_\text{bulk} - \left[ \ce{Fe(CN)6^{3-}} \right]_\text{x = 0} \right) \label{lsv6}$ where $K_{\ce{Fe(CN)6^{3-}}}$ is a constant.
If we use the limiting current, then $\left[ \ce{Fe(CN)6^{3-}} \right]_\text{x = 0}$ is zero, and Equation \ref{lsv6} becomes $i_l = K_{\ce{Fe(CN)6^{3-}}} \left[ \ce{Fe(CN)6^{3-}} \right]_\text{bulk} \label{lsv7}$ Current/Voltage Relationships for Reversible Reactions A reversible electrochemical reaction is one in which the concentrations of the oxidized and reduced species at the electrode surface remain in thermodynamic equilibrium with each other. When this is true, the Nernst equation describes the relationship between the applied potential, their surface concentrations, and the standard-state potential. Equation \ref{lsv7} shows us that the limiting current is a measure of the concentration of $\text{Fe(CN)}_6^{3-}$ in bulk solution, which means we can use the limiting current for quantitative work. Figure $7$ also shows that there is a qualitative relationship between the half-wave potential, $E_{1/2}$, and the limiting current; however, it is not yet clear what the half-wave potential represents. If we solve Equation \ref{lsv7} for $\left[ \ce{Fe(CN)6^{3-}} \right]_\text{bulk}$, substitute into Equation \ref{lsv6}, and rearrange, we have $\left[ \ce{Fe(CN)6^{3-}} \right]_\text{x = 0} = \frac {i_l - i} {K_{\ce{Fe(CN)6^{3-}}}} \label{lsv8}$ If we take the same approach with $\text{Fe(CN)}_6^{4-}$, which forms at the electrode surface, then we have $i = -\frac{ n F A D \left( \left[ \ce{Fe(CN)6^{4-}} \right]_\text{bulk} - \left[ \ce{Fe(CN)6^{4-}} \right]_\text{x = 0} \right)} {\delta} = K_{\ce{Fe(CN)6^{4-}}} \left[ \ce{Fe(CN)6^{4-}} \right]_\text{x = 0} \label{lsv9}$ $\left[ \ce{Fe(CN)6^{4-}} \right]_\text{x = 0} = \frac {i} {K_{\ce{Fe(CN)6^{4-}}}} \label{lsv10}$ where the minus sign in Equation \ref{lsv9} accounts for the concentration profile having a negative slope, and where the second equality uses the fact that $\ce{Fe(CN)6^{4-}}$ is absent from the bulk solution. Substituting Equation \ref{lsv8} and Equation \ref{lsv10} into Equation \ref{lsv1}, which is the Nernst equation, gives $E = E^{\circ} - 0.05916 \log \frac {i/K_{\ce{Fe(CN)6^{4-}}}} {(i_l - i)/K_{\ce{Fe(CN)6^{3-}}}} \label{lsv11}$ $E = E^{\circ} - 0.05916 \log \frac{K_{\ce{Fe(CN)6^{3-}}}}{K_{\ce{Fe(CN)6^{4-}}}} - 0.05916 \log \frac {i} {i_l - i} \label{lsv12}$ When $i = i_l - i$—that is, when $i = i_l/2$, which is the definition of $E_{1/2}$—the last term in Equation \ref{lsv12} is zero and the equation simplifies to $E_{1/2} = E^{\circ} - 0.05916 \log \frac{K_{\ce{Fe(CN)6^{3-}}}}{K_{\ce{Fe(CN)6^{4-}}}} \label{lsv13}$ The only difference between $K_{\ce{Fe(CN)6^{3-}}}$ and $K_{\ce{Fe(CN)6^{4-}}}$ is the diffusion coefficient, $D$, for $\ce{Fe(CN)6^{3-}}$ versus that for $\ce{Fe(CN)6^{4-}}$. As these values should be similar, we have $E_{1/2} \approx E^{\circ} \label{lsv14}$ and $E_{1/2}$ provides an estimate for the standard-state reduction potential. Current/Voltage Relationships for Irreversible Reactions When an electrochemical reaction is not reversible, the Nernst equation no longer applies, which means we can no longer assume that the half-wave potential provides an estimate for the standard-state reduction potential. The relationship between the limiting current and the concentration of the electroactive species in bulk solution still holds true, however, and quantitative work remains possible.
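Returning to the reversible case, solving Equation \ref{lsv12} for the current shows the sigmoidal wave shape directly: for a one-electron reduction, $i = i_l / \left(1 + 10^{(E - E_{1/2})/0.05916}\right)$. The sketch below tabulates this expression using an assumed limiting current and a half-wave potential taken, per Equation \ref{lsv14}, as the standard-state potential.

```python
# Shape of a reversible, stirred-solution voltammogram predicted by Equation
# lsv12 for a one-electron reduction. The limiting current is an assumed,
# illustrative value; E_half is taken as ~E_std per Equation lsv14.
i_l = 10.0       # limiting current, uA (assumed)
E_half = 0.356   # half-wave potential, V

for E in (0.530, 0.450, 0.400, 0.356, 0.310, 0.260, 0.180):
    i = i_l / (1 + 10 ** ((E - E_half) / 0.05916))
    print(f"E = {E:+.3f} V: i = {i:5.2f} uA")
# The current is ~0 well positive of E_half, equals i_l/2 at E = E_half, and
# levels off at the limiting current well negative of E_half.
```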
Oxygen Waves The presence of dissolved oxygen creates a complication because O2 itself undergoes reduction at the electrode's surface, which may interfere with the determination of the analyte's limiting current or half-wave potential. For example, O2 is reduced to H2O2 with a standard-state potential of +0.695 V $\ce{O2}(g) + 2\ce{H+}(aq) + 2e^{-} \rightleftharpoons \ce{H2O2}(aq) \label{lsv15}$ and H2O2 subsequently is reduced to H2O at a standard-state potential of +1.763 V. $\ce{H2O2}(aq) + 2\ce{H+}(aq) + 2e^{-} \rightleftharpoons 2 \ce{H2O}(l) \label{lsv16}$ This is the reason that a typical cell for voltammetry (see Figure 25.2.5) includes the ability to pass N2 through the solution to remove dissolved O2. Once the solution is deaerated, N2 is allowed to flow over the solution to prevent O2 from reentering the solution. Applications of Linear Sweep Voltammetry As we have seen, the limiting current in linear sweep voltammetry is proportional to the concentration of the species undergoing oxidation or reduction at the electrode surface, which makes it a useful tool for a quantitative analysis. Because we are interested only in the limiting current, most quantitative methods simply hold the potential of the working electrode at a fixed value and measure the limiting current. Because we are measuring the current as a function of time instead of potential, these are called amperometric methods (where ampere is the unit for current). Several examples of amperometric methods are gathered here. Amperometric Detectors in Chromatography and Flow-Injection Analysis One important detector for high-performance liquid chromatography (HPLC) is one in which the mobile phase eluting from the column passes through a small-volume electrochemical cell in which the working electrode is held at a potential that will oxidize or reduce the analytes. The resulting current is plotted as a function of time to yield the chromatogram. A similar arrangement is used in flow-injection analysis (FIA). See Chapter 28 (HPLC) and Chapter 33 (FIA) for further details. Amperometric Sensors One important application of amperometry is in the construction of chemical sensors. One of the first amperometric sensors was developed in 1956 by L. C. Clark to measure dissolved O2 in blood. Figure $9$ shows the sensor’s design, which is similar to a potentiometric membrane electrode. A thin, gas-permeable membrane is stretched across the end of the sensor and is separated from the working electrode and the counter electrode by a thin layer of a KCl solution. The working electrode is a Pt disk cathode, and a Ag ring anode serves as the counter electrode. Although several gases can diffuse across the membrane, including O2, N2, and CO2, only oxygen undergoes reduction at the cathode $\mathrm{O}_{2}(g)+4 \mathrm{H}_{3} \mathrm{O}^{+}(a q)+4 e^{-}\rightleftharpoons 6 \mathrm{H}_{2} \mathrm{O}(l) \label{lsv17}$ with its concentration at the electrode’s surface quickly reaching zero. The concentration of O2 at the membrane’s inner surface is fixed by its diffusion through the membrane, which creates a limiting current. The result is a steady-state current that is proportional to the concentration of dissolved oxygen. Because the electrode consumes oxygen, the sample is stirred to prevent the depletion of O2 at the membrane’s outer surface. The oxidation of the Ag anode is the other half-reaction. $\mathrm{Ag}(s)+\text{ Cl}^{-}(a q)\rightleftharpoons \mathrm{AgCl}(s)+e^{-} \nonumber$ Another example of an amperometric sensor is a glucose sensor. In this sensor the single membrane in Figure $9$ is replaced with three membranes. The outermost membrane of polycarbonate is permeable to glucose and O2.
The second membrane contains an immobilized preparation of glucose oxidase that catalyzes the oxidation of glucose to gluconolactone and hydrogen peroxide. $\beta-\mathrm{D}-\text {glucose }(a q)+\text{ O}_{2}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \text {gluconolactone }(a q)+\text{ H}_{2} \mathrm{O}_{2}(a q) \label{lsv18}$ The hydrogen peroxide diffuses through the innermost membrane of cellulose acetate where it undergoes oxidation at a Pt anode. $\mathrm{H}_{2} \mathrm{O}_{2}(a q)+2 \mathrm{OH}^{-}(a q) \rightleftharpoons \text{ O}_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)+2 e^{-} \label{lsv19}$ Figure $10$ summarizes the reactions that take place in this amperometric sensor. FAD is the oxidized form of flavin adenine dinucleotide—the active site of the enzyme glucose oxidase—and FADH2 is the active site’s reduced form. Note that O2 serves as a mediator, carrying electrons to the electrode. By changing the enzyme and mediator, it is easy to extend the amperometric sensor in Figure $10$ to the analysis of other analytes. For example, a CO2 sensor has been developed using an amperometric O2 sensor with a two-layer membrane, one of which contains an immobilized preparation of autotrophic bacteria [Karube, I.; Nomura, Y.; Arikawa, Y. Trends in Anal. Chem. 1995, 14, 295–299]. As CO2 diffuses through the membranes it is converted to O2 by the bacteria, increasing the concentration of O2 at the Pt cathode.
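Because the Clark sensor's steady-state current is proportional to the concentration of dissolved O2, a single standard is enough to calibrate it. A minimal sketch, with every value invented for illustration:

```python
# One-point calibration of an amperometric O2 sensor: i = K * C_O2, so an
# unknown follows from a single standard. All values are illustrative only.
i_std = 52.8   # steady-state current in an air-saturated standard, nA (assumed)
C_std = 8.2    # dissolved O2 in that standard, mg/L (assumed)

K = i_std / C_std      # sensitivity, nA per (mg/L)
i_sample = 31.4        # steady-state current in the sample, nA (assumed)
C_sample = i_sample / K
print(f"dissolved O2 = {C_sample:.1f} mg/L")   # ~4.9 mg/L
```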
In linear sweep voltammetry we scan the potential in one direction, either to more positive potentials or to more negative potentials. In cyclic voltammetry we complete a scan in both directions. Figure $1$a shows a typical potential-excitation signal. In this example, we first scan the potential to more positive values, resulting in the following oxidation reaction for the species R. $R \rightleftharpoons O+n e^{-} \label{cv1}$ When the potential reaches a predetermined switching potential, we reverse the direction of the scan toward more negative potentials. Because we generated the species O on the forward scan, during the reverse scan it reduces back to R. $O+n e^{-} \rightleftharpoons R \label{cv2}$ Cyclic voltammetry is carried out in an unstirred solution, which, as shown in Figure $1$b, results in peak currents instead of limiting currents. The voltammogram has separate peaks for the oxidation reaction and for the reduction reaction, each characterized by a peak potential and a peak current. The peak current in cyclic voltammetry is given by the Randles-Sevcik equation $i_{p}=\left(2.69 \times 10^{5}\right) n^{3 / 2} A D^{1 / 2} \nu^{1 / 2} C_{A} \label{cv3}$ where n is the number of electrons in the redox reaction, A is the area of the working electrode, D is the diffusion coefficient for the electroactive species, $\nu$ is the scan rate, and CA is the concentration of the electroactive species in bulk solution. For a well-behaved system, the anodic and the cathodic peak currents are equal, and the ratio ip,a/ip,c is 1.00. The half-wave potential, E1/2, is midway between the anodic and cathodic peak potentials. $E_{1 / 2}=\frac{E_{p, a}+E_{p, c}}{2} \label{cv4}$ Scanning the potential in both directions provides an opportunity to explore the electrochemical behavior of species generated at the electrode. This is a distinct advantage of cyclic voltammetry over other voltammetric techniques. Figure $2$ shows the cyclic voltammogram for the same redox couple at both a faster and a slower scan rate. At the faster scan rate (Figure $2$a) we see two peaks. At the slower scan rate in Figure $2$b, however, the peak on the reverse scan disappears. One explanation for this is that the product of the oxidation of R on the forward scan—the species O—has sufficient time to participate in a chemical reaction whose products are not electroactive.
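Equation \ref{cv3} is straightforward to evaluate numerically. The sketch below estimates peak currents for a one-electron couple at a small disk electrode; the area, diffusion coefficient, concentration, and scan rates are illustrative assumptions, expressed in the unit convention usually quoted for the constant $2.69 \times 10^5$ (A in cm2, D in cm2/s, ν in V/s, C in mol/cm3, giving ip in amperes).

```python
# Peak current from the Randles-Sevcik equation (Equation cv3). With the
# constant 2.69e5, the customary units are A in cm^2, D in cm^2/s, nu in V/s,
# and C in mol/cm^3, giving i_p in amperes. All values below are assumed.
n = 1          # electrons transferred
A = 0.0314     # electrode area, cm^2 (a 2-mm-diameter disk)
D = 7.6e-6     # diffusion coefficient, cm^2/s
C = 1.0e-6     # 1.0 mM expressed as mol/cm^3

for nu in (0.05, 0.10, 0.50):   # scan rate, V/s
    i_p = 2.69e5 * n**1.5 * A * D**0.5 * nu**0.5 * C
    print(f"nu = {nu:4.2f} V/s: i_p = {1e6 * i_p:5.1f} uA")
# i_p scales with the square root of the scan rate; a linear plot of i_p
# versus nu**0.5 is one diagnostic for a diffusion-controlled response.
```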
25.05: Polarography The first important voltammetric technique to be developed—polarography—uses the dropping mercury electrode (DME) as the working electrode (see Figure 25.2.2 for a schematic diagram of the DME as well as two other types of Hg electrodes). In polarography, as in linear sweep voltammetry, we vary the potential and measure the current. The change in potential can be in the form of a linear ramp, as was the case for linear sweep voltammetry, or it can involve a series of pulses. Normal Polarography As shown in Figure $1$, the current is measured while applying a linear potential ramp. Although polarography takes place in an unstirred solution, we obtain a limiting current instead of a peak current. When a Hg drop separates from the glass capillary and falls to the bottom of the electrochemical cell, it mixes the solution. Each new Hg drop, therefore, grows into a solution whose composition is identical to the bulk solution. The oscillations in the current are a result of the Hg drop’s growth, which leads to a time-dependent change in the area of the working electrode. The limiting current—which also is called the diffusion current—is measured using either the maximum current, imax, or the average current, iavg. The relationship between the analyte’s concentration, CA, and the limiting current is given by the Ilkovic equations $i_{\max }=706 n D^{1 / 2} m^{2 / 3} t^{1 / 6} C_{A}=K_{\max } C_{A} \label{pol1}$ $i_{avg}=607 n D^{1 / 2} m^{2 / 3} t^{1 / 6} C_{A}=K_{\mathrm{avg}} C_{A} \label{pol2}$ where n is the number of electrons in the redox reaction, D is the analyte’s diffusion coefficient, m is the flow rate of the Hg, t is the drop’s lifetime, and Kmax and Kavg are constants. The half-wave potential, E1/2, provides qualitative information about the redox reaction. Pulse Polarography Normal polarography has been replaced by various forms of pulse polarography, several examples of which are shown in Figure $2$ [see Osteryoung, J. J. Chem. Educ. 1983, 60, 296–298 for a comprehensive review]. Normal pulse polarography (Figure $2$a), for example, uses a series of potential pulses characterized by a cycle of time $\tau$, a pulse-time of tp, a pulse potential of $\Delta E_\text{p}$, and a change in potential per cycle of $\Delta E_\text{s}$. Typical experimental conditions for normal pulse polarography are $\tau \approx 1 \text{ s}$, tp ≈ 50 ms, and $\Delta E_\text{s} \approx 2 \text{ mV}$. The initial value of $\Delta E_\text{p} \approx 2 \text{ mV}$, and it increases by ≈ 2 mV with each pulse. The current is sampled at the end of each potential pulse for approximately 17 ms before returning the potential to its initial value. The shape of the resulting voltammogram is similar to Figure $1$, but without the current oscillations. Because we apply the potential for only a small portion of the drop’s lifetime, there is less time for the analyte to undergo oxidation or reduction, and the diffusion layer is thinner. As a result, the faradaic current in normal pulse polarography is greater than in normal polarography, resulting in better sensitivity and smaller detection limits. In differential pulse polarography (Figure $2$b) the current is measured twice per cycle: for approximately 17 ms before applying the pulse and for approximately 17 ms at the end of the cycle. The difference in the two currents gives rise to the peak-shaped voltammogram. Typical experimental conditions for differential pulse polarography are $\tau \approx 1 \text{ s}$, tp ≈ 50 ms, $\Delta E_\text{p}$ ≈ 50 mV, and $\Delta E_\text{s}$ ≈ 2 mV. The voltammogram for differential pulse polarography is approximately the first derivative of the voltammogram for normal pulse polarography. To see why this is the case, note that the change in current over a fixed change in potential, $\Delta i / \Delta E$, approximates the slope of the voltammogram for normal pulse polarography. You may recall that the first derivative of a function returns the slope of the function at each point. The first derivative of a sigmoidal function is a peak-shaped function. Other forms of pulse polarography include staircase polarography (Figure $2$c) and square-wave polarography (Figure $2$d). One advantage of square-wave polarography is that we can make $\tau$ very small—perhaps as small as 5 ms, compared to 1 s for other forms of pulse polarography—which significantly decreases analysis time. For example, suppose we need to scan a potential range of 400 mV. If we use normal pulse polarography with a $\Delta E_\text{s}$ of 2 mV/cycle and a $\tau$ of 1 s/cycle, then we need 200 s to complete the scan.
If we use square-wave polarography with a $\Delta E_\text{s}$ of 2 mV/cycle and a $\tau$ of 5 ms/cycle, we can complete the scan in 1 s. At this rate, we can acquire a complete voltammogram using a single drop of Hg! Applications Polarography is used extensively for the analysis of metal ions and inorganic anions, such as $\text{IO}_3^-$ and $\text{NO}_3^-$. We also can use polarography to study organic compounds with easily reducible or oxidizable functional groups, such as carbonyls, carboxylic acids, and carbon-carbon double bonds.
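The Ilkovic equations from earlier in this section are just as easy to evaluate. In the unit convention usually quoted for the constants 706 and 607 (D in cm2/s, m in mg/s, t in s, CA in mmol/L), the currents come out in μA; all of the specific values below are illustrative assumptions.

```python
# Maximum and average diffusion currents from the Ilkovic equations (pol1 and
# pol2). With the constants 706 and 607, the customary units are D in cm^2/s,
# m in mg/s, t in s, and C_A in mmol/L, giving currents in uA. Values assumed.
n = 2         # electrons in the reduction
D = 7.0e-6    # diffusion coefficient, cm^2/s
m = 1.5       # Hg flow rate, mg/s
t = 4.0       # drop lifetime, s
C_A = 1.0     # analyte concentration, mmol/L

i_max = 706 * n * D**0.5 * m**(2/3) * t**(1/6) * C_A
i_avg = 607 * n * D**0.5 * m**(2/3) * t**(1/6) * C_A
print(f"i_max = {i_max:.1f} uA, i_avg = {i_avg:.1f} uA")  # ~6.2 and ~5.3 uA
```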
Another important voltammetric technique is stripping voltammetry, which consists of three related techniques: anodic stripping voltammetry, cathodic stripping voltammetry, and adsorptive stripping voltammetry. Because anodic stripping voltammetry is the more widely used of these techniques, we will consider it in greatest detail. Anodic stripping voltammetry consists of two steps (Figure $1$). The first step is a controlled potential electrolysis in which we hold the working electrode—usually a hanging mercury drop or a mercury film electrode—at a cathodic potential sufficient to deposit the metal ion on the electrode. For example, when analyzing Cu2+ the deposition reaction is $\mathrm{Cu}^{2+}+2 e^{-} \rightleftharpoons \mathrm{Cu}(\mathrm{Hg}) \label{sv1}$ where Cu(Hg) indicates that the copper is amalgamated with the mercury. This step serves as a means of concentrating the analyte by transferring it from the larger volume of the solution to the smaller volume of the electrode. During most of the electrolysis we stir the solution to increase the rate of deposition. Near the end of the deposition time we stop the stirring—eliminating convection as a mode of mass transport—and allow the solution to become quiescent. Deposition times of 1–30 min are typical, with analytes at lower concentrations requiring longer times. In the second step, we scan the potential anodically—that is, toward a more positive potential. When the working electrode’s potential is sufficiently positive, the analyte is stripped from the electrode, returning to solution in its oxidized form. $\mathrm{Cu}(\mathrm{Hg})\rightleftharpoons \text{ Cu}^{2+}+2 e^{-} \label{sv2}$ Monitoring the current during the stripping step gives a peak-shaped voltammogram, as shown in Figure $1$. The peak current is proportional to the analyte’s concentration in the solution. Because we are concentrating the analyte in the electrode, detection limits are much smaller than those of other electrochemical techniques. An improvement of three orders of magnitude—the equivalent of parts per billion instead of parts per million—is routine. Applications Anodic stripping voltammetry is very sensitive to experimental conditions, which we must carefully control to obtain results that are accurate and precise. Key variables include the area of the mercury film or the size of the hanging Hg drop, the deposition time, the rest time, the rate of stirring, and the scan rate during the stripping step. Anodic stripping voltammetry is particularly useful for metals that form amalgams with mercury, several examples of which are listed in Table $1$. Table $1$. Representative Examples of Analytes Determined by Stripping Voltammetry

| anodic stripping voltammetry | cathodic stripping voltammetry | adsorptive stripping voltammetry |
| --- | --- | --- |
| Bi3+ | Br– | bilirubin |
| Cd2+ | Cl– | codeine |
| Cu2+ | I– | cocaine |
| Ga3+ | mercaptans (RSH) | digitoxin |
| In3+ | S2– | dopamine |
| Pb2+ | SCN– | heme |
| Tl+ |  | monensin |
| Sn2+ |  | testosterone |
| Zn2+ |  |  |

Source: Compiled from Peterson, W. M.; Wong, R. V. Am. Lab. November 1981, 116–128; Wang, J. Am. Lab. May 1985, 41–50. The experimental design for cathodic stripping voltammetry is similar to anodic stripping voltammetry with two exceptions. First, the deposition step involves the oxidation of the Hg electrode to $\text{Hg}_2^{2+}$, which then reacts with the analyte to form an insoluble film at the surface of the electrode.
For example, when Cl– is the analyte the deposition step is $2 \mathrm{Hg}(l)+2 \mathrm{Cl}^{-}(a q) \rightleftharpoons \text{ Hg}_{2} \mathrm{Cl}_{2}(s)+2 e^{-} \label{sv3}$ Second, stripping is accomplished by scanning cathodically toward a more negative potential, reducing $\text{Hg}_2^{2+}$ back to Hg and returning the analyte to solution. $\mathrm{Hg}_{2} \mathrm{Cl}_{2}(s)+2 e^{-}\rightleftharpoons 2 \mathrm{Hg}( l)+2 \mathrm{Cl}^{-}(a q) \label{sv4}$ Table $1$ lists several analytes analyzed successfully by cathodic stripping voltammetry. In adsorptive stripping voltammetry, the deposition step occurs without electrolysis. Instead, the analyte adsorbs to the electrode’s surface. During deposition we maintain the electrode at a potential that enhances adsorption. For example, we can adsorb a neutral molecule on a Hg drop if we apply a potential of –0.4 V versus the SCE, a potential where the surface charge of mercury is approximately zero. When deposition is complete, we scan the potential in an anodic or a cathodic direction, depending on whether we are oxidizing or reducing the analyte. Examples of compounds that have been analyzed by adsorptive stripping voltammetry also are listed in Table $1$.
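The preconcentration that gives stripping voltammetry its low detection limits can be estimated with the same $Q = nFN_A$ bookkeeping used earlier in this chapter. Every value in the sketch below is an illustrative assumption.

```python
import math

# Preconcentration during the deposition step of anodic stripping voltammetry,
# estimated with Q = n*F*N_A. All values below are illustrative assumptions.
F = 96487        # C/mol e-
i_dep = 2.0e-6   # average deposition current, A (assumed)
t_dep = 300.0    # deposition time, s (a 5-min deposition)
n = 2            # electrons for Cu2+ + 2e- -> Cu(Hg)

mol_deposited = i_dep * t_dep / (n * F)      # ~3.1e-9 mol of Cu

r = 0.025                                    # Hg drop radius, cm (assumed)
V_drop = (4 / 3) * math.pi * r**3 / 1000     # drop volume, L
C_drop = mol_deposited / V_drop              # Cu concentration inside the drop

C_bulk = 1.0e-7                              # bulk Cu2+, mol/L (assumed)
print(f"C(drop) = {C_drop:.3f} M, enrichment ~ {C_drop / C_bulk:.1e}x")
# Even this modest deposition concentrates the analyte by roughly five orders
# of magnitude, consistent with the improved detection limits noted above.
```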
Voltammetry finds use for both quantitative analyses and characterization analyses. Examples of each are highlighted in this section. Quantitative Applications Voltammetry has been used for the quantitative analysis of a wide variety of samples, including environmental samples, clinical samples, pharmaceutical formulations, steels, gasoline, and oil. Selecting the Voltammetric Technique The choice of which voltammetric technique to use depends on the sample’s characteristics, including the analyte’s expected concentration and the sample’s location. For example, amperometry is ideally suited for detecting analytes in flow systems, including the in vivo analysis of a patient’s blood or as a selective sensor for the rapid analysis of a single analyte. The portability of amperometric sensors, which are similar to potentiometric sensors, also makes them ideal for field studies. Although cyclic voltammetry is used to determine an analyte’s concentration, other methods described in this chapter are better suited for quantitative work. Pulse polarography and stripping voltammetry frequently are interchangeable. The choice of which technique to use often depends on the analyte’s concentration and the desired accuracy and precision. Detection limits for normal pulse polarography generally are on the order of 10–6 M to 10–7 M, and those for differential pulse polarography, staircase, and square wave polarography are between 10–7 M and 10–9 M. Because we concentrate the analyte in stripping voltammetry, the detection limit for many analytes is as little as 10–10 M to 10–12 M. On the other hand, the current in stripping voltammetry is much more sensitive than that in pulse polarography to changes in experimental conditions, which may lead to poorer precision and accuracy. We also can use pulse polarography to analyze a wider range of inorganic and organic analytes because there is no need to first deposit the analyte at the electrode surface. Stripping voltammetry also suffers from occasional interferences when two metals, such as Cu and Zn, combine to form an intermetallic compound in the mercury amalgam. The deposition potential for Zn is sufficiently negative that any Cu2+ in the sample also deposits into the mercury drop or film, leading to the formation of intermetallic compounds such as CuZn and CuZn2. During the stripping step, zinc in the intermetallic compounds strips at potentials near that of copper, decreasing the current for zinc at its usual potential and increasing the apparent current for copper. It is possible to overcome this problem by adding an element that forms a stronger intermetallic compound with the interfering metal. Thus, adding Ga3+ minimizes the interference of Cu when analyzing for Zn by forming an intermetallic compound of Cu and Ga. Correcting for the Residual Current In any quantitative analysis we must correct the analyte’s signal for signals that arise from other sources. The total current, itot, in voltammetry consists of two parts: the current from the analyte’s oxidation or reduction, iA, and a background or residual current, ir. $i_{t o t}=i_{A}+i_{r} \label{app1}$ The residual current, in turn, has two sources. One source is a faradaic current from the oxidation or reduction of trace interferents in the sample, iint. The other source is the charging current, ich, that accompanies a change in the working electrode’s potential. $i_{r}=i_{\mathrm{int}}+i_{c h} \label{app2}$ We can minimize the faradaic current due to impurities by carefully preparing the sample.
For example, one important impurity is dissolved O2, which undergoes a two-step reduction: first to H2O2 at a potential of –0.1 V versus the SCE, and then to H2O at a potential of –0.9 V versus the SCE. Removing dissolved O2 by bubbling an inert gas such as N2 through the sample eliminates this interference. After removing the dissolved O2, maintaining a blanket of N2 over the top of the solution prevents O2 from reentering the solution. There are two methods to compensate for the residual current. One method is to measure the total current at potentials where the analyte’s faradaic current is zero and extrapolate it to other potentials. This is the method shown in Figure 25.3.7. One advantage of extrapolating is that we do not need to acquire additional data. An important disadvantage is that an extrapolation assumes that any change in the residual current with potential is predictable, which may not be the case. A second, and more rigorous, approach is to obtain a voltammogram for an appropriate blank. The blank’s residual current is then subtracted from the sample’s total current. Analysis for Single Components The analysis of a sample with a single analyte is straightforward using any of the standardization methods discussed in Chapter 1. Example $1$ The concentration of As(III) in water is determined by differential pulse polarography in 1 M HCl. The initial potential is set to –0.1 V versus the SCE and is scanned toward more negative potentials at a rate of 5 mV/s. Reduction of As(III) to As(0) occurs at a potential of approximately –0.44 V versus the SCE. The peak currents for a set of standard solutions, corrected for the residual current, are shown in the following table.

| [As(III)] (µM) | ip (µA) |
| --- | --- |
| 1.00 | 0.298 |
| 3.00 | 0.947 |
| 6.00 | 1.83 |
| 9.00 | 2.72 |

What is the concentration of As(III) in a sample of water if its peak current is 1.37 μA? Solution Linear regression gives the calibration curve shown in Figure $1$, with an equation of $i_{p}=0.0176+0.301 \times[\mathrm{As}(\mathrm{III})] \nonumber$ where ip is given in μA and [As(III)] in μM. Substituting the sample’s peak current into the regression equation gives the concentration of As(III) as 4.49 μM. Multicomponent Analysis Voltammetry is a particularly attractive technique for the analysis of samples that contain two or more analytes. Provided that the analytes behave independently, the voltammogram of a multicomponent mixture is a summation of each analyte’s individual voltammograms. As shown in Figure $2$, if the separation between the half-wave potentials or between the peak potentials is sufficient, we can determine the presence of each analyte as if it is the only analyte in the sample. The minimum separation between the half-wave potentials or peak potentials for two analytes depends on several factors, including the type of electrode and the potential-excitation signal. For normal polarography the separation is at least ±0.2–0.3 V, and differential pulse voltammetry requires a minimum separation of ±0.04–0.05 V. If the voltammograms for two analytes are not sufficiently separated, a simultaneous analysis may be possible. An example of this approach is outlined in the following example. Example $2$ The differential pulse polarographic analysis of a mixture of indium and cadmium in 0.1 M HCl is complicated by the overlap of their respective voltammograms [Lanza P. J. Chem. Educ. 1990, 67, 704–705]. The peak potential for indium is at –0.557 V and that for cadmium is at –0.597 V.
When a 0.800-ppm indium standard is analyzed, $\Delta i_p$ (in arbitrary units) is 200.5 at –0.557 V and 87.5 at –0.597 V relative to a saturated Ag/AgCl reference electrode. A standard solution of 0.793 ppm cadmium has a $\Delta i_p$ of 58.5 at –0.557 V and 128.5 at –0.597 V. What is the concentration of indium and cadmium in a sample if $\Delta i_p$ is 167.0 at a potential of –0.557 V and 99.5 at a potential of –0.597 V? Solution The change in current, $\Delta i_p$, in differential pulse polarography is a linear function of the analyte’s concentration $\Delta i_{p}=k_{A} C_{A} \nonumber$ where kA is a constant that depends on the analyte and the applied potential, and CA is the analyte’s concentration. To determine the concentrations of indium and cadmium in the sample we must first find the value of kA for each analyte at each potential. For simplicity we will identify the potential of –0.557 V as E1, and that for –0.597 V as E2. The values of kA are $\begin{aligned} k_{\mathrm{In}, E_{1}} &=\frac{200.5}{0.800 \ \mathrm{ppm}}=250.6 \ \mathrm{ppm}^{-1} \\ k_{\mathrm{In}, E_{2}} &=\frac{87.5}{0.800 \ \mathrm{ppm}}=109.4 \ \mathrm{ppm}^{-1} \\ k_{\mathrm{Cd}, E_{1}} &=\frac{58.5}{0.793 \ \mathrm{ppm}}=73.8 \ \mathrm{ppm}^{-1} \\ k_{\mathrm{Cd}, E_{2}} &=\frac{128.5}{0.793 \ \mathrm{ppm}}=162.0 \ \mathrm{ppm}^{-1} \end{aligned} \nonumber$ Next, we write simultaneous equations for the current at the two potentials. $\begin{aligned} \Delta i_{E_{1}} = 167.0 &= 250.6 \ \mathrm{ppm}^{-1} \times C_{\mathrm{In}}+73.8 \ \mathrm{ppm}^{-1} \times C_{\mathrm{Cd}} \\ \Delta i_{E_{2}} = 99.5 &= 109.4 \ \mathrm{ppm}^{-1} \times C_{\mathrm{In}}+162.0 \ \mathrm{ppm}^{-1} \times C_{\mathrm{Cd}} \end{aligned} \nonumber$ Solving the simultaneous equations, which is left as an exercise, gives the concentration of indium as 0.606 ppm and the concentration of cadmium as 0.205 ppm.
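The pair of simultaneous equations in Example $2$ is a 2 × 2 linear system, so the exercise is easy to check numerically; the sketch below uses numpy purely as a linear-algebra convenience.

```python
import numpy as np

# Simultaneous determination of In and Cd from Example 2: K @ C = delta_i,
# where K holds the sensitivities (arbitrary units per ppm) at E1 and E2.
K = np.array([[250.6, 73.8],     # k_In and k_Cd at E1 = -0.557 V
              [109.4, 162.0]])   # k_In and k_Cd at E2 = -0.597 V
delta_i = np.array([167.0, 99.5])

C_In, C_Cd = np.linalg.solve(K, delta_i)
print(f"[In] = {C_In:.3f} ppm, [Cd] = {C_Cd:.3f} ppm")  # 0.606 and 0.205 ppm
```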
Environmental Samples Voltammetry is one of several important analytical techniques for the analysis of trace metals in environmental samples, including groundwater, lakes, rivers and streams, seawater, rain, and snow. Detection limits at the parts-per-billion level are routine for many trace metals using differential pulse polarography, with anodic stripping voltammetry providing parts-per-trillion detection limits for some trace metals. One interesting environmental application of anodic stripping voltammetry is the determination of a trace metal’s chemical form within a water sample. Speciation is important because a trace metal’s bioavailability, toxicity, and ease of transport through the environment often depend on its chemical form. For example, a trace metal that is strongly bound to colloidal particles generally is not toxic because it is not available to aquatic lifeforms. Unfortunately, anodic stripping voltammetry cannot distinguish a trace metal’s exact chemical form because closely related species, such as Pb2+ and PbCl+, produce a single stripping peak. Instead, trace metals are divided into “operationally defined” categories that have environmental significance. Operationally defined means that an analyte is divided into categories by the specific methods used to isolate it from the sample. There are many examples of operational definitions in the environmental literature. The distribution of trace metals in soils and sediments, for example, often is defined in terms of the reagents used to extract them; thus, you might find an operational definition for Zn2+ in a lake sediment as that extracted using 1.0 M sodium acetate, or that extracted using 1.0 M HCl. Although there are many speciation schemes in the environmental literature, we will consider one proposed by Batley and Florence [see (a) Batley, G. E.; Florence, T. M. Anal. Lett. 1976, 9, 379–388; (b) Batley, G. E.; Florence, T. M. Talanta 1977, 24, 151–158; (c) Batley, G. E.; Florence, T. M. Anal. Chem. 1980, 52, 1962–1963; (d) Florence, T. M.; Batley, G. E. CRC Crit. Rev. Anal. Chem. 1980, 9, 219–296]. This scheme, which is outlined in Table $1$, combines anodic stripping voltammetry with ion-exchange and UV irradiation, dividing soluble trace metals into seven groups. In the first step, anodic stripping voltammetry in a pH 4.8 acetic acid buffer differentiates between labile metals and nonlabile metals. Only labile metals—those present as hydrated ions, weakly bound complexes, or weakly adsorbed on colloidal surfaces—deposit at the electrode and give rise to a signal. The total metal concentration is determined by ASV after digesting the sample in 2 M HNO3 for 5 min, which converts all metals into an ASV-labile form. Table $1$. Operational Speciation of Soluble Trace Metals

| group | ASV | ion-exchange | UV irradiation |
| --- | --- | --- | --- |
| I | labile | removed | — |
| II | labile | not removed | released |
| III | labile | not removed | not released |
| IV | nonlabile (bound) | removed | released |
| V | nonlabile (bound) | removed | not released |
| VI | nonlabile (bound) | not removed | released |
| VII | nonlabile (bound) | not removed | not released |

Group I: free metal ions; weaker labile organic complexes and inorganic complexes. Group II: stronger labile organic complexes; labile metals adsorbed on organic solids. Group III: stronger labile inorganic complexes; labile metals adsorbed on inorganic solids. Group IV: weaker nonlabile organic complexes. Group V: weaker nonlabile inorganic complexes. Group VI: stronger nonlabile organic complexes; nonlabile metals adsorbed on organic solids. Group VII: stronger nonlabile inorganic complexes; nonlabile metals adsorbed on inorganic solids. Operational definitions of speciation from (a) Batley, G. E.; Florence, T. M. Anal. Lett. 1976, 9, 379–388; (b) Batley, G. E.; Florence, T. M. Talanta 1977, 24, 151–158; (c) Batley, G. E.; Florence, T. M. Anal. Chem. 1980, 52, 1962–1963; (d) Florence, T. M.; Batley, G. E. CRC Crit. Rev. Anal. Chem. 1980, 9, 219–296. A Chelex-100 ion-exchange resin further differentiates between strongly bound metals—usually metals bound to inorganic and organic solids, but also those tightly bound to chelating ligands—and more loosely bound metals. Finally, UV irradiation differentiates between metals bound to organic phases and inorganic phases. The analysis of seawater samples, for example, suggests that cadmium, copper, and lead are present primarily as labile organic complexes or as labile adsorbates on organic colloids (Group II in Table $1$). Differential pulse polarography and stripping voltammetry are used to determine trace metals in airborne particulates, incinerator fly ash, rocks, minerals, and sediments. The trace metals, of course, are first brought into solution using a digestion or an extraction. Amperometric sensors also are used to analyze environmental samples. For example, the dissolved O2 sensor described earlier is used to determine the level of dissolved oxygen and the biochemical oxygen demand, or BOD, of waters and wastewaters.
The latter test—which is a measure of the amount of oxygen required by aquatic bacteria as they decompose organic matter—is important when evaluating the efficiency of a wastewater treatment plant and for monitoring organic pollution in natural waters. A high BOD suggests that the water has a high concentration of organic matter. Decomposition of this organic matter may seriously deplete the level of dissolved oxygen in the water, adversely affecting aquatic life. Other amperometric sensors are available to monitor anionic surfactants in water, and CO2, H2SO4, and NH3 in atmospheric gases.

Clinical Samples

Differential pulse polarography and stripping voltammetry are used to determine the concentration of trace metals in a variety of clinical samples, including blood, urine, and tissue. The determination of lead in blood is of considerable interest due to concerns about lead poisoning. Because the concentration of lead in blood is so small, anodic stripping voltammetry frequently is the more appropriate technique. The analysis is complicated, however, by the presence of proteins that may adsorb to the mercury electrode, inhibiting either the deposition or stripping of lead. In addition, proteins may prevent the electrodeposition of lead through the formation of stable, nonlabile complexes. Digesting and ashing the blood sample minimizes this problem. Differential pulse polarography is useful for the routine quantitative analysis of drugs in biological fluids at concentrations of less than $10^{-6}$ M [Brooks, M. A. “Application of Electrochemistry to Pharmaceutical Analysis,” Chapter 21 in Kissinger, P. T.; Heineman, W. R., eds. Laboratory Techniques in Electroanalytical Chemistry, Marcel Dekker, Inc.: New York, 1984, pp 539–568]. Amperometric sensors using enzyme catalysts also have many clinical uses, several examples of which are shown in Table $2$.

Table $2$. Representative Amperometric Biosensors
analyte | enzyme | species detected
choline | choline oxidase | H2O2
ethanol | alcohol oxidase | H2O2
formaldehyde | formaldehyde dehydrogenase | NADH
glucose | glucose oxidase | H2O2
glutamine | glutaminase, glutamine oxidase | H2O2
glycerol | glycerol dehydrogenase | NADH, O2
lactate | lactate oxidase | H2O2
phenol | polyphenol oxidase | quinone
inorganic phosphorus | nucleoside phosphorylase | O2
Source: Cammann, K.; Lemke, U.; Rohen, A.; Sander, J.; Wilken, H.; Winter, B. Angew. Chem. Int. Ed. Engl. 1991, 30, 516–539.

Miscellaneous Samples

In addition to environmental samples and clinical samples, differential pulse polarography and stripping voltammetry are used for the analysis of trace metals in other samples, including food, steels and other alloys, gasoline, gunpowder residues, and pharmaceuticals. Voltammetry is an important technique for the quantitative analysis of organics, particularly in the pharmaceutical industry where it is used to determine the concentration of drugs and vitamins in formulations. For example, voltammetric methods are available for the quantitative analysis of vitamin A, niacinamide, and riboflavin. When the compound of interest is not electroactive, it often can be derivatized to an electroactive form. One example is the differential pulse polarographic determination of sulfanilamide, which is converted into an electroactive azo dye by coupling with sulfamic acid and 1-naphthol.

In the previous section we learned how to use voltammetry to determine an analyte’s concentration in a variety of different samples.
We also can use voltammetry to characterize an analyte’s properties, including verifying its electrochemical reversibility, determining the number of electrons transferred during its oxidation or reduction, and determining its equilibrium constant in a coupled chemical reaction.

Characterization Applications

In a characterization application we study the properties of a system. Three examples are described here: determining if a redox reaction is electrochemically reversible, determining the number of electrons involved in the redox reaction, and studying metal-ligand complexation.

Electrochemical Reversibility and Determination of n

Earlier in this chapter we derived a relationship between E1/2 and the standard-state potential for a redox couple using the Nernst equation, noting that the relationship holds only if the redox reaction is electrochemically reversible. How can we tell if a redox reaction is reversible by looking at its voltammogram? As we learned in Chapter 25.3, for a reversible redox reaction the relationship between potential and current is

$E=E_{1/2} - \frac{0.05916}{n} \log \frac{i}{i_{l} - i} \label{app3}$

If a reaction is electrochemically reversible, a plot of E versus $\log \frac{i}{i_l - i}$ is a straight line with a slope of –0.05916/n. In addition, the slope should yield an integer value for n.

Example $3$

The following data were obtained from a linear scan hydrodynamic voltammogram of a reversible reduction reaction.

E (V vs. SCE) | current (μA)
–0.358 | 0.37
–0.372 | 0.95
–0.382 | 1.71
–0.400 | 3.48
–0.410 | 4.20
–0.435 | 4.97

The limiting current is 5.15 μA. Show that the reduction reaction is reversible, and determine values for n and for E1/2.

Solution

Figure $3$ shows a plot of E versus $\log \frac{i}{i_l - i}$. Because the result is a straight line, we know the reaction is electrochemically reversible under the conditions of the experiment. A linear regression analysis gives the equation for the straight line as

$E=-0.391 \mathrm{V}-0.0300 \log \frac{i}{i_{l}-i} \nonumber$

From Equation \ref{app3}, the slope is equivalent to –0.05916/n; solving for n gives a value of 1.97, or 2 electrons. From Equation \ref{app3} we also know that E1/2 is the y-intercept for a plot of E versus $\log \frac{i}{i_l - i}$; thus, E1/2 for the data in this example is –0.391 V versus the SCE.

We also can use cyclic voltammetry to evaluate electrochemical reversibility by looking at the difference between the peak potentials for the anodic and the cathodic scans. For an electrochemically reversible reaction, the following equation holds true.

$\Delta E_{p}=E_{p, a}-E_{p, c}=\frac{0.05916 \ \mathrm{V}}{n} \label{app4}$

As an example, for a two-electron reduction we expect a $\Delta E_p$ of approximately 29.6 mV. For an electrochemically irreversible reaction the value of $\Delta E_p$ is larger than expected.

Determining Equilibrium Constants for Coupled Chemical Reactions

Another important application of voltammetry is determining the equilibrium constant for a solution reaction that is coupled to a redox reaction. The presence of the solution reaction affects the ease of electron transfer in the redox reaction, shifting E1/2 to a more negative or to a more positive potential. Consider, for example, the reduction of O to R

$O+n e^{-} \rightleftharpoons R \label{app5}$

the voltammogram for which is shown in Figure $4$.
If we introduce a ligand, L, that forms a strong complex with O, then we also must consider the reaction

$O+p L\rightleftharpoons O L_{p} \label{app6}$

In the presence of the ligand, the overall redox reaction is

$O L_{p}+n e^{-} \rightleftharpoons R+p L \label{app7}$

Because of its stability, the reduction of the OLp complex is less favorable than the reduction of O. As shown in Figure $4$, the resulting voltammogram shifts to a potential that is more negative than that for O. Furthermore, the shift in the voltammogram increases as we increase the ligand’s concentration. We can use this shift in the value of E1/2 to determine both the stoichiometry and the formation constant for a metal-ligand complex. To derive a relationship between the relevant variables we begin with two equations: the Nernst equation for the reduction of O

$E=E_{O / R}^{\circ}-\frac{0.05916}{n} \log \frac{[R]_{x=0}}{[O]_{x=0}} \label{app8}$

and the stability constant, $\beta_p$, for the metal-ligand complex at the electrode surface.

$\beta_{p} = \frac{\left[O L_p\right]_{x = 0}}{[O]_{x = 0}[L]_{x = 0}^p} \label{app9}$

In the absence of ligand the half-wave potential occurs when $[R]_{x=0}$ and $[O]_{x=0}$ are equal; thus, from the Nernst equation we have

$\left(E_{1 / 2}\right)_{n c}=E_{O / R}^{\circ} \label{app10}$

where the subscript “nc” signifies that the complex is not present. When ligand is present we must account for its effect on the concentration of O. Solving Equation \ref{app9} for $[O]_{x=0}$ and substituting into Equation \ref{app8} gives

$E=E_{O/R}^{\circ}-\frac{0.05916}{n} \log \frac{[R]_{x=0}[L]_{x=0}^{p} \beta_{p}}{\left[O L_{p}\right]_{x=0}} \label{app11}$

If the formation constant is sufficiently large, such that essentially all O is present as the complex OLp, then $[R]_{x = 0}$ and $[OL_p]_{x = 0}$ are equal at the half-wave potential, and Equation \ref{app11} simplifies to

$\left(E_{1 / 2}\right)_{c} = E_{O/R}^{\circ} - \frac{0.05916}{n} \log [L]_{x=0}^{p} \beta_{p} \label{app12}$

where the subscript “c” indicates that the complex is present. Defining $\Delta E_{1/2}$ as

$\Delta E_{1 / 2}=\left(E_{1 / 2}\right)_{c}-\left(E_{1 / 2}\right)_{n c} \label{app13}$

and substituting Equation \ref{app10} and Equation \ref{app12} and expanding the log term leaves us with the following equation.

$\Delta E_{1 / 2}=-\frac{0.05916}{n} \log \beta_{p}-\frac{0.05916 p}{n} \log {[L]} \label{app14}$

A plot of $\Delta E_{1/2}$ versus log[L] is a straight line with a slope that is a function of the metal-ligand complex’s stoichiometric coefficient, p, and a y-intercept that is a function of its formation constant $\beta_p$.

Example $4$

A voltammogram for the two-electron reduction (n = 2) of a metal, M, has a half-wave potential of –0.226 V versus the SCE. In the presence of an excess of ligand, L, the following half-wave potentials are recorded.

[L] (M) | (E1/2)c (V vs. SCE)
0.020 | –0.494
0.040 | –0.512
0.060 | –0.523
0.080 | –0.530
0.100 | –0.536

Determine the stoichiometry of the metal-ligand complex and its formation constant.

Solution

We begin by calculating values of $\Delta E_{1/2}$ using Equation \ref{app13}, obtaining the values in the following table.

[L] (M) | $\Delta E_{1/2}$ (V)
0.020 | –0.268
0.040 | –0.286
0.060 | –0.297
0.080 | –0.304
0.100 | –0.310

Figure $5$ shows the resulting plot of $\Delta E_{1/2}$ as a function of log[L].
A linear regression analysis gives the equation for the straight line as

$\Delta E_{1 / 2}=-0.370 \mathrm{V}-0.0601 \log {[L]} \nonumber$

From Equation \ref{app14} we know that the slope is equal to –0.05916p/n. Using the slope and n = 2, we solve for p, obtaining a value of 2.03 ≈ 2. The complex’s stoichiometry, therefore, is ML2. We also know, from Equation \ref{app14}, that the y-intercept is equivalent to –(0.05916/n)log$\beta_p$. Solving for $\beta_2$ gives a formation constant of $3.2 \times 10^{12}$.

Cyclic voltammetry is one of the most powerful electrochemical techniques for exploring the mechanism of coupled electrochemical and chemical reactions. The treatment of this aspect of cyclic voltammetry is beyond the level of this text, although you can consult this chapter’s additional resources for more information.
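As a closing aside, the regression in Example $4$ is easy to reproduce numerically. The following short Python sketch is illustrative only, assuming NumPy is available; it fits $\Delta E_{1/2}$ against log[L] and then recovers p and $\beta_p$ from Equation \ref{app14}.

```python
import numpy as np

# data from Example 4: ligand concentration (M) and Delta E1/2 (V)
L = np.array([0.020, 0.040, 0.060, 0.080, 0.100])
dE = np.array([-0.268, -0.286, -0.297, -0.304, -0.310])

# linear fit of Delta E1/2 versus log[L]
slope, intercept = np.polyfit(np.log10(L), dE, 1)

n = 2                                    # electrons, given in the problem
p = -slope * n / 0.05916                 # stoichiometric coefficient
beta = 10 ** (-intercept * n / 0.05916)  # formation constant

print(f"slope = {slope:.4f} V, intercept = {intercept:.3f} V")
print(f"p = {p:.2f}, beta = {beta:.1e}")
# slope ≈ -0.0601 V, intercept ≈ -0.370 V, p ≈ 2.03, beta ≈ 3.2e12
```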
In previous chapters we explored the application of spectroscopy and electroanalytical chemistry to the quantitative analysis of an analyte in a sample. Despite the power of these instrumental methods of analysis, their use is often limited if the sample contains species that will interfere with the analysis. A UV/Vis analysis, for example, is easy to complete if the analyte is the only species present that absorbs light at the analytical wavelength. If two species contribute to the overall absorbance, a quantitative analysis is still possible if we can measure the sample's absorbance at two wavelengths. Things become more complex, however, as the number of analytes or interferents increases or if we do not know the identity of an interferent. Chromatography provides a solution to the analysis of complex samples: it separates the individual species in a sample prior to their analysis by a spectroscopic or electroanalytical method. In this chapter we provide a general introduction to chromatographic separations. In the four chapters that follow, we will consider specific chromatographic methods.

• 26.1: A General Description of Chromatography In chromatography we pass a sample-free phase, which we call the mobile phase, over a second sample-free stationary phase that remains fixed in space. We inject the sample into the mobile phase where its components partition between the mobile phase and the stationary phase. The types of mobile phases and stationary phases, how these two phases contact each other, and how the solutes interact with the two phases are useful ways to describe a chromatographic method.
• 26.2: Migration Rates of Solutes Our ability to separate two solutes depends on the equilibrium interactions of the solute with the stationary phase and the mobile phase, which affects both the time it takes a solute to travel through the column and the width of the solute's elution profile. In this section we consider the rate at which the solute moves through the column.
• 26.3: Zone Broadening and Column Efficiency Suppose we inject a sample that has a single component. At the moment we inject the sample it is a narrow band of finite width. As the sample passes through the column, the width of this band continually increases in a process we call band broadening. Column efficiency is a quantitative measure of the extent of band broadening.
• 26.4: Optimization and Column Performance The goal of a chromatographic separation is to take a sample with more than one solute and to separate the solutes such that each solute elutes by itself. Our ability to separate two solutes from each other—to resolve them—is affected by a number of variables; how we can optimize the separation of two solutes is the subject of this section.
• 26.5: Summary of Important Relationships for Chromatography In this chapter we have introduced many chromatographic variables, some directly measured from the chromatogram, provided by the manufacturer, or from the operating conditions, and some derived from these variables. This section summarizes these variables.
• 26.6: Applications of Chromatography Although the primary purpose of chromatography is the separation of a complex mixture into its component parts, as outlined here, a chromatographic separation also provides qualitative and quantitative information about our samples. More detailed examples of qualitative and quantitative applications are found in the chapters that follow.
26: Introduction to Chromatographic Separations

In chromatography we pass a sample-free phase, which we call the mobile phase, over a second sample-free stationary phase that remains fixed in space (Figure $1$). We inject or place the sample into the mobile phase. As the sample moves with the mobile phase, its components partition between the mobile phase and the stationary phase. A component whose distribution ratio favors the stationary phase requires more time to pass through the system. Given sufficient time and sufficient stationary and mobile phase, we can separate solutes even if they have similar distribution ratios.

Classification of Chromatographic Methods

There are many ways in which we can identify a chromatographic separation: by describing the physical state of the mobile phase and the stationary phase; by describing how we bring the stationary phase and the mobile phase into contact with each other; or by describing the chemical or physical interactions between the solute and the stationary phase. Let’s briefly consider how we might use each of these classifications. We can trace the history of chromatography to the turn of the century when the Russian botanist Mikhail Tswett used a column packed with calcium carbonate and a mobile phase of petroleum ether to separate colored pigments from plant extracts. As the sample moved through the column, the plant’s pigments separated into individual colored bands. After effecting the separation, the calcium carbonate was removed from the column, sectioned, and the pigments recovered. Tswett named the technique chromatography, combining the Greek words for “color” and “to write.” There was little interest in Tswett’s technique until Martin and Synge’s pioneering development of a theory of chromatography (see Martin, A. J. P.; Synge, R. L. M. “A New Form of Chromatogram Employing Two Liquid Phases,” Biochem. J. 1941, 35, 1358–1366). Martin and Synge were awarded the 1952 Nobel Prize in Chemistry for this work.

Types of Mobile Phases and Stationary Phases

The mobile phase is a liquid or a gas, and the stationary phase is a solid or a liquid film coated on a solid substrate. We often name chromatographic techniques by listing the type of mobile phase followed by the type of stationary phase. In gas–liquid chromatography, for example, the mobile phase is a gas and the stationary phase is a liquid film coated on a solid substrate. If a technique’s name includes only one phase, as in gas chromatography, it is the mobile phase.

Contact Between the Mobile Phase and the Stationary Phase

There are two common methods for bringing the mobile phase and the stationary phase into contact. In column chromatography we pack the stationary phase into a narrow column and pass the mobile phase through the column using gravity or by applying pressure. The stationary phase is a solid particle or a thin liquid film coated on either a solid particulate packing material or on the column’s walls. In planar chromatography the stationary phase is coated on a flat surface—typically, a glass, metal, or plastic plate. One end of the plate is placed in a reservoir that contains the mobile phase, which moves through the stationary phase by capillary action. In paper chromatography, for example, paper is the stationary phase.

Interaction Between the Solute and the Stationary Phase

The interaction between the solute and the stationary phase provides a third method for describing a separation (Figure $2$).
In adsorption chromatography, solutes separate based on their ability to adsorb to a solid stationary phase. In partition chromatography, the stationary phase is a thin liquid film on a solid support. Separation occurs because there is a difference in the equilibrium partitioning of solutes between the stationary phase and the mobile phase. A stationary phase that consists of a solid support with covalently attached anionic (e.g., $-\text{SO}_3^-$) or cationic (e.g., $-\text{N(CH}_3)_3^+$) functional groups is the basis for ion-exchange chromatography in which ionic solutes are attracted to the stationary phase by electrostatic forces. In size-exclusion chromatography the stationary phase is a porous particle or gel, with separation based on the size of the solutes. Larger solutes are unable to penetrate as deeply into the porous stationary phase and pass more quickly through the column. There are other interactions that can serve as the basis of a separation. In affinity chromatography the interaction between an antigen and an antibody, between an enzyme and a substrate, or between a receptor and a ligand forms the basis of a separation.

Elution Chromatography on Columns

Of the two methods for bringing the stationary phase and the mobile phases into contact, the most important is column chromatography. In this section we develop a general theory that we may apply to any form of column chromatography. Figure $3$ provides a simple view of a liquid–solid column chromatography experiment. The sample is introduced as a narrow band at the top of the column. Ideally, the solute’s initial concentration profile is rectangular (Figure $4$a). As the sample moves down (or through) the column, the solutes begin to separate (Figure $3$b,c) and the individual solute bands begin to broaden and develop a Gaussian profile (Figure $4$b,c). If the strength of each solute’s interaction with the stationary phase is sufficiently different, then the solutes separate into individual bands (Figure $3$d and Figure $4$d).

Figure $4$. An alternative view of the separation in Figure $3$ showing the concentration of each solute as a function of distance down the column.

We can follow the progress of the separation by collecting fractions as they elute from the column (Figure $3$e,f), or by placing a suitable detector at the end of the column. A plot of the detector’s response as a function of elution time, or as a function of the volume of mobile phase, is known as a chromatogram (Figure $5$), and consists of a peak for each solute. There are many possible detectors that we can use to monitor the separation. Later sections of this chapter describe some of the most popular.
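Because a chromatogram is simply detector response as a function of time, an idealized one is easy to mimic numerically. The short Python sketch below is purely illustrative (the retention times, widths, and areas are invented, and it assumes NumPy and Matplotlib are available); it renders two Gaussian solute bands like those just described.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 1000)   # elution time (min)

def peak(t, tr, sigma, area):
    """Gaussian elution profile centered at retention time tr."""
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(t - tr)**2 / (2 * sigma**2))

# two hypothetical solutes with different retention times and band widths
signal = peak(t, 4.0, 0.15, 1.0) + peak(t, 6.5, 0.25, 0.8)

plt.plot(t, signal)
plt.xlabel("time (min)")
plt.ylabel("detector response")
plt.show()
```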
Our ability to separate two solutes depends on the equilibrium interactions of the solute with the stationary phase and the mobile phase, which affects both the time it takes a solute to travel through the column and the width of the solute's elution profile. In this section we consider the rate at which the solute moves through the column.

Distribution Constants

Let’s assume we can describe a solute’s distribution between the mobile phase and stationary phase using the following equilibrium reaction

$S_{\text{m}} \rightleftharpoons S_{\text{s}} \label{mig1}$

where Sm is the solute in the mobile phase and Ss is the solute in the stationary phase. The equilibrium constant for this reaction is an equilibrium partition coefficient, KD.

$K_{D}=\frac{\left[S_{\mathrm{s}}\right]}{\left[S_\text{m}\right]} \label{mig2}$

This is not a trivial assumption. In this section we are, in effect, treating the solute’s equilibrium between the mobile phase and the stationary phase as if it is identical to the equilibrium in a simple liquid–liquid extraction carried out in a separatory funnel. You might question whether this is a reasonable assumption. There is an important difference between the two experiments that we need to consider. In a liquid–liquid extraction the two phases remain in contact with each other at all times, allowing for a true equilibrium. In chromatography, however, the mobile phase is in constant motion. A solute that moves into the stationary phase from the mobile phase will equilibrate back into a different portion of the mobile phase; this does not describe a true equilibrium. So, we ask again: Can we treat a solute’s distribution between the mobile phase and the stationary phase as an equilibrium process? The answer is yes, if the mobile phase velocity is slow relative to the kinetics of the solute’s movement back and forth between the two phases. In general, this is a reasonable assumption.

Retention Time

We can characterize a chromatographic peak’s properties in several ways, two of which are shown in Figure $1$. Retention time, tr, is the time between the sample’s injection and the maximum response for the solute’s peak. A chromatographic peak’s baseline width, w, as shown in Figure $1$, is determined by extending tangent lines from the inflection points on either side of the peak through the baseline. Although usually we report tr and w using units of time, we can report them using units of volume by multiplying each by the mobile phase’s flow rate, or report them in linear units by measuring distances with a ruler. For example, a solute’s retention volume, Vr, is $t_\text{r} \times u$ where u is the mobile phase’s flow rate through the column. In addition to the solute’s peak, Figure $1$ also shows a small peak that elutes shortly after the sample is injected into the mobile phase. This peak contains all nonretained solutes, which move through the column at the same rate as the mobile phase. The time required to elute the nonretained solutes is called the column’s void time, tm.

The Rate of Solute Migration: The Retention Factor

In the absence of any additional equilibrium reactions in the mobile phase or the stationary phase, KD is equivalent to the distribution ratio, D,

$D=\frac{\left[S_{\text{s}}\right]}{\left[S_\text{m}\right]}=\frac{(\operatorname{mol} \text{S})_\text{s} / V_\text{s}}{(\operatorname{mol} \text{S})_\text{m} / V_\text{m}}=K_{D} \label{mig3}$

where Vs and Vm are the volumes of the stationary phase and the mobile phase, respectively.
A conservation of mass requires that the total moles of solute remain constant throughout the separation; thus, we know that the following equation is true.

$(\operatorname{mol} \text{S})_{\operatorname{tot}}=(\operatorname{mol} \text{S})_{\mathrm{m}}+(\operatorname{mol} \text{S})_\text{s} \label{mig4}$

Solving Equation \ref{mig4} for the moles of solute in the stationary phase and substituting into Equation \ref{mig3} leaves us with

$D = \frac{\left\{(\text{mol S})_{\text{tot}} - (\text{mol S})_\text{m}\right\} / V_{\mathrm{s}}}{(\text{mol S})_{\mathrm{m}} / V_{\mathrm{m}}} \label{mig5}$

Rearranging this equation and solving for the fraction of solute in the mobile phase, fm, gives

$f_\text{m} = \frac {(\text{mol S})_\text{m}} {(\text{mol S})_\text{tot}} = \frac {V_\text{m}} {DV_\text{s} + V_\text{m}} \label{mig6}$

Because we may not know the exact volumes of the stationary phase and the mobile phase, we simplify Equation \ref{mig6} by dividing both the numerator and the denominator by Vm; thus

$f_\text{m} = \frac {V_\text{m}/V_\text{m}} {DV_\text{s}/V_\text{m} + V_\text{m}/V_\text{m}} = \frac {1} {DV_\text{s}/V_\text{m} + 1} = \frac {1} {1+k} \label{mig7}$

where k

$k=D \times \frac{V_\text{s}}{V_\text{m}} \label{mig8}$

is the solute’s retention factor. Note that the larger the retention factor, the more the distribution ratio favors the stationary phase, leading to a more strongly retained solute and a longer retention time. Other (older) names for the retention factor are capacity factor, capacity ratio, and partition ratio, and it sometimes is given the symbol $k^{\prime}$. Keep this in mind if you are using other resources. Retention factor is the approved name from the IUPAC Gold Book.

We can determine a solute’s retention factor from a chromatogram by measuring the column’s void time, tm, and the solute’s retention time, tr (see Figure $1$). Solving Equation \ref{mig7} for k, we find that

$k=\frac{1-f_\text{m}}{f_\text{m}} \label{mig9}$

Earlier we defined fm as the fraction of solute in the mobile phase. Assuming a constant mobile phase velocity, we also can define fm as

$f_\text{m}=\frac{\text { time spent in the mobile phase }}{\text { total time spent in the column }}=\frac{t_\text{m}}{t_\text{r}} \label{mig10}$

Substituting back into Equation \ref{mig9} and rearranging leaves us with

$k=\frac{1-\frac{t_{m}}{t_{r}}}{\frac{t_{\mathrm{m}}}{t_{\mathrm{r}}}}=\frac{t_{\mathrm{r}}-t_{\mathrm{m}}}{t_{\mathrm{m}}}=\frac{t_{\mathrm{r}}^{\prime}}{t_{\mathrm{m}}} \label{mig11}$

where $t_\text{r}^{\prime}$ is the adjusted retention time.

Example $1$

In a chromatographic analysis of low molecular weight acids, butyric acid elutes with a retention time of 7.63 min. The column’s void time is 0.31 min. Calculate the retention factor for butyric acid.

Solution

$k_{\mathrm{but}}=\frac{t_{\mathrm{r}}-t_{\mathrm{m}}}{t_{\mathrm{m}}}=\frac{7.63 \text{ min}-0.31 \text{ min}}{0.31 \text{ min}}=23.6 \nonumber$

Relative Migration Rates: The Selectivity Factor

Selectivity is a relative measure of the retention of two solutes, which we define using a selectivity factor, $\alpha$

$\alpha=\frac{k_{B}}{k_{A}}=\frac{t_{r, B}-t_{\mathrm{m}}}{t_{r, A}-t_{\mathrm{m}}} \label{mig12}$

where solute A has the smaller retention time. When two solutes elute with identical retention times, $\alpha = 1.00$; for all other conditions $\alpha > 1.00$.

Example $2$

In the chromatographic analysis for low molecular weight acids described in Example $1$, the retention time for isobutyric acid is 5.98 min.
What is the selectivity factor for isobutyric acid and butyric acid?

Solution

First we must calculate the retention factor for isobutyric acid. Using the void time from Example $1$ we have

$k_{\mathrm{iso}}=\frac{t_{\mathrm{r}}-t_{\mathrm{m}}}{t_{\mathrm{m}}}=\frac{5.98 \text{ min}-0.31 \text{ min}}{0.31 \text{ min}}=18.3 \nonumber$

The selectivity factor, therefore, is

$\alpha=\frac{k_{\text {but }}}{k_{\text {iso }}}=\frac{23.6}{18.3}=1.29 \nonumber$
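The arithmetic in Examples $1$ and $2$ packages naturally into a pair of helper functions. The following Python sketch is illustrative only; the function names are our own.

```python
def retention_factor(t_r, t_m):
    """Retention factor k from a solute's retention time and the column's void time."""
    return (t_r - t_m) / t_m

def selectivity(k_A, k_B):
    """Selectivity factor for two solutes, where A elutes first (alpha >= 1)."""
    return k_B / k_A

t_m = 0.31                             # void time (min)
k_but = retention_factor(7.63, t_m)    # butyric acid
k_iso = retention_factor(5.98, t_m)    # isobutyric acid
print(f"k(butyric) = {k_but:.1f}, k(isobutyric) = {k_iso:.1f}")
print(f"alpha = {selectivity(k_iso, k_but):.2f}")
# k(butyric) ≈ 23.6, k(isobutyric) ≈ 18.3, alpha ≈ 1.29
```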
Suppose we inject a sample that has a single component. At the moment we inject the sample it is a narrow band of finite width. As the sample passes through the column, the width of this band continually increases in a process we call band broadening. Column efficiency is a quantitative measure of the extent of band broadening.

The Shape of Chromatographic Peaks

When we inject a sample onto a column it has a uniform, or rectangular, concentration profile with respect to distance down the column. As the sample passes through the column, the individual solute particles move in and out of the stationary phase, remaining in place when in the stationary phase and moving down the column when in the mobile phase. Because some solute particles will, on average, spend more time in the mobile phase and some will, on average, spend more time in the stationary phase, the original rectangular band increases in width and takes on a Gaussian shape, as we see in Figure $1$. Our treatment of chromatography in this section assumes that a solute elutes as a symmetrical Gaussian peak, such as that shown in Figure $1$. This ideal behavior occurs when the solute’s partition coefficient, KD

$K_{\mathrm{D}}=\frac{[S_\text{s}]}{\left[S_\text{m}\right]} \label{shape1}$

is the same for all concentrations of solute. If this is not the case, then the chromatographic peak has an asymmetric peak shape similar to those shown in Figure $2$. The chromatographic peak in Figure $2$a is an example of peak tailing, which occurs when some sites on the stationary phase retain the solute more strongly than other sites. Figure $2$b, which is an example of peak fronting, most often is the result of overloading the column with sample. As shown in Figure $2$a, we can report a peak’s asymmetry by drawing a horizontal line at 10% of the peak’s maximum height and measuring the distance from each side of the peak to a line drawn vertically through the peak’s maximum. The asymmetry factor, T, is defined as

$T=\frac{b}{a} \label{shape2}$

Methods for Describing Column Efficiency

In their original theoretical model of chromatography, Martin and Synge divided the chromatographic column into discrete sections, which they called theoretical plates. Within each theoretical plate Martin and Synge assumed that there is an equilibrium between the solute present in the stationary phase and the solute present in the mobile phase [Martin, A. J. P.; Synge, R. L. M. Biochem. J. 1941, 35, 1358–1366]. They described column efficiency in terms of the number of theoretical plates, N,

$N=\frac{L}{H} \label{eff1}$

where L is the column’s length and H is the height of a theoretical plate. For any given column, the column efficiency improves—and chromatographic peaks become narrower—when there are more theoretical plates. If we assume that a chromatographic peak has a Gaussian profile, then the extent of band broadening is given by the peak’s variance or standard deviation. The height of a theoretical plate is the peak’s variance per unit length of the column

$H=\frac{\sigma^{2}}{L} \label{eff2}$

where the standard deviation, $\sigma$, has units of distance. Because retention times and peak widths usually are measured in seconds or minutes, it is more convenient to express the standard deviation in units of time, $\tau$, by dividing $\sigma$ by the solute’s average linear velocity, $\overline{u}$, which is equivalent to dividing the distance it travels, L, by its retention time, tr.
$\tau=\frac{\sigma}{\overline{u}}=\frac{\sigma t_{r}}{L} \label{eff3}$

For a Gaussian peak shape, the width at the baseline, w, is four times its standard deviation, $\tau$.

$w = 4 \tau \label{eff4}$

Combining Equation \ref{eff2}, Equation \ref{eff3}, and Equation \ref{eff4} defines the height of a theoretical plate in terms of the easily measured chromatographic parameters tr and w.

$H=\frac{L w^{2}}{16 t_\text{r}^{2}} \label{eff5}$

Combining Equation \ref{eff5} and Equation \ref{eff1} gives the number of theoretical plates.

$N=16 \frac{t_{\mathrm{r}}^{2}}{w^{2}}=16\left(\frac{t_{\mathrm{r}}}{w}\right)^{2} \label{eff6}$

Example $4$

A chromatographic analysis for the chlorinated pesticide Dieldrin gives a peak with a retention time of 8.68 min and a baseline width of 0.29 min. Calculate the number of theoretical plates. Given that the column is 2.00 m long, what is the height of a theoretical plate in mm?

Solution

Using Equation \ref{eff6}, the number of theoretical plates is

$N=16 \frac{t_{\mathrm{r}}^{2}}{w^{2}}=16 \times \frac{(8.68 \text{ min})^{2}}{(0.29 \text{ min})^{2}}=14300 \text{ plates} \nonumber$

Solving Equation \ref{eff1} for H gives the average height of a theoretical plate as

$H=\frac{L}{N}=\frac{2.00 \text{ m}}{14300 \text{ plates}} \times \frac{1000 \text{ mm}}{\mathrm{m}}=0.14 \text{ mm} / \mathrm{plate} \nonumber$

It is important to remember that a theoretical plate is an artificial construct and that a chromatographic column does not contain physical plates. In fact, the number of theoretical plates depends on both the properties of the column and the solute. As a result, the number of theoretical plates for a column may vary from solute to solute. The number of theoretical plates for an asymmetric peak shape is approximately

$N \approx \frac{41.7 \times \frac{t_{r}^{2}}{\left(w_{0.1}\right)^{2}}}{T+1.25}=\frac{41.7 \times \frac{t_{r}^{2}}{(a+b)^{2}}}{T+1.25} \label{eff7}$

where w0.1 is the width at 10% of the peak’s height [Foley, J. P.; Dorsey, J. G. Anal. Chem. 1983, 55, 730–737]. Asymmetric peaks have fewer theoretical plates, and the more asymmetric the peak the smaller the number of theoretical plates. For example, the following table gives values for N for a solute eluting with a retention time of 10.0 min and a peak width of 1.00 min.

b | a | T | N
0.5 | 0.5 | 1.00 | 1850
0.6 | 0.4 | 1.50 | 1520
0.7 | 0.3 | 2.33 | 1160
0.8 | 0.2 | 4.00 | 790

Kinetic Variables That Affect Zone Broadening

Another approach to understanding the broadening of a solute band as it passes through a column is to consider the factors that affect the rate at which a solute moves through the column and how that is affected by the velocity with which the mobile phase moves through the column. We will consider one approach that considers four contributions: variations in path lengths, longitudinal diffusion, mass transfer in the stationary phase, and mass transfer in the mobile phase.

Multiple Paths: Variations in Path Length

As solute molecules pass through the column they travel paths that differ in length. Because of this difference in path length, two solute molecules that enter the column at the same time will exit the column at different times. The result, as shown in Figure $3$, is a broadening of the solute’s profile on the column. The contribution of multiple paths to the height of a theoretical plate, Hp, is

$H_{p}=2 \lambda d_{p} \label{van1}$

where dp is the average diameter of the particulate packing material and $\lambda$ is a constant that accounts for the consistency of the packing.
A smaller range of particle sizes and a more consistent packing produce a smaller value for $\lambda$. For a column without packing material, Hp is zero and there is no contribution to band broadening from multiple paths. An inconsistent packing creates channels that allow some solute molecules to travel quickly through the column. It also can create pockets that temporarily trap some solute molecules, slowing their progress through the column. A more uniform packing minimizes these problems.

Longitudinal Diffusion

The second contribution to band broadening is the result of the solute’s longitudinal diffusion in the mobile phase. Solute molecules are in constant motion, diffusing from regions of higher solute concentration to regions where the concentration of solute is smaller. The result is an increase in the solute’s band width (Figure $4$). The contribution of longitudinal diffusion to the height of a theoretical plate, Hd, is

$H_{d}=\frac{2 \gamma D_{m}}{u} \label{van2}$

where Dm is the solute’s diffusion coefficient in the mobile phase, u is the mobile phase’s velocity, and $\gamma$ is a constant related to the efficiency of column packing. Note that the effect of Hd on band broadening is inversely proportional to the mobile phase velocity: a higher velocity provides less time for longitudinal diffusion. Because a solute’s diffusion coefficient is larger in the gas phase than in a liquid phase, longitudinal diffusion is a more serious problem in gas chromatography.

Mass Transfer

As the solute passes through the column it moves between the mobile phase and the stationary phase. We call this movement between phases mass transfer. As shown in Figure $5$, band broadening occurs if the solute’s movement within the mobile phase or within the stationary phase is not fast enough to maintain an equilibrium in its concentration between the two phases. On average, a solute molecule in the mobile phase moves some distance down the column before it passes into the stationary phase. A solute molecule in the stationary phase, on the other hand, takes longer than expected to move back into the mobile phase. The contributions of mass transfer in the stationary phase, Hs, and mass transfer in the mobile phase, Hm, are given by the following equations

$H_{s}=\frac{q k d_{f}^{2}}{(1+k)^{2} D_{s}} u \label{van3}$

$H_{m}=\frac{f n\left(d_{p}^{2}, d_{c}^{2}\right)}{D_{m}} u \label{van4}$

where df is the thickness of the stationary phase, dc is the diameter of the column, Ds and Dm are the diffusion coefficients for the solute in the stationary phase and the mobile phase, k is the solute’s retention factor, and q is a constant related to the column packing material. Although the exact form of Hm is not known, it is a function of particle size and column diameter. Note that the effect of Hs and Hm on band broadening is directly proportional to the mobile phase velocity because a smaller velocity provides more time for mass transfer. The abbreviation fn in Equation \ref{van4} means “is a function of.”

Putting It All Together

The height of a theoretical plate is a summation of the contributions from each of the terms that affect band broadening.

$H=H_{p}+H_{d}+H_{s}+H_{m} \label{van5}$

An alternative form of this equation is the van Deemter equation

$H=A+\frac{B}{u}+C u \label{van6}$

which emphasizes the importance of the mobile phase’s velocity.
In the van Deemter equation, A accounts for the contribution of multiple paths (Hp), B/u accounts for the contribution of longitudinal diffusion (Hd), and Cu accounts for the combined contribution of mass transfer in the stationary phase and in the mobile phase (Hs and Hm). There is some disagreement on the best equation for describing the relationship between plate height and mobile phase velocity [Hawkes, S. J. J. Chem. Educ. 1983, 60, 393–398]. In addition to the van Deemter equation, other equations include

$H=\frac{B}{u}+\left(C_s+C_{m}\right) u \label{van7}$

where Cs and Cm are the mass transfer terms for the stationary phase and the mobile phase and

$H=A u^{1 / 3}+\frac{B}{u}+C u \label{van8}$

All three equations, and others, have been used to characterize chromatographic systems, with no single equation providing the best explanation in every case [Kennedy, R. T.; Jorgenson, J. W. Anal. Chem. 1989, 61, 1128–1135]. To increase the number of theoretical plates without increasing the length of the column, we need to decrease one or more of the terms in Equation \ref{van5}. The easiest way to decrease H is to adjust the velocity of the mobile phase. For smaller mobile phase velocities, column efficiency is limited by longitudinal diffusion, and for higher mobile phase velocities efficiency is limited by the two mass transfer terms. As shown in Figure 26.3.6, which uses the van Deemter equation, the optimum mobile phase velocity is the minimum in a plot of H as a function of u.

The remaining parameters that affect the terms in Equation \ref{van5} are functions of the column’s properties and suggest other possible approaches to improving column efficiency. For example, both Hp and Hm are a function of the size of the particles used to pack the column. Decreasing particle size, therefore, is another useful method for improving efficiency. The smaller the particles, the more pressure is needed to push the mobile phase through the column. As a result, for any form of chromatography there is a practical limit to particle size. For a more detailed discussion of ways to assess the quality of a column, see Desmet, G.; Cabooter, D.; Broeckhaven, K. “Graphical Data Representation Methods to Assess the Quality of LC Columns,” Anal. Chem. 2015, 87, 8593–8602.

Perhaps the most important advancement in chromatography columns is the development of open-tubular, or capillary columns. These columns have very small diameters (dc ≈ 50–500 μm) and contain no packing material (dp = 0). Instead, the capillary column’s interior wall is coated with a thin film of the stationary phase. Plate height is reduced because the contribution to H from Hp (Equation \ref{van1}) disappears and the contribution from Hm (Equation \ref{van4}) becomes smaller. Because the column does not contain any solid packing material, it takes less pressure to move the mobile phase through the column, which allows for longer columns. The combination of a longer column and a smaller height for a theoretical plate increases the number of theoretical plates by approximately $100 \times$. Capillary columns are not without disadvantages. Because they are much narrower than packed columns, they require a significantly smaller amount of sample, which may be difficult to inject reproducibly. Another approach to improving resolution is to use thin films of stationary phase, which decreases the contribution to H from Hs (Equation \ref{van3}).
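The van Deemter equation also makes the optimum velocity easy to find analytically: setting dH/du = 0 gives $u_\text{opt} = \sqrt{B/C}$ and a minimum plate height of $H_\text{min} = A + 2\sqrt{BC}$. The Python sketch below is illustrative only; the values of A, B, and C are invented for the purpose of the example, and it assumes NumPy is available.

```python
import numpy as np

# hypothetical van Deemter coefficients (units chosen for illustration:
# u in cm/s, A in cm, B in cm^2/s, C in s)
A, B, C = 0.05, 0.50, 0.004

u = np.linspace(1, 60, 600)   # mobile phase velocities to survey
H = A + B / u + C * u         # plate height at each velocity

u_opt = np.sqrt(B / C)        # analytic optimum from dH/du = 0
H_min = A + 2 * np.sqrt(B * C)
print(f"u_opt = {u_opt:.1f} cm/s, H_min = {H_min:.3f} cm")
print(f"grid check: H_min = {H.min():.3f} cm at u = {u[H.argmin()]:.1f} cm/s")
```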
The goal of a chromatographic separation is to take a sample with more than one solute and to separate the solutes such that each elutes by itself. Our ability to separate two solutes from each other—to resolve them—is affected by a number of variables; how we can optimize the separation of two solutes is the subject of this section.

Column Resolution

The goal of chromatography is to separate a mixture into a series of chromatographic peaks, each of which constitutes a single component of the mixture. The resolution between two chromatographic peaks, RAB, is a quantitative measure of their separation, and is defined as

$R_{A B}=\frac{t_{r, B}-t_{r,A}}{0.5\left(w_{B}+w_{A}\right)}=\frac{2 \Delta t_{r}}{w_{B}+w_{A}} \label{res1}$

where B is the later eluting of the two solutes. As shown in Figure $1$, the separation of two chromatographic peaks improves with an increase in RAB. If the areas under the two peaks are identical—as is the case in Figure $1$—then a resolution of 1.50 corresponds to an overlap of only 0.13% for the two elution profiles. Because resolution is a quantitative measure of a separation’s success, it is a useful way to determine if a change in experimental conditions leads to a better separation.

Example $1$

In a chromatographic analysis of lemon oil a peak for limonene has a retention time of 8.36 min with a baseline width of 0.96 min. $\gamma$-Terpinene elutes at 9.54 min with a baseline width of 0.64 min. What is the resolution between the two peaks?

Solution

Using Equation \ref{res1} we find that the resolution is

$R_{A B}=\frac{2 \Delta t_{r}}{w_{B}+w_{A}}=\frac{2(9.54 \text{ min}-8.36 \text{ min})}{0.64 \text{ min}+0.96 \text{ min}}=1.48 \nonumber$

The Effect of Retention and Selectivity Factors on Resolution

Now that we have defined the solute retention factor, selectivity, and column efficiency we are able to consider how they affect the resolution of two closely eluting peaks. Because the two peaks have similar retention times, it is reasonable to assume that their peak widths are nearly identical. If the number of theoretical plates is the same for all solutes—not strictly true, but not a bad assumption—then the ratio tr/w is a constant. If two solutes have similar retention times, then their peak widths must be similar. Equation \ref{res1}, therefore, becomes

$R_{A B}=\frac{t_{r, B}-t_{r, A}}{0.5\left(w_{B}+w_{A}\right)} \approx \frac{t_{r, B}-t_{r, A}}{0.5\left(2 w_{B}\right)}=\frac{t_{r, B}-t_{r, A}}{w_{B}} \label{res2}$

where B is the later eluting of the two solutes. Solving equation 26.3.8 for wB and substituting into Equation \ref{res2} leaves us with the following result.

$R_{A B}=\frac{\sqrt{N_{B}}}{4} \times \frac{t_{r, B}-t_{r, A}}{t_{r, B}} \label{res3}$

Rearranging equation 26.2.11 provides us with the following equations for the retention times of solutes A and B.

$t_{r, A}=k_{A} t_{\mathrm{m}}+t_{\mathrm{m}} \label{res4}$

$t_{\mathrm{r}, B}=k_{B} t_{\mathrm{m}}+t_{\mathrm{m}} \label{res5}$

After substituting these equations into Equation \ref{res3} and simplifying, we have

$R_{A B}=\frac{\sqrt{N_{B}}}{4} \times \frac{k_{B}-k_{A}}{1+k_{B}} \label{res6}$

Finally, we can eliminate solute A’s retention factor by substituting in equation 26.2.12. After rearranging, we end up with the following equation for the resolution between the chromatographic peaks for solutes A and B.
$R_{A B}=\frac{\sqrt{N_{B}}}{4} \times \frac{\alpha-1}{\alpha} \times \frac{k_{B}}{1+k_{B}} \label{res7}$

Although Equation \ref{res7} is useful for considering how a change in N, $\alpha$, or k qualitatively affects resolution—which suits our purpose here—it is less useful for making accurate quantitative predictions of resolution, particularly for smaller values of N and for larger values of R. For more accurate predictions use the equation

$R_{A B}=\frac{\sqrt{N}}{4} \times(\alpha-1) \times \frac{k_{B}}{1+k_{\mathrm{avg}}} \nonumber$

where kavg is (kA + kB)/2. For a derivation of this equation and for a deeper discussion of resolution in column chromatography, see Foley, J. P. “Resolution Equations for Column Chromatography,” Analyst, 1991, 116, 1275–1279.

Equation \ref{res7} contains terms that correspond to column efficiency, selectivity, and the solute retention factor. We can vary these terms, more or less independently, to improve resolution and analysis time. The first term, which is a function of the number of theoretical plates, accounts for the effect of column efficiency. The second term is a function of $\alpha$ and accounts for the influence of column selectivity. Finally, the third term in both equations is a function of kB and accounts for the effect of solute B’s retention factor. A discussion of how we can use these parameters to improve resolution is the subject of the remainder of this section.

The Effect of Resolution on Retention Time

In addition to resolution, another important factor in chromatography is the amount of time needed to elute a pair of solutes, which we can approximate using the retention time for solute B.

$t_{r, B}=\frac{16 R_{AB}^{2} H}{u} \times\left(\frac{\alpha}{\alpha-1}\right)^{2} \times \frac{\left(1+k_{B}\right)^{3}}{k_{B}^{2}} \label{res8}$

where u is the mobile phase’s velocity.

Variables That Affect Column Performance

Using the Retention Factor to Optimize Resolution

One of the simplest ways to improve resolution is to adjust the retention factor for solute B. If all other terms in Equation \ref{res7} remain constant, an increase in kB will improve resolution. As shown by the green curve in Figure 26.4.2, however, the improvement is greatest if the initial value of kB is small. Once kB exceeds a value of approximately 10, a further increase produces only a marginal improvement in resolution. For example, if the original value of kB is 1, increasing its value to 10 gives an 82% improvement in resolution; a further increase to 15 provides a net improvement in resolution of only 87.5%. Any improvement in resolution from increasing the value of kB generally comes at the cost of a longer analysis time. The red curve in Figure 26.4.2 shows the relative change in the retention time for solute B as a function of its retention factor. Note that the minimum retention time is for kB = 2. Increasing kB from 2 to 10, for example, approximately doubles solute B’s retention time. The relationship between retention factor and analysis time in Figure 26.4.2 works to our advantage if a separation produces an acceptable resolution with a large kB. In this case we may be able to decrease kB with little loss in resolution and with a significantly shorter analysis time. To increase kB without changing selectivity, $\alpha$, any change to the chromatographic conditions must result in a general, nonselective increase in the retention factor for both solutes.
In gas chromatography, we can accomplish this by decreasing the column’s temperature. Because a solute’s vapor pressure is smaller at lower temperatures, it spends more time in the stationary phase and takes longer to elute. In liquid chromatography, the easiest way to increase a solute’s retention factor is to use a mobile phase that is a weaker solvent. When the mobile phase has a lower solvent strength, solutes spend proportionally more time in the stationary phase and take longer to elute.

Using Selectivity to Optimize Resolution

A second approach to improving resolution is to adjust the selectivity, $\alpha$. In fact, for $\alpha \approx 1$ it usually is not possible to improve resolution by adjusting the solute retention factor, kB, or the column efficiency, N. A change in $\alpha$ often has a more dramatic effect on resolution than a change in kB. For example, changing $\alpha$ from 1.1 to 1.5, while holding constant all other terms, improves resolution by 267%. In gas chromatography, we adjust $\alpha$ by changing the stationary phase; in liquid chromatography, we change the composition of the mobile phase to adjust $\alpha$. To change $\alpha$ we need to selectively adjust individual solute retention factors. Figure 26.4.3 shows one possible approach for the liquid chromatographic separation of a mixture of substituted benzoic acids. Because the retention time of a compound’s weak acid form and its weak base form are different, its retention time will vary with the pH of the mobile phase, as shown in Figure 26.4.3a. The intersections of the curves in Figure 26.4.3a show pH values where two solutes co-elute. For example, at a pH of 3.8 terephthalic acid and p-hydroxybenzoic acid elute as a single chromatographic peak. Figure 26.4.3a shows that there are many pH values where some separation is possible. To find the optimum separation, we plot $\alpha$ for each pair of solutes. The red, green, and orange curves in Figure 26.4.3b show the variation in $\alpha$ with pH for the three pairs of solutes that are hardest to separate (for all other pairs of solutes, $\alpha$ > 2 at all pH levels). The blue shading shows windows of pH values in which at least a partial separation is possible—this figure is sometimes called a window diagram—and the highest point in each window gives the optimum pH within that range. The best overall separation is the highest point in any window, which, for this example, is a pH of 3.5. Because the analysis time at this pH is more than 40 min (Figure 26.4.3a), choosing a pH between 4.1–4.4 might produce an acceptable separation with a much shorter analysis time.

Let’s use benzoic acid, C6H5COOH, to explain why pH can affect a solute’s retention time. The separation uses an aqueous mobile phase and a nonpolar stationary phase. At lower pHs, benzoic acid is predominately in its weak acid form, C6H5COOH, and partitions easily into the nonpolar stationary phase. At more basic pHs, however, benzoic acid is in its weak base form, C6H5COO–. Because it now carries a charge, its solubility in the mobile phase increases and its solubility in the nonpolar stationary phase decreases. As a result, it spends more time in the mobile phase and has a shorter retention time. Although the usual way to adjust pH is to change the concentration of buffering agents, it also is possible to adjust pH by changing the column’s temperature because a solute’s pKa value is temperature-dependent; for a review, see Gagliardi, L. G.; Tascon, M.; Castells, C. B.
“Effect of Temperature on Acid–Base Equilibria in Separation Techniques: A Review,” Anal. Chim. Acta, 2015, 889, 35–57.

Using Column Efficiency to Optimize Resolution

A third approach to improve resolution is to adjust the column’s efficiency by increasing the number of theoretical plates, N. If we have values for kB and $\alpha$, then we can use Equation \ref{res7} to calculate the number of theoretical plates for any resolution. Table 26.4.1 provides some representative values. For example, if $\alpha$ = 1.05 and kB = 2.0, a resolution of 1.25 requires approximately 24800 theoretical plates. If our column provides only 12400 plates, half of what is needed, then a separation is not possible. How can we double the number of theoretical plates? The easiest way is to double the length of the column, although this also doubles the analysis time. A better approach is to cut the height of a theoretical plate, H, in half, providing the desired resolution without changing the analysis time. Even better, if we can decrease H by more than 50%, it may be possible to achieve the desired resolution with an even shorter analysis time by also decreasing kB or $\alpha$.

Table 26.4.1. Minimum Number of Theoretical Plates to Achieve Desired Resolution for Selected Values of kB and $\alpha$
kB | RAB = 1.00, $\alpha$ = 1.05 | RAB = 1.00, $\alpha$ = 1.10 | RAB = 1.25, $\alpha$ = 1.05 | RAB = 1.25, $\alpha$ = 1.10 | RAB = 1.50, $\alpha$ = 1.05 | RAB = 1.50, $\alpha$ = 1.10
0.5 | 63500 | 17400 | 99200 | 27200 | 143000 | 39200
1.0 | 28200 | 7740 | 44100 | 12100 | 63500 | 17400
1.5 | 19600 | 5380 | 30600 | 8400 | 44100 | 12100
2.0 | 15900 | 4360 | 24800 | 6810 | 35700 | 9800
3.0 | 12500 | 3440 | 19600 | 5380 | 28200 | 7740
5.0 | 10200 | 2790 | 15900 | 4360 | 22900 | 6270
10.0 | 8540 | 2340 | 13300 | 3660 | 19200 | 5270

The General Elution Problem

Adjusting the retention factor to improve the resolution between one pair of solutes may lead to unacceptably long retention times for other solutes. For example, suppose we need to analyze a four-component mixture with baseline resolution and with a run-time of less than 20 min. Our initial choice of conditions gives the chromatogram in Figure 26.4.4a. Although we successfully separate components 3 and 4 within 15 min, we fail to separate components 1 and 2. Adjusting conditions to improve the resolution for the first two components by increasing k2 provides a good separation of all four components, but the run-time is too long (Figure 26.4.4b). This problem of finding a single set of acceptable operating conditions is known as the general elution problem. One solution to the general elution problem is to make incremental adjustments to the retention factor as the separation takes place. At the beginning of the separation we set the initial chromatographic conditions to optimize the resolution for early eluting solutes. As the separation progresses, we adjust the chromatographic conditions to decrease the retention factor—and, therefore, to decrease the retention time—for each of the later eluting solutes (Figure 26.4.4c). In gas chromatography this is accomplished by temperature programming. The column’s initial temperature is selected such that the first solutes to elute are resolved fully. The temperature is then increased, either continuously or in steps, to bring off later eluting components with both an acceptable resolution and a reasonable analysis time. In liquid chromatography the same effect is obtained by increasing the solvent’s eluting strength. This is known as a gradient elution. We will have more to say about each of these in later sections of this chapter.
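Each entry in Table 26.4.1 follows from rearranging Equation \ref{res7} to solve for N. The short Python function below is illustrative only, and reproduces any entry in the table.

```python
def plates_needed(R, alpha, k_B):
    """Minimum number of theoretical plates for a desired resolution R,
    given the selectivity factor alpha and solute B's retention factor k_B
    (the resolution equation above, rearranged for N)."""
    return 16 * R**2 * (alpha / (alpha - 1))**2 * ((1 + k_B) / k_B)**2

# reproduce one entry from Table 26.4.1
print(round(plates_needed(1.25, 1.05, 2.0)))   # ≈ 24800 plates
```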
In this chapter we have introduced many chromatographic variables, some directly measured from the chromatogram, provided by the manufacturer, or from the operating conditions, and some derived from these variables. The following two tables summarize these variables.

Table $1$. Chromatographic Variables Directly Measured From Chromatogram, Provided by Manufacturer, or From Operating Conditions
variable | name | source
$t_r$ | retention time for solute | chromatogram
$t_m$ | retention time for non-retained solute | chromatogram
$w$ | peak width | chromatogram
$u$ | mobile phase flow rate | operating conditions
$L$ | length of column's stationary phase | manufacturer
$d_c$ | diameter of column | manufacturer
$d_p$ | diameter of packing material | manufacturer
$d_f$ | thickness of stationary phase | manufacturer
$V_s$ | volume of stationary phase | operating conditions

Table $2$. Derived Chromatographic Variables
variable | name | equation to derive value
$V_m$ | volume of mobile phase | $V_m = t_m u$
$k$ | retention factor | $k = \frac{t_r - t_m}{t_m}$
$t_r^{\prime}$ | adjusted retention time | $t_r^{\prime} = t_r - t_m$
$D$ | distribution ratio | $D = k \times \frac{V_m}{V_s}$
$\alpha$ | selectivity factor | $\alpha = \frac{k_B}{k_A}$
$R_{AB}$ | resolution | $R_{AB} = \frac{\sqrt{N_B}}{4} \times \frac{\alpha - 1}{\alpha} \times \frac{k_B}{1 + k_B}$
$N$ | number of theoretical plates | $N = 16 \left(\frac{t_r}{w} \right)^2$
$H$ | height of a theoretical plate | $H = \frac{Lw^2}{16 t_r^2} = \frac{L}{N}$
$H_p$ | height due to multiple paths | $H_p = 2 \lambda d_p$
$H_d$ | height due to longitudinal diffusion | $H_d = \frac{2 \gamma D_m}{u}$
$H_s$ | height due to mass transfer in stationary phase | $H_s = \frac{qkd_f^2}{(1 + k)^2 D_s}u$
$H_m$ | height due to mass transfer in mobile phase | $H_m = \frac{fn(d_p^2, d_c^2)}{D_m}u$

26.06: Applications of Chromatography

Although the primary purpose of chromatography is the separation of a complex mixture into its component parts, as outlined here, a chromatographic separation also provides qualitative and quantitative information about our samples. More detailed examples of qualitative and quantitative applications are found in the chapters that follow.

Qualitative Analysis

As we learned in Section 26.2, solutes migrate through a chromatographic system at a rate that is a function of the properties of the mobile phase and the stationary phase. This means that a particular solute will elute with a consistent retention time. If we expect a solute to elute with a retention time of 5.0 min, the presence of a peak at 5.0 min is suggestive of, but not definitive evidence for, the solute's presence in our sample; however, the absence of a peak at 5.0 min is strong evidence that the solute is not present in our sample. For a complex mixture, this sort of screening technique is a useful qualitative application of chromatography. As we will see in the chapters that follow, the type of detector used to monitor a chromatographic separation may provide useful qualitative information.

Quantitative Analysis

A chromatographic separation yields a sequence of peaks, each of which, ideally, represents a single solute. These peaks are characterized by an area that is proportional to the amount of analyte injected into the mobile phase. By injecting a series of standards, we can construct a calibration curve of peak area as a function of the analyte's concentration, which provides a way to determine the analyte's concentration in a sample.
Any of the calibration strategies discussed in Chapter 1.5—external standards, standard additions, and internal standards—find use in a quantitative chromatographic analysis. Determining a solute's peak area is relatively straightforward when using a computer-interfaced instrument with appropriate software. Alternatively, peak height, which is easier to measure, can serve as a substitute, although care must be taken to ensure that the peaks are symmetrical and that peak widths are consistent for the standards and the samples.
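To make these relationships concrete, the following Python sketch computes several of the derived variables from the summary tables above for a hypothetical two-solute chromatogram, and then fits a single external-standard calibration line of peak area versus concentration. The retention times, peak widths, and detector responses are invented for illustration; only the equations come from the tables.

```python
import numpy as np

# Hypothetical chromatogram measurements (minutes); for illustration only
t_m = 1.2                 # retention time of a non-retained solute
t_r_A, w_A = 5.0, 0.40    # retention time and baseline width, solute A
t_r_B, w_B = 5.8, 0.45    # retention time and baseline width, solute B

# Derived variables (see the summary tables above)
k_A = (t_r_A - t_m) / t_m            # retention factor, solute A
k_B = (t_r_B - t_m) / t_m            # retention factor, solute B
alpha = k_B / k_A                    # selectivity factor
N_B = 16 * (t_r_B / w_B) ** 2        # theoretical plates for solute B
R_AB = (np.sqrt(N_B) / 4) * ((alpha - 1) / alpha) * (k_B / (1 + k_B))

print(f"k_A = {k_A:.2f}, k_B = {k_B:.2f}, alpha = {alpha:.2f}")
print(f"N_B = {N_B:.0f}, R_AB = {R_AB:.2f}")

# External-standard calibration: peak area as a function of concentration
conc = np.array([0.2, 0.4, 0.6, 0.8, 1.0])      # mg/mL (standards)
area = np.array([410, 805, 1230, 1610, 2050])   # arbitrary detector units
slope, intercept = np.polyfit(conc, area, 1)    # least-squares line

sample_area = 1100                               # hypothetical sample peak
print(f"sample conc = {(sample_area - intercept) / slope:.2f} mg/mL")
```

An internal-standard calibration follows the same pattern, except that the ratio of the analyte's peak area to the internal standard's peak area replaces the raw peak area as the response.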
In gas chromatography (GC) we inject the sample, which may be a gas or a liquid, into a gaseous mobile phase (often called the carrier gas). The mobile phase carries the sample through a packed or capillary column that separates the sample's components based on their ability to partition between the mobile phase and the stationary phase. Because it combines separation with analysis, gas chromatography provides excellent selectivity. By adjusting conditions it usually is possible to design a separation so that analytes elute by themselves, even when the mixture is complex. Additional selectivity is possible by using a detector that does not respond to all analytes.

• 27.1: Principles of Gas Chromatography In Chapter 26 we covered several important elements of chromatography, including the factors that affect the migration of solutes, the factors that contribute to band broadening, and the factors under our control that we can use to optimize the separation of a mixture. Here we consider two topics that apply to a gas chromatographic separation, both of which are a function of the properties of gases.
• 27.2: Instruments for Gas Chromatography A typical gas chromatograph includes a supply of compressed gas for the mobile phase; a heated injector to volatilize the sample; a column that holds the stationary phase, and which is placed within an oven whose temperature we can control during the separation; and a detector to monitor the eluent as it comes off the column.
• 27.3: Gas Chromatographic Columns and Stationary Phases There are two broad classes of chromatographic columns: packed columns and capillary columns. In general, a packed column can handle larger samples and a capillary column can separate more complex mixtures.
• 27.4: Applications of Gas Chromatography Gas chromatography is widely used for the quantitative analysis of a diverse array of samples in environmental, clinical, pharmaceutical, biochemical, forensic, food science and petrochemical laboratories. It also finds use for qualitative analyses, although these are less common.

27: Gas Chromatography

In Chapter 26 we covered several important elements of chromatography, including the factors that affect the migration of solutes, the factors that contribute to band broadening, and the factors under our control that we can use to optimize the separation of a mixture. Here we consider two topics that apply to a gas chromatographic separation, both of which are a function of the properties of gases.

Retention Times and Retention Volumes

Many of the chromatographic variables gathered in the tables in Chapter 26.5—both those that are measured directly, provided by the manufacturer, or given by the operating conditions, and those derived from these variables—are given in terms of retention times for the solutes, $t_r$, and for the mobile phase, $t_m$. The product of time and flow rate is a volume $V_r = t_r \times u \nonumber$ $V_m = t_m \times u \nonumber$ where $V_r$ and $V_m$ are, respectively, the volume of mobile phase needed to elute a solute and the volume of mobile phase needed to elute a non-retained solute, which allows us to describe retention in terms of volumes instead of times. Because the volume of a gas is a function of pressure, and the pressure drops across the column from an inlet pressure of $P_i$ to an outlet pressure of $P_o$, the retention times are particularly sensitive to the operating conditions.
We can, however, correct the retention volumes by accounting for the compressibility of the gas $V_r^o = j t_r u \nonumber$ $V_m^o = j t_m u \nonumber$ where $j$ is a correction factor that accounts for the drop in pressure $j = \frac {3 \times \left[ (P_i/P_o)^2 - 1 \right]} {2 \times \left[ (P_i/P_o)^3 - 1 \right]} \nonumber$ and where $V_r^o$ and $V_m^o$ are the corrected retention volumes for the solute and for a non-retained solute, respectively. The solute's corrected retention volume can be further normalized by dividing the adjusted retention volume, $V_r^o - V_m^o$, by the mass of the stationary phase, $w$, and by adjusting for the column's temperature, $T_c$, relative to 273 K $V_g = \frac {V_r^o - V_m^o} {w} \times \frac {273} {T_c} \nonumber$ yielding the solute's specific retention volume, $V_g$. This value is reasonably insensitive to the operating conditions, which makes it useful for qualitative purposes. (A short computational sketch of these corrections appears at the end of this section.)

Effect of Diffusion in the Gas Phase on Band Broadening

In Chapter 26 we considered three factors that affect band broadening—multiple paths, longitudinal diffusion, and mass transfer—expressing the height of a theoretical plate, $H$, as a function of the mobile phase's velocity, $u$, using the van Deemter equation $H = A + \frac{B}{u} + Cu \nonumber$ where $A$ is the contribution from multiple paths, $B$ is the contribution from longitudinal diffusion, and $C$ is the contribution from mass transfer. Because solutes have large diffusion coefficients in the gas phase, the term $B/u$ is often the limiting factor in gas chromatography.
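As a quick illustration of the retention-volume corrections described above, here is a minimal Python sketch that computes the compressibility correction factor $j$ and a specific retention volume $V_g$. The operating conditions are hypothetical values chosen only to show the arithmetic.

```python
def compressibility_factor(P_i, P_o):
    """Gas compressibility correction, j = 3[(Pi/Po)^2 - 1] / (2[(Pi/Po)^3 - 1])."""
    r = P_i / P_o
    return (3 * (r**2 - 1)) / (2 * (r**3 - 1))

def specific_retention_volume(t_r, t_m, u, w_s, T_c, P_i, P_o):
    """Specific retention volume V_g, normalized to 273 K and to the mass
    of stationary phase (w_s, in grams)."""
    j = compressibility_factor(P_i, P_o)
    V_r0 = j * t_r * u   # corrected retention volume for the solute
    V_m0 = j * t_m * u   # corrected retention volume for a non-retained solute
    return (V_r0 - V_m0) / w_s * (273 / T_c)

# Hypothetical conditions: 8.5 min solute, 0.9 min unretained peak,
# 30 mL/min carrier gas, 1.5 g stationary phase, 393 K column, 2:1 pressures
V_g = specific_retention_volume(t_r=8.5, t_m=0.9, u=30.0, w_s=1.5,
                                T_c=393.0, P_i=2.0, P_o=1.0)
print(f"V_g = {V_g:.1f} mL/g")   # ~68 mL/g for these invented inputs
```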
In gas chromatography (GC) we inject the sample, which may be a gas or a liquid, into a gaseous mobile phase (often called the carrier gas). The mobile phase carries the sample through a packed or a capillary column that separates the sample's components based on their ability to partition between the mobile phase and the stationary phase. Figure 27.2.1 shows an example of a typical gas chromatograph, which consists of several key components: a supply of compressed gas for the mobile phase; a heated injector, which rapidly volatilizes the components in a liquid sample; a column, which is placed within an oven whose temperature we can control during the separation; and a detector to monitor the eluent as it comes off the column. Let's consider each of these components.

Mobile Phase

The most common mobile phases for gas chromatography are He, Ar, and N2, which have the advantage of being chemically inert toward both the sample and the stationary phase. The choice of carrier gas often is determined by the needs of the instrument's detector. For a packed column the mobile phase flow rate usually is 25–150 mL/min. The typical flow rate for a capillary column is 1–25 mL/min.

Sample Introduction

Three factors determine how we introduce a sample to the gas chromatograph. First, all of the sample's constituents must be volatile. Second, the analytes must be present at an appropriate concentration. Finally, the physical process of injecting the sample must not degrade the separation. Each of these needs is considered in this section.

Preparing a Volatile Sample

Not every sample can be injected directly into a gas chromatograph. To move through the column, the sample's constituents must be sufficiently volatile. A solute of low volatility, for example, may be retained by the column and continue to elute during the analysis of subsequent samples. A nonvolatile solute will condense at the top of the column, degrading the column's performance.

An attractive approach to isolating analytes is solid-phase microextraction (SPME). In one approach, which is illustrated in Figure 27.2.2, a fused-silica fiber is placed inside a syringe needle. The fiber, which is coated with a thin film of an adsorbent material, such as polydimethyl siloxane, is lowered into the sample by depressing a plunger and is exposed to the sample for a predetermined time. The fiber is then withdrawn into the needle and transferred to the gas chromatograph for analysis.

Two additional methods for isolating volatile analytes are purge-and-trap and headspace sampling. In a purge-and-trap, we bubble an inert gas, such as He or N2, through the sample, releasing—or purging—the volatile compounds. These compounds are carried by the purge gas through a trap that contains an adsorbent material, such as Tenax, where they are retained. Heating the trap and back-flushing with carrier gas transfers the volatile compounds to the gas chromatograph. In headspace sampling we place the sample in a closed vial with an overlying air space. After allowing time for the volatile analytes to equilibrate between the sample and the overlying air, we use a syringe to extract a portion of the vapor phase and inject it into the gas chromatograph. Alternatively, we can sample the headspace with an SPME fiber.

Thermal desorption is a useful method for releasing volatile analytes from solids. We place a portion of the solid in a glass-lined, stainless steel tube. After purging with carrier gas to remove any O2 that might be present, we heat the sample.
Volatile analytes are swept from the tube by an inert gas and carried to the GC. Because volatilization is not a rapid process, the volatile analytes often are concentrated at the top of the column by cooling the column inlet below room temperature, a process known as cryogenic focusing. Once volatilization is complete, the column inlet is heated rapidly, releasing the analytes to travel through the column. The reason for removing O2 is to prevent the sample from undergoing an oxidation reaction when it is heated.

To analyze a nonvolatile analyte we must convert it to a volatile form. For example, amino acids are not sufficiently volatile to analyze directly by gas chromatography. Reacting an amino acid, such as valine, with 1-butanol and acetyl chloride produces an esterified amino acid. Subsequent treatment with trifluoroacetic acid gives the amino acid's volatile N-trifluoroacetyl-n-butyl ester derivative.

Adjusting the Analyte's Concentration

If an analyte's concentration is too small to give an adequate signal, then we must concentrate the analyte before we inject the sample into the gas chromatograph. A side benefit of many extraction methods is that they often concentrate the analytes. Volatile organic materials isolated from an aqueous sample by a purge-and-trap, for example, are concentrated by as much as $1000 \times$. If an analyte is too concentrated, it is easy to overload the column, resulting in peak fronting and a poor separation. In addition, the analyte's concentration may exceed the detector's linear response. Injecting less sample or diluting the sample with a volatile solvent, such as methylene chloride, are two possible solutions to this problem.

Injecting the Sample

In Chapter 26 we examined several explanations for why a solute's band increases in width as it passes through the column, a process we called band broadening. We introduce an additional source of band broadening if we fail to inject the sample into the minimum possible volume of mobile phase. There are two principal sources of this precolumn band broadening: injecting the sample into a moving stream of mobile phase and injecting a liquid sample instead of a gaseous sample. The design of a gas chromatograph's injector helps minimize these problems.

An example of a simple injection port for a packed column is shown in Figure 27.2.3. The top of the column fits within a heated injector block, with carrier gas entering from the bottom. The sample is injected through a rubber septum using a microliter syringe, such as the one shown in Figure 27.2.4. Injecting the sample directly into the column minimizes band broadening because it mixes the sample with the smallest possible amount of carrier gas. The injector block is heated to a temperature at least 50°C above the boiling point of the least volatile solute, which ensures a rapid vaporization of the sample's components.

Because a capillary column's volume is significantly smaller than that of a packed column, it requires a different style of injector to avoid overloading the column with sample. Figure 27.2.5 shows a schematic diagram of a typical split/splitless injector for use with a capillary column. In a split injection we inject the sample through a rubber septum using a microliter syringe. Instead of injecting the sample directly into the column, it is injected into a glass liner where it mixes with the carrier gas.
At the split point, a small fraction of the carrier gas and sample enters the capillary column with the remainder exiting through the split vent. By controlling the flow rate of the carrier gas as it enters the injector, and its flow rate through the septum purge and the split vent, we can control the fraction of sample that enters the capillary column, typically 0.1–10%. For example, if the carrier gas flow rate is 50 mL/min, and the flow rates for the septum purge and the split vent are 2 mL/min and 47 mL/min, respectively, then the flow rate through the column is 1 mL/min (= 50 – 2 – 47). The fraction of sample entering the column is 1/50, or 2%.

In a splitless injection, which is useful for trace analysis, we close the split vent and allow all the carrier gas that passes through the glass liner to enter the column—this allows virtually all the sample to enter the column. Because the flow rate through the injector is low, significant precolumn band broadening is a problem. Holding the column's temperature approximately 20–25°C below the solvent's boiling point allows the solvent to condense at the entry to the capillary column, forming a barrier that traps the solutes. After allowing the solutes to concentrate, the column's temperature is increased and the separation begins.

For samples that decompose easily, an on-column injection may be necessary. In this method the sample is injected directly into the column without heating. The column temperature is then increased, volatilizing the sample at as low a temperature as is practical.

Temperature Control

Control of the column's temperature is critical to attaining a good separation when using gas chromatography. For this reason the column is placed inside a thermostated oven (see Figure 27.2.1). In an isothermal separation we maintain the column at a constant temperature. To increase the interaction between the solutes and the stationary phase, the temperature usually is set slightly below that of the lowest-boiling solute. One difficulty with an isothermal separation is that a temperature that favors the separation of a low-boiling solute may lead to an unacceptably long retention time for a higher-boiling solute. Temperature programming provides a solution to this problem. At the beginning of the analysis we set the column's initial temperature below that for the lowest-boiling solute. As the separation progresses, we slowly increase the temperature at either a uniform rate or in a series of steps.

Detectors for Gas Chromatography

The final part of a gas chromatograph is the detector. The ideal detector has several desirable features: a low detection limit, a linear response over a wide range of solute concentrations (which makes quantitative work easier), sensitivity for all solutes or selectivity for a specific class of solutes, and an insensitivity to a change in flow rate or temperature.

Thermal Conductivity Detector (TCD)

One of the earliest gas chromatography detectors takes advantage of the mobile phase's thermal conductivity. As the mobile phase exits the column it passes over a tungsten-rhenium wire filament (see Figure 27.2.6). The filament's electrical resistance depends on its temperature, which, in turn, depends on the thermal conductivity of the mobile phase. Because of its high thermal conductivity, helium is the mobile phase of choice when using a thermal conductivity detector (TCD). Thermal conductivity, as the name suggests, is a measure of how easily a substance conducts heat.
A gas with a high thermal conductivity moves heat away from the filament—and, thus, cools the filament—more quickly than does a gas with a low thermal conductivity. When a solute elutes from the column, the thermal conductivity of the mobile phase in the TCD cell decreases and the temperature of the wire filament, and thus its resistance, increases. A reference cell, through which only the mobile phase passes, corrects for any time-dependent variations in flow rate, pressure, or electrical power, all of which affect the filament's resistance.

Because all solutes affect the mobile phase's thermal conductivity, the thermal conductivity detector is a universal detector. Another advantage is the TCD's linear response over a concentration range that spans 4–5 orders of magnitude. The detector also is non-destructive, which allows us to recover analytes using a postdetector cold trap. One significant disadvantage of the TCD is its poor detection limit for most analytes.

Flame Ionization Detector (FID)

The combustion of an organic compound in an H2/air flame results in a flame that contains electrons and organic cations, presumably CHO+. Applying a potential of approximately 300 volts across the flame creates a small current of roughly $10^{-9}$ to $10^{-12}$ amps. When amplified, this current provides a useful analytical signal. This is the basis of the popular flame ionization detector, a schematic diagram of which is shown in Figure 27.2.7.

Most carbon atoms—except those in carbonyl and carboxylic groups—generate a signal, which makes the FID an almost universal detector for organic compounds. Most inorganic compounds and many gases, such as H2O and CO2, are not detected, which makes the FID a useful detector for the analysis of organic analytes in atmospheric and aqueous environmental samples. Advantages of the FID include a detection limit that is approximately two to three orders of magnitude smaller than that for a thermal conductivity detector, and a linear response that spans 6–7 orders of magnitude in the amount of analyte injected. The sample, of course, is destroyed when using a flame ionization detector.

Electron Capture Detector (ECD)

The electron capture detector is an example of a selective detector. As shown in Figure 27.2.8, the detector consists of a $\beta$-emitter, such as $^{63}$Ni. The emitted electrons ionize the mobile phase, usually N2, generating a standing current between a pair of electrodes. When a solute with a high affinity for capturing electrons elutes from the column, the current decreases, which serves as the signal. The ECD is highly selective toward solutes with electronegative functional groups, such as halogens and nitro groups, and is relatively insensitive to amines, alcohols, and hydrocarbons. Although its detection limit is excellent, its linear range extends over only about two orders of magnitude. A $\beta$-particle is an electron.

Mass Spectrometer (MS)

A mass spectrometer is an instrument that ionizes a gaseous molecule using sufficient energy that the resulting ion breaks apart into smaller ions. Because these ions have different mass-to-charge ratios, it is possible to separate them using a magnetic field or an electrical field. The resulting mass spectrum contains both quantitative and qualitative information about the analyte. Figure 27.2.9 shows a mass spectrum for toluene. Figure 27.2.10 shows a block diagram of a typical gas chromatography-mass spectrometer (GC–MS) instrument.
The effluent from the column enters the mass spectrometer's ion source in a manner that eliminates the majority of the carrier gas. In the ionization chamber the remaining molecules—a mixture of carrier gas, solvent, and solutes—undergo ionization and fragmentation. The mass spectrometer's mass analyzer separates the ions by their mass-to-charge ratio and a detector counts the ions and displays the mass spectrum.

There are several options for monitoring a chromatogram when using a mass spectrometer as the detector. The most common method is to continuously scan the entire mass spectrum and report the total signal for all ions that reach the detector during each scan. This total ion scan provides universal detection for all analytes. We can achieve some degree of selectivity by monitoring one or more specific mass-to-charge ratios, a process called selective-ion monitoring (a short sketch of this difference appears at the end of this section).

A mass spectrometer provides excellent detection limits, typically 25 fg to 100 pg, with a linear range that spans 5 orders of magnitude. Because we continuously record the mass spectrum of the column's eluent, we can go back and examine the mass spectrum for any time increment. This is a distinct advantage for GC–MS because we can use the mass spectrum to help identify a mixture's components.

Other Detectors

Two additional detectors are similar in design to a flame ionization detector. In the flame photometric detector, optical emission from phosphorus and sulfur provides a detector selective for compounds that contain these elements. The thermionic detector responds to compounds that contain nitrogen or phosphorus.

A Fourier transform infrared spectrophotometer (FT–IR) also can serve as a detector. In GC–FT–IR, effluent from the column flows through an optical cell constructed from a 10–40 cm Pyrex tube with an internal diameter of 1–3 mm. The cell's interior surface is coated with a reflecting layer of gold. Multiple reflections of the source radiation as it is transmitted through the cell increase the optical path length through the sample. As is the case with GC–MS, an FT–IR detector continuously records the column eluent's spectrum, which allows us to examine the IR spectrum for any time increment.
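The difference between a total ion chromatogram and selective-ion monitoring, referenced above, is easy to see in code. The sketch below assumes each MS scan is reduced to a retention time and a dictionary of m/z-intensity pairs; the scan data are invented for illustration and do not correspond to a real analyte.

```python
# Each scan: (retention time in min, {m/z: ion intensity}).
# These values are hypothetical, for illustration only.
scans = [
    (4.10, {91: 120, 92: 860, 65: 140}),
    (4.12, {91: 540, 92: 3900, 65: 610}),
    (4.14, {91: 310, 92: 2200, 65: 350}),
]

# Total ion chromatogram: sum every intensity in each scan (universal detection)
tic = [(t, sum(spectrum.values())) for t, spectrum in scans]

# Selective-ion monitoring: follow a single m/z (92 here, a hypothetical choice)
sim = [(t, spectrum.get(92, 0)) for t, spectrum in scans]

for (t, total), (_, selected) in zip(tic, sim):
    print(f"t = {t:.2f} min   TIC = {total:5d}   SIM (m/z 92) = {selected:5d}")
```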
There are two broad classes of chromatographic columns: packed columns and capillary columns. In general, a packed column can handle larger samples and a capillary column can separate more complex mixtures.

Packed Columns

Packed columns are constructed from glass, stainless steel, copper, or aluminum, and typically are 2–6 m in length with internal diameters of 2–4 mm. The column is filled with a particulate solid support, with particle diameters ranging from 37–44 μm to 250–354 μm. Figure 27.3.1 shows a typical example of a packed column.

The most widely used particulate support is diatomaceous earth, which is composed of the silica skeletons of diatoms. These particles are very porous, with surface areas ranging from 0.5–7.5 m2/g, which provides ample contact between the mobile phase and the stationary phase. When hydrolyzed, the surface of a diatomaceous earth contains silanol groups (–SiOH) that serve as active sites for adsorbing solute molecules in gas-solid chromatography (GSC). In gas-liquid chromatography (GLC), we coat the packing material with a liquid stationary phase. To prevent uncoated packing material from adsorbing solutes, which degrades the quality of the separation, surface silanols are deactivated by reacting them with dimethyldichlorosilane and rinsing with an alcohol—typically methanol—before coating the particles with stationary phase. The column in Figure 27.3.1, for example, has approximately 1800 plates/m, or a total of approximately 3600 theoretical plates.

Capillary Columns

A capillary, or open tubular, column is constructed from fused silica and is coated with a protective polymer coating. Columns range from 15–100 m in length with an internal diameter of approximately 150–300 μm. Figure 27.3.2 shows an example of a typical capillary column.

Capillary columns are of three principal types. In a wall-coated open tubular column (WCOT) a thin layer of stationary phase, typically 0.25 μm thick, is coated on the capillary's inner wall. In a porous-layer open tubular column (PLOT), a porous solid support—alumina, silica gel, and molecular sieves are typical examples—is attached to the capillary's inner wall. A support-coated open tubular column (SCOT) is a PLOT column that includes a liquid stationary phase. Figure 27.3.3 shows the differences between these types of capillary columns.

A capillary column provides a significant improvement in separation efficiency because it has more theoretical plates per meter and is longer than a packed column. For example, the capillary column in Figure 27.3.2 has almost 4300 plates/m, or a total of 129 000 theoretical plates. On the other hand, a packed column can handle a larger sample. Because of its smaller diameter, a capillary column requires a smaller sample, typically less than $10^{-2}$ μL.

Stationary Phases for Gas-Liquid Chromatography

Elution order in gas–liquid chromatography depends on two factors: the boiling point of the solutes and the interaction between the solutes and the stationary phase. If a mixture's components have significantly different boiling points, then the choice of stationary phase is less critical. If two solutes have similar boiling points, then a separation is possible only if the stationary phase selectively interacts with one of the solutes. As a general rule, nonpolar solutes are separated more easily when using a nonpolar stationary phase, and polar solutes are easier to separate when using a polar stationary phase.
There are several important criteria for choosing a stationary phase: it must not react with the solutes, it must be thermally stable, it must have a low volatility, and it must have a polarity that is appropriate for the sample's components. Table 27.3.1 summarizes the properties of several popular stationary phases.

Table 27.3.1. Selected Examples of Stationary Phases for Gas-Liquid Chromatography

| stationary phase | polarity | trade name | temperature limit (°C) | representative applications |
|---|---|---|---|---|
| squalane | nonpolar | Squalane | 150 | low-boiling aliphatic hydrocarbons |
| Apiezon L | nonpolar | Apiezon L | 300 | amides, fatty acid methyl esters, terpenoids |
| polydimethyl siloxane | slightly polar | SE-30 | 300–350 | alkaloids, amino acid derivatives, drugs, pesticides, phenols, steroids |
| phenylmethyl polysiloxane (50% phenyl, 50% methyl) | moderately polar | OV-17 | 375 | alkaloids, drugs, pesticides, polyaromatic hydrocarbons, polychlorinated biphenyls |
| trifluoropropylmethyl polysiloxane (50% trifluoropropyl, 50% methyl) | moderately polar | OV-210 | 275 | alkaloids, amino acid derivatives, drugs, halogenated compounds, ketones |
| cyanopropylphenylmethyl polysiloxane (50% cyanopropyl, 50% phenylmethyl) | polar | OV-225 | 275 | nitriles, pesticides, steroids |
| polyethylene glycol | polar | Carbowax 20M | 225 | aldehydes, esters, ethers, phenols |

Many stationary phases have the general structure shown in Figure 27.3.4a. A stationary phase of polydimethyl siloxane, in which all the –R groups are methyl groups, –CH3, is nonpolar and often makes a good first choice for a new separation. The order of elution when using polydimethyl siloxane usually follows the boiling points of the solutes, with lower boiling solutes eluting first. Replacing some of the methyl groups with other substituents increases the stationary phase's polarity and provides greater selectivity. For example, replacing 50% of the –CH3 groups with phenyl groups, –C6H5, produces a slightly polar stationary phase. Increasing polarity is provided by substituting trifluoropropyl, –C3H6CF3, and cyanopropyl, –C3H6CN, functional groups, or by using a stationary phase of polyethylene glycol (Figure 27.3.4b).

An important problem with all liquid stationary phases is their tendency to elute, or bleed, from the column when it is heated. The temperature limits in Table 27.3.1 minimize this loss of stationary phase. Capillary columns with bonded or cross-linked stationary phases provide superior stability. A bonded stationary phase is attached chemically to the capillary's silica surface. Cross-linking, which is done after the stationary phase is in the capillary column, links together separate polymer chains to provide greater stability.

Another important consideration is the thickness of the stationary phase, with thinner films improving separation efficiency, as we learned in Chapter 26.4. The most common thickness is 0.25 μm, although a thicker film is useful for highly volatile solutes, such as gases, because it has a greater capacity for retaining such solutes. Thinner films are used when separating solutes of low volatility, such as steroids. A sketch of this film-thickness effect appears at the end of this section.

A few stationary phases take advantage of chemical selectivity. The most notable are stationary phases that contain chiral functional groups, which are used to separate enantiomers [Hinshaw, J. V. LC-GC 1993, 11, 644–648].
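The film-thickness effect noted above follows from the stationary-phase mass-transfer term in the Chapter 26 summary tables, $H_s = \frac{qkd_f^2}{(1+k)^2 D_s}u$. In the Python sketch below, the retention factor, diffusion coefficient, velocity, and shape factor $q$ are assumed values, chosen only to show the $d_f^2$ scaling.

```python
def H_s(d_f, k, D_s, u, q=0.5):
    """Plate-height contribution from mass transfer in the stationary phase,
    H_s = q * k * d_f**2 / ((1 + k)**2 * D_s) * u.
    q is a shape factor; 0.5 is an assumed value for a thin film."""
    return q * k * d_f**2 / ((1 + k)**2 * D_s) * u

# Hypothetical conditions: k = 5, D_s = 1e-10 m^2/s, u = 0.3 m/s
for d_f in (0.25e-6, 1.0e-6):   # film thickness, in meters
    print(f"d_f = {d_f * 1e6:.2f} um -> H_s = {H_s(d_f, 5, 1e-10, 0.3) * 1e3:.3f} mm")
```

Quadrupling the film thickness increases this contribution sixteen-fold, which is why thin films are preferred unless extra retention of very volatile solutes is needed.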
Quantitative Applications

Gas chromatography is widely used for the analysis of a diverse array of samples in environmental, clinical, pharmaceutical, biochemical, forensic, food science and petrochemical laboratories. Table 27.4.1 provides some representative examples of applications.

Table 27.4.1. Representative Applications of Gas Chromatography

| area | applications |
|---|---|
| environmental analysis | greenhouse gases (CO2, CH4, NOx) in air; pesticides in water, wastewater, and soil; vehicle emissions; trihalomethanes in drinking water |
| clinical analysis | drugs; blood alcohols |
| forensic analysis | analysis of arson accelerants; detection of explosives |
| consumer products | volatile organics in spices and fragrances; trace organics in whiskey; monomers in latex paint |
| petrochemical and chemical industry | purity of solvents; refinery gas; composition of gasoline |

Quantitative Calculations

In a GC analysis the area under the peak is proportional to the amount of analyte injected onto the column. A peak's area is determined by integration, which usually is handled by the instrument's computer or by an electronic integrating recorder. If two peaks are resolved fully, the determination of their respective areas is straightforward. Before electronic integrating recorders and computers, two methods were used to find the area under a curve. One method used a manual planimeter; as you use the planimeter to trace an object's perimeter, it records the area. A second approach for finding a peak's area is the cut-and-weigh method. The chromatogram is recorded on a piece of paper and each peak of interest is cut out and weighed. Assuming the paper is uniform in thickness and density of fibers, the ratio of weights for two peaks is the same as the ratio of areas. Of course, this approach destroys your chromatogram.

Overlapping peaks, however, require a choice between one of several options for dividing up the area shared by the two peaks (Figure 27.4.1). Which method we use depends on the relative size of the two peaks and their resolution. In some cases, the use of peak heights provides more accurate results [(a) Bicking, M. K. L. Chromatography Online, April 2006; (b) Bicking, M. K. L. Chromatography Online, June 2006].

For quantitative work we need to establish a calibration curve that relates the detector's response to the analyte's concentration. If the injection volume is identical for every standard and sample, then an external standardization provides both accurate and precise results. Unfortunately, even under the best conditions the relative precision for replicate injections may be as much as 5%; often it is substantially worse. For quantitative work that requires high accuracy and precision, the use of internal standards is recommended.

Example 27.4.1

Marriott and Carpenter report the following data for five replicate injections of a mixture that contains 1% v/v methyl isobutyl ketone and 1% v/v p-xylene in dichloromethane [Marriott, P. J.; Carpenter, P. D. J. Chem. Educ. 1996, 73, 96–99].

| injection | peak | peak area (arb. units) |
|---|---|---|
| I | 1 | 49075 |
| I | 2 | 78112 |
| II | 1 | 85829 |
| II | 2 | 135404 |
| III | 1 | 84136 |
| III | 2 | 132332 |
| IV | 1 | 71681 |
| IV | 2 | 112889 |
| V | 1 | 58054 |
| V | 2 | 91287 |

Assume that p-xylene (peak 2) is the analyte, and that methyl isobutyl ketone (peak 1) is the internal standard. Determine the 95% confidence interval for a single-point standardization with and without using the internal standard.
Solution

For a single-point external standardization we ignore the internal standard and determine the relationship between the peak area for p-xylene, A2, and the concentration, C2, of p-xylene.

$A_{2}=k C_{2} \nonumber$

Substituting the known concentration for p-xylene (1% v/v) and the appropriate peak areas gives the following values for the constant k.

$78112 \quad 135404 \quad 132332 \quad 112889 \quad 91287 \nonumber$

The average value for k is 110 000 with a standard deviation of 25 100 (a relative standard deviation of 22.8%). The 95% confidence interval is

$\mu=\overline{X} \pm \frac{t s}{\sqrt{n}}=110000 \pm \frac{(2.78)(25100)}{\sqrt{5}}=110000 \pm 31200 \nonumber$

For an internal standardization, the relationship between the analyte's peak area, A2, the internal standard's peak area, A1, and their respective concentrations, C2 and C1, is

$\frac{A_{2}}{A_{1}}=k \frac{C_{2}}{C_{1}} \nonumber$

Substituting in the known concentrations and the appropriate peak areas gives the following values for the constant k.

$1.5917 \quad 1.5776 \quad 1.5728 \quad 1.5749 \quad 1.5724 \nonumber$

The average value for k is 1.5779 with a standard deviation of 0.0080 (a relative standard deviation of 0.507%). The 95% confidence interval is

$\mu=\overline{X} \pm \frac{t s}{\sqrt{n}}=1.5779 \pm \frac{(2.78)(0.0080)}{\sqrt{5}}=1.5779 \pm 0.0099 \nonumber$

Although there is a substantial variation in the individual peak areas for this set of replicate injections, the internal standard compensates for these variations, providing a more accurate and precise calibration.

Exercise 27.4.1

Figure 27.4.2 shows chromatograms for five standards and for one sample. Each standard and sample contains the same concentration of an internal standard, which is 2.50 mg/mL. For the five standards, the concentrations of analyte are 0.20 mg/mL, 0.40 mg/mL, 0.60 mg/mL, 0.80 mg/mL, and 1.00 mg/mL, respectively. Determine the concentration of analyte in the sample by (a) ignoring the internal standards and creating an external standards calibration curve, and by (b) creating an internal standard calibration curve. For each approach, report the analyte's concentration and the 95% confidence interval. Use peak heights instead of peak areas.

Answer

The following table summarizes my measurements of the peak heights for each standard and the sample, and their ratio (although your absolute values for peak heights will differ from mine, depending on the size of your monitor or printout, your relative peak height ratios should be similar to mine).

| [standard] (mg/mL) | peak height of internal standard (mm) | peak height of analyte (mm) | peak height ratio |
|---|---|---|---|
| 0.20 | 35 | 7 | 0.20 |
| 0.40 | 41 | 16 | 0.39 |
| 0.60 | 44 | 27 | 0.61 |
| 0.80 | 48 | 39 | 0.81 |
| 1.00 | 41 | 41 | 1.00 |
| sample | 39 | 21 | 0.54 |

Figure (a) shows the calibration curve and the calibration equation when we ignore the internal standard. Substituting the sample's peak height into the calibration equation gives the analyte's concentration in the sample as 0.49 mg/mL. The 95% confidence interval is ±0.24 mg/mL. The calibration curve shows quite a bit of scatter in the data because of uncertainty in the injection volumes. Figure (b) shows the calibration curve and the calibration equation when we include the internal standard. Substituting the sample's peak height ratio into the calibration equation gives the analyte's concentration in the sample as 0.54 mg/mL. The 95% confidence interval is ±0.04 mg/mL. The data for this exercise were created so that the analyte's actual concentration is 0.55 mg/mL.
Given the resolution of my ruler's scale, my answer is pretty reasonable. Your measurements may be slightly different, but your answers should be close to the actual values.

Qualitative Applications

In addition to a quantitative analysis, we also can use chromatography to identify the components of a mixture. As noted earlier, when we use an FT–IR or a mass spectrometer as the detector we have access to the eluent's full spectrum for any retention time. By interpreting the spectrum or by searching against a library of spectra, we can identify the analyte responsible for each chromatographic peak.

In addition to identifying the component responsible for a particular chromatographic peak, we also can use the saved spectra to evaluate peak purity. If only one component is responsible for a chromatographic peak, then the spectra should be identical throughout the peak's elution. If a spectrum at the beginning of the peak's elution is different from a spectrum taken near the end of the peak's elution, then at least two components are co-eluting.

When using a nonspectroscopic detector, such as a flame ionization detector, we must find another approach if we wish to identify the components of a mixture. One approach is to spike a sample with the suspected compound and look for an increase in peak height. We also can compare a peak's retention time to the retention time for a known compound if we use identical operating conditions. Because a compound's retention times on two nominally identical columns are not likely to be the same—differences in packing efficiency, for example, will affect a solute's retention time on a packed column—creating a table of standard retention times is not possible.

Kovat's retention index provides one solution to the problem of matching retention times. Under isothermal conditions, the logarithms of the adjusted retention times for the normal alkanes increase linearly with the number of carbon atoms. Kovat defined the retention index, I, for a normal alkane as 100 times the number of carbon atoms. For example, the retention index is 400 for butane, C4H10, and 500 for pentane, C5H12. To determine a compound's retention index, Icpd, we use the following formula

$I_{cpd} = 100 \times \frac {\log t_{r,cpd}^{\prime} - \log t_{r,x}^{\prime}} {\log t_{r, x+1}^{\prime} - \log t_{r,x}^{\prime}} + I_x \label{12.1}$

where $t_{r,cpd}^{\prime}$ is the compound's adjusted retention time, $t_{r,x}^{\prime}$ and $t_{r,x+1}^{\prime}$ are the adjusted retention times for the normal alkanes that elute immediately before and immediately after the compound, respectively, and Ix is the retention index for the normal alkane that elutes immediately before the compound. A compound's retention index for a particular set of chromatographic conditions—such as the choice of stationary phase, mobile phase, column type, column length, and temperature—is reasonably consistent from day-to-day and between different columns and instruments.

Tables of Kovat's retention indices are available; see, for example, the NIST Chemistry WebBook. A search for toluene returns 341 values of I for over 20 different stationary phases, and for both packed columns and capillary columns.

Example 27.4.2

In a separation of a mixture of hydrocarbons the following adjusted retention times are measured: 2.23 min for propane, 5.71 min for isobutane, and 6.67 min for butane. What is the Kovat's retention index for each of these hydrocarbons?
Solution

Kovat's retention index for a normal alkane is 100 times the number of carbons; thus, for propane, I = 300, and for butane, I = 400. To find Kovat's retention index for isobutane we use Equation \ref{12.1}.

$I_\text{isobutane} =100 \times \frac{\log (5.71)-\log (2.23)}{\log (6.67)-\log (2.23)}+300=386 \nonumber$

Exercise 27.4.2

When using a column with the same stationary phase as in Example 27.4.2, you find that the adjusted retention times for propane and butane are 4.78 min and 6.86 min, respectively. What is the expected adjusted retention time for isobutane?

Answer

Because we are using the same stationary phase we can assume that isobutane's retention index of 386 remains unchanged. Using Equation \ref{12.1}, we have

$386=100 \times \frac{\log x-\log (4.78)}{\log (6.86)-\log (4.78)}+300 \nonumber$

where x is the adjusted retention time for isobutane. Solving for x, we find that

$0.86=\frac{\log x-\log (4.78)}{\log (6.86)-\log (4.78)} \nonumber$

$0.135=\log x-0.679 \nonumber$

$0.814=\log x \nonumber$

$x=6.52 \nonumber$

The adjusted retention time for isobutane is 6.5 min.
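Equation \ref{12.1} and its inversion are straightforward to code. The following Python sketch reproduces Example 27.4.2 and Exercise 27.4.2; the only inputs are the adjusted retention times and the index of the bracketing alkane.

```python
import math

def kovats_index(t_cpd, t_x, t_x1, I_x):
    """Kovat's retention index from adjusted retention times (isothermal).
    t_x, t_x1: adjusted retention times of the normal alkanes that elute
    just before and just after the compound; I_x is the index of the
    earlier alkane (100 times its number of carbons)."""
    return 100 * (math.log10(t_cpd) - math.log10(t_x)) / (
        math.log10(t_x1) - math.log10(t_x)) + I_x

# Example 27.4.2: propane (I = 300) at 2.23 min, butane (I = 400) at 6.67 min
print(kovats_index(5.71, 2.23, 6.67, 300))   # -> ~386 for isobutane

def retention_time_from_index(I_cpd, t_x, t_x1, I_x):
    """Invert Equation 12.1 to predict an adjusted retention time,
    as in Exercise 27.4.2."""
    frac = (I_cpd - I_x) / 100
    return 10 ** (math.log10(t_x) + frac * (math.log10(t_x1) - math.log10(t_x)))

print(retention_time_from_index(386, 4.78, 6.86, 300))   # -> ~6.5 min
```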
In high-performance liquid chromatography (HPLC) we inject the sample, which is in solution form, into a liquid mobile phase. The mobile phase carries the sample through a packed or capillary column that separates the sample's components based on their ability to partition between the mobile phase and the stationary phase. Because it combines separation with analysis, HPLC provides excellent selectivity. By adjusting conditions it usually is possible to design a separation so that analytes elute by themselves, even when the mixture is complex. Additional selectivity is possible by using a detector that does not respond to all analytes.

• 28.1: Scope of HPLC High-performance liquid chromatography consists of four broad types of separations: the partitioning of a solute between two liquid phases, adsorption of a solute on a solid substrate, the attraction of an ionic solute to an ion-exchange resin, and the exclusion of a sufficiently large solute from entering into a solid substrate.
• 28.2: Column Efficiency in Liquid Chromatography Unlike gas chromatography, an HPLC instrument must often include additional tubing to connect together the sample injection port and the column, and the column and the detector. Solutes moving through this tubing, which does not include stationary phase, travel with a velocity that is slower at the walls of the tubing and faster at the center of the tubing; the result is additional band broadening.
• 28.3: Instruments for Liquid Chromatography In high-performance liquid chromatography (HPLC) we inject the sample, which is in solution form, into a liquid mobile phase. The mobile phase carries the sample through a packed or capillary column that separates the sample's components based on their ability to partition between the mobile phase and the stationary phase. In this section we consider each of these.
• 28.4: Partition Chromatography In partition chromatography, a solute's retention time is determined by the extent to which it moves from the mobile phase into the stationary phase, and from the stationary phase back into the mobile phase. The extent of this equilibrium partitioning is determined by the polarity of the solutes, the stationary phase, and the mobile phase.
• 28.5: Adsorption Chromatography In adsorption chromatography (or liquid-solid chromatography, LSC) the column packing also serves as the stationary phase. For most samples, liquid–solid chromatography does not offer any special advantages over liquid–liquid chromatography. One exception is the analysis of isomers, where LSC excels.
• 28.6: Ion-Exchange Chromatography In ion-exchange chromatography (IEC) the stationary phase is a cross-linked polymer resin, usually divinylbenzene cross-linked polystyrene, with covalently attached ionic functional groups. The counterions to these fixed charges are mobile and are displaced by ions that compete more favorably for the exchange sites.
• 28.7: Size-Exclusion Chromatography In size-exclusion chromatography—which also is known by the terms molecular-exclusion or gel permeation chromatography—the separation of solutes depends upon their ability to enter into the pores of the stationary phase. Smaller solutes spend proportionally more time within the pores and take longer to elute from the column.
28: High-Performance Liquid Chromatography

Gas chromatography relies largely on two specific types of interactions, both of which involve the stationary phase: the partitioning of the solute into a polar or a non-polar stationary phase, or the adsorption of the solute onto a solid packing material. The separation of a complex mixture into its component parts is determined primarily by the boiling points of the solutes and by differences in the solubility of the solutes in the stationary phase. The properties of the mobile phase, on the other hand, are less important. It is not surprising, then, that there is not much variety in the basic types of gas chromatography.

High-performance liquid chromatography encompasses a much richer group of techniques because the separation depends on the ability of the solutes to partition both into the stationary phase and into the mobile phase. The range of possible interactions between the solutes and the stationary phase also is greater in HPLC than in GC. In addition to separations based on differences in the solubility of the solutes in the stationary phase and the mobile phase (normal and reverse phase partition chromatography) and separations based on the adsorption of solutes on a solid substrate (adsorption chromatography), the separation of ions is possible using ion-exchange resins as stationary phases (ion-exchange chromatography), as is the separation of solutes by size (size-exclusion chromatography).

28.02: Column Efficiency in Liquid Chromatography

In Chapter 26 we considered three factors that affect band broadening—multiple paths, longitudinal diffusion, and mass transfer—expressing the height of a theoretical plate, $H$, as a function of the mobile phase's velocity, $u$, using the van Deemter equation

$H = A + \frac{B}{u} + Cu \nonumber$

where $A$ is the contribution from multiple paths, $B$ is the contribution from longitudinal diffusion, and $C$ is the contribution from mass transfer.

Unlike gas chromatography, where there is little distance between the point of injection and the column, and little distance between the column and the detector, an HPLC instrument must often include additional tubing to connect the sample injection port to the column, and the column to the detector. Solutes moving through this tubing, which does not include stationary phase, travel with a velocity that is slower at the walls of the tubing and faster at the center of the tubing; the result is additional band broadening. The magnitude of this contribution to band broadening is minimized by keeping the length of connecting tubing as short as possible, by using tubing with a smaller internal diameter, and by using lower flow rates.
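Because $H$ depends on $u$ in opposite directions through the $B$ and $C$ terms, the van Deemter equation has a minimum at $u_{opt} = \sqrt{B/C}$, where $H_{min} = A + 2\sqrt{BC}$; this is one reason flow rate appears among the band-broadening controls above. The coefficients in the Python sketch below are assumed values for illustration, not measurements from a real column.

```python
import math

def van_deemter(u, A, B, C):
    """Plate height H = A + B/u + C*u (van Deemter equation)."""
    return A + B / u + C * u

# Hypothetical coefficients (H in mm, u in mm/s), for illustration only
A, B, C = 0.5, 1.0, 0.1

u_opt = math.sqrt(B / C)           # velocity that minimizes H (set dH/du = 0)
H_min = A + 2 * math.sqrt(B * C)   # plate height at that optimum

print(f"u_opt = {u_opt:.2f} mm/s, H_min = {H_min:.2f} mm")
for u in (1.0, u_opt, 10.0):
    print(f"u = {u:5.2f} mm/s -> H = {van_deemter(u, A, B, C):.2f} mm")
```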
In high-performance liquid chromatography (HPLC) we inject the sample, which is in solution form, into a liquid mobile phase. The mobile phase carries the sample through a packed or capillary column that separates the sample's components based on their ability to partition between the mobile phase and the stationary phase. Figure 28.3.1 shows an example of a typical HPLC instrument, which has several key components: reservoirs that store the mobile phase; a pump for pushing the mobile phase through the system; an injector for introducing the sample; a column for separating the sample into its component parts; and a detector for monitoring the eluent as it comes off the column. Let's consider each of these components.

HPLC Columns

An HPLC typically includes two columns: an analytical column, which is responsible for the separation, and a guard column that is placed before the analytical column to protect it from contamination.

Analytical Columns

The most common type of HPLC column is a stainless steel tube with an internal diameter between 2.1 mm and 4.6 mm and a length between 30 mm and 300 mm (Figure 28.3.2). The column is packed with 3–10 µm porous silica particles with either an irregular or a spherical shape. Typical column efficiencies are 40 000–60 000 theoretical plates/m. A 25-cm column with 50 000 plates/m has 12 500 theoretical plates.

Capillary columns use less solvent and, because the sample is diluted to a lesser extent, produce larger signals at the detector. These columns are made from fused silica capillaries with internal diameters from 44–200 μm and lengths of 50–250 mm. Capillary columns packed with 3–5 μm particles have been prepared with column efficiencies of up to 250 000 theoretical plates [Novotny, M. Science, 1989, 246, 51–57].

One limitation to a packed capillary column is the back pressure that develops when pumping the mobile phase through the small interstitial spaces between the particulate micron-sized packing material (Figure 28.3.3). Because the tubing and the fittings that carry the mobile phase have pressure limits, a higher back pressure requires a lower flow rate and a longer analysis time. Monolithic columns, in which the solid support is a single, porous rod, offer column efficiencies equivalent to a packed capillary column while allowing for faster flow rates. A monolithic column—which usually is similar in size to a conventional packed column, although smaller, capillary columns also are available—is prepared by forming the monolithic rod in a mold and covering it with PTFE tubing or a polymer resin. Monolithic rods made of a silica-gel polymer typically have macropores with diameters of approximately 2 μm and mesopores—pores within the macropores—with diameters of approximately 13 nm [Cabrera, K. Chromatography Online, April 1, 2008].

Guard Columns

Two problems tend to shorten the lifetime of an analytical column. First, solutes that bind irreversibly to the stationary phase degrade the column's performance by decreasing the amount of stationary phase available for effecting a separation. Second, particulate material injected with the sample may clog the analytical column. To minimize these problems we place a guard column before the analytical column. A guard column usually contains the same particulate packing material and stationary phase as the analytical column, but is significantly shorter and less expensive—a length of 7.5 mm and a cost one-tenth of that for the corresponding analytical column is typical.
Because they are intended to be sacrificial, guard columns are replaced regularly. If you look closely at Figure 28.3.1, you will see the small guard column just above the analytical column.

HPLC Plumbing

In a gas chromatograph the pressure from a compressed gas cylinder is sufficient to push the mobile phase through the column. Pushing a liquid mobile phase through a column, however, takes a great deal more effort, generating pressures in excess of several hundred atmospheres. In this section we consider the basic plumbing needed to move the mobile phase through the column and to inject the sample into the mobile phase.

Moving the Mobile Phase

A typical HPLC includes between 1–4 reservoirs for storing mobile phase solvents. The instrument in Figure 28.3.1, for example, has two mobile phase reservoirs that are used for an isocratic elution or a gradient elution by drawing solvents from one or both reservoirs. (A short sketch of a simple gradient program appears at the end of this section.)

Before using a mobile phase solvent we must remove dissolved gases, such as N2 and O2, and small particulate matter, such as dust. Because there is a large drop in pressure across the column—the pressure at the column's entrance is as much as several hundred atmospheres, but it is atmospheric pressure at the column's exit—gases dissolved in the mobile phase are released as gas bubbles that may interfere with the detector's response. Degassing is accomplished in several ways, but the most common are the use of a vacuum pump or sparging with an inert gas, such as He, which has a low solubility in the mobile phase. Particulate materials, which may clog the HPLC tubing or column, are removed by filtering the solvents. Bubbling an inert gas through the mobile phase to release volatile dissolved gases is called sparging.

The mobile phase solvents are pulled from their reservoirs by the action of one or more pumps. Figure 28.3.4 shows a close-up view of the pumps for the instrument in Figure 28.3.1. The working pump and the equilibrating pump each have a piston whose back and forth movement maintains a constant flow rate of up to several mL/min and provides the high output pressure needed to push the mobile phase through the chromatographic column. In this particular instrument, each pump sends its mobile phase to a mixing chamber where they combine to form the final mobile phase. The relative speed of the two pumps determines the mobile phase's final composition.

The back and forth movement of a reciprocating pump creates a pulsed flow that contributes noise to the chromatogram. To minimize these pulses, each pump in Figure 28.3.4 has two cylinders. During the working cylinder's forward stroke it fills the equilibrating cylinder and establishes flow through the column. When the working cylinder is on its reverse stroke, the flow is maintained by the piston in the equilibrating cylinder. The result is a pulse-free flow.

There are other ways to control the mobile phase's composition and flow rate. For example, instead of the two pumps in Figure 28.3.4, we can place a solvent proportioning valve before a single pump. The solvent proportioning valve connects two or more solvent reservoirs to the pump and determines how much of each solvent is pulled during each of the pump's cycles. Another approach for eliminating a pulsed flow is to include a pulse damper between the pump and the column. A pulse damper is a chamber filled with an easily compressed fluid and a flexible diaphragm. During the piston's forward stroke the fluid in the pulse damper is compressed.
When the piston withdraws to refill the pump, pressure from the expanding fluid in the pulse damper maintains the flow rate.

Injecting the Sample

The operating pressure within an HPLC is sufficiently high that we cannot inject the sample into the mobile phase by inserting a syringe through a septum, as is possible in gas chromatography. Instead, we inject the sample using a loop injector, a diagram of which is shown in Figure 28.3.5. In the load position a sample loop—which is available in a variety of sizes ranging from 0.5 μL to 5 mL—is isolated from the mobile phase and open to the atmosphere. The sample loop is filled using a syringe with a capacity several times that of the sample loop, with excess sample exiting through the waste line. After loading the sample, the injector is turned to the inject position, which redirects the mobile phase through the sample loop and onto the column. The instrument in Figure 28.3.1 uses an autosampler to inject samples; rather than a syringe pushing the sample into the sample loop, the autosampler draws the sample into the loop.

Detectors for HPLC

Many different types of detectors have been used to monitor HPLC separations, most of which use spectroscopy or electrochemistry to generate a measurable signal.

Spectroscopic Detectors

The most popular HPLC detectors take advantage of an analyte's UV/Vis absorption spectrum. These detectors range from simple designs, in which the analytical wavelength is selected using appropriate filters, to a modified spectrophotometer in which the sample compartment includes a flow cell. Figure 28.3.6 shows the design of a typical flow cell when using a diode array spectrometer as the detector. The flow cell has a volume of 1–10 μL and a path length of 0.2–1 cm.

When using a UV/Vis detector the resulting chromatogram is a plot of absorbance as a function of elution time. If the detector is a diode array spectrometer, then we also can display the result as a three-dimensional chromatogram that shows absorbance as a function of wavelength and elution time. One limitation to using absorbance is that the mobile phase cannot absorb at the wavelengths we wish to monitor. Absorbance detectors provide detection limits of as little as 100 pg–1 ng of injected analyte. If an analyte is fluorescent, we can place the flow cell in a spectrofluorimeter; detection limits are as little as 1–10 pg of injected analyte.

Electrochemical Detectors

Another common group of HPLC detectors are those based on electrochemical measurements such as amperometry, voltammetry, coulometry, and conductivity. Figure 28.3.7, for example, shows an amperometric flow cell. Effluent from the column passes over the working electrode—held at a constant potential relative to a downstream reference electrode—that completely oxidizes or reduces the analytes. The current flowing between the working electrode and the auxiliary electrode serves as the analytical signal. Detection limits for amperometric electrochemical detection are from 10 pg–1 ng of injected analyte.

Other Detectors

Several other detectors have been used in HPLC. Measuring a change in the mobile phase's refractive index is analogous to monitoring the mobile phase's thermal conductivity in gas chromatography. A refractive index detector is nearly universal, responding to almost all compounds, but has a relatively poor detection limit of 0.1–1 μg of injected analyte.
An additional limitation of a refractive index detector is that it cannot be used for a gradient elution unless the mobile phase components have identical refractive indexes.

Another useful detector is a mass spectrometer. Figure 28.3.8 shows a block diagram of a typical HPLC–MS instrument. The effluent from the column enters the mass spectrometer’s ion source using an interface that removes most of the mobile phase, an essential step because of the incompatibility between the liquid mobile phase and the mass spectrometer’s high vacuum environment. In the ionization chamber the remaining molecules—a mixture of the mobile phase components and solutes—undergo ionization and fragmentation. The mass spectrometer’s mass analyzer separates the ions by their mass-to-charge ratio (m/z). A detector counts the ions and displays the mass spectrum.

There are several options for monitoring the chromatogram when using a mass spectrometer as the detector. The most common method is to continuously scan the entire mass spectrum and report the total signal for all ions reaching the detector during each scan. This total ion scan provides universal detection for all analytes. We can achieve some degree of selectivity by monitoring only specific mass-to-charge ratios, a process called selective-ion monitoring.

The advantages of using a mass spectrometer in HPLC are the same as for gas chromatography. Detection limits are very good, typically 0.1–1 ng of injected analyte, with values as low as 1–10 pg for some samples. In addition, a mass spectrometer provides qualitative, structural information that can help to identify the analytes. The interface between the HPLC and the mass spectrometer is technically more difficult than that in a GC–MS because of the incompatibility of a liquid mobile phase with the mass spectrometer’s high vacuum requirement.
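Numerically, a total ion scan simply sums every ion in each scan, while selective-ion monitoring keeps a single m/z channel. Here is a minimal sketch using synthetic data; the m/z value of 256, the number of scans, and the intensities are invented for illustration only.

```python
import numpy as np

# Hypothetical LC–MS data: rows are scans (one per time point),
# columns are intensities at the m/z values in `mz`.
rng = np.random.default_rng(1)
mz = np.arange(100, 500)                                    # m/z axis
scans = rng.poisson(2, size=(600, mz.size)).astype(float)   # background noise
scans[200:220, mz == 256] += 500.0                          # a solute eluting near scan 210

# Total ion chromatogram: sum every ion in each scan (universal detection).
tic = scans.sum(axis=1)

# Selected-ion monitoring: watch only m/z = 256 (selective detection).
sim = scans[:, mz == 256].ravel()

print(tic.argmax(), sim.argmax())   # both peak near scan ~210
```

The trade-off is visible in the data: the TIC responds to everything that ionizes, while the SIM trace has far less background because all other channels are ignored.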
Of the many forms of liquid chromatography, partition chromatography is the most common. In partition chromatography, a solute's retention time is determined by the extent to which it moves from the mobile phase into the stationary phase, and from the stationary phase back into the mobile phase. The extent of this equilibrium partitioning is determined by the polarity of the solutes, the stationary phase, and the mobile phase. In normal-phase partition chromatography, the stationary phase is polar and the mobile phase is non-polar (or of low polarity), with more polar solutes taking longer to elute as they are more strongly retained by the polar stationary phase. In reverse-phase partition chromatography, the stationary phase is non-polar and the mobile phase is polar, with more polar solutes eluting more quickly as they are less strongly retained by the stationary phase. Of the two modes, reverse-phase partition chromatography is the more common.

Stationary Phases for Partition Chromatography

In partition chromatography the stationary phase is a liquid film coated on a packing material, typically 3–10 μm porous silica particles. Because the stationary phase may be partially soluble in the mobile phase, it may elute, or bleed from the column over time. To prevent the loss of stationary phase, which shortens the column’s lifetime, it is bound covalently to the silica particles. Bonded stationary phases are created by reacting the silica particles with an organochlorosilane of the general form Si(CH3)2RCl, where R is an alkyl or substituted alkyl group. To prevent unwanted interactions between the solutes and any remaining –SiOH groups, Si(CH3)3Cl is used to convert unreacted sites to $–\text{SiOSi(CH}_3)_3$; such columns are designated as end-capped.

The properties of a stationary phase depend on the organosilane’s alkyl group. If R is a polar functional group, then the stationary phase is polar. Examples of polar stationary phases include those where R contains a cyano (–C2H4CN), a diol (–C3H6OCH2CHOHCH2OH), or an amino (–C3H6NH2) functional group. The most common nonpolar stationary phases use an organochlorosilane where the R group is an n-octyl (C8) or n-octadecyl (C18) hydrocarbon chain. Most reversed-phase separations are carried out using a buffered aqueous solution as a polar mobile phase, or using other polar solvents, such as methanol and acetonitrile. Because the silica substrate may undergo hydrolysis in basic solutions, the pH of the mobile phase must be less than 7.5.

It seems odd that the more common form of liquid chromatography is identified as reverse-phase instead of normal phase. One of the earliest examples of chromatography was Mikhail Tswett’s separation of plant pigments, which used a polar column of calcium carbonate and a nonpolar mobile phase of petroleum ether. The assignment of normal and reversed, therefore, is all about precedence.

Mobile Phases for Partition Chromatography

The elution order of solutes in HPLC is governed by polarity. For a normal-phase separation, a solute of lower polarity spends proportionally less time in the polar stationary phase and elutes before a solute that is more polar. Given a particular stationary phase, retention times in normal-phase HPLC are controlled by adjusting the mobile phase’s properties. For example, if the resolution between two solutes is poor, switching to a less polar mobile phase keeps the solutes on the column for a longer time and provides more opportunity for their separation.
In reversed-phase HPLC the order of elution is the opposite of that in a normal-phase separation, with more polar solutes eluting first. Increasing the polarity of the mobile phase leads to longer retention times. Shorter retention times require a mobile phase of lower polarity.

Choosing a Mobile Phase: Using the Polarity Index

There are several indices that help in selecting a mobile phase, one of which is the polarity index [Snyder, L. R.; Glajch, J. L.; Kirkland, J. J. Practical HPLC Method Development, Wiley-Interscience: New York, 1988]. Table 28.4.1 provides values of the polarity index, $P^{\prime}$, for several common mobile phases, where larger values of $P^{\prime}$ correspond to more polar solvents. Mixing together two or more mobile phases—assuming they are miscible—creates a mobile phase of intermediate polarity. For example, a binary mobile phase made by combining solvent A and solvent B has a polarity index, $P_{AB}^{\prime}$, of

$P_{A B}^{\prime}=\Phi_{A} P_{A}^{\prime}+\Phi_{B} P_{B}^{\prime} \label{12.1}$

where $P_A^{\prime}$ and $P_B^{\prime}$ are the polarity indices for solvents A and B, and $\Phi_A$ and $\Phi_B$ are the volume fractions for the two solvents.

Table 28.4.1. Properties of HPLC Mobile Phases

mobile phase           polarity index ($P^{\prime}$)   UV cutoff (nm)
cyclohexane            0.04                            210
n-hexane               0.1                             210
carbon tetrachloride   1.6                             265
i-propyl ether         2.4                             220
toluene                2.4                             286
diethyl ether          2.8                             218
tetrahydrofuran        4.0                             220
ethanol                4.3                             210
ethyl acetate          4.4                             255
dioxane                4.8                             215
methanol               5.1                             210
acetonitrile           5.8                             190
water                  10.2

Example 28.4.1

A reversed-phase HPLC separation is carried out using a mobile phase of 60% v/v water and 40% v/v methanol. What is the mobile phase’s polarity index?

Solution

Using Equation \ref{12.1} and the values in Table 28.4.1, the polarity index for a 60:40 water–methanol mixture is

$P_{A B}^{\prime}=\Phi_\text{water} P_\text{water}^{\prime}+\Phi_\text{methanol} P_\text{methanol}^{\prime} \nonumber$

$P_{A B}^{\prime}=0.60 \times 10.2+0.40 \times 5.1=8.2 \nonumber$

Exercise 28.4.1

Suppose you need a mobile phase with a polarity index of 7.5. Explain how you can prepare this mobile phase using methanol and water.

Answer

If we let x be the fraction of water in the mobile phase, then 1 – x is the fraction of methanol. Substituting these values into Equation \ref{12.1} and solving for x

$7.5=10.2 x+5.1(1-x) \nonumber$

$7.5=10.2 x+5.1-5.1 x \nonumber$

$2.4=5.1 x \nonumber$

gives x as 0.47. The mobile phase is 47% v/v water and 53% v/v methanol.

As a general rule, a two unit change in the polarity index corresponds to an approximately 10-fold change in a solute’s retention factor. Here is a simple example. If a solute’s retention factor, k, is 22 when using water as a mobile phase ($P^{\prime}$ = 10.2), then switching to a mobile phase of 60:40 water–methanol ($P^{\prime}$ = 8.2) decreases k to approximately 2.2. Note that the retention factor becomes smaller because we are switching from a more polar mobile phase to a less polar mobile phase in a reversed-phase separation.
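These calculations are easy to script. Below is a minimal sketch of Equation 12.1 and the two-unit rule of thumb; the polarity_index helper is our own convenience function, not part of any library.

```python
# A minimal sketch of Equation 12.1 and the "two-unit" rule of thumb.
def polarity_index(fractions, indices):
    """Polarity index of a mixed mobile phase (volume fractions sum to 1)."""
    return sum(phi * p for phi, p in zip(fractions, indices))

P_water, P_methanol = 10.2, 5.1

# Example 28.4.1: 60:40 v/v water–methanol
P_mix = polarity_index([0.60, 0.40], [P_water, P_methanol])
print(round(P_mix, 1))   # 8.2

# Exercise 28.4.1: fraction of water that gives P' = 7.5
x = (7.5 - P_methanol) / (P_water - P_methanol)
print(round(x, 2))       # 0.47 -> 47% v/v water, 53% v/v methanol

# Rule of thumb: a two-unit change in P' changes k about 10-fold;
# k decreases as P' decreases in a reversed-phase separation.
k_water = 22
k_mix = k_water * 10 ** ((P_mix - P_water) / 2)
print(round(k_mix, 1))   # ~2.1 (the text's ~2.2 uses the rounded P' of 8.2)
```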
Choosing a Mobile Phase: Adjusting Selectivity

Changing the mobile phase’s polarity index changes a solute’s retention factor. As we learned in Chapter 26.4, however, a change in k is not an effective way to improve resolution when the initial value of k is greater than 10. To effect a better separation between two solutes we must improve the selectivity factor, $\alpha$. There are two common methods for increasing $\alpha$: adding a reagent to the mobile phase that reacts with the solutes in a secondary equilibrium reaction or switching to a different mobile phase.

Taking advantage of a secondary equilibrium reaction is a useful strategy for improving a separation [(a) Foley, J. P. Chromatography, 1987, 7, 118–128; (b) Foley, J. P.; May, W. E. Anal. Chem. 1987, 59, 102–109; (c) Foley, J. P.; May, W. E. Anal. Chem. 1987, 59, 110–115]. Figure 28.4.1 shows the reversed-phase separation of four weak acids—benzoic acid, terephthalic acid, p-aminobenzoic acid, and p-hydroxybenzoic acid—on a nonpolar C18 column using an aqueous buffer of acetic acid and sodium acetate as the mobile phase. The retention times for these weak acids are shorter when using a less acidic mobile phase because each solute is present in an anionic, weak base form that is less soluble in the nonpolar stationary phase. If the mobile phase’s pH is sufficiently acidic, the solutes are present as neutral weak acids that are more soluble in the stationary phase and take longer to elute. Because the weak acid solutes do not have identical pKa values, the pH of the mobile phase has a different effect on each solute’s retention time, allowing us to find the optimum pH for effecting a complete separation of the four solutes.

In Example 28.4.1 we learned how to adjust the mobile phase’s polarity by blending together two solvents. A polarity index, however, is just a guide, and binary mobile phase mixtures with identical polarity indices may not equally resolve a pair of solutes. Table 28.4.2, for example, shows retention times for four weak acids in two mobile phases with nearly identical values for $P^{\prime}$. Although the order of elution is the same for both mobile phases, each solute’s retention time is affected differently by the choice of organic solvent. If we switch from using acetonitrile to tetrahydrofuran, for example, we find that benzoic acid elutes more quickly and that p-hydroxybenzoic acid elutes more slowly. Although we can resolve fully these two solutes using a mobile phase that is 16% v/v acetonitrile, we cannot resolve them if the mobile phase is 10% tetrahydrofuran.

Table 28.4.2. Retention Times for Four Weak Acids in Mobile Phases With Similar Polarity Indexes

                    retention time (min)
                    16% acetonitrile (CH3CN) /          10% tetrahydrofuran (THF) /
                    84% pH 4.11 aqueous buffer          90% pH 4.11 aqueous buffer
                    ($P^{\prime}$ = 9.5)                ($P^{\prime}$ = 9.6)
$t_\text{r, BA}$    5.18                                4.01
$t_\text{r, PH}$    1.67                                2.91
$t_\text{r, PA}$    1.21                                1.05
$t_\text{r, TP}$    0.23                                0.54

Key: BA is benzoic acid; PH is p-hydroxybenzoic acid; PA is p-aminobenzoic acid; TP is terephthalic acid
Source: Harvey, D. T.; Byerly, S.; Bowman, A.; Tomlin, J. “Optimization of HPLC and GC Separations Using Response Surfaces,” J. Chem. Educ. 1991, 68, 162–168.

One strategy for finding the best mobile phase is to use the solvent triangle shown in Figure 28.4.2, which allows us to explore a broad range of mobile phases with only seven experiments. We begin by adjusting the amount of acetonitrile in the mobile phase to produce the best possible separation within the desired analysis time. Next, we use Table 28.4.3 to estimate the composition of methanol/H2O and tetrahydrofuran/H2O mobile phases that will produce similar analysis times. Four additional mobile phases are prepared using the binary and ternary mobile phases shown in Figure 28.4.2.
When we examine the chromatograms from these seven mobile phases we may find that one or more provides an adequate separation, or we may identify a region within the solvent triangle where a separation is feasible. Figure 28.4.3 shows a resolution map for the reversed-phase separation of benzoic acid, terephthalic acid, p-aminobenzoic acid, and p-hydroxybenzoic acid on a nonpolar C18 column in which the maximum desired analysis time is set to 6 min [Harvey, D. T.; Byerly, S.; Bowman, A.; Tomlin, J. J. Chem. Educ. 1991, 68, 162–168]. The areas in blue, green, and red show mobile phase compositions that do not provide baseline resolution. The unshaded area represents mobile phase compositions where a separation is possible. The choice to start with acetonitrile is arbitrary—we can just as easily choose to begin with methanol or with tetrahydrofuran.

Table 28.4.3. Composition of Mobile Phases With Approximately Equal Solvent Strengths

% v/v CH3OH    % v/v CH3CN    % v/v THF
0              0              0
10             6              4
20             14             10
30             22             16
40             32             24
50             40             30
60             50             36
70             60             44
80             72             52
90             87             62
100            99             71

Choosing a Mobile Phase: Isocratic and Gradient Elutions

A separation using a mobile phase that has a fixed composition is an isocratic elution. One difficulty with an isocratic elution is that an appropriate mobile phase strength for resolving early-eluting solutes may lead to unacceptably long retention times for late-eluting solutes. Optimizing the mobile phase for late-eluting solutes, on the other hand, may provide an inadequate separation of early-eluting solutes. Changing the mobile phase’s composition as the separation progresses is one solution to this problem. For a reversed-phase separation we use an initial mobile phase that is more polar. As the separation progresses, we adjust the composition of mobile phase so that it becomes less polar (see Figure 28.4.4). Such separations are called gradient elutions.

Choosing a Detector

The availability of different types of detectors provides another way to build selectivity into an analysis. Figure 28.4.5, for example, shows the reversed-phase separation of a mixture of flavonoids using UV/Vis detection at two different wavelengths. In this case, a wavelength of 260 nm increases the method's sensitivity for rutin relative to that for taxifolin. As shown in Figure 28.4.6, a fluorescence detector provides additional selectivity because only a few of a sample’s components are fluorescent. With a mass spectrometer as a detector, there are several options for monitoring the chromatogram. The most common method is to continuously scan the entire mass spectrum and report the total signal for all ions reaching the detector during each scan. This total ion scan provides universal detection for all analytes. As seen in Figure 28.4.7, we can achieve some degree of selectivity by monitoring only specific mass-to-charge ratios, a process called selective-ion monitoring.

Quantitative Applications of Partition Chromatography

Partition chromatography is used routinely for both qualitative and quantitative analyses of environmental, pharmaceutical, industrial, forensic, clinical, and consumer product samples.

Preparing Samples for Analysis

Samples in liquid form are injected into the HPLC after a suitable clean-up to remove any particulate materials, or after a suitable extraction to remove matrix interferents.
In determining polyaromatic hydrocarbons (PAH) in wastewater, for example, an extraction with CH2Cl2 serves the dual purpose of concentrating the analytes and isolating them from matrix interferents. Solid samples are first dissolved in a suitable solvent or the analytes of interest brought into solution by extraction. For example, an HPLC analysis for the active ingredients and the degradation products in a pharmaceutical tablet often begins by extracting the powdered tablet with a portion of mobile phase. Gas samples are collected by bubbling them through a trap that contains a suitable solvent. Organic isocyanates in industrial atmospheres are collected by bubbling the air through a solution of 1-(2-methoxyphenyl)piperazine in toluene. The reaction between the isocyanates and 1-(2-methoxyphenyl)piperazine both stabilizes them against degradation before the HPLC analysis and converts them to a chemical form that can be monitored by UV absorption.

Quantitative Calculations

A quantitative HPLC analysis is often easier than a quantitative GC analysis because a fixed volume sample loop provides a more precise and accurate injection. As a result, most quantitative HPLC methods do not need an internal standard and, instead, use external standards and a normal calibration curve. An internal standard is necessary when using HPLC–MS because the interface between the HPLC and the mass spectrometer does not allow for a reproducible transfer of the column’s eluent into the MS’s ionization chamber.

Example 28.4.2

The concentration of polynuclear aromatic hydrocarbons (PAH) in soil is determined by first extracting the PAHs with methylene chloride. The extract is diluted, if necessary, and the PAHs separated by HPLC using a UV/Vis or fluorescence detector. Calibration is achieved using one or more external standards. In a typical analysis a 2.013-g sample of dried soil is extracted with 20.00 mL of methylene chloride. After filtering to remove the soil, a 1.00-mL portion of the extract is removed and diluted to 10.00 mL with acetonitrile. Injecting 5 μL of the diluted extract into an HPLC gives a signal of 0.217 (arbitrary units) for the PAH fluoranthene. When 5 μL of a 20.0-ppm fluoranthene standard is analyzed using the same conditions, a signal of 0.258 is measured. Report the parts per million of fluoranthene in the soil.

Solution

For a single-point external standard, the relationship between the signal, S, and the concentration, C, of fluoranthene is

$S = kC \nonumber$

Substituting in values for the standard’s signal and concentration gives the value of k as

$k=\frac{S}{C}=\frac{0.258}{20.0 \text{ ppm}}=0.0129 \text{ ppm}^{-1} \nonumber$

Using this value for k and the sample’s HPLC signal gives a fluoranthene concentration of

$C=\frac{S}{k}=\frac{0.217}{0.0129 \text{ ppm}^{-1}}=16.8 \text{ ppm} \nonumber$

for the extracted and diluted soil sample. The concentration of fluoranthene in the soil is

$\frac{16.8 \ \mu \text{g} / \mathrm{mL} \times \frac{10.00 \text{ mL}}{1.00 \text{ mL}} \times 20.00 \text{ mL}}{2.013 \text{ g} \text { sample }}=1670 \text{ ppm} \text { fluoranthene } \nonumber$

Exercise 28.4.2

The concentration of caffeine in beverages is determined by a reversed-phase HPLC separation using a mobile phase of 20% acetonitrile and 80% water, and using a nonpolar C8 column. Results for a series of 10-μL injections of caffeine standards are in the following table.
[caffeine] (mg/L)    peak area (arb. units)
50.0                 226724
100.0                453762
125.0                559443
250.0                1093637

What is the concentration of caffeine in a sample if a 10-μL injection gives a peak area of 424195? The data in this problem comes from Kusch, P.; Knupp, G. “Simultaneous Determination of Caffeine in Cola Drinks and Other Beverages by Reversed-Phase HPTLC and Reversed-Phase HPLC,” Chem. Educator, 2003, 8, 201–205.

Answer

The figure below shows the calibration curve and calibration equation for the set of external standards. Substituting the sample’s peak area into the calibration equation gives the concentration of caffeine in the sample as 94.4 mg/L.
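Both quantitative calculations are easy to verify numerically. The sketch below reproduces Example 28.4.2's single-point external standard and fits Exercise 28.4.2's calibration data by least squares; the only library dependency is numpy's polyfit for the linear regression.

```python
import numpy as np

# --- Example 28.4.2: single-point external standard for fluoranthene ---
k = 0.258 / 20.0                     # S = kC -> k = 0.0129 ppm^-1
C_extract = 0.217 / k                # ~16.8 ppm in the diluted extract
ppm_soil = C_extract * (10.00 / 1.00) * 20.00 / 2.013
print(round(C_extract, 1), round(ppm_soil))   # 16.8, 1671 (~1670 ppm; the text rounds k and C)

# --- Exercise 28.4.2: external standards and a normal calibration curve ---
conc = np.array([50.0, 100.0, 125.0, 250.0])         # mg/L caffeine
area = np.array([226724, 453762, 559443, 1093637])   # peak areas
slope, intercept = np.polyfit(conc, area, 1)
C_sample = (424195 - intercept) / slope
print(round(C_sample, 1))                            # ~94.4 mg/L
```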
In adsorption chromatography (or liquid-solid chromatography, LSC) the column packing also serves as the stationary phase. In Tswett’s original work the stationary phase was finely divided CaCO3, but modern columns employ porous 3–10 μm particles of silica or alumina. Because the stationary phase is polar, the mobile phase usually is a nonpolar or a moderately polar solvent. Typical mobile phases include hexane, isooctane, and methylene chloride. The usual order of elution—from shorter to longer retention times—is

olefins < aromatic hydrocarbons < ethers < esters, aldehydes, ketones < alcohols, amines < amides < carboxylic acids

Nonpolar stationary phases, such as charcoal-based adsorbents, also are used. For most samples, liquid–solid chromatography does not offer any special advantages over liquid–liquid chromatography. One exception is the analysis of isomers, where LSC excels.

28.06: Ion-Exchange Chromatography

In ion-exchange chromatography (IEC) the stationary phase is a cross-linked polymer resin, usually divinylbenzene cross-linked polystyrene, with covalently attached ionic functional groups (see Figure 28.6.1 and Table 28.6.1). The counterions to these fixed charges are mobile and are displaced by ions that compete more favorably for the exchange sites. Ion-exchange resins are divided into four categories: strong acid cation exchangers; weak acid cation exchangers; strong base anion exchangers; and weak base anion exchangers.

Figure 28.6.1. Structures of styrene, divinylbenzene, and a styrene–divinylbenzene co-polymer modified for use as an ion-exchange resin are shown on the left. The ion-exchange sites, indicated by R and shown in blue, are mostly in the para position and are not necessarily bound to all styrene units. The cross-linking is shown in red. The photo on the right shows an example of the polymer beads. These beads are approximately 0.30–0.85 mm in diameter. Resins for use in ion-exchange chromatography typically are 5–11 μm in diameter.

Table 28.6.1. Examples of Common Ion-Exchange Resins

type                           functional group    examples
strong acid cation exchanger   sulfonic acid       $-\text{SO}_3^-$; $-\text{CH}_2\text{CH}_2\text{SO}_3^-$
weak acid cation exchanger     carboxylic acid     $-\text{COO}^-$; $-\text{CH}_2\text{COO}^-$
strong base anion exchanger    quaternary amine    $-\text{CH}_2\text{N(CH}_3)_3^+$; $-\text{CH}_2\text{CH}_2\text{N(CH}_2\text{CH}_3)_3^+$
weak base anion exchanger      amine               $-\text{NH}_3^+$; $-\text{CH}_2\text{CH}_2\text{NH(CH}_2\text{CH}_3)_2^+$

Strong acid cation exchangers include a sulfonic acid functional group that retains its anionic form—and thus its capacity for ion-exchange—in strongly acidic solutions. The functional groups for a weak acid cation exchanger, on the other hand, are fully protonated at pH levels less than 4 and lose their exchange capacity. The strong base anion exchangers include a quaternary amine, which retains a positive charge even in strongly basic solutions. Weak base anion exchangers remain protonated only at pH levels that are moderately basic. Under more basic conditions a weak base anion exchanger loses a proton and its exchange capacity.
The ion-exchange reaction of a monovalent cation, M+, at a sulfonic acid exchange site is

$-\mathrm{SO}_{3}^{-} \mathrm{H}^{+}(s)+\mathrm{M}^{+}(a q)\rightleftharpoons-\mathrm{SO}_{3}^{-} \mathrm{M}^{+}(s)+\mathrm{H}^{+}(a q) \nonumber$

The equilibrium constant for this ion-exchange reaction, which we call the selectivity coefficient, K, is

$K=\frac{\left\{-\mathrm{SO}_{3}^{-} \mathrm{M}^{+}\right\}\left[\mathrm{H}^{+}\right]}{\left\{-\mathrm{SO}_{3}^{-} \mathrm{H}^{+}\right\}\left[\mathrm{M}^{+}\right]} \label{12.1}$

where we use curly brackets, { }, to indicate a surface concentration instead of a solution concentration.

We don’t usually think about a solid’s concentration. There is a good reason for this. In most cases, a solid’s concentration is a constant. If you break a piece of chalk into two parts, for example, the mass and the volume of each piece retains the same proportional relationship as in the original piece of chalk. When we consider an ion binding to a reactive site on the solid’s surface, however, the fraction of sites that are bound, and thus the concentration of bound sites, can take on any value between 0 and some maximum value that is proportional to the density of reactive sites.

Rearranging Equation \ref{12.1} shows us that the distribution ratio, D, for the exchange reaction

$D=\frac{\text { amount of } \mathrm{M}^{+} \text { in the stationary phase }}{\text { amount of } \mathrm{M}^{+} \text { in the mobile phase }} \nonumber$

$D=\frac{\left\{-\mathrm{SO}_{3}^{-} \mathrm{M}^{+}\right\}}{\left[\mathrm{M}^{+}\right]}=K \times \frac{\left\{-\mathrm{SO}_{3}^{-} \mathrm{H}^{+}\right\}}{\left[\mathrm{H}^{+}\right]} \label{12.2}$

is a function of the concentration of H+ and, therefore, the pH of the mobile phase.

An ion-exchange resin’s selectivity is somewhat dependent on whether it includes strong or weak exchange sites and on the extent of cross-linking. The latter is particularly important as it controls the resin’s permeability, and, therefore, the accessibility of exchange sites. An approximate order of selectivity for a typical strong acid cation exchange resin, in order of decreasing D, is

Al3+ > Ba2+ > Pb2+ > Ca2+ > Ni2+ > Cd2+ > Cu2+ > Co2+ > Zn2+ > Mg2+ > Ag+ > K+ > $\text{NH}_4^+$ > Na+ > H+ > Li+

Note that highly charged cations bind more strongly than cations of lower charge, and that for cations of similar charge, those with a smaller hydrated radius, or that are more polarizable, bind more strongly. For a strong base anion exchanger the general elution order is

$\text{SO}_4^{2-}$ > I− > $\text{HSO}_4^-$ > $\text{NO}_3^-$ > Br− > $\text{NO}_2^-$ > Cl− > $\text{HCO}_3^-$ > CH3COO− > OH− > F−

Anions of higher charge and of smaller hydrated radius bind more strongly than anions with a lower charge and a larger hydrated radius.

The mobile phase in IEC usually is an aqueous buffer, the pH and ionic composition of which determines a solute’s retention time. Gradient elutions are possible in which the mobile phase’s ionic strength or pH is changed with time. For example, an IEC separation of cations might use a dilute solution of HCl as the mobile phase. Increasing the concentration of HCl speeds the elution rate for more strongly retained cations because the higher concentration of H+ allows it to compete more successfully for the ion-exchange sites. From Equation \ref{12.2}, a cation’s distribution ratio, D, becomes smaller when the concentration of H+ in the mobile phase increases.
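A short sketch makes the pH dependence of Equation 12.2 concrete. The selectivity coefficient and the surface concentration of exchange sites below are invented values, chosen only to show the trend.

```python
# A sketch of Equation 12.2: D depends on the mobile phase's pH.
# K and the surface concentration {-SO3-H+} are hypothetical values.
K = 1.5        # assumed selectivity coefficient for M+
site_H = 0.8   # assumed {-SO3-H+}, surface concentration of protonated sites

for pH in (1.0, 2.0, 3.0):
    H = 10 ** (-pH)      # [H+] in the mobile phase
    D = K * site_H / H   # Equation 12.2
    print(f"pH {pH:.0f}: D = {D:.0f}")

# pH 1: D = 12, pH 2: D = 120, pH 3: D = 1200.
# A lower pH (more H+) gives a smaller D, so the cation elutes sooner,
# which is why increasing the HCl concentration speeds the elution.
```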
An ion-exchange resin is incorporated into an HPLC column either as 5–11 μm porous polymer beads or by coating the resin on porous silica particles. Columns typically are 250 mm in length with internal diameters ranging from 2–5 mm.

Measuring the conductivity of the mobile phase as it elutes from the column serves as a universal detector for cationic and anionic analytes. Because the mobile phase contains a high concentration of ions—a mobile phase of dilute HCl, for example, contains significant concentrations of H+ and Cl− ions—we need a method for detecting the analytes in the presence of a significant background conductivity. To minimize the mobile phase’s contribution to conductivity, an ion-suppressor column is placed between the analytical column and the detector. This column selectively removes mobile phase ions without removing solute ions. For example, in cation-exchange chromatography using a dilute solution of HCl as the mobile phase, the suppressor column contains a strong base anion-exchange resin. The exchange reaction

$\mathrm{H}^{+}(a q)+\mathrm{Cl}^{-}(a q)+\mathrm{Resin}^{+} \mathrm{OH}^{-}(s)\rightleftharpoons\operatorname{Resin}^{+} \mathrm{Cl}^{-}(s)+\mathrm{H}_{2} \mathrm{O}(l ) \nonumber$

replaces the mobile phase ions H+ and Cl− with H2O. A similar process is used in anion-exchange chromatography where the suppressor column contains a cation-exchange resin. If the mobile phase is a solution of Na2CO3, the exchange reaction

$2 \mathrm{Na}^{+}(a q)+\mathrm{CO}_{3}^{2-}(a q)+2 \operatorname{Resin}^{-} \mathrm{H}^{+}(s)\rightleftharpoons2 \operatorname{Resin}^{-} \mathrm{Na}^{+}(s)+\mathrm{H}_{2} \mathrm{CO}_{3}(a q) \nonumber$

replaces a strong electrolyte, Na2CO3, with a weak electrolyte, H2CO3.

Ion-suppression is necessary when the mobile phase contains a high concentration of ions. Single-column ion chromatography, in which an ion-suppressor column is not needed, is possible if the concentration of ions in the mobile phase is small. Typically the stationary phase is a resin with a low capacity for ion-exchange and the mobile phase is a very dilute solution of methanesulfonic acid for cationic analytes, or potassium benzoate or potassium hydrogen phthalate for anionic analytes. Because the background conductivity is sufficiently small, it is possible to monitor a change in conductivity as the analytes elute from the column.

A UV/Vis absorbance detector can be used if the analytes absorb ultraviolet or visible radiation. Alternatively, we can indirectly detect analytes that do not absorb in the UV/Vis if the mobile phase contains a UV/Vis absorbing species. In this case, when a solute band passes through the detector, a decrease in absorbance is measured at the detector.

Ion-exchange chromatography is an important technique for the analysis of anions and cations in water. For example, an ion-exchange chromatographic analysis for the anions F−, Cl−, Br−, $\text{NO}_2^-$, $\text{NO}_3^-$, $\text{PO}_4^{3-}$, and $\text{SO}_4^{2-}$ takes approximately 15 minutes (Figure 28.6.2). A complete analysis of the same set of anions by a combination of potentiometry and spectrophotometry requires 1–2 days. Ion-exchange chromatography also is used for the analysis of proteins, amino acids, sugars, nucleotides, pharmaceuticals, consumer products, and clinical samples.

28.07: Size-Exclusion Chromatography

We have considered two classes of micron-sized stationary phases in this chapter: silica particles and cross-linked polymer resin beads.
Both materials are porous, with pore sizes ranging from approximately 5–400 nm for silica particles, and from 5 nm to 100 μm for divinylbenzene cross-linked polystyrene resins. In size-exclusion chromatography—which also is known by the terms molecular-exclusion or gel permeation chromatography—the separation of solutes depends upon their ability to enter into the pores of the stationary phase. Smaller solutes spend proportionally more time within the pores and take longer to elute from the column.

A stationary phase’s size selectivity extends over a finite range. All solutes significantly smaller than the pores move through the column’s entire volume and elute simultaneously, with a retention volume, Vr, of

$V_{r}=V_{i}+V_{o} \label{12.3}$

where Vi is the volume of mobile phase occupying the stationary phase’s pore space and Vo is the volume of mobile phase in the remainder of the column. The largest solute for which Equation \ref{12.3} holds is the column’s inclusion limit, or permeation limit. Those solutes too large to enter the pores elute simultaneously with a retention volume of

$V_{r} = V_{o} \label{12.4}$

Equation \ref{12.4} defines the column’s exclusion limit.

For a solute whose size is between the inclusion limit and the exclusion limit, the amount of time it spends in the stationary phase’s pores is proportional to its size. The retention volume for these solutes is

$V_{r}=DV_{i}+V_{o} \label{12.5}$

where D is the solute’s distribution ratio, which ranges from 0 at the exclusion limit to 1 at the inclusion limit. Equation \ref{12.5} assumes that size-exclusion is the only interaction between the solute and the stationary phase that affects the separation. For this reason, stationary phases using silica particles are deactivated as described earlier, and polymer resins are synthesized without exchange sites.

Size-exclusion chromatography provides a rapid means for separating larger molecules, including polymers and biomolecules. A stationary phase for proteins that consists of particles with 30 nm pores has an inclusion limit of 7500 g/mol and an exclusion limit of $1.2 \times 10^6$ g/mol. Mixtures of proteins that span a wider range of molecular weights are separated by joining together in series several columns with different inclusion and exclusion limits.

Another important application of size-exclusion chromatography is the estimation of a solute’s molecular weight (MW). Calibration curves are prepared using a series of standards of known molecular weight and measuring each standard’s retention volume. As shown in Figure 28.7.1, a plot of log(MW) versus Vr is roughly linear between the exclusion limit and the inclusion limit. Because a solute’s retention volume is influenced by both its size and its shape, a reasonably accurate estimation of molecular weight is possible only if the standards are chosen carefully to minimize the effect of shape.

Size-exclusion chromatography is carried out using conventional HPLC instrumentation, replacing the HPLC column with an appropriate size-exclusion column. A UV/Vis detector is the most common means for obtaining the chromatogram.
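The molecular-weight estimate described above amounts to a linear fit of log(MW) against retention volume. Below is a minimal sketch; the standards, their retention volumes, and the unknown's retention volume are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical protein standards within the column's working range:
# molecular weights (g/mol) and measured retention volumes (mL).
MW = np.array([1.2e4, 4.4e4, 1.5e5, 4.4e5])
Vr = np.array([11.5, 10.1, 8.8, 7.6])

# log(MW) versus Vr is roughly linear between the exclusion
# and inclusion limits (Figure 28.7.1).
slope, intercept = np.polyfit(Vr, np.log10(MW), 1)

# Estimate the molecular weight of an unknown from its retention volume.
Vr_unknown = 9.4
MW_unknown = 10 ** (slope * Vr_unknown + intercept)
print(f"estimated MW ~ {MW_unknown:.2e} g/mol")
```

Because shape influences retention as much as size, a fit like this is only as good as the match between the standards' shapes and the unknown's.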
Although there are many analytical applications of gas chromatography and liquid chromatography, they cannot separate and analyze all types of samples. Capillary column GC separates complex mixtures with excellent resolution and short analysis times. Its application is limited, however, to volatile analytes or to analytes made volatile by a suitable derivatization reaction. Liquid chromatography separates a wider range of solutes than GC, but the most common detectors—UV, fluorescence, and electrochemical—have poorer detection limits and smaller linear ranges than GC detectors, and are not as universal in their selectivity. For some applications, supercritical fluids provide an attractive solution to these limitations.

• 29.1: Properties of Supercritical Fluids A supercritical fluid is a species held at a temperature and a pressure that exceeds its critical point. Under these conditions the species is neither a gas nor a liquid. Some properties of a supercritical fluid are similar to a gas; other properties, however, are similar to a liquid. The viscosity of a supercritical fluid is similar to a gas; the density of a supercritical fluid, on the other hand, is much closer to that of a liquid, which explains why supercritical fluids are good solvents.

• 29.2: Supercritical Fluid Chromatography The instrumentation for supercritical fluid chromatography essentially is the same as that for a standard HPLC. The only important additions are a heated oven for the column and a pressure restrictor downstream from the column to maintain the critical pressure.

29: Supercritical Fluid Chromatography

As shown in Figure 29.1.1, a supercritical fluid is a species held at a temperature and a pressure that exceeds its critical point. Under these conditions the species is neither a gas nor a liquid. Instead, it is a supercritical fluid. Some properties of a supercritical fluid, as shown in Table 29.1.1, are similar to a gas; other properties, however, are similar to a liquid. The viscosity of a supercritical fluid, for example, is similar to a gas, which means we can move a supercritical fluid through a capillary column or a packed column without the need for high pressures. The density of a supercritical fluid, on the other hand, is much closer to that of a liquid, which explains why supercritical fluids are good solvents.

Table 29.1.1. Typical Properties of Gases, Liquids, and Supercritical Fluids

phase                 density (g/cm3)        viscosity (g cm-1 s-1)          diffusion coefficient (cm2 s-1)
gas                   \(\approx 10^{-3}\)    \(\approx 10^{-4}\)             \(\approx 0.1\)
supercritical fluid   \(\approx 0.1 - 1\)    \(\approx 10^{-4} - 10^{-3}\)   \(\approx 10^{-4} - 10^{-3}\)
liquid                \(\approx 1\)          \(\approx 10^{-2}\)             \(\approx 10^{-3}\)

The most commonly used supercritical fluid is CO2. Its low critical temperature of 31.1 °C and its low critical pressure of 72.9 atm are relatively easy to achieve and maintain. Although supercritical CO2 is a good solvent for nonpolar organics, it is less useful for polar solutes. The addition of an organic modifier, such as methanol, improves the mobile phase’s elution strength. Other common mobile phases and their critical temperatures and pressures are listed in Table 29.1.2.
Table 29.1.2. Critical Points for Selected Supercritical Fluids

compound         critical temperature (°C)    critical pressure (atm)
carbon dioxide   31.3                         72.9
ethane           32.4                         48.3
nitrous oxide    36.5                         71.4
ammonia          132.3                        111.3
diethyl ether    193.6                        36.3
isopropanol      235.3                        47.0
methanol         240.5                        78.9
ethanol          243.4                        63.0
water            374.4                        226.8

29.02: Supercritical Fluid Chromatography

The instrumentation for supercritical fluid chromatography essentially is the same as that for a standard HPLC. The only important additions are a heated oven for the column and a pressure restrictor downstream from the column to maintain the critical pressure. Gradient elutions are accomplished by changing the applied pressure over time. The resulting change in the mobile phase’s density affects its solvent strength. Detection is accomplished using standard GC detectors or HPLC detectors. Analysis time and resolution, although not as good as in GC, usually are better than in conventional HPLC. Supercritical fluid chromatography has many applications in the analysis of polymers, fossil fuels, waxes, drugs, and food products.
Electrophoresis is a class of separation techniques in which we separate analytes by their ability to move through a conductive medium—usually an aqueous buffer—in response to an applied electric field. In the absence of other effects, cations migrate toward the electric field’s negatively charged cathode. Cations with larger charge-to-size ratios—which favors ions of greater charge and of smaller size—migrate at a faster rate than larger cations with smaller charges. Anions migrate toward the positively charged anode and neutral species do not experience the electrical field and remain stationary. In this chapter we focus on the most important of these techniques, capillary electrophoresis and a few related variants. • 30.1: An Overview of Electrophoresis There are several forms of electrophoresis. In capillary electrophoresis the conducting buffer is retained within a capillary tube. The sample is injected into one end of the capillary tube, and as it migrates through the capillary the sample’s components separate and elute from the column at different times. The resulting electropherogram looks similar to a GC or an HPLC chromatogram, and provides both qualitative and quantitative information. • 30.2: Capillary Electrophoresis In capillary electrophoresis, electrophoretic mobility is the solute’s response to the applied electrical field in which cations move toward the negatively charged cathode, anions move toward the positively charged anode, and neutral species remain stationary. The other contribution to a solute’s migration is electroosmotic flow, which occurs when the buffer moves through the capillary in response to the applied electrical field. • 30.3: Applications of Capillary Electrophoresis There are several different forms of capillary electrophoresis, each of which has its particular advantages. Four of these methods are described briefly in this section: capillary zone electrophoresis, micellar electrokinetic capillary chromatography, capillary gel electrophoresis, and capillary electrochromatography. 30: Capillary Electrophoresis and Capillary Electrochromatography Electrophoresis is a class of separation techniques in which we separate analytes by their ability to move through a conductive medium—usually an aqueous buffer—in response to an applied electric field. In the absence of other effects, cations migrate toward the electric field’s negatively charged cathode. Cations with larger charge-to-size ratios—which favors ions of greater charge and of smaller size—migrate at a faster rate than larger cations with smaller charges. Anions migrate toward the positively charged anode and neutral species do not experience the electrical field and remain stationary. As we will see shortly, under normal conditions even neutral species and anions migrate toward the cathode. There are several forms of electrophoresis. In slab gel electrophoresis the conducting buffer is retained within a porous gel of agarose or polyacrylamide. Slabs are formed by pouring the gel between two glass plates separated by spacers. Typical thicknesses are 0.25–1 mm. Gel electrophoresis is an important technique in biochemistry where it frequently is used to separate DNA fragments and proteins. Although it is a powerful tool for the qualitative analysis of complex mixtures, it is less useful for quantitative work. In capillary electrophoresis the conducting buffer is retained within a capillary tube with an inner diameter that typically is 25–75 μm. 
The sample is injected into one end of the capillary tube, and as it migrates through the capillary the sample’s components separate and elute from the column at different times. The resulting electropherogram looks similar to a GC or an HPLC chromatogram, and provides both qualitative and quantitative information.
Theory of Electrophoresis

In capillary electrophoresis we inject the sample into a buffered solution retained within a capillary tube. When an electric field is applied across the capillary tube, the sample’s components migrate as the result of two types of actions: electrophoretic mobility and electroosmotic mobility. Electrophoretic mobility is the solute’s response to the applied electrical field in which cations move toward the negatively charged cathode, anions move toward the positively charged anode, and neutral species remain stationary. The other contribution to a solute’s migration is electroosmotic flow, which occurs when the buffer moves through the capillary in response to the applied electrical field. Under normal conditions the buffer moves toward the cathode, sweeping most solutes, including the anions and neutral species, toward the negatively charged cathode.

Electrophoretic Mobility

The velocity with which a solute moves in response to the applied electric field is called its electrophoretic velocity, $\nu_{ep}$; it is defined as

$\nu_{ep}=\mu_{ep} E \label{12.1}$

where $\mu_{ep}$ is the solute’s electrophoretic mobility, and E is the magnitude of the applied electrical field. A solute’s electrophoretic mobility is defined as

$\mu_{ep}=\frac{q}{6 \pi \eta r} \label{12.2}$

where q is the solute’s charge, $\eta$ is the buffer’s viscosity, and r is the solute’s radius. Using Equation \ref{12.1} and Equation \ref{12.2} we can make several important conclusions about a solute’s electrophoretic velocity. Electrophoretic mobility and, therefore, electrophoretic velocity, increases for more highly charged solutes and for solutes of smaller size. Because q is positive for a cation and negative for an anion, these species migrate in opposite directions. A neutral species, for which q is zero, has an electrophoretic velocity of zero.

Electroosmotic Mobility

When an electric field is applied to a capillary filled with an aqueous buffer we expect the buffer’s ions to migrate in response to their electrophoretic mobility. Because the solvent, H2O, is neutral we might reasonably expect it to remain stationary. What we observe under normal conditions, however, is that the buffer moves toward the cathode. This phenomenon is called the electroosmotic flow.

Electroosmotic flow occurs because the walls of the capillary tubing carry a charge. The surface of a silica capillary contains large numbers of silanol groups (–SiOH). At a pH level greater than approximately 2 or 3, the silanol groups ionize to form negatively charged silanate ions (–SiO−). Cations from the buffer are attracted to the silanate ions. As shown in Figure 30.2.1, some of these cations bind tightly to the silanate ions, forming a fixed layer. Because the cations in the fixed layer only partially neutralize the negative charge on the capillary walls, the solution adjacent to the fixed layer—which is called the diffuse layer—contains more cations than anions. Together these two layers are known as the double layer. Cations in the diffuse layer migrate toward the cathode. Because these cations are solvated, the solution also is pulled along, producing the electroosmotic flow. The anions in the diffuse layer, which also are solvated, try to move toward the anode. Because there are more cations than anions, however, the cations win out and the electroosmotic flow moves in the direction of the cathode.
The rate at which the buffer moves through the capillary, what we call its electroosmotic flow velocity, $\nu_{eof}$, is a function of the applied electric field, E, and the buffer’s electroosmotic mobility, $\mu_{eof}$.

$\nu_{eof}=\mu_{eof} E \label{12.3}$

Electroosmotic mobility is defined as

$\mu_{eof}=\frac{\varepsilon \zeta}{4 \pi \eta} \label{12.4}$

where $\varepsilon$ is the buffer’s dielectric constant, $\zeta$ is the zeta potential, and $\eta$ is the buffer’s viscosity.

The zeta potential—the potential of the diffuse layer at a finite distance from the capillary wall—plays an important role in determining the electroosmotic flow velocity. Two factors determine the zeta potential’s value. First, the zeta potential is directly proportional to the charge on the capillary walls, with a greater density of silanate ions corresponding to a larger zeta potential. Below a pH of 2 there are few silanate ions and the zeta potential and the electroosmotic flow velocity approach zero. As the pH increases, both the zeta potential and the electroosmotic flow velocity increase. Second, the zeta potential is directly proportional to the thickness of the double layer. Increasing the buffer’s ionic strength provides a higher concentration of cations, which decreases the thickness of the double layer and decreases the electroosmotic flow.

The definition of zeta potential given here admittedly is a bit fuzzy. For a more detailed explanation see Delgado, A. V.; González-Caballero, F.; Hunter, R. J.; Koopal, L. K.; Lyklema, J. “Measurement and Interpretation of Electrokinetic Phenomena,” Pure Appl. Chem. 2005, 77, 1753–1805. Although this is a very technical report, Sections 1.3–1.5 provide a good introduction to the difficulty of defining the zeta potential and of measuring its value.

The electroosmotic flow profile is very different from that of a fluid moving under forced pressure. Figure 30.2.2 compares the electroosmotic flow profile with the hydrodynamic flow profile in gas chromatography and liquid chromatography. The uniform, flat profile for electroosmosis helps minimize band broadening in capillary electrophoresis, improving separation efficiency.

Total Mobility

A solute’s total velocity, $\nu_{tot}$, as it moves through the capillary is the sum of its electrophoretic velocity and the electroosmotic flow velocity.

$\nu_{t o t}=\nu_{ep}+\nu_{eof} \nonumber$

As shown in Figure 30.2.3, under normal conditions the following general relationships hold true.

$(\nu_{tot})_{cations} > \nu_{eof} \nonumber$

$(\nu_{tot})_{neutrals} = \nu_{eof} \nonumber$

$(\nu_{tot})_{anions} < \nu_{eof} \nonumber$

Cations elute first in an order that corresponds to their electrophoretic mobilities, with small, highly charged cations eluting before larger cations of lower charge. Neutral species elute as a single band with an elution rate equal to the electroosmotic flow velocity. Finally, anions are the last components to elute, with smaller, highly charged anions having the longest elution time.

Migration Time

Another way to express a solute’s velocity is to divide the distance it travels by the elapsed time

$\nu_{tot}=\frac{l}{t_{m}} \label{12.5}$

where l is the distance between the point of injection and the detector, and tm is the solute’s migration time.
To understand the experimental variables that affect migration time, we begin by noting that

$\nu_{tot} = \mu_{tot}E = (\mu_{ep} + \mu_{eof})E \label{12.6}$

Combining Equation \ref{12.5} and Equation \ref{12.6} and solving for tm leaves us with

$t_{\mathrm{m}}=\frac{l}{\left(\mu_{ep}+\mu_{eof}\right) E} \label{12.7}$

The magnitude of the electrical field is

$E=\frac{V}{L} \label{12.8}$

where V is the applied potential and L is the length of the capillary tube. Finally, substituting Equation \ref{12.8} into Equation \ref{12.7} leaves us with the following equation for a solute’s migration time.

$t_{\mathrm{m}}=\frac{lL}{\left(\mu_{ep}+\mu_{eof}\right) V} \label{12.9}$

To decrease a solute’s migration time—which shortens the analysis time—we can apply a higher voltage or use a shorter capillary tube. We can also shorten the migration time by increasing the electroosmotic flow, although this decreases resolution.

Efficiency

As we learned in Chapter 26.3, the efficiency of a separation is given by the number of theoretical plates, N. In capillary electrophoresis the number of theoretical plates is

$N=\frac{l^{2}}{2 D t_{m}}=\frac{\left(\mu_{e p}+\mu_{eof}\right) E l}{2 D L} \label{12.10}$

where D is the solute’s diffusion coefficient. From Equation \ref{12.10}, the efficiency of a capillary electrophoretic separation increases with higher voltages. Increasing the electroosmotic flow velocity improves efficiency, but at the expense of resolution. Two additional observations deserve comment. First, solutes with larger electrophoretic mobilities—in the same direction as the electroosmotic flow—have greater efficiencies; thus, smaller, more highly charged cations are not only the first solutes to elute, but do so with greater efficiency. Second, efficiency in capillary electrophoresis is independent of the capillary’s length. Theoretical plate counts of approximately 100 000–200 000 are not unusual. It is possible to design an electrophoretic experiment so that anions elute before cations—more about this later—in which case smaller, more highly charged anions elute with greater efficiencies.

Selectivity

In chromatography we defined the selectivity between two solutes as the ratio of their retention factors. In capillary electrophoresis the analogous expression for selectivity is

$\alpha=\frac{\mu_{ep, 1}}{\mu_{ep, 2}} \nonumber$

where $\mu_{ep,1}$ and $\mu_{ep,2}$ are the electrophoretic mobilities for the two solutes, chosen such that $\alpha \ge 1$. We can often improve selectivity by adjusting the pH of the buffer solution. For example, $\text{NH}_4^+$ is a weak acid with a pKa of 9.75. At a pH of 9.75 the concentrations of $\text{NH}_4^+$ and NH3 are equal. Decreasing the pH below 9.75 increases its electrophoretic mobility because a greater fraction of the solute is present as the cation $\text{NH}_4^+$. On the other hand, raising the pH above 9.75 increases the proportion of neutral NH3, decreasing its electrophoretic mobility.

Resolution

The resolution between two solutes is

$R = \frac {0.177(\mu_{ep,2} - \mu_{ep,1})\sqrt{V}} {\sqrt{D(\mu_{avg} + \mu_{eof})}} \label{12.11}$

where $\mu_{avg}$ is the average electrophoretic mobility for the two solutes. Increasing the applied voltage and decreasing the electroosmotic flow velocity improves resolution. The latter effect is particularly important. Although increasing electroosmotic flow improves analysis time and efficiency, it decreases resolution.
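These equations chain together naturally. The sketch below evaluates Equations 12.2 and 12.9 through 12.11 for one assumed set of conditions; every numerical value is a representative assumption, and the plate count from Equation 12.10 is a diffusion-limited ideal rather than a value to expect experimentally.

```python
import math

# Assumed, representative values for a small cation in a fused-silica capillary.
q   = 1.602e-19   # charge of a +1 cation, C
r   = 0.40e-9     # solute radius, m
eta = 1.0e-3      # buffer viscosity, kg m^-1 s^-1
D   = 2.0e-9      # diffusion coefficient, m^2 s^-1

mu_ep  = q / (6 * math.pi * eta * r)   # Equation 12.2; ~2.1e-8 m^2 V^-1 s^-1
mu_eof = 5.0e-8                        # assumed electroosmotic mobility

V = 20e3   # applied potential, V
L = 0.60   # total capillary length, m
l = 0.50   # length from injection to the detector, m

t_m = (l * L) / ((mu_ep + mu_eof) * V)           # Equation 12.9; ~210 s
E   = V / L                                      # Equation 12.8
N   = ((mu_ep + mu_eof) * E * l) / (2 * D * L)   # Equation 12.10; ~5e5 plates

# Resolution between this solute and a slightly slower one (Equation 12.11).
mu_ep2 = 2.0e-8
mu_avg = (mu_ep + mu_ep2) / 2
R = 0.177 * abs(mu_ep - mu_ep2) * math.sqrt(V) / math.sqrt(D * (mu_avg + mu_eof))

print(f"mu_ep = {mu_ep:.2e} m^2 V^-1 s^-1")
print(f"t_m = {t_m:.0f} s, N = {N:.1e} plates, R = {R:.1f}")
```

Rerunning the sketch with a larger mu_eof shows the trade-off discussed above: the migration time and plate count improve, but the resolution falls.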
Instrumentation

The basic instrumentation for capillary electrophoresis is shown in Figure 30.2.4 and includes a power supply for applying the electric field, anode and cathode compartments that contain reservoirs of the buffer solution, a sample vial that contains the sample, the capillary tube, and a detector. Each part of the instrument receives further consideration in this section.

Capillary Tubes

Figure 30.2.5 shows a cross-section of a typical capillary tube. Most capillary tubes are made from fused silica coated with a 15–35 μm layer of polyimide to give it mechanical strength. The inner diameter is typically 25–75 μm, which is smaller than the internal diameter of a capillary GC column, with an outer diameter of 200–375 μm.

The capillary column’s narrow opening and the thickness of its walls are important. When an electric field is applied to the buffer solution, current flows through the capillary. This current leads to the release of heat, which we call Joule heating. The amount of heat released is proportional to the capillary’s radius and to the magnitude of the electrical field. Joule heating is a problem because it changes the buffer’s viscosity, with the solution at the center of the capillary being less viscous than that near the capillary walls. Because a solute’s electrophoretic mobility depends on the buffer’s viscosity (see Equation \ref{12.2}), solute species in the center of the capillary migrate at a faster rate than those near the capillary walls. The result is an additional source of band broadening that degrades the separation. Capillaries with smaller inner diameters generate less Joule heating, and capillaries with larger outer diameters are more effective at dissipating the heat. Placing the capillary tube inside a thermostated jacket is another method for minimizing the effect of Joule heating; in this case a smaller outer diameter allows for a more rapid dissipation of thermal energy.

Injecting the Sample

There are two common methods for injecting a sample into a capillary electrophoresis column: hydrodynamic injection and electrokinetic injection. In both methods the capillary tube is filled with the buffer solution. One end of the capillary tube is placed in the destination reservoir and the other end is placed in the sample vial.

Hydrodynamic injection uses pressure to force a small portion of sample into the capillary tubing. A difference in pressure is applied across the capillary either by pressurizing the sample vial or by applying a vacuum to the destination reservoir. The volume of sample injected, in liters, is given by the following equation

$V_{\text {inj}}=\frac{\Delta P d^{4} \pi t}{128 \eta L} \times 10^{3} \label{12.12}$

where $\Delta P$ is the difference in pressure across the capillary in pascals, d is the capillary’s inner diameter in meters, t is the amount of time the pressure is applied in seconds, $\eta$ is the buffer’s viscosity in kg m–1 s–1, and L is the length of the capillary tubing in meters. The factor of 103 changes the units from cubic meters to liters. For a hydrodynamic injection we move the capillary from the source reservoir to the sample. The anode remains in the source reservoir. A hydrodynamic injection also is possible if we raise the sample vial above the destination reservoir and briefly insert the filled capillary.

Example 30.2.1

In a hydrodynamic injection we apply a pressure difference of $2.5 \times 10^3$ Pa (a $\Delta P \approx 0.02 \text{ atm}$) for 2 s to a 75-cm long capillary tube with an internal diameter of 50 μm.
Assuming the buffer’s viscosity is 10–3 kg m–1 s–1, what volume and length of sample did we inject? Solution Making appropriate substitutions into Equation \ref{12.12} gives the sample’s volume as $V_{inj}=\frac{\left(2.5 \times 10^{3} \text{ kg} \text{ m}^{-1} \text{ s}^{-2}\right)\left(50 \times 10^{-6} \text{ m}\right)^{4}(3.14)(2 \text{ s})}{(128)\left(0.001 \text{ kg} \text{ m}^{-1} \text{ s}^{-1}\right)(0.75 \text{ m})} \times 10^{3} \mathrm{L} / \mathrm{m}^{3} \nonumber$ $V_{inj} = 1 \times 10^{-9} \text{ L} = 1 \text{ nL} \nonumber$ Because the interior of the capillary is cylindrical, the length of the sample, l, is easy to calculate using the equation for the volume of a cylinder; thus $l=\frac{V_{\text {inj}}}{\pi r^{2}}=\frac{\left(1 \times 10^{-9} \text{ L}\right)\left(10^{-3} \text{ m}^{3} / \mathrm{L}\right)}{(3.14)\left(25 \times 10^{-6} \text{ m}\right)^{2}}=5 \times 10^{-4} \text{ m}=0.5 \text{ mm} \nonumber$ Exercise 30.2.1 Suppose you need to limit your injection to less than 0.20% of the capillary’s length. Using the information from Example 30.2.1 , what is the maximum injection time for a hydrodynamic injection? Answer The capillary is 75 cm long, which means that 0.20% of that sample’s maximum length is 0.15 cm. To convert this to the maximum volume of sample we use the equation for the volume of a cylinder. $V_{i n j}=l \pi r^{2}=(0.15 \text{ cm})(3.14)\left(25 \times 10^{-4} \text{ cm}\right)^{2}=2.94 \times 10^{-6} \text{ cm}^{3} \nonumber$ Given that 1 cm3 is equivalent to 1 mL, the maximum volume is $2.94 \times 10^{-6}$ mL or $2.94 \times 10^{-9}$ L. To find the maximum injection time, we first solve Equation \ref{12.12} for t $t=\frac{128 V_{inj} \eta L}{P d^{4} \pi} \times 10^{-3} \text{ m}^{3} / \mathrm{L} \nonumber$ and then make appropriate substitutions. $t=\frac{(128)\left(2.94 \times 10^{-9} \text{ L}\right)\left(0.001 \text{ kg } \text{ m}^{-1} \text{ s}^{-1}\right)(0.75 \text{ m})}{\left(2.5 \times 10^{3} \text{ kg } \mathrm{m}^{-1} \text{ s}^{-2}\right)\left(50 \times 10^{-6} \text{ m}\right)^{4}(3.14)} \times \frac{10^{-3} \text{ m}^{3}}{\mathrm{L}} = 5.8 \text{ s} \nonumber$ The maximum injection time, therefore, is 5.8 s. In an electrokinetic injection we place both the capillary and the anode into the sample and briefly apply an potential. The volume of injected sample is the product of the capillary’s cross sectional area and the length of the capillary occupied by the sample. In turn, this length is the product of the solute’s velocity (see Equation \ref{12.6}) and time; thus $V_{inj} = \pi r^2 L = \pi r^2 (\mu_{ep} + \mu_{eof})E^{\prime}t \label{12.13}$ where r is the capillary’s radius, L is the capillary’s length, and $E^{\prime}$ is the effective electric field in the sample. An important consequence of Equation \ref{12.13} is that an electrokinetic injection is biased toward solutes with larger electrophoretic mobilities. If two solutes have equal concentrations in a sample, we inject a larger volume—and thus more moles—of the solute with the larger $\mu_{ep}$. The electric field in the sample is different that the electric field in the rest of the capillary because the sample and the buffer have different ionic compositions. In general, the sample’s ionic strength is smaller, which makes its conductivity smaller. The effective electric field is $E^{\prime} = E \times \frac {\chi_\text{buffer}} {\chi_\text{sample}}\nonumber$ where $\chi_\text{buffer}$ and $\chi_{sample}$ are the conductivities of the buffer and the sample, respectively. 
When an analyte's concentration is too small to detect reliably, it may be possible to inject it in a manner that increases its concentration. This method of injection is called stacking. Stacking is accomplished by placing the sample in a solution whose ionic strength is significantly less than that of the buffer in the capillary tube. Because the sample plug has a lower concentration of buffer ions, the effective field strength across the sample plug, $E^{\prime}$, is larger than that in the rest of the capillary. We know from Equation \ref{12.1} that electrophoretic velocity is directly proportional to the electrical field. As a result, the cations in the sample plug migrate toward the cathode with a greater velocity, and the anions migrate more slowly—neutral species are unaffected and move with the electroosmotic flow. When the ions reach their respective boundaries between the sample plug and the buffer, the electrical field decreases and the electrophoretic velocity of the cations decreases and that for the anions increases. As shown in Figure 30.2.6 , the result is a stacking of cations and anions into separate, smaller sampling zones. Over time, the buffer within the capillary becomes more homogeneous and the separation proceeds without additional stacking. Applying the Electrical Field Migration in electrophoresis occurs in response to an applied electric field. The ability to apply a large electric field is important because higher voltages lead to shorter analysis times (Equation \ref{12.9}), more efficient separations (Equation \ref{12.10}), and better resolution (Equation \ref{12.11}). Because narrow-bore capillary tubes dissipate Joule heating so efficiently, voltages of up to 40 kV are possible. Because of the high voltages, be sure to follow your instrument's safety guidelines. Detectors Most of the detectors used in HPLC also find use in capillary electrophoresis. Among the more common detectors are those based on the absorption of UV/Vis radiation, fluorescence, conductivity, amperometry, and mass spectrometry. Whenever possible, detection is done "on-column" before the solutes elute from the capillary tube and additional band broadening occurs. UV/Vis detectors are among the most popular. Because absorbance is directly proportional to path length, the capillary tubing's small diameter leads to signals that are smaller than those obtained in HPLC. Several approaches have been used to increase the pathlength, including a Z-shaped sample cell and multiple reflections (see Figure 30.2.7 ). Detection limits are about $10^{-7}$ M. Better detection limits are obtained using fluorescence, particularly when using a laser as an excitation source. When using fluorescence detection a small portion of the capillary's protective coating is removed and the laser beam is focused on the inner portion of the capillary tubing. Emission is measured at an angle of 90° to the laser. Because the laser provides an intense source of radiation that can be focused to a narrow spot, detection limits are as low as $10^{-16}$ M. Solutes that do not absorb UV/Vis radiation or that do not undergo fluorescence can be detected by other detectors. Table 30.2.1 provides a list of detectors for capillary electrophoresis along with some of their important characteristics. Table 30.2.1 . Characteristics of Detectors for Capillary Electrophoresis
| detector | selectivity (universal or analyte must ...) | detection limit (moles injected) | detection limit (molarity) | on-column detection? |
|---|---|---|---|---|
| UV/Vis absorbance | have a UV/Vis chromophore | $10^{-13} - 10^{-16}$ | $10^{-5} - 10^{-7}$ | yes |
| indirect absorbance | universal | $10^{-12} - 10^{-15}$ | $10^{-4} - 10^{-6}$ | yes |
| fluorescence | have a favorable quantum yield | $10^{-13} - 10^{-17}$ | $10^{-7} - 10^{-9}$ | yes |
| laser fluorescence | have a favorable quantum yield | $10^{-18} - 10^{-20}$ | $10^{-13} - 10^{-16}$ | yes |
| mass spectrometer | universal (total ion); selective (single ion) | $10^{-16} - 10^{-17}$ | $10^{-8} - 10^{-10}$ | no |
| amperometry | undergo oxidation or reduction | $10^{-18} - 10^{-19}$ | $10^{-7} - 10^{-10}$ | no |
| conductivity | universal | $10^{-15} - 10^{-16}$ | $10^{-7} - 10^{-9}$ | no |
| radiometric | be radioactive | $10^{-17} - 10^{-19}$ | $10^{-10} - 10^{-12}$ | yes |
There are several different forms of capillary electrophoresis, each of which has its particular advantages. Several of these methods are described briefly in this section. Capillary Zone Electrophoresis (CZE) The simplest form of capillary electrophoresis is capillary zone electrophoresis. In CZE we fill the capillary tube with a buffer and, after loading the sample, place the ends of the capillary tube in reservoirs that contain additional buffer. Usually the end of the capillary containing the sample is the anode and solutes migrate toward the cathode at a velocity determined by their respective electrophoretic mobilities and the electroosmotic flow. Cations elute first, with smaller, more highly charged cations eluting before larger cations with smaller charges. Neutral species elute as a single band. Anions are the last species to elute, with smaller, more negatively charged anions being the last to elute. We can reverse the direction of electroosmotic flow by adding an alkylammonium salt to the buffer solution. As shown in Figure 30.3.1 , the positively charged ends of the alkylammonium ions bind to the negatively charged silanate ions on the capillary's walls. The tail of an alkylammonium ion is hydrophobic and associates with the tail of another alkylammonium ion. The result is a layer of positive charges that attract anions in the buffer. The migration of these solvated anions toward the anode reverses the electroosmotic flow's direction. The order of elution is exactly opposite that observed under normal conditions. Coating the capillary's walls with a nonionic reagent eliminates the electroosmotic flow. In this form of CZE the cations migrate from the anode to the cathode. Anions elute into the source reservoir and neutral species remain stationary. Capillary zone electrophoresis provides effective separations of charged species, including inorganic anions and cations, organic acids and amines, and large biomolecules such as proteins. For example, CZE was used to separate a mixture of 36 inorganic and organic ions in less than three minutes [Jones, W. R.; Jandik, P. J. Chromatog. 1992, 608, 385–393]. A mixture of neutral species, of course, cannot be resolved. Micellar Electrokinetic Capillary Chromatography (MEKC) One limitation to CZE is its inability to separate neutral species. Micellar electrokinetic capillary chromatography overcomes this limitation by adding a surfactant, such as sodium dodecylsulfate (Figure 30.3.2 a), to the buffer solution. Sodium dodecylsulfate, or SDS, consists of a long-chain hydrophobic tail and a negatively charged ionic functional group at its head. When the concentration of SDS is sufficiently large, a micelle forms. A micelle consists of a spherical agglomeration of 40–100 surfactant molecules in which the hydrocarbon tails point inward and the negatively charged heads point outward (Figure 30.3.2 b). Because micelles have a negative charge, they migrate toward the cathode with a velocity less than the electroosmotic flow velocity. Neutral species partition themselves between the micelles and the buffer solution in a manner similar to the partitioning of solutes between the two liquid phases in HPLC. Because there is a partitioning between two phases, we include the descriptive term chromatography in the technique's name. Note that in MEKC both phases are mobile. The elution order for neutral species in MEKC depends on the extent to which each species partitions into the micelles.
Hydrophilic neutrals are insoluble in the micelle's hydrophobic inner environment and elute as a single band, as they would in CZE. Neutral solutes that are extremely hydrophobic are completely soluble in the micelle, eluting with the micelles as a single band. Those neutral species that exist in a partition equilibrium between the buffer and the micelles elute between the completely hydrophilic and completely hydrophobic neutral species, with those that favor the buffer eluting before those that favor the micelles. Micellar electrokinetic chromatography is used to separate a wide variety of samples, including mixtures of pharmaceutical compounds, vitamins, and explosives. Capillary Gel Electrophoresis (CGE) In capillary gel electrophoresis the capillary tubing is filled with a polymeric gel. Because the gel is porous, a solute migrates through the gel with a velocity determined both by its electrophoretic mobility and by its size. The ability to effect a separation using size is helpful when the solutes have similar electrophoretic mobilities. For example, fragments of DNA of varying length have similar charge-to-size ratios, which makes their separation by CZE difficult. Because the DNA fragments are of different size, a CGE separation is possible. The capillary used for CGE usually is treated to eliminate electroosmotic flow, which prevents the gel from extruding from the capillary tubing. Samples are injected electrokinetically because the gel provides too much resistance for hydrodynamic sampling. The primary application of CGE is the separation of large biomolecules, including DNA fragments, proteins, and oligonucleotides. Capillary Electrochromatography (CEC) Another approach to separating neutral species is capillary electrochromatography. In CEC the capillary tubing is packed with 1.5–3 μm particles coated with a bonded stationary phase. Neutral species separate based on their ability to partition between the stationary phase and the buffer, which is moving as a result of the electroosmotic flow; Figure 30.3.3 provides a representative example for the separation of a mixture of hydrocarbons. A CEC separation is similar to the analogous HPLC separation, but without the need for high-pressure pumps. Efficiency in CEC is better than in HPLC, and analysis times are shorter.
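As a rough illustration of the migration order described earlier for CZE, the sketch below ranks solutes by their total mobility, $\mu_{ep} + \mu_{eof}$ (Equation 12.6); the mobility values are invented for illustration and are not data from the text.

```python
# Elution order in CZE follows from each solute's total mobility,
# mu_tot = mu_ep + mu_eof. The mobilities below are illustrative only.
mu_eof = 6.0e-8   # electroosmotic mobility, m^2 V^-1 s^-1 (toward cathode)

solutes = {
    "small cation": +4.0e-8,   # mu_ep > 0: migrates with the EOF
    "large cation": +1.5e-8,
    "neutral":       0.0,      # carried by the EOF alone
    "large anion":  -1.5e-8,   # mu_ep < 0: migrates against the EOF
    "small anion":  -4.0e-8,
}

# higher total mobility -> shorter migration time -> elutes earlier
order = sorted(solutes, key=lambda s: solutes[s] + mu_eof, reverse=True)
print(" -> ".join(order))
# small cation -> large cation -> neutral -> large anion -> small anion
```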
A thermal method of analysis is a technique in which we measure a physical property of a material as we subject it to a change in temperature. In this chapter we consider three examples of thermal methods: thermogravimetry, differential thermal analysis, and differential scanning calorimetry. • 31.1: Thermogravimetry One method for determining the products of a thermal decomposition is to monitor the sample's mass as a function of temperature, a process called a thermogravimetric analysis (TGA) or thermogravimetry. • 31.2: Differential Thermal Analysis and Differential Scanning Calorimetry Differential thermal analysis (DTA) and differential scanning calorimetry (DSC) are similar methods in which we measure the response of a sample and a reference to a change in temperature. In DTA the temperature applied to the sample is increased linearly and the difference between the temperature of the reference material and the temperature of the sample is recorded as a function of the sample's temperature. 31: Thermal Methods One method for determining the products of a thermal decomposition is to monitor the sample's mass as a function of temperature, a process called a thermogravimetric analysis (TGA) or thermogravimetry. Figure 31.1.1 shows a typical thermogram in which each change in mass—each "step" in the thermogram—represents the loss of a volatile product. As the following example illustrates, we can use a thermogram to identify a compound's decomposition reactions. Example 31.1.1 The thermogram in Figure 31.1.1 shows the mass of a sample of calcium oxalate monohydrate, CaC2O4•H2O, as a function of temperature. The original sample of 17.61 mg was heated from room temperature to 1000°C at a rate of 20°C per minute. For each step in the thermogram, identify the volatilization product and the solid residue that remains. Solution From 100–250°C the sample loses 17.61 mg – 15.44 mg, or 2.17 mg, which is $\frac{2.17 \ \mathrm{mg}}{17.61 \ \mathrm{mg}} \times 100=12.3 \% \nonumber$ of the sample's original mass. In terms of CaC2O4•H2O, this corresponds to a decrease in the molar mass of $0.123 \times 146.11 \ \mathrm{g} / \mathrm{mol}=18.0 \ \mathrm{g} / \mathrm{mol} \nonumber$ The product's molar mass and the temperature range for the decomposition suggest that this is a loss of H2O(g), leaving a residue of CaC2O4. The loss of 3.38 mg from 350–550°C is a 19.2% decrease in the sample's original mass, or a decrease in the molar mass of $0.192 \times 146.11 \ \mathrm{g} / \mathrm{mol}=28.1 \ \mathrm{g} / \mathrm{mol} \nonumber$ which is consistent with the loss of CO(g) and a residue of CaCO3. Finally, the loss of 5.30 mg from 600–800°C is a 30.1% decrease in the sample's original mass, or a decrease in molar mass of $0.301 \times 146.11 \ \mathrm{g} / \mathrm{mol}=44.0 \ \mathrm{g} / \mathrm{mol} \nonumber$ This loss in molar mass is consistent with the release of CO2(g), leaving a final residue of CaO. The three decomposition reactions are $\begin{array}{c}{\mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}(s) \rightarrow \ \mathrm{CaC}_{2} \mathrm{O}_{4}(s)+\mathrm{H}_{2} \mathrm{O}(g)} \\ {\mathrm{CaC}_{2} \mathrm{O}_{4}(s) \rightarrow \ \mathrm{CaCO}_{3}(s)+\mathrm{CO}(g)} \\ {\mathrm{CaCO}_{3}(s) \rightarrow \ \mathrm{CaO}(s)+\mathrm{CO}_{2}(g)}\end{array} \nonumber$ Identifying the products of a thermal decomposition provides information that we can use to develop an analytical procedure.
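Before turning to that procedure, here is a minimal Python sketch of the mass-loss arithmetic in Example 31.1.1; the step masses are those read from the thermogram, and the script checks only the conversion from fractional mass loss to molar-mass loss.

```python
# Converting each TGA step into a molar-mass loss, as in Example 31.1.1.
m0 = 17.61           # initial sample mass, mg
M  = 146.11          # molar mass of CaC2O4.H2O, g/mol

steps = [            # (mass before step, mass after step), mg
    (17.61, 15.44),  # 100-250 C
    (15.44, 12.06),  # 350-550 C (loss of 3.38 mg)
    (12.06,  6.76),  # 600-800 C (loss of 5.30 mg)
]

for before, after in steps:
    frac = (before - after) / m0   # fraction of original mass lost
    print(f"loss = {frac*100:4.1f}% of sample -> {frac*M:5.1f} g/mol")
# 12.3% -> 18.0 g/mol (H2O); 19.2% -> 28.1 g/mol (CO); 30.1% -> 44.0 g/mol (CO2)
```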
For example, the thermogram in Figure 31.1.1 shows that we must heat a precipitate of CaC2O4•H2O to a temperature between 250 and 400°C if we wish to isolate and weigh CaC2O4. Alternatively, heating the sample to 1000°C allows us to isolate and weigh CaO. Exercise 31.1.1 Under the same conditions as Figure 31.1.1 , the thermogram for a 22.16 mg sample of MgC2O4•H2O shows two steps: a loss of 3.06 mg from 100–250°C and a loss of 12.24 mg from 350–550°C. For each step, identify the volatilization product and the solid residue that remains. Using your results from this exercise and the results from Example 31.1.1 , explain how you can use thermogravimetry to analyze a mixture that contains CaC2O4•H2O and MgC2O4•H2O. You may assume that other components in the sample are inert and thermally stable below 1000°C. Answer From 100–250°C the sample loses 13.8% of its mass, or a loss of $0.138 \times 130.34 \ \mathrm{g} / \mathrm{mol}=18.0 \ \mathrm{g} / \mathrm{mol} \nonumber$ which is consistent with the loss of H2O(g) and a residue of MgC2O4. From 350–550°C the sample loses 55.23% of its original mass, or a loss of $0.5523 \times 130.34 \ \mathrm{g} / \mathrm{mol}=71.99 \ \mathrm{g} / \mathrm{mol} \nonumber$ This weight loss is consistent with the simultaneous loss of CO(g) and CO2(g), leaving a residue of MgO. We can analyze the mixture by heating a portion of the sample to 300°C, 600°C, and 1000°C, recording the mass at each temperature. The loss of mass between 600°C and 1000°C, $\Delta m_2$, is due to the loss of CO2(g) from the decomposition of CaCO3 to CaO, and is proportional to the mass of CaC2O4•H2O in the sample. $\mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}=\Delta m_{2} \times \frac{1 \ \mathrm{mol} \ \mathrm{CO}_{2}}{44.01 \ \mathrm{g} \ \mathrm{CO}_{2}} \times \frac{146.11 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{\mathrm{mol} \ \mathrm{CO}_{2}} \nonumber$ The change in mass between 300°C and 600°C, $\Delta m_1$, is due to the loss of CO(g) from CaC2O4•H2O and the loss of CO(g) and CO2(g) from MgC2O4•H2O. Because we already know the amount of CaC2O4•H2O in the sample, we can calculate its contribution to $\Delta m_1$. $\left(\Delta m_{1}\right)_{\mathrm{Ca}}=\mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O} \times \frac{1 \ \mathrm{mol} \ \mathrm{CO}}{146.11 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}} \times \frac{28.01 \ \mathrm{g} \ \mathrm{CO}}{\mathrm{mol} \ \mathrm{CO}} \nonumber$ The change in mass between 300°C and 600°C due to the decomposition of MgC2O4•H2O $\left(\Delta m_{1}\right)_{\mathrm{Mg}}=\Delta m_{1}-\left(\Delta m_{1}\right)_{\mathrm{Ca}} \nonumber$ provides the mass of MgC2O4•H2O in the sample. $\mathrm{g} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}=\left(\Delta m_{1}\right)_{\mathrm{Mg}} \times \frac{1 \ \mathrm{mol}\left(\mathrm{CO} \ + \ \mathrm{CO}_{2}\right)}{78.02 \ \mathrm{g} \ \left(\mathrm{CO} \ + \ \mathrm{CO}_{2}\right)} \times \frac{130.34 \ \mathrm{g} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{\mathrm{mol}\ \left(\mathrm{CO} \ + \ \mathrm{CO}_{2}\right)} \nonumber$ Instrumentation In a thermogravimetric analysis, the sample is placed on a small balance pan attached to one arm of an electromagnetic balance (Figure 31.1.2 ).
The sample is lowered into an electric furnace and the furnace's temperature is increased at a fixed rate of a few degrees per minute while the sample's weight is monitored continuously. The instrument usually includes a gas line for purging the volatile decomposition products out of the furnace, and a heat exchanger to dissipate the heat emitted by the furnace. Applications Perhaps the most important application of thermogravimetry is exploring a compound's thermal stability, as illustrated in Figure $1$ and Example $1$ for calcium oxalate monohydrate. TGA is particularly useful for studying the thermal stability of polymers. 31.02: Differential Thermal Analysis Differential thermal analysis (DTA) and differential scanning calorimetry (DSC) are similar methods in which we measure the response of a sample and a reference to a change in temperature. In DTA the temperature applied to the sample is increased linearly and the difference between the temperature of the reference material, $T_{ref}$, and the temperature of the sample, $T_{samp}$, is recorded as a function of the sample's temperature $\Delta T = T_{ref} - T_{samp} \nonumber$ When the sample undergoes an exothermic process, such as a crystallization or a chemical reaction, the temperature of the sample increases more than does the temperature of the reference, resulting in a more negative value for $\Delta T$. For an endothermic process, such as the melting of a crystalline material or the loss of waters of hydration, the sample's temperature lags behind that for the reference material, resulting in a more positive value for $\Delta T$. Figure $1$ shows the general shape of a DTA curve; given the definition of $\Delta T$ above, positive peaks signal an endothermic process and negative peaks signal an exothermic process. Changes in $\Delta T$ that are not peaks, but shifts in the baseline—as seen at the far left of the curve in Figure $1$—are the result of a phase transition for which $\Delta H = 0$, such as a glass transition, which changes the sample's heat capacity. In DSC the temperature applied to the sample is increased linearly and the relative amount of heat needed to maintain the sample and the reference at the same temperature is measured. For an endothermic process, more heat flows into the sample and for an exothermic process, less heat flows into the sample. The result is a DSC curve that looks similar to that for DTA (see Figure $1$). Instrumentation Figure $2$ shows the basic components of a heat-flux differential scanning calorimeter. The sample and the reference materials are sealed within small aluminum pans and placed on separate platforms within the sample chamber. The two platforms are connected by a metal disk that provides a low resistance path for moving heat between the sample and the reference to maintain a $\Delta T$ of zero between the two. Another instrumental design for differential scanning calorimetry, which is called power compensation DSC, places the sample and the reference in separate heating chambers and measures the difference in the power applied to the two chambers needed to maintain a $\Delta T$ of zero. Applications Integrating a peak in DSC or DTA to determine its area, $A$, gives a signal that is proportional to $\Delta H$ $\Delta H = K \times A \nonumber$ where the calibration constant, $K$, is determined using an established reference material. Both DSC and DTA find applications in the study of polymers, liquid crystals, and pharmaceutical compounds.
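A short sketch of how the $\Delta H = K \times A$ calculation might look in practice is given below; the simulated heat-flow trace and the calibration constant are illustrative assumptions, not values from the text.

```python
# Estimating DH from a DSC peak using DH = K * A, where A is the area of
# the baseline-corrected peak. The trace and K are made-up values.
import numpy as np

t = np.linspace(0, 60, 601)                                # time, s
baseline = 0.5                                             # baseline heat flow, mW
signal = baseline + 2.0 * np.exp(-((t - 30.0) / 5.0)**2)   # simulated peak

dt = t[1] - t[0]
A = np.sum(signal - baseline) * dt    # rectangle-rule area, mJ (mW x s)
K = 1.05                              # from an established reference material
print(f"area = {A:.1f} mJ, DH = K*A = {K * A:.1f} mJ")
```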
Radiochemical methods of analysis take advantage of the instability of some elemental isotopes, which decay through the release of alpha particles, beta particles, gamma rays, and/or X-rays. These methods often provide for a selective analysis of one analyte in a complex mixture of other species without the need for a prior separation. In this chapter we review the basics of radioactive decay and its direct application to samples, and two additional methods of importance: neutron activation and isotope dilution. • 32.1: Radioactive Isotopes Atoms that have the same number of protons but a different number of neutrons are isotopes. Although an element's different isotopes have the same chemical properties, their nuclear properties are not identical. The most important difference between isotopes is their stability. The nuclear configuration of a stable isotope remains constant with time. Unstable isotopes, however, disintegrate spontaneously, emitting radioactive decay particles as they transform into a more stable form. • 32.2: Instrumentation Alpha particles, beta particles, gamma rays, and X-rays are measured by using the particle's energy to produce an amplified pulse of electrical current in a detector. These pulses are counted to give the rate of disintegration. There are three common types of detectors: gas-filled detectors, scintillation counters, and semiconductor detectors. • 32.3: Neutron Activation Methods Few analytes are naturally radioactive. For many analytes, however, we can induce radioactivity by irradiating the sample with neutrons in a process called neutron activation analysis (NAA). The radioactive element formed by neutron activation decays to a stable isotope by emitting a gamma ray, and, possibly, other nuclear particles. • 32.4: Isotope Dilution Methods In isotope dilution, an external source of analyte is prepared in a radioactive form with a known activity. We add a known mass of the tracer to a portion of sample that contains an unknown mass of analyte. Analyzing for the total amount of analyte and amount of activity allows us to determine the amount of analyte in the original sample. Thumbnail: Visual representation of alpha decay. (Public Domain; Inductiveload via Wikipedia) 32: Radiochemical Methods Atoms that have the same number of protons but a different number of neutrons are isotopes. To identify an isotope we use the notation ${}_Z^A E$, where E is the element's atomic symbol, Z is the element's atomic number, and A is the element's atomic mass number. Although an element's different isotopes have the same chemical properties, their nuclear properties are not identical. The most important difference between isotopes is their stability. The nuclear configuration of a stable isotope remains constant with time. Unstable isotopes, however, disintegrate spontaneously, emitting radioactive decay particles as they transform into a more stable form. An element's atomic number, Z, is equal to the number of protons and its atomic mass, A, is equal to the sum of the number of protons and neutrons. We represent an isotope of carbon-13 as $_{6}^{13} \text{C}$ because carbon has six protons and seven neutrons. Sometimes we omit Z from this notation—identifying the element and the atomic number is repetitive because all isotopes of carbon have six protons and any atom that has six protons is an isotope of carbon. Thus, 13C and C–13 are alternative notations for this isotope of carbon.
Types of Radioactive Decay Particles The most important types of radioactive particles are alpha particles, beta particles, gamma rays, and X-rays. An alpha particle, $\alpha$, is equivalent to a helium nucleus, ${}_2^4 \text{He}$. When an atom emits an alpha particle, the product is a new atom whose atomic number and atomic mass number are, respectively, 2 and 4 less than those of its unstable parent. The decay of uranium to thorium is one example of alpha emission. $_{92}^{238} \text{U} \longrightarrow _{90}^{234} \text{Th}+\alpha \nonumber$ A beta particle, $\beta$, comes in one of two forms. A negatron, $_{-1}^0 \beta$, is produced when a neutron changes into a proton, increasing the atomic number by one, as shown here for lead. $_{82}^{214} \mathrm{Pb} \longrightarrow_{83}^{214} \mathrm{Bi} + _{-1}^{0} \beta \nonumber$ The conversion of a proton to a neutron results in the emission of a positron, $_{1}^0 \beta$. $_{15}^{30} \mathrm{P} \longrightarrow_{14}^{30} \mathrm{Si} + _{1}^{0} \beta \nonumber$ A negatron, which is the more common type of beta particle, is equivalent to an electron. The emission of an alpha or a beta particle often produces an isotope in an unstable, high energy state. This excess energy is released as a gamma ray, $\gamma$, or as an X-ray. Gamma ray and X-ray emission may also occur without the release of an alpha particle or a beta particle. Radioactive Decay Rates A radioactive isotope's rate of decay, or activity, follows first-order kinetics $A_{t} = -\frac{d N}{d t}=\lambda N \label{13.1}$ where $A_t$ is the isotope's activity at time t, N is the number of radioactive atoms present in the sample at time t, and $\lambda$ is the isotope's decay constant. Activity is expressed as the number of disintegrations per unit time. As with any first-order process, we can rewrite Equation \ref{13.1} in an integrated form. $N_{t}=N_{0} e^{-\lambda t} \label{13.2}$ Substituting Equation \ref{13.2} into Equation \ref{13.1} gives $A_{t} = \lambda N_{0} e^{-\lambda t}=A_{0} e^{-\lambda t} \label{13.3}$ If we measure a sample's activity at time t we can determine the sample's initial activity, A0, or the number of radioactive atoms originally present in the sample, N0. An important characteristic property of a radioactive isotope is its half-life, t1/2, which is the amount of time required for half of the radioactive atoms to disintegrate. For first-order kinetics the half-life is $t_{1 / 2}=\frac{0.693}{\lambda} \label{13.4}$ Because the half-life is independent of the number of radioactive atoms, it remains constant throughout the decay process. For example, if 50% of the radioactive atoms remain after one half-life, then 25% remain after two half-lives, and 12.5% remain after three half-lives. Suppose we begin with an N0 of 1200 atoms. During the first half-life, 600 atoms disintegrate and 600 remain. During the second half-life, 300 of the 600 remaining atoms disintegrate, leaving 300 atoms, or 25% of the original 1200 atoms. Of the 300 remaining atoms, only 150 remain after the third half-life, or 12.5% of the original 1200 atoms. Kinetic information about a radioactive isotope usually is given in terms of its half-life because it provides a more intuitive sense of the isotope's stability. Knowing, for example, that the decay constant for $_{38}^{90}\text{Sr}$ is 0.0247 yr–1 does not give an immediate sense of how fast it disintegrates.
On the other hand, knowing that its half-life is 28.1 yr makes it clear that the concentration of $_{38}^{90}\text{Sr}$ in a sample remains essentially constant over a short period of time. Counting Statistics Radioactivity does not follow a normal distribution because the possible outcomes are not continuous; that is, a sample can emit 1 or 2 or 3 alpha particles (or some other integer value) in a fixed interval, but it cannot emit 2.59 alpha particles during that same interval. A Poisson distribution provides the probability that a given number of events will occur in a fixed interval in time or space if the event has a known average rate and if each new event is independent of the preceding event. Mathematically a Poisson distribution is defined by the equation $P(X, \lambda) = \frac {e^{-\lambda} \lambda^X} {X !} \nonumber$ where $P(X, \lambda)$ is the probability that an event happens X times given the event's average rate, $\lambda$. The Poisson distribution has a theoretical mean, $\mu$, and a theoretical variance, $\sigma^2$, that are each equal to $\lambda$. Note For a more detailed discussion of the distribution of data, including normal distributions and Poisson distributions, see Appendix 1. The accuracy and precision of radiochemical methods generally are within the range of 1–5%. We can improve the precision—which is limited by the random nature of radioactive decay—by counting the emission of radioactive particles for as long a time as is practical. If the number of counts, M, is reasonably large (M ≥ 100), and the counting period is significantly less than the isotope's half-life, then the percent relative standard deviation for the activity, $(\sigma_A)_{rel}$, is approximately $\left(\sigma_{A}\right)_{\mathrm{rel}}=\frac{1}{\sqrt{M}} \times 100 \nonumber$ For example, if we determine the activity by counting 10 000 radioactive particles, then the relative standard deviation is 1%. A radiochemical method's sensitivity is inversely proportional to $(\sigma_A)_{rel}$, which means we can improve the sensitivity by counting more particles. Analysis of Radioactive Analytes The concentration of a long-lived radioactive isotope remains essentially constant during the period of analysis. As shown in Example 32.1.1 , we can use the sample's activity to calculate the number of radioactive particles in the sample. Example 32.1.1 The activity in a 10.00-mL sample of wastewater that contains $_{38}^{90}\text{Sr}$ is $9.07 \times 10^6$ disintegrations/s. What is the molar concentration of $_{38}^{90}\text{Sr}$ in the sample? The half-life for $_{38}^{90}\text{Sr}$ is 28.1 yr. Solution Solving Equation \ref{13.4} for $\lambda$, substituting into Equation \ref{13.1}, and solving for N gives $N=\frac{A \times t_{1 / 2}}{0.693} \nonumber$ Before we can determine the number of atoms of $_{38}^{90}\text{Sr}$ in the sample we must express its activity and its half-life using the same units.
Converting the half-life to seconds gives t1/2 as $8.86 \times 10^8$ s; thus, there are $\frac{\left(9.07 \times 10^{6} \text { disintegrations/s }\right)\left(8.86 \times 10^{8} \text{ s}\right)}{0.693} = 1.16 \times 10^{16} \text{ atoms} _{38}^{90}\text{Sr} \nonumber$ The concentration of $_{38}^{90}\text{Sr}$ in the sample is $\frac{1.16 \times 10^{16} \text { atoms } _{38}^{90} \text{Sr}}{\left(6.022 \times 10^{23} \text { atoms/mol }\right)(0.01000 \mathrm{L})} = 1.93 \times 10^{-6} \text{ M } _{38}^{90}\text{Sr} \nonumber$ The direct analysis of a short-lived radioactive isotope using the method outlined in Example 32.1.1 is less useful because it provides only a transient measure of the isotope's concentration. Instead, we can measure its activity after an elapsed time, t, and use Equation \ref{13.3} to calculate N0. One example of a characterization application is the determination of a sample's age based on the decay of a radioactive isotope naturally present in the sample. The most common example is carbon-14 dating, which is used to determine the age of natural organic materials. As cosmic rays pass through the upper atmosphere, some $_7^{14}\text{N}$ atoms in the atmosphere capture high energy neutrons, converting them into $_6^{14}\text{C}$. The $_6^{14}\text{C}$ then migrates into the lower atmosphere where it oxidizes to form C-14 labeled CO2. Animals and plants subsequently incorporate this labeled CO2 into their tissues. Because this is a steady-state process, all plants and animals have the same ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ in their tissues. When an organism dies, the radioactive decay of $_6^{14}\text{C}$ to $_7^{14}\text{N}$ by $_{-1}^0 \beta$ emission ($t_{1/2}$ = 5730 years) leads to a predictable reduction in the $_6^{14}\text{C}$ to $_6^{12}\text{C}$ ratio. We can use the change in this ratio to date samples that are as much as 30 000 years old, although the precision of the analysis is best when the sample's age is less than 7000 years. The accuracy of carbon-14 dating depends upon our assumption that the natural $_6^{14}\text{C}$ to $_6^{12}\text{C}$ ratio in the atmosphere is constant over time. Some variation in the ratio has occurred as the result of the increased consumption of fossil fuels and the production of $_6^{14}\text{C}$ during the testing of nuclear weapons. A calibration curve prepared using samples of known age—examples of samples include tree rings, deep ocean sediments, coral samples, and cave deposits—limits this source of uncertainty. There is no need to prepare a calibration curve for each analysis. Instead, there is a universal calibration curve known as IntCal. The calibration curve from 2013, IntCal13, is described in the following paper: Reimer, P. J., et al. "IntCal13 and Marine13 Radiocarbon Age Calibration Curves 0–50,000 Years Cal BP," Radiocarbon 2013, 55, 1869–1887. This calibration spans 50 000 years before the present (BP). Example 32.1.2 To determine the age of a fabric sample, the relative ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ was measured yielding a result of 80.9% of that found in modern fibers. How old is the fabric? Solution Equation \ref{13.3} and Equation \ref{13.4} provide us with a method to convert a change in the ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ to the fabric's age. Letting A0 be the ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ in modern fibers, we assign it a value of 1.00. The ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ in the sample, A, is 0.809.
Solving gives $t=\ln \frac{A_{0}}{A} \times \frac{t_{1 / 2}}{0.693}=\ln \frac{1.00}{0.809} \times \frac{5730 \text { yr }}{0.693}=1750 \text { yr } \nonumber$ Other isotopes can be used to determine a sample’s age. The age of rocks, for example, has been determined from the ratio of the number of $_{92}^{238}\text{U}$ to the number of stable $_{82}^{206}\text{Pb}$ atoms produced by radioactive decay. For rocks that do not contain uranium, dating is accomplished by comparing the ratio of radioactive $_{19}^{40}\text{K}$ to the stable $_{18}^{40}\text{Ar}$. Another example is the dating of sediments collected from lakes by measuring the amount of $_{82}^{210}\text{Pb}$ that is present.
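The decay arithmetic in this section reduces to a few lines of code; the minimal Python sketch below applies Equations 13.3 and 13.4 to the fabric-dating problem in Example 32.1.2.

```python
# Carbon-14 dating: solve A = A0 * exp(-lambda * t) for t, with the
# decay constant taken from the half-life (Equation 13.4).
import math

t_half = 5730.0                  # t1/2 for C-14, years
lam = 0.693 / t_half             # decay constant, yr^-1
A0, A = 1.00, 0.809              # modern and measured 14C/12C ratios

age = math.log(A0 / A) / lam     # rearranged Equation 13.3
print(f"age = {age:.0f} years")  # -> ~1750 years
```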
Alpha particles, beta particles, gamma rays, and X-rays are measured by using the particle's energy to produce an amplified pulse of electrical current in a detector. These pulses are counted to give the rate of disintegration. There are three common types of detectors: gas-filled detectors, scintillation counters, and semiconductor detectors. A gas-filled detector consists of a tube that contains an inert gas, such as Ar. When a radioactive particle enters the tube it ionizes the inert gas, producing an Ar+/e ion-pair. Movement of the electron toward the anode and of the Ar+ toward the cathode generates a measurable electrical current. A Geiger counter is one example of a gas-filled detector. A scintillation counter uses a fluorescent material to convert radioactive particles into easy-to-measure photons. For example, one solid-state scintillation counter consists of a NaI crystal that contains 0.2% TlI, which produces several thousand photons for each radioactive particle. Finally, in a semiconductor detector, absorption of a single radioactive particle promotes thousands of electrons to the semiconductor's conduction band, increasing conductivity. 32.03: Neutron Activation Methods Few analytes are naturally radioactive. For many analytes, however, we can induce radioactivity by irradiating the sample with neutrons in a process called neutron activation analysis (NAA). The radioactive element formed by neutron activation decays to a stable isotope by emitting a gamma ray, and, possibly, other nuclear particles. The rate of gamma-ray emission is proportional to the analyte's initial concentration in the sample. For example, if we place a sample containing non-radioactive $_{13}^{27}\text{Al}$ in a nuclear reactor and irradiate it with neutrons, the following nuclear reaction takes place. $_{13}^{27} \mathrm{Al}+_{0}^{1} \mathrm{n} \longrightarrow_{13}^{28} \mathrm{Al} \nonumber$ The radioactive isotope $_{13}^{28}\text{Al}$ has a characteristic decay process that includes the release of a beta particle and a gamma ray. $_{13}^{28} \mathrm{Al} \longrightarrow_{14}^{28} \mathrm{Si} + _{-1}^{0} \beta + \gamma \nonumber$ When irradiation is complete, we remove the sample from the nuclear reactor, allow any short-lived radioactive interferences to decay into the background, and measure the rate of gamma-ray emission. The initial activity at the end of irradiation depends on the number of $_{13}^{28}\text{Al}$ atoms that are present. This, in turn, is equal to the difference between the rate of formation for $_{13}^{28}\text{Al}$ and its rate of disintegration $\frac {dN_{_{13}^{28} \text{Al}}} {dt} = \Phi \sigma N_{_{13}^{27} \text{Al}} - \lambda N_{_{13}^{28} \text{Al}} \label{13.5}$ where $\Phi$ is the neutron flux and $\sigma$ is the reaction cross-section, or probability that a $_{13}^{27}\text{Al}$ nucleus captures a neutron. Integrating Equation \ref{13.5} over the time of irradiation, ti, and multiplying by $\lambda$ gives the initial activity, A0, at the end of irradiation as $A_0 = \lambda N_{_{13}^{28}\text{Al}} = \Phi \sigma N_{_{13}^{27}\text{Al}} (1-e^{-\lambda t_i}) \nonumber$ If we know the values for A0, $\Phi$, $\sigma$, $\lambda$, and ti, then we can calculate the number of atoms of $_{13}^{27}\text{Al}$ initially present in the sample. A simpler approach is to use one or more external standards.
Letting $(A_0)_x$ and $(A_0)_s$ represent the analyte's initial activity in an unknown and in an external standard, and letting $w_x$ and $w_s$ represent the analyte's weight in the unknown and in the external standard, we obtain the following pair of equations $\left(A_{0}\right)_{x}=k w_{x} \label{13.6}$ $\left(A_{0}\right)_{s}=k w_{s} \label{13.7}$ that we can solve to determine the analyte's mass in the sample. As noted earlier, gamma ray emission is measured following a period during which we allow short-lived interferents to decay into the background. As shown in Figure 32.3.1 , we determine the sample's or the standard's initial activity by extrapolating a curve of activity versus time back to t = 0. Alternatively, if we irradiate the sample and the standard at the same time, and if we measure their activities at the same time, then we can substitute these activities for (A0)x and (A0)s. This is the strategy used in the following example. Example 32.3.1 The concentration of Mn in steel is determined by a neutron activation analysis using the method of external standards. A 1.000-g sample of an unknown steel sample and a 0.950-g sample of a standard steel known to contain 0.463% w/w Mn are irradiated with neutrons for 10 h in a nuclear reactor. After a 40-min delay the gamma ray emission is 2542 cpm (counts per minute) for the unknown and 1984 cpm for the external standard. What is the %w/w Mn in the unknown steel sample? Solution Combining equations \ref{13.6} and \ref{13.7} gives $w_{x}=\frac{A_{x}}{A_{s}} \times w_{s} \nonumber$ The weight of Mn in the external standard is $w_{s}=\frac{0.00463 \text{ g } \text{Mn}}{\text{ g } \text { steel }} \times 0.950 \text{ g} \text { steel }=0.00440 \text{ g} \text{ Mn} \nonumber$ Substituting into the above equation gives $w_{x}=\frac{2542 \text{ cpm}}{1984 \text{ cpm}} \times 0.00440 \text{ g} \text{ Mn}=0.00564 \text{ g} \text{ Mn} \nonumber$ Because the original mass of steel is 1.000 g, the %w/w Mn is 0.564%. Among the advantages of neutron activation are its applicability to almost all elements in the periodic table and the fact that it is nondestructive to the sample. Consequently, NAA is an important technique for analyzing archeological and forensic samples, as well as works of art.
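The external-standards calculation in Example 32.3.1 is easy to script; the following minimal sketch reproduces its arithmetic, with variable names of our own choosing.

```python
# Method of external standards in NAA (Equations 13.6 and 13.7),
# using the numbers from Example 32.3.1.
A_x, A_s = 2542.0, 1984.0          # measured activities, cpm
m_std, m_unknown = 0.950, 1.000    # masses of standard and unknown steel, g
frac_Mn_std = 0.00463              # 0.463% w/w Mn in the standard

w_s = frac_Mn_std * m_std          # g Mn in the standard
w_x = (A_x / A_s) * w_s            # g Mn in the unknown

print(f"Mn in unknown = {w_x:.5f} g -> {w_x / m_unknown * 100:.3f}% w/w")
# -> ~0.00564 g Mn, or 0.564% w/w
```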
Another important radiochemical method for the analysis of nonradioactive analytes is isotope dilution. An external source of analyte is prepared in a radioactive form with a known activity, $A_T$, for its radioactive decay—we call this form of the analyte a tracer. To prepare a sample for analysis we add a known mass of the tracer, wT, to a portion of sample that contains an unknown mass, wx, of analyte. After homogenizing the sample and tracer, we isolate wA grams of analyte by using a series of appropriate chemical and physical treatments. Because these chemical and physical treatments cannot distinguish between radioactive and nonradioactive forms of the analyte, the isolated material contains both. Finally, we measure the activity of the isolated sample, AA. If we recover all the analyte—both the radioactive tracer and the nonradioactive analyte—then AA and $A_T$ are equal and $w_x = w_A - w_T$. Normally, we fail to recover all the analyte. In this case $A_A$ is less than $A_T$, and $A_{A}=A_{T} \times \frac{w_{A}}{w_{x}+w_{T}} \label{13.8}$ The ratio of weights in Equation \ref{13.8} accounts for any loss of activity that results from our failure to recover all the analyte. Solving Equation \ref{13.8} for wx gives $w_{x}=\frac{A_{T}}{A_{A}} w_{A}-w_{T} \label{13.9}$ How we process the sample depends on the analyte and the sample's matrix. We might, for example, digest the sample to bring the analyte into solution. After filtering the sample to remove the residual solids, we might precipitate the analyte, isolate it by filtration, dry it in an oven, and obtain its weight. Given that the goal of an analysis is to determine the amount of nonradioactive analyte in our sample, the realization that we might not recover all the analyte might strike you as unsettling. A single liquid–liquid extraction rarely has an extraction efficiency of 100%. One advantage of isotope dilution is that the extraction efficiency for the nonradioactive analyte and for the tracer are the same. If we recover 50% of the tracer, then we also recover 50% of the nonradioactive analyte. Because we know how much tracer we added to the sample, we can determine how much of the nonradioactive analyte is in the sample. Example 32.4.1 The concentration of insulin in a production vat is determined by isotope dilution. A 1.00-mg sample of insulin labeled with 14C that has an activity of 549 cpm is added to a 10.0-mL sample taken from the production vat. After homogenizing the sample, a portion of the insulin is separated and purified, yielding 18.3 mg of pure insulin. The activity for the isolated insulin is measured at 148 cpm. How many mg of insulin are in the original sample? Solution Substituting known values into Equation \ref{13.9} gives $w_{x}=\frac{549 \text{ cpm}}{148 \text{ cpm}} \times 18.3 \text{ mg}-1.00 \text{ mg}=66.9 \text{ mg} \text { insulin } \nonumber$ Equation \ref{13.8} and Equation \ref{13.9} are valid only if the tracer's half-life is considerably longer than the time it takes to conduct the analysis. If this is not the case, then the decrease in activity is due both to the incomplete recovery and the natural decrease in the tracer's activity. Table 32.4.1 provides a list of several common tracers for isotope dilution.
Table 32.4.1. Common Tracers for Isotope Dilution

| isotope | half-life |
|---------|-----------|
| 3H | 12.5 years |
| 14C | 5730 years |
| 32P | 14.3 days |
| 35S | 87.1 days |
| 45Ca | 152 days |
| 55Fe | 2.91 years |
| 60Co | 5.3 years |
| 131I | 8 days |

An important feature of isotope dilution is that it is not necessary to recover all the analyte to determine the amount of analyte present in the original sample. Isotope dilution, therefore, is useful for the analysis of samples with complex matrices, where a complete recovery of the analyte is difficult.
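A minimal sketch of the isotope-dilution calculation, using Equation 13.9 and the numbers from Example 32.4.1:

```python
# Isotope dilution (Equation 13.9): w_x = (A_T / A_A) * w_A - w_T
A_T = 549.0    # activity of the added tracer, cpm
A_A = 148.0    # activity of the isolated analyte, cpm
w_A = 18.3     # mass of analyte recovered, mg
w_T = 1.00     # mass of tracer added, mg

w_x = (A_T / A_A) * w_A - w_T
print(f"insulin in sample = {w_x:.1f} mg")   # -> ~66.9 mg
```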
• 33.1: Overview of Automated Methods of Analysis An automated method of analysis is one in which one or more steps in an analysis are completed without the direct action of the analyst. Instead, the instrument itself completes these actions. Some of these actions are carried out discretely and some are carried out continuously. • 33.2: Flow-Injection Analysis In this section we consider the technique of flow injection analysis in which we inject the sample into a flowing carrier stream that gives rise to a transient signal at the detector. Because the shape of this transient signal depends on the physical and chemical kinetic processes that take place in the carrier stream during the time between injection and detection, we include flow injection analysis in this chapter. • 33.3: Other Automated Methods of Analysis In the last two sections we introduced two examples of automated methods of analysis: a brief mention of automated titrators and a more extensive coverage of flow-injection analysis. In this section we consider three additional examples of automated methods of analysis: the stopped-flow analyzer, the centrifugal analyzer, and disposable single-test analyzers based on thin films, screen-printing, and paper. 33: Automated Methods of Analysis An automated method of analysis is one in which one or more steps in an analysis are completed without the direct action of the analyst. Instead, the instrument itself completes these actions. Some of these actions are carried out discretely, such as an autosampler that completes all facets of sample preparation (collecting discrete samples, adding reagents, and diluting the mixture to a desired volume) prior to the analyst analyzing the samples. Another example of a discrete instrument is an automated titrator (see Figure \(1\)) that relieves the analyst from manually operating a buret. Instead, the analyst introduces the sample into the automated titrator and lets the instrument complete the titration. Other automated instruments are continuous in nature, in which samples are injected, either manually or with an autosampler, into a flowing stream of reagents that transports the samples to a detector and serves as a source of reagents that convert the analyte into a form suitable for analysis. Both discrete and continuous automated methods of analysis have the advantage of allowing for a high throughput of samples and providing for greater reproducibility in results by relieving the analyst of the tedium associated with completing repetitive tasks. In general, continuous automated methods can handle more samples per unit time than can a discrete method. 33.02: Flow-Injection Analysis In this section we consider the technique of flow injection analysis—a continuous automated method—in which we inject the sample into a flowing carrier stream that gives rise to a transient signal at the detector. The shape of this transient signal depends on the physical and chemical kinetic processes that take place in the carrier stream during the time between injection and detection. Theory and Practice Flow injection analysis (FIA) was developed in the mid-1970s as a highly efficient technique for the automated analyses of samples [see, for example, (a) Ruzicka, J.; Hansen, E. H. Anal. Chim. Acta 1975, 78, 145–157; (b) Stewart, K. K.; Beecher, G. R.; Hare, P. E. Anal. Biochem. 1976, 70, 167–173; (c) Valcárcel, M.; Luque de Castro, M. D.
Flow Injection Analysis: Principles and Applications, Ellis Horwood: Chichester, England, 1987]. Unlike the centrifugal analyzer described later in this chapter, in which the number of samples is limited by the transfer disk’s size, FIA allows for the rapid, sequential analysis of an unlimited number of samples. FIA is one example of a continuous-flow analyzer, in which we sequentially introduce samples at regular intervals into a liquid carrier stream that transports them to the detector. A schematic diagram detailing the basic components of a flow injection analyzer is shown in Figure 33.2.1 . The reagent that serves as the carrier is stored in a reservoir, and a propelling unit maintains a constant flow of the carrier through a system of tubing that comprises the transport system. We inject the sample directly into the flowing carrier stream, where it travels through one or more mixing and reaction zones before it reaches the detector’s flow-cell. Figure 33.2.1 is the simplest design for a flow injection analyzer, which consists of a single channel and a single reagent reservoir. Multiple channel instruments that merge together separate channels, each of which introduces a new reagent into the carrier stream, also are possible. When we first inject a sample into the carrier stream it has the rectangular flow profile of width w shown in Figure 33.2.2 a. As the sample moves through the mixing zone and the reaction zone, the width of its flow profile increases as the sample disperses into the carrier stream. Dispersion results from two processes: convection due to the flow of the carrier stream and diffusion due to the concentration gradient between the sample and the carrier stream. Convection occurs by laminar flow. The linear velocity of the sample at the tube’s walls is zero, but the sample at the center of the tube moves with a linear velocity twice that of the carrier stream. The result is the parabolic flow profile shown in Figure 33.2.2 b. Convection is the primary means of dispersion in the first 100 ms following the sample’s injection. The second contribution to the sample’s dispersion is diffusion due to the concentration gradient that exists between the sample and the carrier stream. As shown in Figure 33.2.2 , diffusion occurs parallel (axially) and perpendicular (radially) to the direction in which the carrier stream is moving. Only radial diffusion is important in a flow injection analysis. Radial diffusion decreases the sample’s linear velocity at the center of the tubing, while the sample at the edge of the tubing experiences an increase in its linear velocity. Diffusion helps to maintain the integrity of the sample’s flow profile (Figure 33.2.2 c) and prevents adjacent samples in the carrier stream from dispersing into one another. Both convection and diffusion make significant contributions to dispersion from approximately 3–20 s after the sample’s injection. This is the normal time scale for a flow injection analysis. After approximately 25 s, diffusion is the only significant contributor to dispersion, resulting in a flow profile similar to that shown in Figure 33.2.2 d. An FIA curve, or fiagram, is a plot of the detector’s signal as a function of time. Figure 33.2.4 shows a typical fiagram for conditions in which both convection and diffusion contribute to the sample’s dispersion. Also shown on the figure are several parameters that characterize a sample’s fiagram. Two parameters define the time for a sample to move from the injector to the detector. 
Travel time, ta, is the time between the sample's injection and the arrival of its leading edge at the detector. Residence time, T, on the other hand, is the time required to obtain the maximum signal. The difference between the residence time and the travel time is $t^{\prime}$, which approaches zero when convection is the primary means of dispersion, and increases in value as the contribution from diffusion becomes more important. The time required for the sample to pass through the detector's flow cell—and for the signal to return to the baseline—is also described by two parameters. The baseline-to-baseline time, $\Delta t$, is the time between the arrival of the sample's leading edge and the departure of its trailing edge. The elapsed time between the maximum signal and its return to the baseline is the return time, $T^{\prime}$. The final characteristic parameter of a fiagram is the sample's peak height, h. Of the six parameters shown in Figure 33.2.4 , the most important are the peak height and the return time. Peak height is important because it is directly or indirectly related to the analyte's concentration. The sensitivity of an FIA method, therefore, is determined by the peak height. The return time is important because it determines the frequency with which we may inject samples. Figure 33.2.5 shows that if we inject a second sample at a time $T^{\prime}$ after we inject the first sample, there is little overlap of the two FIA curves. By injecting samples at intervals of $T^{\prime}$, we obtain the maximum possible sampling rate. Peak heights and return times are influenced by the dispersion of the sample's flow profile and by the physical and chemical properties of the flow injection system. Physical parameters that affect h and $T^{\prime}$ include the volume of sample we inject, the flow rate, the length, diameter, and geometry of the mixing zone and the reaction zone, and the presence of junctions where separate channels merge together. The kinetics of any chemical reactions between the sample and the reagents in the carrier stream also influence the peak height and return time. Unfortunately, there is no good theory that we can use to consistently predict the peak height and the return time for a given set of physical and chemical parameters. The design of a flow injection analyzer for a particular analytical problem still occurs largely by a process of experimentation. Nevertheless, we can make some general observations about the effects of physical and chemical parameters. In the absence of chemical effects, we can improve sensitivity—that is, obtain larger peak heights—by injecting larger samples, by increasing the flow rate, by decreasing the length and diameter of the tubing in the mixing zone and the reaction zone, and by merging separate channels before the point where the sample is injected. With the exception of sample volume, we can increase the sampling rate—that is, decrease the return time—by using the same combination of physical parameters. Larger sample volumes, however, lead to longer return times and a decrease in sample throughput. The effect of chemical reactivity depends on whether the species we are monitoring is a reactant or a product. For example, if we are monitoring a reactant, we can improve sensitivity by choosing conditions that decrease the residence time, T, or by adjusting the carrier stream's composition so that the reaction occurs more slowly.
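To make the link between the return time and sample throughput concrete, here is a minimal sketch; the return time used is an assumed, illustrative value rather than one taken from the text.

```python
# Maximum FIA throughput when samples are injected at intervals of the
# return time, T' (see Figure 33.2.5). T_return is an assumed value.
T_return = 24.0                     # return time T', in seconds
rate = 3600.0 / T_return            # injections per hour with no peak overlap
print(f"maximum sampling rate = {rate:.0f} samples/h")   # -> 150 samples/h
```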
Instrumentation The basic components of a flow injection analyzer are shown in Figure 33.2.6 and include a pump to propel the carrier stream and the reagent streams, a means to inject the sample into the carrier stream, and a detector to monitor the composition of the carrier stream. Connecting these units is a transport system that brings together separate channels and provides time for the sample to mix with the carrier stream and to react with the reagent streams. We also can incorporate separation modules into the transport system. Each of these components is considered in greater detail in this section. Propelling Unit The propelling unit moves the carrier stream through the flow injection analyzer. Although several different propelling units have been used, the most common is a peristaltic pump, which, as shown in Figure 33.2.7 , consists of a set of rollers attached to the outside of a rotating drum. Tubing from the reagent reservoirs fits between the rollers and a fixed plate. As the drum rotates the rollers squeeze the tubing, forcing the contents of the tubing to move in the direction of the rotation. Peristaltic pumps provide a constant flow rate, which is controlled by the drum's speed of rotation and the inner diameter of the tubing. Flow rates from 0.0005–40 mL/min are possible, which is more than adequate to meet the needs of FIA, where flow rates of 0.5–2.5 mL/min are common. One limitation to a peristaltic pump is that it produces a pulsed flow—particularly at higher flow rates—that may lead to oscillations in the signal. Injector The sample, typically 5–200 μL, is injected into the carrier stream. Although syringe injections through a rubber septum are possible, the more common method—as seen in Figure 33.2.6 —is to use a rotary, or loop, injector similar to that used in an HPLC. This type of injector provides for a reproducible sample volume and is easily adaptable to automation, an important feature when high sampling rates are needed. Detector The most common detectors for flow injection analysis are the electrochemical and optical detectors used in HPLC. These detectors are discussed in Chapter 28 and are not considered further in this section. FIA detectors also have been designed around the use of ion selective electrodes and atomic absorption spectroscopy. Transport System The heart of a flow injection analyzer is the transport system that brings together the carrier stream, the sample, and any reagents that react with the sample. Each reagent stream is considered a separate channel, and all channels must merge before the carrier stream reaches the detector. The complete transport system is called a manifold. The simplest manifold has a single channel, the basic outline of which is shown in Figure 33.2.8 . This type of manifold is used for the direct analysis of an analyte that does not require a chemical reaction. In this case the carrier stream serves only as a means for rapidly and reproducibly transporting the sample to the detector. For example, this manifold design has been used for sample introduction in atomic absorption spectroscopy, achieving sampling rates as high as 700 samples/h. A single-channel manifold also is used for determining a sample's pH or determining the concentration of metal ions using an ion selective electrode. We can also use the single-channel manifold in Figure 33.2.8 for an analysis in which we monitor the product of a chemical reaction between the sample and a reactant.
In this case the carrier stream both transports the sample to the detector and reacts with the sample. Because the sample must mix with the carrier stream, a lower flow rate is used. One example is the determination of chloride in water, which is based on the following sequence of reactions. $\mathrm{Hg}(\mathrm{SCN})_{2}(a q)+2 \mathrm{Cl}^{-}(a q) \rightleftharpoons \: \mathrm{HgCl}_{2}(a q)+2 \mathrm{SCN}^{-}(a q) \nonumber$ $\mathrm{Fe}^{3+}(a q)+\mathrm{SCN}^{-}(a q) \rightleftharpoons \mathrm{Fe}(\mathrm{SCN})^{2+}(a q) \nonumber$ The carrier stream consists of an acidic solution of Hg(SCN)2 and Fe3+. Injecting a sample that contains chloride into the carrier stream displaces thiocyanate from Hg(SCN)2. The displaced thiocyanate then reacts with Fe3+ to form the red-colored Fe(SCN)2+ complex, the absorbance of which is monitored at a wavelength of 480 nm. Sampling rates of approximately 120 samples per hour have been achieved with this system [Hansen, E. H.; Ruzicka, J. J. Chem. Educ. 1979, 56, 677–680]. Most flow injection analyses that include a chemical reaction use a manifold with two or more channels. Including additional channels provides more control over the mixing of reagents and the interaction between the reagents and the sample. Two configurations are possible for a dual-channel system. A dual-channel manifold, such as the one shown in Figure 33.2.9a, is used when the reagents cannot be premixed because of their reactivity. For example, in acidic solutions phosphate reacts with molybdate to form the heteropoly acid H3P(Mo12O40). In the presence of ascorbic acid the molybdenum in the heteropoly acid is reduced from Mo(VI) to Mo(V), forming a blue-colored complex that is monitored spectrophotometrically at 660 nm [Hansen, E. H.; Ruzicka, J. J. Chem. Educ. 1979, 56, 677–680]. Because ascorbic acid reduces molybdate, the two reagents are placed in separate channels that merge just before the loop injector. A dual-channel manifold also is used to add a second reagent after injecting the sample into a carrier stream, as shown in Figure 33.2.9b. This style of manifold is used for the quantitative analysis of many analytes, including the determination of a wastewater’s chemical oxygen demand (COD) [Korenaga, T.; Ikatsu, H. Anal. Chim. Acta 1982, 141, 301–309]. Chemical oxygen demand is a measure of the amount of organic matter in the wastewater sample. In the conventional method of analysis, COD is determined by refluxing the sample for 2 h in the presence of acid and a strong oxidizing agent, such as K2Cr2O7 or KMnO4. When refluxing is complete, the amount of oxidant consumed in the reaction is determined by a redox titration. In the flow injection version of this analysis, the sample is injected into a carrier stream of aqueous H2SO4, which merges with a solution of the oxidant from a secondary channel. The oxidation reaction is kinetically slow and, as a result, the mixing coil and the reaction coil are very long—typically 40 m—and submerged in a thermostated bath. The sampling rate is lower than that for most flow injection analyses, but at 10–30 samples/h it is substantially greater than that of the conventional redox titrimetric method. More complex manifolds involving three or more channels are common, but the possible combinations of designs are too numerous to discuss. One example of a four-channel manifold is shown in Figure 33.2.10.
Separation Modules By incorporating a separation module into the flow injection manifold we can include a separation—dialysis, gaseous diffusion, and liquid–liquid extractions are examples—in a flow injection analysis. Although these separations are never complete, they are reproducible if we carefully control the experimental conditions. Dialysis and gaseous diffusion are accomplished by placing a semipermeable membrane between the carrier stream containing the sample and an acceptor stream, as shown in Figure 33.2.11. As the sample stream passes through the separation module, a portion of those species that can cross the semipermeable membrane does so, entering the acceptor stream. This type of separation module is common for the analysis of clinical samples, such as serum and urine, where a dialysis membrane separates the analyte from its complex matrix. Semipermeable gaseous diffusion membranes are used for the determination of ammonia and carbon dioxide in blood. For example, ammonia is determined by injecting the sample into a carrier stream of aqueous NaOH. Ammonia diffuses across the semipermeable membrane into an acceptor stream that contains an acid–base indicator. The resulting acid–base reaction between ammonia and the indicator is monitored spectrophotometrically. Liquid–liquid extractions are accomplished by merging together two immiscible fluids, each carried in a separate channel. The result is a segmented flow through the separation module, consisting of alternating portions of the two phases. At the outlet of the separation module the two fluids are separated by taking advantage of the difference in their densities. Figure 33.2.12 shows a typical configuration for a separation module in which the sample is injected into an aqueous phase and extracted into a less dense organic phase that passes through the detector. Quantitative Applications In a quantitative flow injection method a calibration curve is determined by injecting a series of external standards that contain known concentrations of analyte. The calibration curve’s format—examples include plots of absorbance versus concentration and of potential versus concentration—depends on the method of detection. Flow injection analysis has been used to analyze a wide variety of samples, including environmental, clinical, agricultural, industrial, and pharmaceutical samples. The majority of analyses involve environmental and clinical samples, which are the focus of this section. Quantitative flow injection methods have been developed for cationic, anionic, and molecular pollutants in wastewater, freshwaters, groundwaters, and marine waters, three examples of which were described in the previous section. Table 33.2.1 provides a partial listing of other analytes that have been determined using FIA, many of which are modifications of standard spectrophotometric and potentiometric methods. An additional advantage of FIA for environmental analysis is the ability to provide for the continuous, in situ monitoring of pollutants in the field [Andrew, K. N.; Blundell, N. J.; Price, D.; Worsfold, P. J. Anal. Chem. 1994, 66, 916A–922A].
Table 33.2.1: Selected Flow Injection Analysis Methods for Environmental Samples

analyte             sample       sample volume (µL)  concentration range  sampling frequency (h–1)
Ca2+                freshwater   20                  0.8–7.2 ppm          80
Cu2+                groundwater  70–700              100–400 ppb          20
Pb2+                groundwater  70–700              0–40 ppb             20
Zn2+                seawater     1000                1–100 ppb            30–60
$\text{NH}_4^+$     seawater     60                  0.18–18.1 ppb        288
$\text{NO}_3^-$     rainwater    1000                1–10 ppb             40
$\text{SO}_4^{2-}$  freshwater   400                 4–140 ppb            180
$\text{CN}^-$       industrial   10                  0.3–100 ppm          40

Source: Adapted from Valcárcel, M.; Luque de Castro, M. D. Flow-Injection Analysis: Principles and Practice, Ellis Horwood: Chichester, England, 1987.

Several standard methods for the analysis of water involve an acid–base, complexation, or redox titration. It is easy to adapt these titrations to FIA using a single-channel manifold similar to that shown in Figure 33.2.8 [Ramsing, A. U.; Ruzicka, J.; Hansen, E. H. Anal. Chim. Acta 1981, 129, 1–17]. The titrant—whose concentration must be stoichiometrically less than that of the analyte—and a visual indicator are placed in the reagent reservoir and pumped continuously through the manifold. When we inject the sample, it mixes thoroughly with the titrant in the carrier stream. The reaction between the analyte, which is in excess, and the titrant produces a relatively broad rectangular flow profile for the sample. As the sample moves toward the detector, additional mixing occurs and the width of the sample’s flow profile decreases. When the sample passes through the detector, we determine the width of its flow profile, $\Delta T$, by monitoring the indicator’s absorbance. A calibration curve of $\Delta T$ versus log[analyte] is prepared using standard solutions of analyte.
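As a sketch of how such a titration is calibrated, the short example below fits $\Delta T$ against log[analyte] and then inverts the fit for an unknown; the concentrations and peak widths are invented for illustration and are not data from the studies cited above.

```python
import numpy as np

# Hypothetical calibration data for a single-channel FIA titration:
# concentrations of the standards (mol/L) and measured peak widths (s).
conc = np.array([1.0e-3, 5.0e-3, 1.0e-2, 5.0e-2, 1.0e-1])
delta_T = np.array([4.1, 9.9, 12.3, 18.0, 20.5])

# Delta-T varies linearly with log[analyte].
slope, intercept = np.polyfit(np.log10(conc), delta_T, 1)

# Invert the calibration for a sample with a peak width of 14.6 s.
c_sample = 10 ** ((14.6 - intercept) / slope)
print(f"estimated concentration = {c_sample:.2e} M")
```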
Flow injection analysis also has found numerous applications in the analysis of clinical samples, using both enzymatic and nonenzymatic methods. Table 33.2.2 summarizes several examples.

Table 33.2.2: Selected Flow Injection Analysis Methods for Clinical Samples

analyte             sample        sample volume (µL)  concentration range  sampling frequency (h–1)
nonenzymatic methods
Cu2+                serum         20                  0.7–1.5 ppm          70
$\text{Cl}^-$       serum         60                  50–150 meq/L         125
$\text{PO}_4^{3-}$  serum         200                 10–60 ppm            130
total CO2           serum         50                  10–50 mM             70
chlorpromazine      blood plasma  200                 1.5–9 $\mu \text{M}$ 24
enzymatic methods
glucose             blood serum   26.5                0.5–15 mM            60
urea                blood serum   30                  4–20 mM              60
ethanol             blood         30                  5–30 ppm             50

Source: Adapted from Valcárcel, M.; Luque de Castro, M. D. Flow-Injection Analysis: Principles and Practice, Ellis Horwood: Chichester, England, 1987.

Evaluation The majority of flow injection analysis applications are modifications of conventional titrimetric, spectrophotometric, and electrochemical methods of analysis; thus, it is appropriate to compare FIA methods to these conventional methods. The scale of operations for FIA allows for the routine analysis of minor and trace analytes, and for macro, meso, and micro samples. The ability to work with microliter injection volumes is useful when the sample is scarce. Conventional methods of analysis usually have smaller detection limits. The accuracy and precision of FIA methods are comparable to conventional methods of analysis; however, the precision of FIA is influenced by several variables that do not affect conventional methods, including the stability of the flow rate and the reproducibility of the sample’s injection. In addition, results from FIA are more susceptible to temperature variations. In general, the sensitivity of FIA is less than that for conventional methods of analysis for at least two reasons. First, as with chemical kinetic methods, measurements in FIA are made under nonequilibrium conditions when the signal has yet to reach its maximum value. Second, dispersion dilutes the sample as it moves through the manifold. Because the variables that affect sensitivity are known, we can design the FIA manifold to optimize the method’s sensitivity. Selectivity for an FIA method often is better than that for the corresponding conventional method of analysis. In many cases this is due to the kinetic nature of the measurement process, in which potential interferents may react more slowly than the analyte. Contamination from external sources also is less of a problem because reagents are stored in closed reservoirs and are pumped through a system of transport tubing that is closed to the environment. Finally, FIA is an attractive technique when considering time, cost, and equipment. When using an autosampler, a flow injection method can achieve very high sampling rates. A sampling rate of 20–120 samples/h is not unusual and sampling rates as high as 1700 samples/h are possible. Because the volume of the flow injection manifold is small, typically less than 2 mL, the consumption of reagents is substantially smaller than that for a conventional method. This can lead to a significant decrease in the cost per analysis. Flow injection analysis does require additional equipment—a pump, a loop injector, and a manifold—which adds to the cost of an analysis. For a review of the importance of flow injection analysis, see Hansen, E. H.; Miró, M. “How Flow-Injection Analysis (FIA) Over the Past 25 Years has Changed Our Way of Performing Chemical Analyses,” TRAC, Trends Anal. Chem. 2007, 26, 18–26. 33.03: Other Automated Methods of Analysis In the last two sections we introduced two examples of automated methods of analysis: a brief mention of automated titrators and a more extensive coverage of flow-injection analysis. In this section we consider three additional examples of automated methods of analysis: the stopped-flow analyzer, the centrifugal analyzer, and disposable single-test analyzers based on thin films, screen-printing, and paper. Stopped-Flow Analyzer A variety of instruments have been developed to automate the kinetic analysis of fast reactions. One example, which is shown in Figure 33.3.1, is the stopped-flow analyzer. The sample and the reagents are loaded into separate syringes and precisely measured volumes are dispensed into a mixing chamber by the action of a syringe drive. The continued action of the syringe drive pushes the mixture through an observation cell and into a stopping syringe. The back pressure generated when the stopping syringe hits the stopping block completes the mixing, after which the reaction’s progress is monitored spectrophotometrically. With a stopped-flow analyzer it is possible to complete the mixing of sample and reagent and to initiate the kinetic measurements in approximately 0.5 ms. By attaching an autosampler to the sample syringe it is possible to analyze up to several hundred samples per hour. Centrifugal Analyzer Another instrument for kinetic measurements is the centrifugal analyzer, a partial cross section of which is shown in Figure 33.3.2. The sample and the reagents are placed in separate wells, which are oriented radially around a circular transfer disk. As the centrifuge spins, the centrifugal force pulls the sample and the reagents into the cuvette where mixing occurs.
A single optical source and detector, located below and above the transfer disk’s outer edge, measures the absorbance each time the cuvette passes through the optical beam. When using a transfer disk with 30 cuvettes and rotating at 600 rpm, we can collect 10 data points per second for each sample. The ability to collect large amounts of data, and to collect it quickly, requires appropriate hardware and software. Not surprisingly, automated kinetic analyzers developed in parallel with advances in analog and digital circuitry—the hardware—and computer software for smoothing, integrating, and differentiating the analytical signal. For an early discussion of the importance of hardware and software, see Malmstadt, H. V.; Delaney, C. J.; Cordos, E. A. “Instruments for Rate Determinations,” Anal. Chem. 1972, 44(12), 79A–89A. Disposable, Single-Test Analyzers In comparison to other techniques, potentiometry provides a rapid, relatively low-cost means for analyzing samples. The limiting factor when analyzing a large number of samples is the need to rinse the electrode between samples. The use of inexpensive, disposable ion-selective electrodes can increase a lab’s sample throughput. Figure 33.3.3 shows one example of a disposable ISE for Ag+ [Tymecki, L.; Zwierkowska, E.; Głąb, S.; Koncki, R. Sens. Actuators B 2003, 96, 482–488]. Commercial instruments for measuring pH or potential are available in a variety of price ranges and include portable models for use in the field.
In our coverage of high-performance liquid chromatography (see Chapter 28) we noted that a typical packed column used porous silica particles with a mean diameter between 3 and 10 µm. A column that a manufacturer advertises as using 5 µm particles, however, will contain particles that are smaller and particles that are larger. Because the size of the particles affects the pressure needed to move the mobile phase through the column, knowledge of the distribution of particle sizes is of interest. In this chapter we consider different methods for determining particle size. • 34.1: Overview Particles come in many forms. Some are very small, such as nanoparticles with dimensions of 1-100 nm that might consist of just a few hundred atoms, and some are much larger. In this chapter we consider methods for determining the size of particles. • 34.2: Measuring Particle Size Using Sieves The particulates in a solid matrix are separated by size using one or more sieves. Sieves are available in a variety of mesh sizes, ranging from approximately 25 mm to 40 μm. By stacking together sieves of different mesh size—with the largest mesh at the top and the smallest mesh at the bottom—we can isolate particulates into several narrow size ranges. • 34.3: Measuring Particle Size by Sedimentation When a particle that is larger than 5 µm is placed in suspension it will slowly settle toward the bottom of its container due to the force of gravity, a process called sedimentation. The time it takes for a particle to move a fixed distance is inversely proportional to the difference in the density of the particle and the density of the fluid in which the particles are suspended, and inversely proportional to the square of the particle's diameter. Larger (and denser) particles, therefore, settle out more quickly than do smaller particles. • 34.4: Measuring Particle Size Using Image Analysis A photograph that includes a scale can, in principle, be used to estimate the size of the resin's beads. An optical microscope or electron microscope with a digital camera can capture images of the microscope's field of view. Software is used to differentiate the particles from the background, to establish the particle's boundaries, and to determine the particle's size. • 34.5: Measuring Particle Size Using Light Scattering The blue color of the sky during the day and the red color of the sun at sunset are the result of light scattered by small particles of dust, molecules of water, and other gases in the atmosphere. The scattering of radiation has been studied since the late 1800s, with applications beginning soon thereafter. The earliest quantitative applications of scattering, which date from the early 1900s, used the elastic scattering of light by colloidal suspensions to determine the concentration of colloidal particles. 34: Particle Size Determination What Do We Mean By a Particle? Particles come in many forms. Some are very small, such as nanoparticles with dimensions of 1-100 nm that might consist of just a few hundred atoms, and some are much larger, as in the beads of ion-exchange resin shown in Figure $1$, which range in size from approximately 300 µm to 850 µm. Or, consider soils, which generally are subdivided into four types of particles: clay, which has particles with diameters smaller than 2 µm; silt, which has particles with diameters that range from 2 µm to 50 µm; sand, which has particles with diameters from 50 µm to 2000 µm; and gravel, which has particles with diameters larger than 2000 µm.
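The soil classes above are simple threshold rules, which a short helper function (hypothetical, written only for illustration) makes explicit:

```python
def soil_class(d_um: float) -> str:
    """Classify a soil particle by its diameter in µm using the limits above."""
    if d_um < 2:
        return "clay"
    if d_um < 50:
        return "silt"
    if d_um < 2000:
        return "sand"
    return "gravel"

print([soil_class(d) for d in (0.5, 20.0, 300.0, 5000.0)])
# -> ['clay', 'silt', 'sand', 'gravel']
```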
We often picture particles as spherical, which means we can characterize them by reporting a single number: the particle's diameter. Many particulate materials, however, are not uniform in shape. Although many of the resin beads in Figure $1$ appear spherical—the largest bead in the small cluster at the left certainly looks spherical—other beads are distorted in shape, often appearing somewhat flattened. Still, it is not unusual to treat particles as if they are spheres. There are a number of reasons for this. If the method we use to determine size is not based on a static image (as is the case in Figure $1$), but on a suspension of particles that are rotating rapidly on the timescale of our measurement, then the particle's shape averages out to a sphere even if the particle itself is not a sphere. The size we report, in this case, is called an equivalent spherical diameter (ESD), which may vary from method to method. How do We Report Particle Size? Suppose we use a method to determine the size of 10000 particles. A simple way to display the data is to use a histogram that reports the frequency of particles in different size ranges, as we see in Figure $2$. We can characterize this distribution by reporting one or more measures of its central tendency and a measure of its spread. Typical measures of central tendency are the mode, which is the most common result, the median, which is the result that falls exactly in the middle of all recorded values, and the mean, which is the numerical average. For the data in Figure $2$, the mode is 0.255 µm (the center of the most heavily populated bin), the median is 0.265 µm, and the mean is 0.287 µm. If the distribution were symmetrical, then the mode, median, and mean would be identical; here, the distribution has a long tail to the right, which increases the mean relative to the median, and increases the mean and the median relative to the mode. A common way to report the spread is to use the width of the distribution at a frequency that is half of the maximum frequency; this is called the full-width-at-half-maximum (FWHM). For the data in Figure $2$, the maximum frequency is 1230 counts. The FWHM is at a frequency of 615 and runs from a diameter of 0.050 µm to 0.450 µm, or FWHM = 0.450 – 0.050 = 0.400 µm. Another way to characterize the distribution of particle sizes is to plot the cumulative frequency (as a percent) as a function of the diameter of the particles. Figure $3$ shows this for the data in Figure $2$ using both the binned data for the histogram (shown as the circular black points), and using all of the underlying data (shown as the dashed blue line). The red, purple, and green lines show the particle diameters that include 10% (D10), 50% (D50), and 90% (D90) of all particles. The D50 value of 0.264 µm indicates that half of the particles have diameters less than 0.264 µm and that half have diameters greater than 0.264 µm. One measure of the distribution's relative width is the span, which is defined as $\text{span} = \frac{\text{D90} - \text{D10}}{\text{D50}} = \frac{0.511 - 0.093}{0.264} = 1.59 \nonumber$
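Given a list of measured diameters, the percentile-based quantities defined above are straightforward to compute. The sketch below uses a simulated right-skewed distribution (a lognormal, chosen arbitrarily) in place of real data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated right-skewed particle diameters in µm (illustrative only).
d = rng.lognormal(mean=np.log(0.25), sigma=0.5, size=10_000)

d10, d50, d90 = np.percentile(d, [10, 50, 90])
span = (d90 - d10) / d50
print(f"D10 = {d10:.3f} µm, D50 = {d50:.3f} µm, "
      f"D90 = {d90:.3f} µm, span = {span:.2f}")
```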
How Can We Measure Particle Size? There are a variety of methods that we can use to determine the distribution in the sizes of a particulate material, more than we can cover in a single chapter. Instead, we will consider four common methods: sieving, sedimentation, imaging, and light scattering. When choosing a method, the size and form of the particles are important factors. Sieving, for example, is a practical choice when working with solid particulates that have diameters as small as 20 µm and as large as 125 mm (note the change from µm to mm!). Sedimentation is useful for particles with diameters down to approximately 1 µm, a limit we can extend to diameters as small as 1 nm by using a centrifuge. Image analysis is useful for particles between 0.5 µm and 1500 µm. Finally, light scattering is useful for particles as small as 0.8 nm.
34.02: Measuring Particle Size Using Sieves The particulates in a solid matrix are separated by size using one or more sieves (Figure \(1\)). Sieves are available in a variety of mesh sizes, ranging from approximately 25 mm to 40 μm. By stacking together sieves of different mesh size—with the largest mesh at the top and the smallest mesh at the bottom—we can isolate particulates into several narrow size ranges. Using the sieves in Figure \(1\), for example, we can separate a solid into particles with diameters >1700 μm, with diameters between 1700 μm and 500 μm, with diameters between 500 μm and 250 μm, and those with a diameter <250 μm. The sample is placed in the uppermost sieve and mechanical shaking is used to effect the separation. Because we cannot use more than a limited number of sieves in a single stack, the methods for analyzing the particle size data presented in Chapter 34.1 will be discrete in nature instead of continuous; thus, histograms will have a relatively small number of bins and a cumulative distribution will consist of a discrete number of points. One limitation of a sieve is that irregularly shaped particles are sized based on their two smallest dimensions. 34.03: Measuring Particle Size by Sedimentation When a particle that is larger than 5 µm is placed in suspension, it slowly settles toward the bottom of its container due to the force of gravity, a process called sedimentation. The time it takes for a particle to move a fixed distance is inversely proportional to the difference in the density of the particle and the density of the fluid in which the particles are suspended, and inversely proportional to the square of the particle's diameter. Larger (and denser) particles, therefore, settle out more quickly than do smaller particles, as we see in Figure \(1\). To follow the process of sedimentation, a light source is passed through a narrow portion of the sample and the amount of light passing through the sample is monitored as a function of time. Once the largest particles pass through the sampling zone, the transmittance of light increases. Standards with well-characterized particle sizes are used to calibrate the instrument. For smaller particles, which may remain suspended due to Brownian motion, sedimentation can be carried out using a centrifuge, a technique known as differential centrifugal separation (DCS). As shown in Figure \(2\), the sample is introduced in the center of a disk that contains the fluid through which the particles will move. As the disk spins, larger particles move more quickly, eventually reaching the detector located at the outer edge of the disk.
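These proportionalities are summarized by Stokes' law for the terminal settling velocity of a small sphere. The sketch below assumes silica particles settling through water; the densities, viscosity, and settling distance are illustrative assumptions, not values from the text:

```python
# Stokes' law: v = (rho_p - rho_f) * g * d**2 / (18 * eta)
g = 9.81        # acceleration due to gravity, m/s^2
rho_p = 2650.0  # particle density, kg/m^3 (assumed: silica)
rho_f = 1000.0  # fluid density, kg/m^3 (water)
eta = 1.0e-3    # viscosity of water, Pa·s

def settling_time(d_m: float, h: float = 0.1) -> float:
    """Time in seconds for a sphere of diameter d_m (m) to settle h (m)."""
    v = (rho_p - rho_f) * g * d_m**2 / (18 * eta)  # terminal velocity, m/s
    return h / v

# Larger particles settle out much more quickly than smaller particles.
for d_um in (50, 10, 5, 1):
    print(f"{d_um:>2} µm particle: {settling_time(d_um * 1e-6):,.0f} s")
```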
34.04: Measuring Particle Size Using Image Analysis The chapter overview includes a photograph of an ion-exchange resin's beads. The photograph includes a scale and, in principle, we could use the photograph and scale to estimate the size of the resin's beads. Although the estimates likely are pretty crude, this still serves as an example of image analysis, in which we equip an optical microscope or electron microscope with a digital camera that can capture images of the microscope's field of view. Software is used to differentiate the particles from the background, to establish the particle's boundaries, and to determine the particle's size. As shown in Figure \(1a\), the sample is dispersed on an optical platform and light is passed through the optical platform, where it is magnified and focused before the image is captured using a camera. The optical platform can be manually or automatically moved in the xy-plane to capture more images, as in Figure \(1b\). The software then sorts the particles into groups based on size and reports a count of particles in each group, as in Figure \(1c\). Because the particles remain immobile, this is called static image analysis. One limitation of static image analysis is that it generally samples a small number of particles, as the particles must be sufficiently dispersed on the optical platform to allow the individual particles to be imaged, analyzed, and counted. In dynamic image analysis, the sample is placed in a flow cell set perpendicular to the camera and the light source. Images are collected using a high-speed flash and a fast shutter speed to capture a sequence of images for analysis. By essentially creating an infinite optical platform, dynamic image analysis can achieve analysis rates of 10000 particles per minute.
34.05: Measuring Particle Size Using Light Scattering The blue color of the sky during the day and the red color of the sun at sunset are the result of light scattered by small particles of dust, molecules of water, and other gases in the atmosphere. The efficiency of a photon’s scattering depends on its wavelength. We see the sky as blue during the day because violet and blue light scatter to a greater extent than other, longer wavelengths of light. For the same reason, the sun appears red at sunset because red light is less efficiently scattered and is more likely to pass through the atmosphere than other wavelengths of light. The scattering of radiation has been studied since the late 1800s, with applications beginning soon thereafter. The earliest quantitative applications of scattering, which date from the early 1900s, used the elastic scattering of light by colloidal suspensions to determine the concentration of colloidal particles. Origin of Scattering If we send a focused, monochromatic beam of radiation with a wavelength $\lambda$ through a medium of particles with dimensions $< 1.5 \lambda$, the radiation scatters in all directions. For example, visible radiation of 500 nm is scattered by particles as large as 750 nm in the longest dimension. Two general categories of scattering are recognized. In elastic scattering, radiation is first absorbed by the particles and then emitted without undergoing a change in the radiation’s energy. When the radiation emerges with a change in energy, the scattering is inelastic. Only elastic scattering is considered in this chapter. Elastic scattering is divided into two types: Rayleigh, or small-particle scattering, and large-particle scattering. Rayleigh scattering occurs when the scattering particle’s largest dimension is less than 5% of the radiation’s wavelength. The intensity of the scattered radiation is proportional to its frequency to the fourth power, $\nu^4$—which accounts for the greater scattering of blue light than red light—and is distributed symmetrically (Figure 34.5.1a). For larger particles, scattering increases in the forward direction and decreases in the backward direction as the result of constructive and destructive interferences (Figure 34.5.1b). Rayleigh Scattering Small-particle, or Rayleigh, scattering measured at an angle $\theta$ is expressed as the ratio of the intensity of the scattered light, $I$, to the intensity of the light source, $I_o$ $R_{\theta} = \frac{I}{I_o} = K r^6 \nonumber$ where $r$ is the radius of the particle and $K$ is a constant that is a function of the angle of scattering, the wavelength of light used, the refractive index of the particle, and the distance to the particle, $R$. Dynamic Light Scattering In dynamic light scattering (DLS), we use a laser as a light source (see Figure $2$ for an illustration). When the light from the source reaches the sample, which is in a sample cell, it scatters in all directions, as shown in Figure $1$. A detector is placed at a fixed angle to collect the light that scatters at that angle. The resulting intensity of scattered light is measured as a function of time. Because the particles in the sample are moving due to Brownian motion, the intensity of light varies with time, yielding a noisy signal. Smaller particles diffuse more rapidly than larger particles, which means that the fluctuations in intensity for a small particle occur more rapidly than those for a large particle, as seen in Figure $3$.
To process the data in DLS, we examine the correlation of the signal with itself over small increments of time. This is accomplished by shifting the signal by a small amount (we call this the delay time, $\tau$) and computing the correlation between the original signal and the delayed signal. For short delay times, the correlation in intensities is close to 1 because the particles have not had time to move, and for longer delay times the correlation in intensities is close to 0 because the particles have moved significantly; in between these limits, the correlation undergoes an exponential decay. Figure $4$ shows examples of the resulting correlograms for large particles and for small particles. The correlation function, $G(\tau)$, is defined as $G(\tau) = A[1 + Be^{-2 \Gamma \tau}] \label{gtau}$ The terms $A$ and $B$ are, respectively, the baseline and the intercept of the correlation function, and $\Gamma = Dq^2$, where $D$ is the translational diffusion coefficient and where $q$ is equivalent to $(4 \pi n/ \lambda_0) \sin(\theta/2)$, where $n$ is the refractive index, $\lambda_0$ is the wavelength of the laser, and $\theta$ is the angle at which scattered light is collected. The relationship between the size of the particles and the translational diffusion coefficient is given by the Stokes-Einstein equation $d = \frac{kT}{3 \pi \eta D} \nonumber$ where $k$ is Boltzmann's constant, $T$ is the absolute temperature, and $\eta$ is the viscosity. Fitting one or more equations for $G(\tau)$ to the correlogram yields the distribution of particle sizes.
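To see how these equations turn a measured decay rate into a particle size, consider the following sketch; the instrument settings and the fitted value of $\Gamma$ are assumptions chosen only for illustration:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann's constant, J/K
T = 298.15         # absolute temperature, K
eta = 0.89e-3      # viscosity of water, Pa·s (assumed dispersant)
n = 1.33           # refractive index of water
lam0 = 633e-9      # laser wavelength, m (assumed He-Ne source)
theta = np.deg2rad(90.0)  # detection angle (assumed)

q = (4 * np.pi * n / lam0) * np.sin(theta / 2)  # scattering vector, 1/m

Gamma = 4.4e3      # decay rate from fitting G(tau), 1/s (hypothetical)
D = Gamma / q**2   # translational diffusion coefficient, m^2/s

d = kB * T / (3 * np.pi * eta * D)  # Stokes-Einstein diameter, m
print(f"D = {D:.2e} m^2/s, d = {d * 1e9:.0f} nm")
```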
Static Light Scattering In dynamic light scattering we are interested in how the intensity of scattering changes with time; in static light scattering, we are interested in how the average intensity of scattered light varies with the concentration of particles, $c$, and the angle, $\theta$, at which scattering is measured. The extent of scattering, $R_{\theta}$, for each combination of $c$ and $\theta$ is plotted on the y-axis as $K c / R_{\theta}$, where $K$ is a constant that is a function of the solvent's refractive index, the change in refractive index with concentration, and Avogadro's number; the x-axis is $\sin^2(\theta/2) + Sc$, where the value of $S$ is chosen to maintain a separation between the data. A typical plot, which is known as a Zimm plot, is shown in Figure $5$. Each of the solid brown points gives the value of $K c / R_{\theta}$ for a combination of concentration and angle. For each angle, the change in $K c / R_{\theta}$ is extrapolated back to a concentration of zero (the dashed green lines) and for each concentration, the change in $K c / R_{\theta}$ is extrapolated back to an angle of zero (the dashed blue lines). The resulting extrapolation of $c_0$ to $\theta = 0$ gives an intercept that is the inverse of the particle's molecular weight, $M$, and the slope of the extrapolation of $\theta_0$ to a concentration of zero gives the particle's radius of gyration, $R_g$ (it is moving, after all), which serves as its effective particle size.
The appendices gathered here provide a lengthy introduction to the analysis of data, a variety of tables that contain critical values for the statistical analysis of data and standard oxidation-reduction potentials, a discussion of activity, and a list of acronyms and abbreviations used in this textbook and in other resources for analytical chemistry. 35: Appendices The material in this appendix is adapted from the textbook Chemometrics Using R, which is available through LibreTexts. In addition to the material here, that textbook contains instructions on how to use the statistical programming language R to carry out the calculations. Types of Data At the heart of any analysis is data. Sometimes our data describes a category and sometimes it is numerical; sometimes our data conveys order and sometimes it does not; sometimes our data has an absolute reference and sometimes it has an arbitrary reference; and sometimes our data takes on discrete values and sometimes it takes on continuous values. Whatever its form, when we gather data our intent is to extract from it information that can help us solve a problem. Ways to Describe Data If we are to consider how to describe data, then we need some data with which we can work. Ideally, we want data that is easy to gather and easy to understand. It also is helpful if you can gather similar data on your own so you can repeat what we cover here. A simple system that meets these criteria is to analyze the contents of bags of M&Ms. Although this system may seem trivial, keep in mind that reporting the percentage of yellow M&Ms in a bag is analogous to reporting the concentration of Cu2+ in a sample of an ore or water: both express the amount of an analyte present in a unit of its matrix. At the beginning of this chapter we identified four contrasting ways to describe data: categorical vs. numerical, ordered vs. unordered, absolute reference vs. arbitrary reference, and discrete vs. continuous. To give meaning to these descriptive terms, let’s consider the data in Table $1$, which includes the year the bag was purchased and analyzed, the weight listed on the package, the type of M&Ms, the number of yellow M&Ms in the bag, the percentage of the M&Ms that were red, the total number of M&Ms in the bag, and their corresponding ranks.

Table $1$. Distribution of Yellow and Red M&Ms in Bags of M&Ms.

bag id  year  weight (oz)  type    number yellow  % red  total M&Ms  rank (for total)
a       2006  1.74         peanut  2              27.8   18          sixth
b       2006  1.74         peanut  3              4.35   23          fourth
c       2000  0.80         plain   1              22.7   22          fifth
d       2000  0.80         plain   5              20.8   24          third
e       1994  10.0         plain   56             23.0   331         second
f       1994  10.0         plain   63             21.9   333         first

The entries in Table $1$ are organized by column and by row. The first row—sometimes called the header row—identifies the variables that make up the data. Each additional row is the record for one sample and each entry in a sample’s record provides information about one of its variables; thus, the data in the table lists the result for each variable and for each sample. Categorical vs. Numerical Data Of the variables included in Table $1$, some are categorical and some are numerical. A categorical variable provides qualitative information that we can use to describe the samples relative to each other, or that we can use to organize the samples into groups (or categories). For the data in Table $1$, bag id, type, and rank are categorical variables.
A numerical variable provides quantitative information that we can use in a meaningful calculation; for example, we can use the number of yellow M&Ms and the total number of M&Ms to calculate a new variable that reports the percentage of M&Ms that are yellow. For the data in Table $1$, year, weight (oz), number yellow, % red M&Ms, and total M&Ms are numerical variables. We can also use a numerical variable to assign samples to groups. For example, we can divide the plain M&Ms in Table $1$ into two groups based on the sample’s weight. What makes a numerical variable more interesting, however, is that we can use it to make quantitative comparisons between samples; thus, we can report that there are $14.4 \times$ as many plain M&Ms in a 10-oz. bag as there are in a 0.8-oz. bag. $\frac{333 + 331}{24 + 22} = \frac{664}{46} = 14.4 \nonumber$ Although we could classify year as a categorical variable—not an unreasonable choice as it could serve as a useful way to group samples—we list it here as a numerical variable because it can serve as a useful predictive variable in a regression analysis. On the other hand, rank is not a numerical variable—even if we rewrite the ranks as numerals—as there are no meaningful calculations we can complete using this variable. Nominal vs. Ordinal Data Categorical variables are described as nominal or ordinal. A nominal categorical variable does not imply a particular order; an ordinal categorical variable, on the other hand, conveys a meaningful sense of order. For the categorical variables in Table $1$, bag id and type are nominal variables, and rank is an ordinal variable. Ratio vs. Interval Data A numerical variable is described as either ratio or interval depending on whether it has (ratio) or does not have (interval) an absolute reference. Although we can complete meaningful calculations using any numerical variable, the type of calculation we can perform depends on whether or not the variable’s values have an absolute reference. A numerical variable has an absolute reference if it has a meaningful zero—that is, a zero that means a measured quantity of none—against which we reference all other measurements of that variable. For the numerical variables in Table $1$, weight (oz), number yellow, % red, and total M&Ms are ratio variables because each has a meaningful zero; year is an interval variable because its scale is referenced to an arbitrary point in time, 1 BCE, and not to the beginning of time. For a ratio variable, we can make meaningful absolute and relative comparisons between two results, but only meaningful absolute comparisons for an interval variable. For example, consider sample e, which was collected in 1994 and has 331 M&Ms, and sample d, which was collected in 2000 and has 24 M&Ms. We can report a meaningful absolute comparison for both variables: sample e is six years older than sample d and sample e has 307 more M&Ms than sample d. We also can report a meaningful relative comparison for the total number of M&Ms—there are $\frac{331}{24} = 13.8 \times \nonumber$ as many M&Ms in sample e as in sample d—but we cannot report a meaningful relative comparison for year because a sample collected in 2000 is not $\frac{2000}{1994} = 1.003 \times \nonumber$ older than a sample collected in 1994. Discrete vs. Continuous Data Finally, the granularity of a numerical variable provides one more way to describe our data. For example, we can describe a numerical variable as discrete or continuous.
A numerical variable is discrete if it can take on only specific values—typically, but not always, an integer value—between its limits; a continuous variable can take on any possible value within its limits. For the numerical data in Table $1$, year, number yellow, and total M&Ms are discrete in that each is limited to integer values. The numerical variables weight (oz) and % red, on the other hand, are continuous variables. Note that weight is a continuous variable even if the device we use to measure weight yields discrete values. Visualizing Data The old saying that "a picture is worth 1000 words" may not be universally true, but it is true when it comes to the analysis of data. A good visualization of data, for example, allows us to see patterns and relationships that are less evident when we look at data arranged in a table, and it provides a powerful way to tell our data's story. Suppose we want to study the composition of 1.69-oz (47.9-g) packages of plain M&Ms. We obtain 30 bags of M&Ms (ten from each of three stores) and remove the M&Ms from each bag one-by-one, recording the number of blue, brown, green, orange, red, and yellow M&Ms. We also record the number of yellow M&Ms in the first five candies drawn from each bag, and record the actual net weight of the M&Ms in each bag. Table $2$ summarizes the data collected on these samples. The bag id identifies the order in which the bags were opened and analyzed.

Table $2$. Analysis of Plain M&Ms in 47.9 g Bags.

bag  store   blue  brown  green  orange  red  yellow  yellow_first_five  net_weight
1    CVS     3     18     1      5       7    23      2                  49.287
2    CVS     3     14     9      7       8    15      0                  48.870
3    Target  4     14     5      10      10   16      1                  51.250
4    Kroger  3     13     5      4       15   16      0                  48.692
5    Kroger  3     16     5      7       8    18      1                  48.777
6    Kroger  2     12     6      10      17   7       1                  46.405
7    CVS     13    11     2      8       6    17      1                  49.693
8    CVS     13    12     7      10      7    8       2                  49.391
9    Kroger  6     17     5      4       8    16      1                  48.196
10   Kroger  8     13     2      5       10   17      1                  47.326
11   Target  9     20     1      4       12   13      3                  50.974
12   Target  11    12     0      8       4    23      0                  50.081
13   CVS     3     15     4      6       14   13      2                  47.841
14   Kroger  4     17     5      6       14   10      2                  48.377
15   Kroger  9     13     3      8       14   8       0                  47.004
16   CVS     8     15     1      10      9    15      1                  50.037
17   CVS     10    11     5      10      7    13      2                  48.599
18   Kroger  1     17     6      7       11   14      1                  48.625
19   Target  7     17     2      8       4    18      1                  48.395
20   Kroger  9     13     1      8       7    22      1                  51.730
21   Target  7     17     0      15      4    15      3                  50.405
22   CVS     12    14     4      11      9    5       2                  47.305
23   Target  9     19     0      5       12   12      0                  49.477
24   Target  5     13     3      4       15   16      0                  48.027
25   CVS     7     13     0      4       15   16      2                  48.212
26   Target  6     15     1      13      10   14      1                  51.682
27   CVS     5     17     6      4       8    19      1                  50.802
28   Kroger  1     21     6      5       10   14      0                  49.055
29   Target  4     12     6      5       13   14      2                  46.577
30   Target  15    8      9      6       10   8       1                  48.317

Having collected our data, we next examine it for possible problems, such as missing values (Did we forget to record the number of brown M&Ms in any of our samples?), for errors introduced when we recorded the data (Is the decimal point recorded incorrectly for any of the net weights?), or for unusual results (Is it really the case that this bag has only yellow M&Ms?). We also examine our data to identify interesting observations that we may wish to explore (It appears that most net weights are greater than the net weight listed on the individual packages. Why might this be? Is the difference significant?). When our data set is small we usually can identify possible problems and interesting observations without much difficulty; however, for a large data set, this becomes a challenge. Instead of trying to examine individual values, we can look at our results visually.
While it may be difficult to find a single, odd data point when we have to individually review 1000 samples, it often jumps out when we look at the data using one or more of the approaches we will explore in this chapter. Dot Plots A dot plot displays data for one variable, with each sample’s value plotted on the x-axis. The individual points are organized along the y-axis with the first sample at the bottom and the last sample at the top. Figure $1$ shows a dot plot for the number of brown M&Ms in the 30 bags of M&Ms from Table $2$. The distribution of points appears random, as there is no correlation between the sample id and the number of brown M&Ms. We would be surprised if we discovered that the points were arranged from the lower-left to the upper-right, as this would imply that the order in which we open the bags determines whether they have many or few brown M&Ms. Stripcharts A dot plot provides a quick way to give us confidence that our data are free from unusual patterns, but at the cost of space because we use the y-axis to include the sample id as a variable. A stripchart uses the same x-axis as a dot plot, but does not use the y-axis to distinguish between samples. Because all samples with the same number of brown M&Ms will appear in the same place—making it impossible to distinguish them from each other—we stack the points vertically to spread them out, as shown in Figure $2$. Both the dot plot in Figure $1$ and the stripchart in Figure $2$ suggest that there is a smaller density of points at the lower limit and the upper limit of our results. We see, for example, that there is just one bag each with 8, 16, 18, 19, 20, and 21 brown M&Ms, but there are six bags each with 13 and 17 brown M&Ms. Because a stripchart does not use the y-axis to provide meaningful categorical information, we can easily display several stripcharts at once. Figure $3$ shows this for the data in Table $2$. Instead of stacking the individual points, we jitter them by applying a small, random offset to each point. Among the things we learn from this stripchart are that only brown and yellow M&Ms have counts of greater than 20 and that only blue and green M&Ms have counts of three or fewer M&Ms. Box and Whisker Plots The stripchart in Figure $3$ is easy for us to examine because the number of samples, 30 bags, and the number of M&Ms per bag are sufficiently small that we can see the individual points. As the density of points becomes greater, a stripchart becomes less useful. A box and whisker plot provides a similar view but focuses on the data in terms of the range of values that encompass the middle 50% of the data. Figure $4$ shows the box and whisker plot for brown M&Ms using the data in Table $2$. The 30 individual samples are superimposed as a stripchart. The central box divides the x-axis into three regions: bags with fewer than 13 brown M&Ms (seven samples), bags with between 13 and 17 brown M&Ms (19 samples), and bags with more than 17 brown M&Ms (four samples). The box's limits are set so that it includes at least the middle 50% of our data. In this case, the box contains 19 of the 30 samples (63%) of the bags, because moving either end of the box toward the middle results in a box that includes less than 50% of the samples. The difference between the box's upper limit (17) and its lower limit (13) is called the interquartile range (IQR). The thick line in the box is the median, or middle value (more on this and the IQR in the next chapter).
The dashed lines at either end of the box are called whiskers, and they extend to the largest or the smallest result that is within $\pm 1.5 \times \text{IQR}$ of the box's right or left edge, respectively. Because a box and whisker plot does not use the y-axis to provide meaningful categorical information, we can easily display several plots in the same frame. Figure $5$ shows this for the data in Table $2$. Note that when a value falls outside of a whisker, as is the case here for yellow M&Ms, it is flagged by displaying it as an open circle. One use of a box and whisker plot is to examine the distribution of the individual samples, particularly with respect to symmetry. With the exception of the single sample that falls outside of the whiskers, the distribution of yellow M&Ms appears symmetrical: the median is near the center of the box and the whiskers extend equally in both directions. The distribution of the orange M&Ms is asymmetrical: half of the samples have 4–7 M&Ms (just four possible outcomes) and half have 7–15 M&Ms (nine possible outcomes), suggesting that the distribution is skewed toward higher numbers of orange M&Ms (see Chapter 5 for more information about the distribution of samples). Figure $6$ shows box-and-whisker plots for yellow M&Ms grouped according to the store where the bags of M&Ms were purchased. Although the box and whisker plots are quite different in terms of the relative sizes of the boxes and the relative length of the whiskers, the dot plots suggest that the distribution of the underlying data is relatively similar in that most bags contain 12–18 yellow M&Ms and just a few bags deviate from these limits. These observations are reassuring because we do not expect the choice of store to affect the composition of bags of M&Ms. If we saw evidence that the choice of store affected our results, then we would look more closely at the bags themselves for evidence of a poorly controlled variable, such as type (Did we accidentally purchase bags of peanut butter M&Ms from one store?) or the product’s lot number (Did the manufacturer change the composition of colors between lots?). Bar Plots Although a dot plot, a stripchart, and a box-and-whisker plot provide some qualitative evidence of how a variable’s values are distributed—we will have more to say about the distribution of data in Chapter 5—they are less useful when we need a more quantitative picture of the distribution. For this we can use a bar plot that displays a count of each discrete outcome. Figure $7$ shows bar plots for orange and for yellow M&Ms using the data in Table $2$. Here we see that the most common number of orange M&Ms per bag is four, which also is the smallest number of orange M&Ms per bag, and that there is a general decrease in the number of bags as the number of orange M&Ms per bag increases. For the yellow M&Ms, the most common number of M&Ms per bag is 16, which falls near the middle of the range of yellow M&Ms. Histograms A bar plot is a useful way to look at the distribution of discrete results, such as the counts of orange or yellow M&Ms, but it is not useful for continuous data where each result is unique. A histogram, in which we display the number of results that fall within a sequence of equally spaced bins, provides a view that is similar to that of a bar plot but that works with continuous data.
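Before turning to the figure, it may help to see what a histogram computes, namely a count of results per bin. Here is a brief sketch that bins the 30 net weights from Table $2$ into an arbitrary choice of 1-g bins:

```python
import numpy as np

net_weight = np.array([
    49.287, 48.870, 51.250, 48.692, 48.777, 46.405, 49.693, 49.391,
    48.196, 47.326, 50.974, 50.081, 47.841, 48.377, 47.004, 50.037,
    48.599, 48.625, 48.395, 51.730, 50.405, 47.305, 49.477, 48.027,
    48.212, 51.682, 50.802, 49.055, 46.577, 48.317])

# Count the number of bags that fall within each 1-g bin from 46 g to 52 g.
counts, edges = np.histogram(net_weight, bins=np.arange(46.0, 53.0, 1.0))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.0f}-{hi:.0f} g: {'*' * n} ({n})")
# The 48-49 g bin is the most populated, holding 11 of the 30 bags.
```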
Figure $8$, for example, shows a histogram for the net weights of the 30 bags of M&Ms in Table $2$. Individual values are shown by the vertical hash marks at the bottom of the histogram. Summarizing Data In the last section we used data collected from 30 bags of M&Ms to explore different ways to visualize data. In this section we consider several ways to summarize data using the net weights of the same bags of M&Ms. Here is the raw data.

Table $3$: Net Weights for 30 Bags of M&Ms.

49.287  48.870  51.250  48.692  48.777  46.405
49.693  49.391  48.196  47.326  50.974  50.081
47.841  48.377  47.004  50.037  48.599  48.625
48.395  51.730  50.405  47.305  49.477  48.027
48.212  51.682  50.802  49.055  46.577  48.317

Without completing any calculations, what conclusions can we make by just looking at this data? Here are a few:

• All net weights are greater than 46 g and less than 52 g.
• As we see in Figure $9$, a box-and-whisker plot (overlaid with a stripchart) and a histogram suggest that the distribution of the net weights is reasonably symmetric.
• The absence of any points beyond the whiskers of the box-and-whisker plot suggests that there are no unusually large or unusually small net weights.

Both visualizations provide a good qualitative picture of the data, suggesting that the individual results are scattered around some central value with more results closer to that central value than at a distance from it. Neither visualization, however, describes the data quantitatively. What we need is a convenient way to summarize the data by reporting where the data is centered and how varied the individual results are around that center. Where is the Center? There are two common ways to report the center of a data set: the mean and the median. The mean, $\overline{Y}$, is the numerical average obtained by adding together the results for all n observations and dividing by the number of observations $\overline{Y} = \frac{ \sum_{i = 1}^n Y_{i} } {n} = \frac{49.287 + 48.870 + \cdots + 48.317} {30} = 48.980 \text{ g} \nonumber$ The median, $\widetilde{Y}$, is the middle value after we order our observations from smallest-to-largest, as we show here for our data.

Table $4$: The Data from Table $3$ Sorted from Smallest-to-Largest in Value.

46.405  46.577  47.004  47.305  47.326  47.841
48.027  48.196  48.212  48.317  48.377  48.395
48.599  48.625  48.692  48.777  48.870  49.055
49.287  49.391  49.477  49.693  50.037  50.081
50.405  50.802  50.974  51.250  51.682  51.730

If we have an odd number of samples, then the median is simply the middle value, or $\widetilde{Y} = Y_{\frac{n + 1}{2}} \nonumber$ where n is the number of samples. If, as is the case here, n is even, then $\widetilde{Y} = \frac {Y_{\frac{n}{2}} + Y_{\frac{n}{2}+1}} {2} = \frac {48.692 + 48.777}{2} = 48.734 \text{ g} \nonumber$ When our data has a symmetrical distribution, as we believe is the case here, then the mean and the median will have similar values. What is the Variation of the Data About the Center? There are five common measures of the variation of data about its center: the variance, the standard deviation, the range, the interquartile range, and the median absolute deviation. The variance, s2, is an average squared deviation of the individual observations relative to the mean $s^{2} = \frac { \sum_{i = 1}^n \big(Y_{i} - \overline{Y} \big)^{2} } {n - 1} = \frac { \big(49.287 - 48.980\big)^{2} + \cdots + \big(48.317 - 48.980\big)^{2} } {30 - 1} = 2.052 \nonumber$ and the standard deviation, s, is the square root of the variance, which gives it the same units as the mean.
$s = \sqrt{\frac { \sum_{i = 1}^n \big(Y_{i} - \overline{Y} \big)^{2} } {n - 1}} = \sqrt{\frac { \big(49.287 - 48.980\big)^{2} + \cdots + \big(48.317 - 48.980\big)^{2} } {30 - 1}} = 1.432 \nonumber$ The range, w, is the difference between the largest and the smallest value in our data set. $w = 51.730 \text{ g} - 46.405 \text{ g} = 5.325 \text{ g} \nonumber$ The interquartile range, IQR, is the difference between the median of the bottom 25% of observations and the median of the top 25% of observations; that is, it provides a measure of the range of values that spans the middle 50% of observations. There is no single, standard formula for calculating the IQR, and different algorithms yield slightly different results. We will adopt the algorithm described here:

1. Divide the sorted data set in half; if there is an odd number of values, then remove the median for the complete data set. For our data, the lower half is

Table $5$: The Lower Half of the Data in Table $4$.

46.405  46.577  47.004  47.305  47.326
47.841  48.027  48.196  48.212  48.317
48.377  48.395  48.599  48.625  48.692

and the upper half is

Table $6$: The Upper Half of the Data in Table $4$.

48.777  48.870  49.055  49.287  49.391
49.477  49.693  50.037  50.081  50.405
50.802  50.974  51.250  51.682  51.730

2. Find FL, the median for the lower half of the data, which for our data is 48.196 g.

3. Find FU, the median for the upper half of the data, which for our data is 50.037 g.

4. The IQR is the difference between FU and FL. $F_{U} - F_{L} = 50.037 \text{ g} - 48.196 \text{ g} = 1.841 \text{ g} \nonumber$

The median absolute deviation, MAD, is the median of the absolute deviations of each observation from the median of all observations. To find the MAD for our set of 30 net weights, we first subtract the median from each sample in Table $3$.

Table $7$: The Results of Subtracting the Median from Each Value in Table $3$.

0.5525   0.1355   2.5155   -0.0425  0.0425   -2.3295
0.9585   0.6565   -0.5385  -1.4085  2.2395   1.3465
-0.8935  -0.3575  -1.7305  1.3025   -0.1355  -0.1095
-0.3395  2.9955   1.6705   -1.4295  0.7425   -0.7075
-0.5225  2.9475   2.0675   0.3205   -2.1575  -0.4175

Next we take the absolute value of each difference and sort them from smallest-to-largest.

Table $8$: The Data in Table $7$ After Taking the Absolute Value.

0.0425  0.0425  0.1095  0.1355  0.1355  0.3205
0.3395  0.3575  0.4175  0.5225  0.5385  0.5525
0.6565  0.7075  0.7425  0.8935  0.9585  1.3025
1.3465  1.4085  1.4295  1.6705  1.7305  2.0675
2.1575  2.2395  2.3295  2.5155  2.9475  2.9955

Finally, we report the median for these sorted values as $\frac{0.7425 + 0.8935}{2} = 0.818 \nonumber$
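As a check on these calculations, the sketch below reproduces each summary statistic with NumPy, using the same lower-half/upper-half algorithm for the IQR adopted above; it also applies the $1.5 \times \text{IQR}$ whisker rule from the box-and-whisker plots:

```python
import numpy as np

w = np.array([
    49.287, 48.870, 51.250, 48.692, 48.777, 46.405, 49.693, 49.391,
    48.196, 47.326, 50.974, 50.081, 47.841, 48.377, 47.004, 50.037,
    48.599, 48.625, 48.395, 51.730, 50.405, 47.305, 49.477, 48.027,
    48.212, 51.682, 50.802, 49.055, 46.577, 48.317])

print(f"mean = {w.mean():.3f} g, median = {np.median(w):.3f} g")
print(f"s^2 = {w.var(ddof=1):.3f}, s = {w.std(ddof=1):.3f}")  # n - 1 denominator
print(f"range = {w.max() - w.min():.3f} g")

y = np.sort(w)
f_l = np.median(y[:len(y) // 2])  # median of the lower half of the data
f_u = np.median(y[len(y) // 2:])  # median of the upper half of the data
iqr = f_u - f_l
print(f"IQR = {iqr:.3f} g")

mad = np.median(np.abs(w - np.median(w)))
print(f"MAD = {mad:.3f} g")

# The 1.5 x IQR whisker rule flags no net weights as unusual, consistent
# with the box-and-whisker plot in Figure 9.
outliers = w[(w < f_l - 1.5 * iqr) | (w > f_u + 1.5 * iqr)]
print("flagged:", outliers)
```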
Robust vs. Non-Robust Measures of the Center and Variation About the Center

A good question to ask is why we might desire more than one way to report the center of our data and the variation in our data about the center. Suppose that the result for the last of our 30 samples was reported as 483.17 instead of 48.317. Whether this is an accidental shifting of the decimal point or a true result is not relevant to us here; what matters is its effect on what we report. Here is a summary of the effect of this one value on each of our ways of summarizing our data.

Table $9$: Effect on Summary Statistics of Changing Last Value in Table $3$ From 48.317 g to 483.17 g.

statistic             original data    new data
mean                  48.980           63.475
median                48.734           48.824
variance              2.052            6285.938
standard deviation    1.433            79.280
range                 5.325            436.765
IQR                   1.841            1.885
MAD                   0.818            0.926

Note that the mean, the variance, the standard deviation, and the range are very sensitive to the change in the last result, but the median, the IQR, and the MAD are not. The median, the IQR, and the MAD are considered robust statistics because they are less sensitive to an unusual result; the others are, of course, non-robust statistics. Both types of statistics have value to us, a point we will return to from time-to-time.

The Distribution of Data

When we measure something, such as the percentage of yellow M&Ms in a bag of M&Ms, we expect two things:

• that there is an underlying “true” value that our measurements should approximate, and
• that the results of individual measurements will show some variation about that "true" value

Visualizations of data—such as dot plots, stripcharts, box-and-whisker plots, bar plots, histograms, and scatterplots—often suggest there is an underlying structure to our data. For example, we have seen that the distribution of yellow M&Ms in bags of M&Ms is more or less symmetrical around its median, while the distribution of orange M&Ms was skewed toward higher values. This underlying structure, or distribution, of our data affects how we choose to analyze our data. In this chapter we will take a closer look at several ways in which data are distributed.

Terminology

Before we consider different types of distributions, let's define some key terms. You may wish, as well, to review the discussion of different types of data in Chapter 2.

Populations and Samples

A population includes every possible measurement we could make on a system, while a sample is the subset of a population on which we actually make measurements. These definitions are fluid. A single bag of M&Ms is a population if we are interested only in that specific bag, but it is but one sample from a box that contains a gross (144) of individual bags. That box, itself, can be a population, or it can be one sample from a much larger production lot. And so on.

Discrete Distributions and Continuous Distributions

In a discrete distribution the possible results take on a limited set of specific values that are independent of how we make our measurements. When we determine the number of yellow M&Ms in a bag, the results are limited to integer values. We may find 13 yellow M&Ms or 24 yellow M&Ms, but we cannot obtain a result of 15.43 yellow M&Ms. For a continuous distribution the result of a measurement can take on any possible value between a lower limit and an upper limit, even though our measuring device has a limited precision; thus, when we weigh a bag of M&Ms on a three-digit balance and obtain a result of 49.287 g we know that its true mass is greater than 49.2865... g and less than 49.2875... g.

Theoretical Models for the Distribution of Data

There are four important types of distributions that we will consider in this chapter: the uniform distribution, the binomial distribution, the Poisson distribution, and the normal, or Gaussian, distribution. In the previous sections we used the analysis of bags of M&Ms to explore ways to visualize data and to summarize data. Here we will use the same data set to explore the distribution of data.

Uniform Distribution

In a uniform distribution, all outcomes are equally probable. Suppose the population of M&Ms has a uniform distribution.
If this is the case, then, with six colors, we expect each color to appear with a probability of 1/6 or 16.7%. Figure $10$ shows a comparison of the theoretical results if we draw 1699 M&Ms—the total number of M&Ms in our sample of 30 bags—from a population with a uniform distribution (on the left) to the actual distribution of the 1699 M&Ms in our sample (on the right). It seems unlikely that the population of M&Ms has a uniform distribution of colors!

Binomial Distribution

A binomial distribution shows the probability of obtaining a particular result in a fixed number of trials, where the probability of that result occurring in a single trial is known. Mathematically, a binomial distribution is defined by the equation

$P(X, N) = \frac {N!} {X! (N - X)!} \times p^{X} \times (1 - p)^{N - X} \nonumber$

where P(X,N) is the probability that the event happens X times in N trials, and where p is the probability that the event happens in a single trial. The binomial distribution has a theoretical mean, $\mu$, and a theoretical variance, $\sigma^2$, of

$\mu = Np \quad \quad \quad \sigma^2 = Np(1 - p) \nonumber$

Figure $11$ compares the expected binomial distribution for drawing 0, 1, 2, 3, 4, or 5 yellow M&Ms in the first five M&Ms—assuming that the probability of drawing a yellow M&M is 435/1699, the ratio of the number of yellow M&Ms to the total number of M&Ms—to the actual distribution of results. The similarity between the theoretical and the actual results seems evident; in a later section we will consider ways to test this claim.

Poisson Distribution

The binomial distribution is useful if we wish to model the probability of finding a fixed number of yellow M&Ms in a sample of M&Ms of fixed size—such as the first five M&Ms that we draw from a bag—but not the probability of finding a fixed number of yellow M&Ms in a single bag because there is some variability in the total number of M&Ms per bag. A Poisson distribution gives the probability that a given number of events will occur in a fixed interval in time or space if the event has a known average rate and if each new event is independent of the preceding event. Mathematically a Poisson distribution is defined by the equation

$P(X, \lambda) = \frac {e^{-\lambda} \lambda^X} {X !} \nonumber$

where $P(X, \lambda)$ is the probability that an event happens X times given the event’s average rate, $\lambda$. The Poisson distribution has a theoretical mean, $\mu$, and a theoretical variance, $\sigma^2$, that are each equal to $\lambda$. The bar plot in Figure $12$ shows the actual distribution of green M&Ms in 35 small bags of M&Ms (as reported by M. A. Xu-Friedman “Illustrating concepts of quantal analysis with an intuitive classroom model,” Adv. Physiol. Educ. 2013, 37, 112–116). Superimposed on the bar plot is the theoretical Poisson distribution based on their reported average rate of 3.4 green M&Ms per bag. The similarity between the theoretical and the actual results seems evident; in Chapter 6 we will consider ways to test this claim.
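Because both distributions are defined by simple closed-form equations, their probabilities are easy to compute directly. Here is a minimal Python sketch of both probability functions, evaluated for the yellow and green M&M examples above; it assumes nothing beyond the standard library.

```python
from math import comb, exp, factorial

def binom_pmf(x, n, p):
    """P(X, N): probability of x successes in n trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    """P(X, lambda): probability of x events given an average rate lambda."""
    return exp(-lam) * lam**x / factorial(x)

# probability of drawing 0-5 yellow M&Ms in the first five M&Ms drawn
p_yellow = 435 / 1699
for x in range(6):
    print(x, round(binom_pmf(x, 5, p_yellow), 4))

# probability of finding exactly two green M&Ms in a bag,
# given an average rate of 3.4 green M&Ms per bag
print(round(poisson_pmf(2, 3.4), 4))  # ~0.193
```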
Normal Distribution

A uniform distribution, a binomial distribution, and a Poisson distribution predict the probability of a discrete event, such as the probability of finding exactly two green M&Ms in the next bag of M&Ms that we open. Not all of the data we collect is discrete. The net weights of bags of M&Ms are an example of continuous data as the mass of an individual bag is not restricted to a discrete set of allowed values. In many cases we can model continuous data using a normal (or Gaussian) distribution, which gives the probability of obtaining a particular outcome, P(x), from a population with a known mean, $\mu$, and a known variance, $\sigma^2$. Mathematically a normal distribution is defined by the equation

$P(x) = \frac {1} {\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2/(2 \sigma^2)} \nonumber$

Figure $13$ shows the expected normal distribution for the net weights of our sample of 30 bags of M&Ms if we assume that their mean, $\overline{X}$, of 48.98 g and standard deviation, s, of 1.433 g are good predictors of the population’s mean, $\mu$, and standard deviation, $\sigma$. Given the small sample of 30 bags, the agreement between the model and the data seems reasonable.

The Central Limit Theorem

Suppose we have a population for which one of its properties has a uniform distribution where every result between 0 and 1 is equally probable. If we analyze 10,000 samples we should not be surprised to find that the distribution of these 10,000 results looks uniform, as shown by the histogram on the left side of Figure $14$. If we collect 1000 pooled samples—each of which consists of 10 individual samples for a total of 10,000 individual samples—and report the average results for these 1000 pooled samples, we see something interesting as their distribution, as shown by the histogram on the right, looks remarkably like a normal distribution. When we draw single samples from a uniform distribution, each possible outcome is equally likely, which is why we see the distribution on the left. When we draw a pooled sample that consists of 10 individual samples, however, the average values are more likely to be near the middle of the distribution’s range, as we see on the right, because the pooled sample likely includes values drawn from both the lower half and the upper half of the uniform distribution. This tendency for a normal distribution to emerge when we pool samples is known as the central limit theorem. As shown in Figure $15$, we see a similar effect with populations that follow a binomial distribution or a Poisson distribution.

You might reasonably ask whether the central limit theorem is important as it is unlikely that we will complete 1000 analyses, each of which is the average of 10 individual trials. This is deceiving. When we acquire a sample of soil, for example, it consists of many individual particles each of which is an individual sample of the soil. Our analysis of this sample, therefore, is the mean for a large number of individual soil particles. Because of this, the central limit theorem is relevant.
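The central limit theorem is easy to demonstrate by simulation. The sketch below, which assumes nothing beyond Python's standard library, draws 10,000 single values from a uniform distribution and then 1000 pooled samples of 10 values each; the standard deviation of the pooled means shrinks by roughly $\sqrt{10}$, and a histogram of them looks approximately normal.

```python
import random
import statistics

random.seed(1)  # arbitrary seed, for reproducibility

# 10,000 single draws from a uniform distribution on [0, 1)
singles = [random.random() for _ in range(10_000)]

# 1000 pooled samples, each the mean of 10 individual draws
pooled = [statistics.mean(random.random() for _ in range(10))
          for _ in range(1000)]

print(statistics.stdev(singles))  # ~0.29, close to sqrt(1/12) for a uniform distribution
print(statistics.stdev(pooled))   # ~0.09, smaller by roughly sqrt(10)
```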
Uncertainty of Data

In the last section we examined four ways in which the individual samples we collect and analyze are distributed about a central value: a uniform distribution, a binomial distribution, a Poisson distribution, and a normal distribution. We also learned that regardless of how individual samples are distributed, the distribution of averages for multiple samples often follows a normal distribution. This tendency for a normal distribution to emerge when we report averages for multiple samples is known as the central limit theorem. In this chapter we look more closely at the normal distribution—examining some of its properties—and consider how we can use these properties to say something more meaningful about our data than simply reporting a mean and a standard deviation.

Properties of a Normal Distribution

Mathematically a normal distribution is defined by the equation

$P(x) = \frac {1} {\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2/(2 \sigma^2)} \nonumber$

where $P(x)$ is the probability of obtaining a result, $x$, from a population with a known mean, $\mu$, and a known standard deviation, $\sigma$. Figure $16$ shows the normal distribution curves for $\mu = 0$ with standard deviations of 5, 10, and 20.

Because the equation for a normal distribution depends solely on the population’s mean, $\mu$, and its standard deviation, $\sigma$, the probability that a sample drawn from a population has a value between any two arbitrary limits is the same for all populations. For example, Figure $17$ shows that 68.26% of all samples drawn from a normally distributed population have values within the range $\mu \pm 1\sigma$, and only 0.14% have values greater than $\mu + 3\sigma$. This feature of a normal distribution—that the area under the curve between any two limits defined in terms of $\sigma$ is the same for all values of $\sigma$—allows us to create a probability table (see Appendix 2) based on the relative deviation, $z$, between a limit, x, and the mean, $\mu$.

$z = \frac {x - \mu} {\sigma} \nonumber$

The value of $z$ gives the area under the curve between that limit and the distribution’s closest tail, as shown in Figure $18$.

Example $1$

Suppose we know that $\mu$ is 5.5833 ppb Pb and that $\sigma$ is 0.0558 ppb Pb for a particular standard reference material (SRM). What is the probability that we will obtain a result that is greater than 5.650 ppb if we analyze a single, random sample drawn from the SRM?

Solution

Figure $19$ shows the normal distribution curve given values of 5.5833 ppb Pb for $\mu$ and of 0.0558 ppb Pb for $\sigma$. The shaded area in the figure is the probability of obtaining a sample with a concentration of Pb greater than 5.650 ppb. To determine the probability, we first calculate $z$

$z = \frac {x - \mu} {\sigma} = \frac {5.650 - 5.5833} {0.0558} = 1.195 \nonumber$

Next, we look up the probability in Appendix 2 for this value of $z$, which is the average of 0.1170 (for $z = 1.19$) and 0.1151 (for $z = 1.20$), or a probability of 0.1160; thus, we expect that 11.60% of samples will provide a result greater than 5.650 ppb Pb.

Example $2$

Example $1$ considers a single limit—the probability that a result exceeds a single value. But what if we want to determine the probability that a sample has between 5.580 ppb Pb and 5.625 ppb Pb?

Solution

In this case we are interested in the shaded area shown in Figure $20$. First, we calculate $z$ for the upper limit

$z = \frac {5.625 - 5.5833} {0.0558} = 0.747 \nonumber$

and then we calculate $z$ for the lower limit

$z = \frac {5.580 - 5.5833} {0.0558} = -0.059 \nonumber$

Then, we look up the probability in Appendix 2 that a result will exceed our upper limit of 5.625, which is 0.2275, or 22.75%, and the probability that a result will be less than our lower limit of 5.580, which is 0.4765, or 47.65%. The total unshaded area is 70.40% of the total area, so the shaded area corresponds to a probability of

$100.00 - 22.75 - 47.65 = 100.00 - 70.40 = 29.6 \% \nonumber$
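If you prefer to let software handle the table lookups, Python's standard library includes a NormalDist class whose cumulative distribution function reproduces both examples; the small differences from the table-based answers reflect rounding in Appendix 2.

```python
from statistics import NormalDist

srm = NormalDist(mu=5.5833, sigma=0.0558)  # ppb Pb for the SRM

# Example 1: probability of a result greater than 5.650 ppb
print(1 - srm.cdf(5.650))               # ~0.116, or 11.6%

# Example 2: probability of a result between 5.580 and 5.625 ppb
print(srm.cdf(5.625) - srm.cdf(5.580))  # ~0.296, or 29.6%
```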
Confidence Intervals

In the previous section, we learned how to predict the probability of obtaining a particular outcome if our data are normally distributed with a known $\mu$ and a known $\sigma$. For example, we estimated that 11.60% of samples drawn at random from a standard reference material will have a concentration of Pb greater than 5.650 ppb given a $\mu$ of 5.5833 ppb and a $\sigma$ of 0.0558 ppb. In essence, we determined how many standard deviations 5.650 is from $\mu$ and used this to define the probability given the standard area under a normal distribution curve.

We can look at this in a different way by asking the following question: If we collect a single sample at random from a population with a known $\mu$ and a known $\sigma$, within what range of values might we reasonably expect to find the sample’s result 95% of the time? Rearranging the equation

$z = \frac {x - \mu} {\sigma} \nonumber$

and solving for $x$ gives

$x = \mu \pm z \sigma = 5.5833 \pm (1.96)(0.0558) = 5.5833 \pm 0.1094 \nonumber$

where a $z$ of 1.96 corresponds to 95% of the area under the curve; we call this a 95% confidence interval for a single sample.

It generally is a poor idea to draw a conclusion from the result of a single experiment; instead, we usually collect several samples and ask the question this way: If we collect $n$ random samples from a population with a known $\mu$ and a known $\sigma$, within what range of values might we reasonably expect to find the mean of these samples 95% of the time? We might reasonably expect that the standard deviation for the mean of several samples is smaller than the standard deviation for a set of individual samples; indeed it is and it is given as

$\sigma_{\bar{x}} = \frac {\sigma} {\sqrt{n}} \nonumber$

where $\frac {\sigma} {\sqrt{n}}$ is called the standard error of the mean. For example, if we collect three samples from the standard reference material described above, then we expect that the mean for these three samples will fall within a range

$\bar{x} = \mu \pm z \sigma_{\bar{X}} = \mu \pm \frac {z \sigma} {\sqrt{n}} = 5.5833 \pm \frac{(1.96)(0.0558)} {\sqrt{3}} = 5.5833 \pm 0.0631 \nonumber$

that is $\pm 0.0631$ ppb around $\mu$, a range that is smaller than that of $\pm 0.1094$ ppb when we analyze individual samples. Note that the relative value to us of increasing the sample’s size diminishes as $n$ increases because of the square root term, as shown in Figure $21$.

Our treatment thus far assumes we know $\mu$ and $\sigma$ for the parent population, but we rarely know these values; instead, we examine samples drawn from the parent population and ask the following question: Given the sample’s mean, $\bar{x}$, and its standard deviation, $s$, what is our best estimate of the population’s mean, $\mu$, and its standard deviation, $\sigma$? To make this estimate, we replace the population’s standard deviation, $\sigma$, with the standard deviation, $s$, for our samples, replace the population’s mean, $\mu$, with the mean, $\bar{x}$, for our samples, and replace $z$ with $t$, where the value of $t$ depends on the number of samples, $n$

$\bar{x} = \mu \pm \frac{ts}{\sqrt{n}} \nonumber$

and then rearrange the equation to solve for $\mu$.

$\mu = \bar{x} \pm \frac {ts} {\sqrt{n}} \nonumber$

We call this a confidence interval. Values for $t$ are available in tables (see Appendix 3) and depend on the probability level, $\alpha$, where $(1 − \alpha) \times 100$ is the confidence level, and the degrees of freedom, $n − 1$; note that for any probability level, $t \longrightarrow z$ as $n \longrightarrow \infty$.
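The sketch below reproduces the z-based intervals above and shows how the width of the interval narrows as n grows; it uses only the standard library. For a t-based interval you would swap the z value for the appropriate t from Appendix 3, or compute it with a package such as SciPy.

```python
from statistics import NormalDist

mu, sigma = 5.5833, 0.0558  # ppb Pb

# z for a 95% confidence interval (two-tailed): ~1.96
z = NormalDist().inv_cdf(0.975)

for n in (1, 3, 10):
    half_width = z * sigma / n**0.5
    print(n, round(half_width, 4))  # 0.1094, 0.0631, 0.0346
```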
We need to give special attention to what this confidence interval means and to what it does not mean:

• It does not mean that there is a 95% probability that the population’s mean is in the range $\mu = \bar{x} \pm \frac {ts} {\sqrt{n}}$ because our measurements may be biased or the normal distribution may be inappropriate for our system.
• It does provide our best estimate of the population’s mean, $\mu$, given our analysis of $n$ samples drawn at random from the parent population; a different sample, however, will give a different confidence interval and, therefore, a different estimate for $\mu$.

Testing the Significance of Data

A confidence interval is a useful way to report the result of an analysis because it sets limits on the expected result. In the absence of determinate error, or bias, a confidence interval based on a sample’s mean indicates the range of values in which we expect to find the population’s mean. When we report a 95% confidence interval for the mass of a penny as 3.117 g ± 0.047 g, for example, we are stating that there is only a 5% probability that the penny’s expected mass is less than 3.070 g or more than 3.164 g. Because a confidence interval is a statement of probability, it allows us to consider comparative questions, such as these: “Are the results for a newly developed method to determine cholesterol in blood significantly different from those obtained using a standard method?” “Is there a significant variation in the composition of rainwater collected at different sites downwind from a coal-burning utility plant?” In this chapter we introduce a general approach that uses experimental data to ask and answer such questions, an approach we call significance testing.

The reliability of significance testing recently has received much attention—see Nuzzo, R. “Scientific Method: Statistical Errors,” Nature, 2014, 506, 150–152 for a general discussion of the issues—so it is appropriate to begin this chapter by noting the need to ensure that our data and our research question are compatible so that we do not read more into a statistical analysis than our data allows; see Leek, J. T.; Peng, R. D. “What is the Question?” Science, 2015, 347, 1314–1315 for a useful discussion of six common research questions.

In the context of analytical chemistry, significance testing often accompanies an exploratory data analysis (“Is there a reason to suspect that there is a difference between these two analytical methods when applied to a common sample?”) or an inferential data analysis (“Is there a reason to suspect that there is a relationship between these two independent measurements?”). A statistically significant result for these types of analytical research questions generally leads to the design of additional experiments that are better suited to making predictions or to explaining an underlying causal relationship. A significance test is the first step toward building a greater understanding of an analytical problem, not the final answer to that problem!

Significance Testing

Let’s consider the following problem. To determine if a medication is effective in lowering blood glucose concentrations, we collect two sets of blood samples from a patient. We collect one set of samples immediately before we administer the medication, and we collect the second set of samples several hours later. After we analyze the samples, we report their respective means and variances. How do we decide if the medication was successful in lowering the patient’s concentration of blood glucose?
One way to answer this question is to construct a normal distribution curve for each sample, and to compare the two curves to each other. Three possible outcomes are shown in Figure $22$. In Figure $\PageIndex{22a}$, there is a complete separation of the two normal distribution curves, which suggests the two samples are significantly different from each other. In Figure $\PageIndex{22b}$, the normal distribution curves for the two samples almost completely overlap each other, which suggests the difference between the samples is insignificant. Figure $\PageIndex{22c}$, however, presents us with a dilemma. Although the means for the two samples seem different, the overlap of their normal distribution curves suggests that a significant number of possible outcomes could belong to either distribution. In this case the best we can do is to make a statement about the probability that the samples are significantly different from each other.

The process by which we determine the probability that there is a significant difference between two samples is called significance testing or hypothesis testing. Before we discuss specific examples let's first establish a general approach to conducting and interpreting a significance test.

Constructing a Significance Test

The purpose of a significance test is to determine whether the difference between two or more results is sufficiently large that we are comfortable stating that the difference cannot be explained by indeterminate errors. The first step in constructing a significance test is to state the problem as a yes or no question, such as “Is this medication effective at lowering a patient’s blood glucose levels?” A null hypothesis and an alternative hypothesis define the two possible answers to our yes or no question. The null hypothesis, H0, is that indeterminate errors are sufficient to explain any differences between our results. The alternative hypothesis, HA, is that the differences in our results are too great to be explained by random error and that they must be determinate in nature. We test the null hypothesis, which we either retain or reject. If we reject the null hypothesis, then we must accept the alternative hypothesis and conclude that the difference is significant.

Failing to reject a null hypothesis is not the same as accepting it. We retain a null hypothesis because we have insufficient evidence to prove it incorrect. It is impossible to prove that a null hypothesis is true. This is an important point and one that is easy to forget. To appreciate this point let’s use this data for the mass of 100 circulating United States pennies.
Table $10$: Masses for a Sample of 100 Circulating U.S. Pennies.

Penny  Weight (g)  Penny  Weight (g)  Penny  Weight (g)  Penny  Weight (g)
1      3.126       26     3.073       51     3.101       76     3.086
2      3.140       27     3.084       52     3.049       77     3.123
3      3.092       28     3.148       53     3.082       78     3.115
4      3.095       29     3.047       54     3.142       79     3.055
5      3.080       30     3.121       55     3.082       80     3.057
6      3.065       31     3.116       56     3.066       81     3.097
7      3.117       32     3.005       57     3.128       82     3.066
8      3.034       33     3.115       58     3.112       83     3.113
9      3.126       34     3.103       59     3.085       84     3.102
10     3.057       35     3.086       60     3.086       85     3.033
11     3.053       36     3.103       61     3.084       86     3.112
12     3.099       37     3.049       62     3.104       87     3.103
13     3.065       38     2.998       63     3.107       88     3.198
14     3.059       39     3.063       64     3.093       89     3.103
15     3.068       40     3.055       65     3.126       90     3.126
16     3.060       41     3.181       66     3.138       91     3.111
17     3.078       42     3.108       67     3.131       92     3.126
18     3.125       43     3.114       68     3.120       93     3.052
19     3.090       44     3.121       69     3.100       94     3.113
20     3.100       45     3.105       70     3.099       95     3.085
21     3.055       46     3.078       71     3.097       96     3.117
22     3.105       47     3.147       72     3.091       97     3.142
23     3.063       48     3.104       73     3.077       98     3.031
24     3.083       49     3.146       74     3.178       99     3.083
25     3.065       50     3.095       75     3.054       100    3.104

After looking at the data we might propose the following null and alternative hypotheses.

H0: The mass of a circulating U.S. penny is between 2.900 g and 3.200 g

HA: The mass of a circulating U.S. penny may be less than 2.900 g or more than 3.200 g

To test the null hypothesis we find a penny and determine its mass. If the penny’s mass is 2.512 g then we can reject the null hypothesis and accept the alternative hypothesis. Suppose that the penny’s mass is 3.162 g. Although this result increases our confidence in the null hypothesis, it does not prove that the null hypothesis is correct because the next penny we sample might weigh less than 2.900 g or more than 3.200 g.

After we state the null and the alternative hypotheses, the second step is to choose a confidence level for the analysis. The confidence level defines the probability that we will incorrectly reject the null hypothesis when it is, in fact, true. We can express this as our confidence that we are correct in rejecting the null hypothesis (e.g. 95%), or as the probability that we are incorrect in rejecting the null hypothesis. For the latter, the confidence level is given as $\alpha$, where

$\alpha = 1 - \frac {\text{confidence level (%)}} {100} \nonumber$

For a 95% confidence level, $\alpha$ is 0.05.

The third step is to calculate an appropriate test statistic and to compare it to a critical value. The test statistic’s critical value defines a breakpoint between values that lead us to reject or to retain the null hypothesis, which is the fourth, and final, step of a significance test. As we will see in the sections that follow, how we calculate the test statistic depends on what we are comparing.

The four steps for a statistical analysis of data using a significance test:

1. Pose a question, and state the null hypothesis, H0, and the alternative hypothesis, HA.
2. Choose a confidence level for the statistical analysis.
3. Calculate an appropriate test statistic and compare it to a critical value.
4. Either retain the null hypothesis, or reject it and accept the alternative hypothesis.

One-Tailed and Two-Tailed Significance Tests

Suppose we want to evaluate the accuracy of a new analytical method. We might use the method to analyze a Standard Reference Material that contains a known concentration of analyte, $\mu$. We analyze the standard several times, obtaining a mean value, $\overline{X}$, for the analyte’s concentration.
Our null hypothesis is that there is no difference between $\overline{X}$ and $\mu$

$H_0 \text{: } \overline{X} = \mu \nonumber$

If we conduct the significance test at $\alpha = 0.05$, then we retain the null hypothesis if a 95% confidence interval around $\overline{X}$ contains $\mu$. If the alternative hypothesis is

$H_\text{A} \text{: } \overline{X} \neq \mu \nonumber$

then we reject the null hypothesis and accept the alternative hypothesis if $\mu$ lies in the shaded areas at either end of the sample’s probability distribution curve (Figure $\PageIndex{23a}$). Each of the shaded areas accounts for 2.5% of the area under the probability distribution curve, for a total of 5%. This is a two-tailed significance test because we reject the null hypothesis for values of $\mu$ at either extreme of the sample’s probability distribution curve.

We can write the alternative hypothesis in two additional ways

$H_\text{A} \text{: } \overline{X} > \mu \nonumber$

$H_\text{A} \text{: } \overline{X} < \mu \nonumber$

rejecting the null hypothesis if $\mu$ falls within the shaded areas shown in Figure $\PageIndex{23b}$ or Figure $\PageIndex{23c}$, respectively. In each case the shaded area represents 5% of the area under the probability distribution curve. These are examples of a one-tailed significance test.

For a fixed confidence level, a two-tailed significance test is the more conservative test because rejecting the null hypothesis requires a larger difference between the results we are comparing. In most situations we have no particular reason to expect that one result must be larger (or must be smaller) than the other result. This is the case, for example, when we evaluate the accuracy of a new analytical method. A two-tailed significance test, therefore, usually is the appropriate choice.

We reserve a one-tailed significance test for a situation where we specifically are interested in whether one result is larger (or smaller) than the other result. For example, a one-tailed significance test is appropriate if we are evaluating a medication’s ability to lower blood glucose levels. In this case we are interested only in whether the glucose levels after we administer the medication are less than the glucose levels before we initiated treatment. If a patient’s blood glucose level is greater after we administer the medication, then we know the answer—the medication did not work—and we do not need to conduct a statistical analysis.

Errors in Significance Testing

Because a significance test relies on probability, its interpretation is subject to error. In a significance test, $\alpha$ defines the probability of rejecting a null hypothesis that is true. When we conduct a significance test at $\alpha = 0.05$, there is a 5% probability that we will incorrectly reject the null hypothesis. This is known as a type 1 error, and its risk is always equivalent to $\alpha$. A type 1 error in a two-tailed or a one-tailed significance test corresponds to the shaded areas under the probability distribution curves in Figure $23$.

A second type of error occurs when we retain a null hypothesis even though it is false. This is a type 2 error, and the probability of its occurrence is $\beta$. Unfortunately, in most cases we cannot calculate or estimate the value for $\beta$. The probability of a type 2 error, however, is inversely proportional to the probability of a type 1 error. Minimizing a type 1 error by decreasing $\alpha$ increases the likelihood of a type 2 error.
When we choose a value for $\alpha$ we must compromise between these two types of error. Most of the examples in this text use a 95% confidence level ($\alpha = 0.05$) because this usually is a reasonable compromise between type 1 and type 2 errors for analytical work. It is not unusual, however, to use a more stringent (e.g. $\alpha = 0.01$) or a more lenient (e.g. $\alpha = 0.10$) confidence level when the situation calls for it.

Significance Tests for Normal Distributions

A normal distribution is the most common distribution for the data we collect. Because the area between any two limits of a normal distribution curve is well defined, it is straightforward to construct and evaluate significance tests.

Comparing $\overline{X}$ to $\mu$

One way to validate a new analytical method is to analyze a sample that contains a known amount of analyte, $\mu$. To judge the method’s accuracy we analyze several portions of the sample, determine the average amount of analyte in the sample, $\overline{X}$, and use a significance test to compare $\overline{X}$ to $\mu$. The null hypothesis is that the difference between $\overline{X}$ and $\mu$ is explained by indeterminate errors that affect our determination of $\overline{X}$. The alternative hypothesis is that the difference between $\overline{X}$ and $\mu$ is too large to be explained by indeterminate error.

$H_0 \text{: } \overline{X} = \mu \nonumber$

$H_A \text{: } \overline{X} \neq \mu \nonumber$

The test statistic is texp, which we substitute into the confidence interval for $\mu$

$\mu = \overline{X} \pm \frac {t_\text{exp} s} {\sqrt{n}} \nonumber$

Rearranging this equation and solving for $t_\text{exp}$

$t_\text{exp} = \frac {|\mu - \overline{X}| \sqrt{n}} {s} \nonumber$

gives the value for $t_\text{exp}$ when $\mu$ is at either the right edge or the left edge of the sample's confidence interval (Figure $\PageIndex{24a}$). To determine if we should retain or reject the null hypothesis, we compare the value of texp to a critical value, $t(\alpha, \nu)$, where $\alpha$ is the confidence level and $\nu$ is the degrees of freedom for the sample. The critical value $t(\alpha, \nu)$ defines the largest confidence interval explained by indeterminate error. If $t_\text{exp} > t(\alpha, \nu)$, then our sample’s confidence interval is greater than that explained by indeterminate errors (Figure $\PageIndex{24b}$). In this case, we reject the null hypothesis and accept the alternative hypothesis. If $t_\text{exp} \leq t(\alpha, \nu)$, then our sample’s confidence interval is smaller than that explained by indeterminate error, and we retain the null hypothesis (Figure $\PageIndex{24c}$). Example $3$ provides a typical application of this significance test, which is known as a t-test of $\overline{X}$ to $\mu$. You will find values for $t(\alpha, \nu)$ in Appendix 3.

Example $3$

Before determining the amount of Na2CO3 in a sample, you decide to check your procedure by analyzing a standard sample that is 98.76% w/w Na2CO3. Five replicate determinations of the %w/w Na2CO3 in the standard gives the following results

$98.71 \% \quad 98.59 \% \quad 98.62 \% \quad 98.44 \% \quad 98.58 \%$

Using $\alpha = 0.05$, is there any evidence that the analysis is giving inaccurate results?

Solution

The mean and standard deviation for the five trials are

$\overline{X} = 98.59 \quad \quad \quad s = 0.0973 \nonumber$

Because there is no reason to believe that the results for the standard must be larger or smaller than $\mu$, a two-tailed t-test is appropriate.
The null hypothesis and alternative hypothesis are

$H_0 \text{: } \overline{X} = \mu \quad \quad \quad H_\text{A} \text{: } \overline{X} \neq \mu \nonumber$

The test statistic, texp, is

$t_\text{exp} = \frac {|\mu - \overline{X}|\sqrt{n}} {s} = \frac {|98.76 - 98.59| \sqrt{5}} {0.0973} = 3.91 \nonumber$

The critical value for t(0.05, 4) from Appendix 3 is 2.78. Since texp is greater than t(0.05, 4), we reject the null hypothesis and accept the alternative hypothesis. At the 95% confidence level the difference between $\overline{X}$ and $\mu$ is too large to be explained by indeterminate sources of error, which suggests there is a determinate source of error that affects the analysis.

Note

There is another way to interpret the result of this t-test. Knowing that texp is 3.91 and that there are 4 degrees of freedom, we use Appendix 3 to estimate the value of $\alpha$ that corresponds to a t($\alpha$, 4) of 3.91. From Appendix 3, t(0.02, 4) is 3.75 and t(0.01, 4) is 4.60. Although we can reject the null hypothesis at the 98% confidence level, we cannot reject it at the 99% confidence level. For a discussion of the advantages of this approach, see J. A. C. Sterne and G. D. Smith “Sifting the evidence—what’s wrong with significance tests?” BMJ 2001, 322, 226–231.

Earlier we made the point that we must exercise caution when we interpret the result of a statistical analysis. We will keep returning to this point because it is an important one. Having determined that a result is inaccurate, as we did in Example $3$, the next step is to identify and to correct the error. Before we expend time and money on this, however, we first should critically examine our data. For example, the smaller the value of s, the larger the value of texp. If the standard deviation for our analysis is unrealistically small, then the probability of a type 1 error increases. Including a few additional replicate analyses of the standard and reevaluating the t-test may strengthen our evidence for a determinate error, or it may show us that there is no evidence for a determinate error.
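A quick way to check this calculation is to compute texp from the raw replicates, as in the Python sketch below; working from the unrounded mean gives a slightly larger value than the 3.91 obtained above from the rounded mean of 98.59.

```python
from math import sqrt
from statistics import mean, stdev

# five replicate %w/w Na2CO3 results for a 98.76% w/w standard
x = [98.71, 98.59, 98.62, 98.44, 98.58]
mu = 98.76

t_exp = abs(mu - mean(x)) * sqrt(len(x)) / stdev(x)
print(round(t_exp, 2))  # ~3.95 vs. a critical t(0.05, 4) of 2.78

# with SciPy installed, scipy.stats.ttest_1samp(x, mu) returns the same
# statistic (signed) together with a two-tailed p-value
```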
Comparing $s^2$ to $\sigma^2$

If we regularly analyze a particular sample, we may be able to establish an expected variance, $\sigma^2$, for the analysis. This often is the case, for example, in a clinical lab that analyzes hundreds of blood samples each day. A few replicate analyses of a single sample gives a sample variance, s2, whose value may or may not differ significantly from $\sigma^2$. We can use an F-test to evaluate whether a difference between s2 and $\sigma^2$ is significant. The null hypothesis is $H_0 \text{: } s^2 = \sigma^2$ and the alternative hypothesis is $H_\text{A} \text{: } s^2 \neq \sigma^2$. The test statistic for evaluating the null hypothesis is Fexp, which is given as either

$F_\text{exp} = \frac {s^2} {\sigma^2} \text{ if } s^2 > \sigma^2 \text{ or } F_\text{exp} = \frac {\sigma^2} {s^2} \text{ if } \sigma^2 > s^2 \nonumber$

depending on whether s2 is larger or smaller than $\sigma^2$. This way of defining Fexp ensures that its value is always greater than or equal to one. If the null hypothesis is true, then Fexp should equal one; however, because of indeterminate errors, Fexp usually is greater than one.

A critical value, $F(\alpha, \nu_\text{num}, \nu_\text{den})$, is the largest value of Fexp that we can attribute to indeterminate error given the specified significance level, $\alpha$, and the degrees of freedom for the variance in the numerator, $\nu_\text{num}$, and the variance in the denominator, $\nu_\text{den}$. The degrees of freedom for s2 is n – 1, where n is the number of replicates used to determine the sample’s variance, and the degrees of freedom for $\sigma^2$ is defined as infinity, $\infty$. Critical values of F for $\alpha = 0.05$ are listed in Appendix 4 for both one-tailed and two-tailed F-tests.

Example $4$

A manufacturer’s process for analyzing aspirin tablets has a known variance of 25. A sample of 10 aspirin tablets is selected and analyzed for the amount of aspirin, yielding the following results in mg aspirin/tablet.

$254 \quad 249 \quad 252 \quad 252 \quad 249 \quad 249 \quad 250 \quad 247 \quad 251 \quad 252$

Determine whether there is evidence of a significant difference between the sample’s variance and the expected variance at $\alpha = 0.05$.

Solution

The variance for the sample of 10 tablets is 4.3. The null hypothesis and alternative hypotheses are

$H_0 \text{: } s^2 = \sigma^2 \quad \quad \quad H_\text{A} \text{: } s^2 \neq \sigma^2 \nonumber$

and the value for Fexp is

$F_\text{exp} = \frac {\sigma^2} {s^2} = \frac {25} {4.3} = 5.8 \nonumber$

The critical value for F(0.05, $\infty$, 9) from Appendix 4 is 3.333. Since Fexp is greater than F(0.05, $\infty$, 9), we reject the null hypothesis and accept the alternative hypothesis that there is a significant difference between the sample’s variance and the expected variance. One explanation for the difference might be that the aspirin tablets were not selected randomly.

Comparing Variances for Two Samples

We can extend the F-test to compare the variances for two samples, A and B, by rewriting our equation for Fexp as

$F_\text{exp} = \frac {s_A^2} {s_B^2} \nonumber$

defining A and B so that the value of Fexp is greater than or equal to 1.

Example $5$

The table below shows results for two experiments to determine the mass of a circulating U.S. penny. Determine whether there is a difference in the variances of these analyses at $\alpha = 0.05$.

First Experiment       Second Experiment
Penny  Mass (g)        Penny  Mass (g)
1      3.080           1      3.052
2      3.094           2      3.141
3      3.107           3      3.083
4      3.056           4      3.083
5      3.112           5      3.048
6      3.174
7      3.198

Solution

The standard deviations for the two experiments are 0.051 for the first experiment (A) and 0.037 for the second experiment (B). The null and alternative hypotheses are

$H_0 \text{: } s_A^2 = s_B^2 \quad \quad \quad H_\text{A} \text{: } s_A^2 \neq s_B^2 \nonumber$

and the value of Fexp is

$F_\text{exp} = \frac {s_A^2} {s_B^2} = \frac {(0.051)^2} {(0.037)^2} = \frac {0.00260} {0.00137} = 1.90 \nonumber$

From Appendix 4 the critical value for F(0.05, 6, 4) is 9.197. Because Fexp < F(0.05, 6, 4), we retain the null hypothesis. There is no evidence at $\alpha = 0.05$ to suggest that the difference in variances is significant.
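Both F-tests are easy to reproduce from the raw data; here is a minimal Python sketch using only the standard library. Small differences from the worked values reflect rounding of the intermediate variances and standard deviations in the examples.

```python
from statistics import variance

# Example 4: sample variance vs. an expected variance of 25
aspirin = [254, 249, 252, 252, 249, 249, 250, 247, 251, 252]
s2, sigma2 = variance(aspirin), 25
print(round(max(s2, sigma2) / min(s2, sigma2), 1))  # ~5.8

# Example 5: comparing the variances of two samples
a = [3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198]  # first experiment
b = [3.052, 3.141, 3.083, 3.083, 3.048]                # second experiment
print(round(variance(a) / variance(b), 2))  # ~1.87 (the text's 1.90 uses rounded s values)
```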
Comparing Means for Two Samples

Three factors influence the result of an analysis: the method, the sample, and the analyst. We can study the influence of these factors by conducting experiments in which we change one factor while holding constant the other factors. For example, to compare two analytical methods we can have the same analyst apply each method to the same sample and then examine the resulting means. In a similar fashion, we can design experiments to compare two analysts or to compare two samples.

Before we consider the significance tests for comparing the means of two samples, we need to understand the difference between unpaired data and paired data. This is a critical distinction and learning to distinguish between these two types of data is important. Here are two simple examples that highlight the difference between unpaired data and paired data. In each example the goal is to compare two balances by weighing pennies.

• Example 1: We collect 10 pennies and weigh each penny on each balance. This is an example of paired data because we use the same 10 pennies to evaluate each balance.
• Example 2: We collect 10 pennies and divide them into two groups of five pennies each. We weigh the pennies in the first group on one balance and we weigh the second group of pennies on the other balance. Note that no penny is weighed on both balances. This is an example of unpaired data because we evaluate each balance using a different sample of pennies.

In both examples the samples of 10 pennies were drawn from the same population; the difference is how we sampled that population. We will learn why this distinction is important when we review the significance test for paired data; first, however, we present the significance test for unpaired data.

Note

One simple test for determining whether data are paired or unpaired is to look at the size of each sample. If the samples are of different size, then the data must be unpaired. The converse is not true. If two samples are of equal size, they may be paired or unpaired.

Unpaired Data

Consider two analyses, A and B, with means of $\overline{X}_A$ and $\overline{X}_B$, and standard deviations of sA and sB. The confidence intervals for $\mu_A$ and for $\mu_B$ are

$\mu_A = \overline{X}_A \pm \frac {t s_A} {\sqrt{n_A}} \nonumber$

$\mu_B = \overline{X}_B \pm \frac {t s_B} {\sqrt{n_B}} \nonumber$

where nA and nB are the sample sizes for A and for B. Our null hypothesis, $H_0 \text{: } \mu_A = \mu_B$, is that any difference between $\mu_A$ and $\mu_B$ is the result of indeterminate errors that affect the analyses. The alternative hypothesis, $H_A \text{: } \mu_A \neq \mu_B$, is that the difference between $\mu_A$ and $\mu_B$ is too large to be explained by indeterminate error.

To derive an equation for texp, we assume that $\mu_A$ equals $\mu_B$, and combine the equations for the two confidence intervals

$\overline{X}_A \pm \frac {t_\text{exp} s_A} {\sqrt{n_A}} = \overline{X}_B \pm \frac {t_\text{exp} s_B} {\sqrt{n_B}} \nonumber$

Solving for $|\overline{X}_A - \overline{X}_B|$ and using a propagation of uncertainty, gives

$|\overline{X}_A - \overline{X}_B| = t_\text{exp} \times \sqrt{\frac {s_A^2} {n_A} + \frac {s_B^2} {n_B}} \nonumber$

Finally, we solve for texp

$t_\text{exp} = \frac {|\overline{X}_A - \overline{X}_B|} {\sqrt{\frac {s_A^2} {n_A} + \frac {s_B^2} {n_B}}} \nonumber$

and compare it to a critical value, $t(\alpha, \nu)$, where $\alpha$ is the probability of a type 1 error, and $\nu$ is the degrees of freedom.

Thus far our development of this t-test is similar to that for comparing $\overline{X}$ to $\mu$, and yet we do not have enough information to evaluate the t-test. Do you see the problem? With two independent sets of data it is unclear how many degrees of freedom we have.

Suppose that the variances $s_A^2$ and $s_B^2$ provide estimates of the same $\sigma^2$.
In this case we can replace $s_A^2$ and $s_B^2$ with a pooled variance, $s_\text{pool}^2$, that is a better estimate for the variance. Thus, our equation for $t_\text{exp}$ becomes

$t_\text{exp} = \frac {|\overline{X}_A - \overline{X}_B|} {s_\text{pool} \times \sqrt{\frac {1} {n_A} + \frac {1} {n_B}}} = \frac {|\overline{X}_A - \overline{X}_B|} {s_\text{pool}} \times \sqrt{\frac {n_A n_B} {n_A + n_B}} \nonumber$

where spool, the pooled standard deviation, is

$s_\text{pool} = \sqrt{\frac {(n_A - 1) s_A^2 + (n_B - 1)s_B^2} {n_A + n_B - 2}} \nonumber$

The denominator of this equation shows us that the degrees of freedom for a pooled standard deviation is $n_A + n_B - 2$, which also is the degrees of freedom for the t-test. Note that we lose two degrees of freedom because the calculations for $s_A^2$ and $s_B^2$ require the prior calculation of $\overline{X}_A$ and $\overline{X}_B$.

Note

So how do you determine if it is okay to pool the variances? Use an F-test.

If $s_A^2$ and $s_B^2$ are significantly different, then we calculate texp using the unpooled equation given earlier

$t_\text{exp} = \frac {|\overline{X}_A - \overline{X}_B|} {\sqrt{\frac {s_A^2} {n_A} + \frac {s_B^2} {n_B}}} \nonumber$

In this case, we find the degrees of freedom using the following imposing equation.

$\nu = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {\left( \frac {s_A^2} {n_A} \right)^2} {n_A + 1} + \frac {\left( \frac {s_B^2} {n_B} \right)^2} {n_B + 1}} - 2 \nonumber$

Because the degrees of freedom must be an integer, we round to the nearest integer the value of $\nu$ obtained from this equation.

Note

The equation above for the degrees of freedom is from Miller, J.C.; Miller, J.N. Statistics for Analytical Chemistry, 2nd Ed., Ellis Horwood: Chichester, UK, 1988. In the 6th Edition, the authors note that several different equations have been suggested for the number of degrees of freedom for t when sA and sB differ, reflecting the fact that the determination of degrees of freedom is an approximation. An alternative equation—which is used by statistical software packages, such as R, Minitab, and Excel—is

$\nu = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {\left( \frac {s_A^2} {n_A} \right)^2} {n_A - 1} + \frac {\left( \frac {s_B^2} {n_B} \right)^2} {n_B - 1}} = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {s_A^4} {n_A^2(n_A - 1)} + \frac {s_B^4} {n_B^2(n_B - 1)}} \nonumber$

For typical problems in analytical chemistry, the calculated degrees of freedom is reasonably insensitive to the choice of equation.

Regardless of how we calculate texp, we reject the null hypothesis if texp is greater than $t(\alpha, \nu)$ and retain the null hypothesis if texp is less than or equal to $t(\alpha, \nu)$.

Example $6$

Example $5$ provides results for two experiments to determine the mass of a circulating U.S. penny. Determine whether there is a difference in the means of these analyses at $\alpha = 0.05$.

Solution

First we use an F-test to determine whether we can pool the variances. We completed this analysis in Example $5$, finding no evidence of a significant difference, which means we can pool the standard deviations, obtaining

$s_\text{pool} = \sqrt{\frac {(7 - 1)(0.051)^2 + (5 - 1)(0.037)^2} {7 + 5 - 2}} = 0.0459 \nonumber$

with 10 degrees of freedom.
To compare the means we use the following null and alternative hypotheses

$H_0 \text{: } \mu_A = \mu_B \quad \quad \quad H_A \text{: } \mu_A \neq \mu_B \nonumber$

Because we are using the pooled standard deviation, we calculate texp as

$t_\text{exp} = \frac {|3.117 - 3.081|} {0.0459} \times \sqrt{\frac {7 \times 5} {7 + 5}} = 1.34 \nonumber$

The critical value for t(0.05, 10), from Appendix 3, is 2.23. Because texp is less than t(0.05, 10) we retain the null hypothesis. For $\alpha = 0.05$ we do not have evidence that the two sets of pennies are significantly different.

Example $7$

One method for determining the %w/w Na2CO3 in soda ash is to use an acid–base titration. When two analysts analyze the same sample of soda ash they obtain the results shown here.

Analyst A: $86.82 \% \quad 87.04 \% \quad 86.93 \% \quad 87.01 \% \quad 86.20 \% \quad 87.00 \%$

Analyst B: $81.01 \% \quad 86.15 \% \quad 81.73 \% \quad 83.19 \% \quad 80.27 \% \quad 83.93 \%$

Determine whether the difference in the mean values is significant at $\alpha = 0.05$.

Solution

We begin by reporting the mean and standard deviation for each analyst.

$\overline{X}_A = 86.83\% \quad \quad s_A = 0.32\% \nonumber$

$\overline{X}_B = 82.71\% \quad \quad s_B = 2.16\% \nonumber$

To determine whether we can use a pooled standard deviation, we first complete an F-test using the following null and alternative hypotheses.

$H_0 \text{: } s_A^2 = s_B^2 \quad \quad \quad H_A \text{: } s_A^2 \neq s_B^2 \nonumber$

Calculating Fexp, we obtain a value of

$F_\text{exp} = \frac {(2.16)^2} {(0.32)^2} = 45.6 \nonumber$

Because Fexp is larger than the critical value of 7.15 for F(0.05, 5, 5) from Appendix 4, we reject the null hypothesis and accept the alternative hypothesis that there is a significant difference between the variances; thus, we cannot calculate a pooled standard deviation.

To compare the means for the two analysts we use the following null and alternative hypotheses.

$H_0 \text{: } \overline{X}_A = \overline{X}_B \quad \quad \quad H_A \text{: } \overline{X}_A \neq \overline{X}_B \nonumber$

Because we cannot pool the standard deviations, we calculate texp as

$t_\text{exp} = \frac {|86.83 - 82.71|} {\sqrt{\frac {(0.32)^2} {6} + \frac {(2.16)^2} {6}}} = 4.62 \nonumber$

and calculate the degrees of freedom as

$\nu = \frac {\left( \frac {(0.32)^2} {6} + \frac {(2.16)^2} {6} \right)^2} {\frac {\left( \frac {(0.32)^2} {6} \right)^2} {6 + 1} + \frac {\left( \frac {(2.16)^2} {6} \right)^2} {6 + 1}} - 2 = 5.3 \approx 5 \nonumber$

From Appendix 3, the critical value for t(0.05, 5) is 2.57. Because texp is greater than t(0.05, 5) we reject the null hypothesis and accept the alternative hypothesis that the means for the two analysts are significantly different at $\alpha = 0.05$.
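With SciPy installed, both examples reduce to a single function call, as in the sketch below. Note that equal_var=False selects Welch's unpooled t-test, which computes its degrees of freedom with the Welch-Satterthwaite equation (the alternative equation in the note above), so its p-value may differ slightly from one based on the Miller and Miller formula.

```python
from scipy import stats

# Example 6: the F-test found no difference in variances, so pool them
a = [3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198]
b = [3.052, 3.141, 3.083, 3.083, 3.048]
print(stats.ttest_ind(a, b, equal_var=True))   # |t| ~1.34, retain H0

# Example 7: the variances differ, so use the unpooled (Welch) t-test
analyst_a = [86.82, 87.04, 86.93, 87.01, 86.20, 87.00]
analyst_b = [81.01, 86.15, 81.73, 83.19, 80.27, 83.93]
print(stats.ttest_ind(analyst_a, analyst_b, equal_var=False))  # |t| ~4.62, reject H0
```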
Paired Data

Suppose we are evaluating a new method for monitoring blood glucose concentrations in patients. An important part of evaluating a new method is to compare it to an established method. What is the best way to gather data for this study? Because the variation in the blood glucose levels amongst patients is large we may be unable to detect a small, but significant difference between the methods if we use different patients to gather data for each method. Using paired data, in which we analyze each patient’s blood using both methods, prevents a large variance within a population from adversely affecting a t-test of means.

Note

Typical blood glucose levels for most non-diabetic individuals range from 80–120 mg/dL (4.4–6.7 mM), rising to as high as 140 mg/dL (7.8 mM) shortly after eating. Higher levels are common for individuals who are pre-diabetic or diabetic.

When we use paired data we first calculate the individual differences, di, between each sample's paired results. Using these individual differences, we then calculate the average difference, $\overline{d}$, and the standard deviation of the differences, sd. The null hypothesis, $H_0 \text{: } \overline{d} = 0$, is that there is no difference between the two samples, and the alternative hypothesis, $H_A \text{: } \overline{d} \neq 0$, is that the difference between the two samples is significant.

The test statistic, texp, is derived from a confidence interval around $\overline{d}$

$t_\text{exp} = \frac {|\overline{d}| \sqrt{n}} {s_d} \nonumber$

where n is the number of paired samples. As is true for other forms of the t-test, we compare texp to $t(\alpha, \nu)$, where the degrees of freedom, $\nu$, is n – 1. If texp is greater than $t(\alpha, \nu)$, then we reject the null hypothesis and accept the alternative hypothesis. We retain the null hypothesis if texp is less than or equal to $t(\alpha, \nu)$. This is known as a paired t-test.

Example $8$

Marecek et al. developed a new electrochemical method for the rapid determination of the concentration of the antibiotic monensin in fermentation vats [Marecek, V.; Janchenova, H.; Brezina, M.; Betti, M. Anal. Chim. Acta 1991, 244, 15–19]. The standard method for the analysis is a test for microbiological activity, which is both difficult to complete and time-consuming. Samples were collected from the fermentation vats at various times during production and analyzed for the concentration of monensin using both methods. The results, in parts per thousand (ppt), are reported in the following table.

Sample  Microbiological  Electrochemical
1       129.5            132.3
2       89.6             91.0
3       76.6             73.6
4       52.2             58.2
5       110.8            104.2
6       50.4             49.9
7       72.4             82.1
8       141.4            154.1
9       75.0             73.4
10      34.1             38.1
11      60.3             60.1

Is there a significant difference between the methods at $\alpha = 0.05$?

Solution

Acquiring samples over an extended period of time introduces a substantial time-dependent change in the concentration of monensin. Because the variation in concentration between samples is so large, we use a paired t-test with the following null and alternative hypotheses.

$H_0 \text{: } \overline{d} = 0 \quad \quad \quad H_A \text{: } \overline{d} \neq 0 \nonumber$

Defining the difference between the methods as

$d_i = (X_\text{elect})_i - (X_\text{micro})_i \nonumber$

we calculate the difference for each sample.

sample  1    2    3     4    5     6     7    8     9     10   11
$d_i$   2.8  1.4  –3.0  6.0  –6.6  –0.5  9.7  12.7  –1.6  4.0  –0.2

The mean and the standard deviation for the differences are, respectively, 2.25 ppt and 5.63 ppt. The value of texp is

$t_\text{exp} = \frac {|2.25| \sqrt{11}} {5.63} = 1.33 \nonumber$

which is smaller than the critical value of 2.23 for t(0.05, 10) from Appendix 3. We retain the null hypothesis and find no evidence for a significant difference in the methods at $\alpha = 0.05$.
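If SciPy is available, a paired t-test is a one-line call, as in this sketch of Example 8.

```python
from scipy import stats

micro = [129.5, 89.6, 76.6, 52.2, 110.8, 50.4, 72.4, 141.4, 75.0, 34.1, 60.3]
elect = [132.3, 91.0, 73.6, 58.2, 104.2, 49.9, 82.1, 154.1, 73.4, 38.1, 60.1]

# paired t-test on the 11 (electrochemical, microbiological) pairs
print(stats.ttest_rel(elect, micro))  # t ~1.33, two-tailed p ~0.21, retain H0
```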
One important requirement for a paired t-test is that the determinate and the indeterminate errors that affect the analysis must be independent of the analyte’s concentration. If this is not the case, then a sample with an unusually high concentration of analyte will have an unusually large di. Including this sample in the calculation of $\overline{d}$ and sd gives a biased estimate for the expected mean and standard deviation. This rarely is a problem for samples that span a limited range of analyte concentrations, such as those in Example $6$ or Exercise $8$. When paired data span a wide range of concentrations, however, the magnitude of the determinate and indeterminate sources of error may not be independent of the analyte’s concentration; when true, a paired t-test may give misleading results because the paired data with the largest absolute determinate and indeterminate errors will dominate $\overline{d}$. In this situation a regression analysis, which is the subject of the next chapter, is a more appropriate method for comparing the data.

Note

The importance of distinguishing between paired and unpaired data is worth examining more closely. The following is data from some work I completed with a colleague in which we were looking at the concentration of Zn in Lake Erie at the air-water interface and the sediment-water interface.

sample site  ppm Zn at air-water interface  ppm Zn at the sediment-water interface
1            0.430                          0.415
2            0.266                          0.238
3            0.457                          0.390
4            0.531                          0.410
5            0.707                          0.605
6            0.716                          0.609

The mean and the standard deviation for the ppm Zn at the air-water interface are 0.5178 ppm and 0.1732 ppm, and the mean and the standard deviation for the ppm Zn at the sediment-water interface are 0.4445 ppm and 0.1418 ppm. We can use these values to draw normal distributions for both by letting the means and the standard deviations for the samples, $\overline{X}$ and $s$, serve as estimates for the means and the standard deviations for the population, $\mu$ and $\sigma$. As we see in the following figure the two distributions overlap strongly, suggesting that a t-test of their means is not likely to find evidence of a difference. And yet, we also see that for each site, the concentration of Zn at the sediment-water interface is less than that at the air-water interface. In this case, the difference between the concentration of Zn at individual sites is sufficiently large that it masks our ability to see the difference between the two interfaces.

If we take the differences between the air-water and sediment-water interfaces, we have values of 0.015, 0.028, 0.067, 0.121, 0.102, and 0.107 ppm Zn, with a mean of 0.07333 ppm Zn and a standard deviation of 0.04410 ppm Zn. Superimposing all three normal distributions shows clearly that most of the normal distribution for the differences lies above zero, suggesting that a t-test might show evidence that the difference is significant.
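A quick SciPy sketch makes the same point numerically: an unpaired t-test on these data finds no evidence of a difference, while a paired t-test does.

```python
from scipy import stats

air = [0.430, 0.266, 0.457, 0.531, 0.707, 0.716]  # ppm Zn, air-water interface
sed = [0.415, 0.238, 0.390, 0.410, 0.605, 0.609]  # ppm Zn, sediment-water interface

print(stats.ttest_ind(air, sed, equal_var=True))  # unpaired: p ~0.44, no evidence
print(stats.ttest_rel(air, sed))                  # paired: p ~0.01, significant
```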
Outliers

Table $11$ provides one more data set giving the masses for a sample of pennies. Do you notice anything unusual in this data? Of the 100 pennies included in our earlier table, no penny has a mass of less than 3 g. In this table, however, the mass of one penny is less than 3 g. We might ask whether this penny's mass is so different from the other pennies that it is in error.

Table $11$. Mass (g) for Additional Sample of Circulating U.S. Pennies

3.067 2.514 3.094 3.049 3.048 3.109 3.039 3.079 3.102

A measurement that is not consistent with other measurements is called an outlier. An outlier might exist for many reasons: the outlier might belong to a different population (Is this a Canadian penny?); the outlier might be a contaminated or an otherwise altered sample (Is the penny damaged or unusually dirty?); or the outlier may result from an error in the analysis (Did we forget to tare the balance?). Regardless of its source, the presence of an outlier compromises any meaningful analysis of our data. There are many significance tests that we can use to identify a potential outlier, three of which we present here.

Dixon's Q-Test

One of the most common significance tests for identifying an outlier is Dixon's Q-test. The null hypothesis is that there are no outliers, and the alternative hypothesis is that there is an outlier. The Q-test compares the gap between the suspected outlier and its nearest numerical neighbor to the range of the entire data set (Figure $25$). The test statistic, Qexp, is

$Q_\text{exp} = \frac {\text{gap}} {\text{range}} = \frac {|\text{outlier's value} - \text{nearest value}|} {\text{largest value} - \text{smallest value}} \nonumber$

This equation is appropriate for evaluating a single outlier. Other forms of Dixon's Q-test allow its extension to detecting multiple outliers [Rorabacher, D. B. Anal. Chem. 1991, 63, 139–146].

The value of Qexp is compared to a critical value, $Q(\alpha, n)$, where $\alpha$ is the probability that we will reject a valid data point (a type 1 error) and n is the total number of data points. To protect against rejecting a valid data point, usually we apply the more conservative two-tailed Q-test, even though the possible outlier must be either the smallest or the largest value in the data set. If Qexp is greater than $Q(\alpha, n)$, then we reject the null hypothesis and may exclude the outlier. We retain the possible outlier when Qexp is less than or equal to $Q(\alpha, n)$. Table $12$ provides values for $Q(\alpha, n)$ for a data set that has 3–10 values. A more extensive table is in Appendix 5. Values for $Q(\alpha, n)$ assume an underlying normal distribution.

Table $12$: Dixon's Q-Test

| n | Q(0.05, n) |
|---|---|
| 3 | 0.970 |
| 4 | 0.829 |
| 5 | 0.710 |
| 6 | 0.625 |
| 7 | 0.568 |
| 8 | 0.526 |
| 9 | 0.493 |
| 10 | 0.466 |

Grubb's Test

Although Dixon's Q-test is a common method for evaluating outliers, it is no longer favored by the International Organization for Standardization (ISO), which recommends Grubb's test. There are several versions of Grubb's test depending on the number of potential outliers. Here we will consider the case where there is a single suspected outlier.

Note

For details on this recommendation, see International Standard ISO 5725-2 "Accuracy (trueness and precision) of measurement methods and results–Part 2: basic methods for the determination of repeatability and reproducibility of a standard measurement method," 1994.

The test statistic for Grubb's test, Gexp, is the distance between the sample's mean, $\overline{X}$, and the potential outlier, $X_\text{out}$, in terms of the sample's standard deviation, s.

$G_\text{exp} = \frac {|X_\text{out} - \overline{X}|} {s} \nonumber$

We compare the value of Gexp to a critical value $G(\alpha, n)$, where $\alpha$ is the probability that we will reject a valid data point and n is the number of data points in the sample. If Gexp is greater than $G(\alpha, n)$, then we may reject the data point as an outlier, otherwise we retain the data point as part of the sample. Table $13$ provides values for G(0.05, n) for a sample containing 3–10 values. A more extensive table is in Appendix 6. Values for $G(\alpha, n)$ assume an underlying normal distribution.

Table $13$: Grubb's Test

| n | G(0.05, n) |
|---|---|
| 3 | 1.115 |
| 4 | 1.481 |
| 5 | 1.715 |
| 6 | 1.887 |
| 7 | 2.020 |
| 8 | 2.126 |
| 9 | 2.215 |
| 10 | 2.290 |

Chauvenet's Criterion

Our final method for identifying an outlier is Chauvenet's criterion.
Unlike Dixon's Q-Test and Grubb's test, you can apply this method to any distribution as long as you know how to calculate the probability for a particular outcome. Chauvenet's criterion states that we can reject a data point if the probability of obtaining the data point's value is less than $(2n)^{-1}$, where n is the size of the sample. For example, if n = 10, a result with a probability of less than $(2 \times 10)^{-1}$, or 0.05, is considered an outlier.

To calculate a potential outlier's probability we first calculate its standardized deviation, z

$z = \frac {|X_\text{out} - \overline{X}|} {s} \nonumber$

where $X_\text{out}$ is the potential outlier, $\overline{X}$ is the sample's mean and s is the sample's standard deviation. Note that this equation is identical to the equation for Gexp in Grubb's test. For a normal distribution, we can find the probability of obtaining a value of z using the probability table in Appendix 2.

Example $9$

Table $11$ contains the masses for nine circulating United States pennies. One entry, 2.514 g, appears to be an outlier. Determine if this penny is an outlier using a Q-test, Grubb's test, and Chauvenet's criterion. For the Q-test and Grubb's test, let $\alpha = 0.05$.

Solution

For the Q-test the value for $Q_\text{exp}$ is

$Q_\text{exp} = \frac {|2.514 - 3.039|} {3.109 - 2.514} = 0.882 \nonumber$

From Table $12$, the critical value for Q(0.05, 9) is 0.493. Because Qexp is greater than Q(0.05, 9), we can assume the penny with a mass of 2.514 g likely is an outlier.

For Grubb's test we first need the mean and the standard deviation, which are 3.011 g and 0.188 g, respectively. The value for Gexp is

$G_\text{exp} = \frac {|2.514 - 3.011|} {0.188} = 2.64 \nonumber$

Using Table $13$, we find that the critical value for G(0.05, 9) is 2.215. Because Gexp is greater than G(0.05, 9), we can assume that the penny with a mass of 2.514 g likely is an outlier.

For Chauvenet's criterion, the critical probability is $(2 \times 9)^{-1}$, or 0.0556. The value of z is the same as Gexp, or 2.64. Using Appendix 2, the probability for z = 2.64 is 0.00415. Because the probability of obtaining a mass of 2.514 g is less than the critical probability, we can assume the penny with a mass of 2.514 g likely is an outlier.
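Because all three tests use simple statistics, they are easy to reproduce in R; the following is a sketch, not a packaged routine, using the penny data from Table $11$.

```r
# penny masses from Table 11; the suspected outlier is the smallest value
mass <- c(3.067, 2.514, 3.094, 3.049, 3.048, 3.109, 3.039, 3.079, 3.102)
m_sorted <- sort(mass)

# Dixon's Q-test: gap between the outlier and its nearest neighbor over the range
Q_exp <- (m_sorted[2] - m_sorted[1]) / (max(mass) - min(mass))
Q_exp   # 0.882 > Q(0.05, 9) = 0.493, so the value is an outlier

# Grubb's test: distance from the mean in units of the standard deviation
G_exp <- abs(min(mass) - mean(mass)) / sd(mass)
G_exp   # 2.64 > G(0.05, 9) = 2.215, so the value is an outlier

# Chauvenet's criterion: probability of z, compared to (2n)^-1 = 0.0556
pnorm(G_exp, lower.tail = FALSE)   # ~0.004 < 0.0556, so the value is an outlier
```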
You should exercise caution when using a significance test for outliers because there is a chance you will reject a valid result. In addition, you should avoid rejecting an outlier if it leads to a precision that is much better than expected based on a propagation of uncertainty. Given these concerns it is not surprising that some statisticians caution against the removal of outliers [Deming, W. E. Statistical Analysis of Data; Wiley: New York, 1943 (republished by Dover: New York, 1961); p. 171].

Note

You also can adopt a more stringent requirement for rejecting data. When using Grubb's test, for example, the ISO 5725 guidelines suggest retaining a value if the probability for rejecting it is greater than $\alpha = 0.05$, and flagging a value as a "straggler" if the probability for rejecting it is between $\alpha = 0.05$ and $\alpha = 0.01$. A "straggler" is retained unless there is compelling reason for its rejection. The guidelines recommend using $\alpha = 0.01$ as the minimum criterion for rejecting a possible outlier.

On the other hand, testing for outliers can provide useful information if we try to understand the source of the suspected outlier. For example, the outlier in Table $11$ represents a significant change in the mass of a penny (an approximately 17% decrease in mass), which is the result of a change in the composition of the U.S. penny. In 1982 the composition of a U.S. penny changed from a brass alloy that was 95% w/w Cu and 5% w/w Zn (with a nominal mass of 3.1 g), to a pure zinc core covered with copper (with a nominal mass of 2.5 g) [Richardson, T. H. J. Chem. Educ. 1991, 68, 310–311]. The pennies in Table $11$, therefore, were drawn from different populations.

Calibrating Data

A calibration curve is one of the most important tools in analytical chemistry as it allows us to determine the concentration of an analyte in a sample by measuring the signal it generates when placed in an instrument, such as a spectrophotometer. To determine the analyte's concentration we must know the relationship between the signal we measure, $S$, and the analyte's concentration, $C_A$, which we can write as

$S = k_A C_A + S_{blank} \nonumber$

where $k_A$ is the calibration curve's sensitivity and $S_{blank}$ is the signal in the absence of analyte. How do we find the best estimate for this relationship between the signal and the concentration of analyte? When a calibration curve is a straight-line, we represent it using the following mathematical model

$y = \beta_0 + \beta_1 x \nonumber$

where y is the analyte's measured signal, S, and x is the analyte's known concentration, $C_A$, in a series of standard solutions. The constants $\beta_0$ and $\beta_1$ are, respectively, the calibration curve's expected y-intercept and its expected slope. Because of uncertainty in our measurements, the best we can do is to estimate values for $\beta_0$ and $\beta_1$, which we represent as b0 and b1. The goal of a linear regression analysis is to determine the best estimates for b0 and b1.

Unweighted Linear Regression With Errors in y

The most common method for completing a linear regression makes three assumptions:

1. the difference between our experimental data and the calculated regression line is the result of indeterminate errors that affect y
2. any indeterminate errors that affect y are normally distributed
3. the indeterminate errors in y are independent of the value of x

Because we assume that the indeterminate errors are the same for all standards, each standard contributes equally to our estimate of the slope and the y-intercept. For this reason the result is considered an unweighted linear regression.

The second assumption generally is true because of the central limit theorem, which we considered earlier. The validity of the two remaining assumptions is less obvious and you should evaluate them before you accept the results of a linear regression. In particular the first assumption always is suspect because there certainly is some indeterminate error in the measurement of x. When we prepare a calibration curve, however, it is not unusual to find that the uncertainty in the signal, S, is significantly greater than the uncertainty in the analyte's concentration, $C_A$. In such circumstances the first assumption usually is reasonable.

How a Linear Regression Works

To understand the logic of a linear regression consider the example in Figure $26$, which shows three data points and two possible straight-lines that might reasonably explain the data. How do we decide how well these straight-lines fit the data, and how do we determine which, if either, is the best straight-line? Let's focus on the solid line in Figure $26$.
The equation for this line is

$\hat{y} = b_0 + b_1 x \nonumber$

where b0 and b1 are estimates for the y-intercept and the slope, and $\hat{y}$ is the predicted value of y for any value of x. Because we assume that all uncertainty is the result of indeterminate errors in y, the difference between y and $\hat{y}$ for each value of x is the residual error, r, in our mathematical model.

$r_i = (y_i - \hat{y}_i) \nonumber$

Figure $27$ shows the residual errors for the three data points. The smaller the total residual error, R, which we define as

$R = \sum_{i = 1}^{n} (y_i - \hat{y}_i)^2 \nonumber$

the better the fit between the straight-line and the data. In a linear regression analysis, we seek values of b0 and b1 that give the smallest total residual error.

Note

The reason for squaring the individual residual errors is to prevent a positive residual error from canceling out a negative residual error. You have seen this before in the equations for the sample and population standard deviations introduced in Chapter 4. You also can see from this equation why a linear regression is sometimes called the method of least squares.

Finding the Slope and y-Intercept for the Regression Model

Although we will not formally develop the mathematical equations for a linear regression analysis, you can find the derivations in many standard statistical texts [See, for example, Draper, N. R.; Smith, H. Applied Regression Analysis, 3rd ed.; Wiley: New York, 1998]. The resulting equation for the slope, b1, is

$b_1 = \frac {n \sum_{i = 1}^{n} x_i y_i - \sum_{i = 1}^{n} x_i \sum_{i = 1}^{n} y_i} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2} \nonumber$

and the equation for the y-intercept, b0, is

$b_0 = \frac {\sum_{i = 1}^{n} y_i - b_1 \sum_{i = 1}^{n} x_i} {n} \nonumber$

Although these equations appear formidable, it is necessary only to evaluate the following four summations

$\sum_{i = 1}^{n} x_i \quad \sum_{i = 1}^{n} y_i \quad \sum_{i = 1}^{n} x_i y_i \quad \sum_{i = 1}^{n} x_i^2 \nonumber$

Many calculators, spreadsheets, and other statistical software packages are capable of performing a linear regression analysis based on this model; see Section 8.5 for details on completing a linear regression analysis using R. For illustrative purposes the necessary calculations are shown in detail in the following example.

Example $10$

Using the calibration data in the following table, determine the relationship between the signal, $y_i$, and the analyte's concentration, $x_i$, using an unweighted linear regression.

Solution

We begin by setting up a table to help us organize the calculation.

| $x_i$ | $y_i$ | $x_i y_i$ | $x_i^2$ |
|---|---|---|---|
| 0.000 | 0.00 | 0.000 | 0.000 |
| 0.100 | 12.36 | 1.236 | 0.010 |
| 0.200 | 24.83 | 4.966 | 0.040 |
| 0.300 | 35.91 | 10.773 | 0.090 |
| 0.400 | 48.79 | 19.516 | 0.160 |
| 0.500 | 60.42 | 30.210 | 0.250 |

Adding the values in each column gives

$\sum_{i = 1}^{n} x_i = 1.500 \quad \sum_{i = 1}^{n} y_i = 182.31 \quad \sum_{i = 1}^{n} x_i y_i = 66.701 \quad \sum_{i = 1}^{n} x_i^2 = 0.550 \nonumber$

Substituting these values into the equations for the slope and the y-intercept gives

$b_1 = \frac {(6 \times 66.701) - (1.500 \times 182.31)} {(6 \times 0.550) - (1.500)^2} = 120.706 \approx 120.71 \nonumber$

$b_0 = \frac {182.31 - (120.706 \times 1.500)} {6} = 0.209 \approx 0.21 \nonumber$

The relationship between the signal, $S$, and the analyte's concentration, $C_A$, therefore, is

$S = 120.71 \times C_A + 0.21 \nonumber$

For now we keep two decimal places to match the number of decimal places in the signal.
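In R these summations are handled for us by lm(); the following minimal sketch, using the data from Example $10$, reproduces the slope and the y-intercept calculated above.

```r
# calibration data from Example 10
conc   <- c(0.000, 0.100, 0.200, 0.300, 0.400, 0.500)
signal <- c(0.00, 12.36, 24.83, 35.91, 48.79, 60.42)

fit <- lm(signal ~ conc)
coef(fit)   # y-intercept = 0.209 and slope = 120.706, as calculated above
```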
The resulting calibration curve is shown in Figure $28$.

Uncertainty in the Regression Model

As we see in Figure $28$, because of indeterminate errors in the signal, the regression line does not pass through the exact center of each data point. The cumulative deviation of our data from the regression line—the total residual error—is proportional to the uncertainty in the regression. We call this uncertainty the standard deviation about the regression, sr, which is equal to

$s_r = \sqrt{\frac {\sum_{i = 1}^{n} \left( y_i - \hat{y}_i \right)^2} {n - 2}} \nonumber$

where yi is the ith experimental value, and $\hat{y}_i$ is the corresponding value predicted by the regression equation $\hat{y} = b_0 + b_1 x$. Note that the denominator indicates that our regression analysis has n – 2 degrees of freedom—we lose two degrees of freedom because we use two parameters, the slope and the y-intercept, to calculate $\hat{y}_i$.

A more useful representation of the uncertainty in our regression analysis is to consider the effect of indeterminate errors on the slope, b1, and the y-intercept, b0, which we express as standard deviations.

$s_{b_1} = \sqrt{\frac {n s_r^2} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2}} = \sqrt{\frac {s_r^2} {\sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber$

$s_{b_0} = \sqrt{\frac {s_r^2 \sum_{i = 1}^{n} x_i^2} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2}} = \sqrt{\frac {s_r^2 \sum_{i = 1}^{n} x_i^2} {n \sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber$

We use these standard deviations to establish confidence intervals for the expected slope, $\beta_1$, and the expected y-intercept, $\beta_0$

$\beta_1 = b_1 \pm t s_{b_1} \nonumber$

$\beta_0 = b_0 \pm t s_{b_0} \nonumber$

where we select t for a significance level of $\alpha$ and for n – 2 degrees of freedom. Note that these equations do not contain the factor of $(\sqrt{n})^{-1}$ seen in the confidence intervals for $\mu$ because the confidence interval here is based on a single regression line.

Example $11$

Calculate the 95% confidence intervals for the slope and y-intercept from Example $10$.

Solution

We begin by calculating the standard deviation about the regression. To do this we must calculate the predicted signals, $\hat{y}_i$, using the slope and the y-intercept from Example $10$, and the squares of the residual error, $(y_i - \hat{y}_i)^2$. Using the last standard as an example, we find that the predicted signal is

$\hat{y}_6 = b_0 + b_1 x_6 = 0.209 + (120.706 \times 0.500) = 60.562 \nonumber$

and that the square of the residual error is

$(y_i - \hat{y}_i)^2 = (60.42 - 60.562)^2 = 0.02016 \approx 0.0202 \nonumber$

The following table displays the results for all six solutions.

| $x_i$ | $y_i$ | $\hat{y}_i$ | $\left( y_i - \hat{y}_i \right)^2$ |
|---|---|---|---|
| 0.000 | 0.00 | 0.209 | 0.0437 |
| 0.100 | 12.36 | 12.280 | 0.0064 |
| 0.200 | 24.83 | 24.350 | 0.2304 |
| 0.300 | 35.91 | 36.421 | 0.2611 |
| 0.400 | 48.79 | 48.491 | 0.0894 |
| 0.500 | 60.42 | 60.562 | 0.0202 |

Adding together the data in the last column gives the numerator in the equation for the standard deviation about the regression; thus

$s_r = \sqrt{\frac {0.6512} {6 - 2}} = 0.4035 \nonumber$

Next we calculate the standard deviations for the slope and the y-intercept. The values for the summation terms are from Example $10$.
$s_{b_1} = \sqrt{\frac {6 \times (0.4035)^2} {(6 \times 0.550) - (1.500)^2}} = 0.965 \nonumber$

$s_{b_0} = \sqrt{\frac {(0.4035)^2 \times 0.550} {(6 \times 0.550) - (1.500)^2}} = 0.292 \nonumber$

Finally, the 95% confidence intervals ($\alpha = 0.05$, 4 degrees of freedom) for the slope and y-intercept are

$\beta_1 = b_1 \pm ts_{b_1} = 120.706 \pm (2.78 \times 0.965) = 120.7 \pm 2.7 \nonumber$

$\beta_0 = b_0 \pm ts_{b_0} = 0.209 \pm (2.78 \times 0.292) = 0.2 \pm 0.8 \nonumber$

where t(0.05, 4) from Appendix 3 is 2.78. The standard deviation about the regression, sr, suggests that the signal, Sstd, is precise to one decimal place. For this reason we report the slope and the y-intercept to a single decimal place.
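If you fit the model with lm(), as in the sketch after Example $10$, R reports these same uncertainties; summary() returns the standard deviation about the regression as sigma, and confint() returns the confidence intervals directly.

```r
summary(fit)$sigma          # s_r = 0.4035
confint(fit, level = 0.95)  # 95% confidence intervals:
# intercept: 0.209 +/- 0.81 and slope: 120.7 +/- 2.7, as calculated above
```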
Using the Regression Model to Determine a Value for x Given a Value for y

Once we have our regression equation, it is easy to determine the concentration of analyte in a sample. When we use a normal calibration curve, for example, we measure the signal for our sample, Ssamp, and calculate the analyte's concentration, CA, using the regression equation.

$C_A = \frac {S_{samp} - b_0} {b_1} \nonumber$

What is less obvious is how to report a confidence interval for CA that expresses the uncertainty in our analysis. To calculate a confidence interval we need to know the standard deviation in the analyte's concentration, $s_{C_A}$, which is given by the following equation

$s_{C_A} = \frac {s_r} {b_1} \sqrt{\frac {1} {m} + \frac {1} {n} + \frac {\left( \overline{S}_{samp} - \overline{S}_{std} \right)^2} {(b_1)^2 \sum_{i = 1}^{n} \left( C_{std_i} - \overline{C}_{std} \right)^2}} \nonumber$

where m is the number of replicates we use to establish the sample's average signal, $\overline{S}_{samp}$, n is the number of calibration standards, $\overline{S}_{std}$ is the average signal for the calibration standards, and $C_{std_i}$ and $\overline{C}_{std}$ are the individual and the mean concentrations for the calibration standards. Knowing the value of $s_{C_A}$, the confidence interval for the analyte's concentration is

$\mu_{C_A} = C_A \pm t s_{C_A} \nonumber$

where $\mu_{C_A}$ is the expected value of CA in the absence of determinate errors, and where the value of t is based on the desired level of confidence and n – 2 degrees of freedom.

A close examination of these equations should convince you that we can decrease the uncertainty in the predicted concentration of analyte, $C_A$, if we increase the number of standards, $n$, if we increase the number of replicate samples that we analyze, $m$, and if the sample's average signal, $\overline{S}_{samp}$, is close to the average signal for the standards, $\overline{S}_{std}$. When practical, you should plan your calibration curve so that Ssamp falls in the middle of the calibration curve. For more information about these regression equations see (a) Miller, J. N. Analyst 1991, 116, 3–14; (b) Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986, pp. 126–127; (c) Analytical Methods Committee "Uncertainties in concentrations estimated from calibration experiments," AMC Technical Brief, March 2006.

Note

The equation for the standard deviation in the analyte's concentration is written in terms of a calibration experiment. A more general form of the equation, written in terms of x and y, is given here.

$s_{x} = \frac {s_r} {b_1} \sqrt{\frac {1} {m} + \frac {1} {n} + \frac {\left( \overline{Y} - \overline{y} \right)^2} {(b_1)^2 \sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber$

Example $12$

Three replicate analyses for a sample that contains an unknown concentration of analyte yield values for Ssamp of 29.32, 29.16 and 29.51 (arbitrary units). Using the results from Example $10$ and Example $11$, determine the analyte's concentration, CA, and its 95% confidence interval.

Solution

The average signal, $\overline{S}_{samp}$, is 29.33, which, using the slope and the y-intercept from Example $10$, gives the analyte's concentration as

$C_A = \frac {\overline{S}_{samp} - b_0} {b_1} = \frac {29.33 - 0.209} {120.706} = 0.241 \nonumber$

To calculate the standard deviation for the analyte's concentration we must determine the values for $\overline{S}_{std}$ and for $\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2$. The former is just the average signal for the calibration standards, which, using the data in Table $10$, is 30.385. Calculating $\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2$ looks formidable, but we can simplify its calculation by recognizing that this sum-of-squares is the numerator in a standard deviation equation; thus,

$\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2 = (s_{C_{std}})^2 \times (n - 1) \nonumber$

where $s_{C_{std}}$ is the standard deviation for the concentration of analyte in the calibration standards. Using the data in Table $10$ we find that $s_{C_{std}}$ is 0.1871 and

$\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2 = (0.1871)^2 \times (6 - 1) = 0.175 \nonumber$

Substituting known values into the equation for $s_{C_A}$ gives

$s_{C_A} = \frac {0.4035} {120.706} \sqrt{\frac {1} {3} + \frac {1} {6} + \frac {(29.33 - 30.385)^2} {(120.706)^2 \times 0.175}} = 0.0024 \nonumber$

Finally, the 95% confidence interval for 4 degrees of freedom is

$\mu_{C_A} = C_A \pm ts_{C_A} = 0.241 \pm (2.78 \times 0.0024) = 0.241 \pm 0.007 \nonumber$

Figure $29$ shows the calibration curve with curves showing the 95% confidence interval for CA.
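Base R has no built-in function for this inverse prediction, but the equation for $s_{C_A}$ is easy to script; the following sketch reproduces Example $12$ using the fit and data from the earlier sketches.

```r
# three replicate signals for the sample
s_samp <- c(29.32, 29.16, 29.51)
b0  <- coef(fit)[1]; b1 <- coef(fit)[2]
s_r <- summary(fit)$sigma

C_A  <- (mean(s_samp) - b0) / b1     # 0.241
s_CA <- (s_r / b1) * sqrt(1 / length(s_samp) + 1 / length(conc) +
          (mean(s_samp) - mean(signal))^2 /
          (b1^2 * sum((conc - mean(conc))^2)))

# 95% confidence interval: 0.241 +/- 0.007
C_A + c(-1, 1) * qt(0.975, df = length(conc) - 2) * s_CA
```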
Evaluating a Regression Model

You should never accept the result of a linear regression analysis without evaluating the validity of the model. Perhaps the simplest way to evaluate a regression analysis is to examine the residual errors. As we saw earlier, the residual error for a single calibration standard, ri, is

$r_i = (y_i - \hat{y}_i) \nonumber$

If the regression model is valid, then the residual errors should be distributed randomly about an average residual error of zero, with no apparent trend toward either smaller or larger residual errors (Figure $\PageIndex{30a}$). Trends such as those in Figure $\PageIndex{30b}$ and Figure $\PageIndex{30c}$ provide evidence that at least one of the model's assumptions is incorrect. For example, a trend toward larger residual errors at higher concentrations, Figure $\PageIndex{30b}$, suggests that the indeterminate errors affecting the signal are not independent of the analyte's concentration. In Figure $\PageIndex{30c}$, the residual errors are not random, which suggests we cannot model the data using a straight-line relationship. Regression methods for the latter two cases are discussed in the following sections.

Example $13$

Use your results from Exercise $10$ to construct a residual plot and explain its significance.

Solution

To create a residual plot, we need to calculate the residual error for each standard. The following table contains the relevant information.

| $x_i$ | $y_i$ | $\hat{y}_i$ | $y_i - \hat{y}_i$ |
|---|---|---|---|
| 0.000 | 0.000 | 0.0015 | –0.0015 |
| $1.55 \times 10^{-3}$ | 0.050 | 0.0473 | 0.0027 |
| $3.16 \times 10^{-3}$ | 0.093 | 0.0949 | –0.0019 |
| $4.74 \times 10^{-3}$ | 0.143 | 0.1417 | 0.0013 |
| $6.34 \times 10^{-3}$ | 0.188 | 0.1890 | –0.0010 |
| $7.92 \times 10^{-3}$ | 0.236 | 0.2357 | 0.0003 |

The figure below shows a plot of the resulting residual errors. The residual errors appear random, although they do alternate in sign, and they do not show any significant dependence on the analyte's concentration. Taken together, these observations suggest that our regression model is appropriate.

Weighted Linear Regression With Errors in y

Our treatment of linear regression to this point assumes that any indeterminate errors that affect y are independent of the value of x. If this assumption is false, then we must include the variance for each value of y in our determination of the y-intercept, b0, and the slope, b1; thus

$b_0 = \frac {\sum_{i = 1}^{n} w_i y_i - b_1 \sum_{i = 1}^{n} w_i x_i} {n} \nonumber$

$b_1 = \frac {n \sum_{i = 1}^{n} w_i x_i y_i - \sum_{i = 1}^{n} w_i x_i \sum_{i = 1}^{n} w_i y_i} {n \sum_{i =1}^{n} w_i x_i^2 - \left( \sum_{i = 1}^{n} w_i x_i \right)^2} \nonumber$

where wi is a weighting factor that accounts for the variance in yi

$w_i = \frac {n (s_{y_i})^{-2}} {\sum_{i = 1}^{n} (s_{y_i})^{-2}} \nonumber$

and $s_{y_i}$ is the standard deviation for yi. In a weighted linear regression, each xy-pair's contribution to the regression line is inversely proportional to the precision of yi; that is, the more precise the value of y, the greater its contribution to the regression.

Example $14$

Shown here are data for an external standardization in which sstd is the standard deviation for three replicate determinations of the signal.

| $C_{std}$ (arbitrary units) | $S_{std}$ (arbitrary units) | $s_{std}$ |
|---|---|---|
| 0.000 | 0.00 | 0.02 |
| 0.100 | 12.36 | 0.02 |
| 0.200 | 24.83 | 0.07 |
| 0.300 | 35.91 | 0.13 |
| 0.400 | 48.79 | 0.22 |
| 0.500 | 60.42 | 0.33 |

Determine the calibration curve's equation using a weighted linear regression. As you work through this example, remember that x corresponds to Cstd, and that y corresponds to Sstd.

Solution

We begin by setting up a table to aid in calculating the weighting factors.

| $C_{std}$ (arbitrary units) | $S_{std}$ (arbitrary units) | $s_{std}$ | $(s_{y_i})^{-2}$ | $w_i$ |
|---|---|---|---|---|
| 0.000 | 0.00 | 0.02 | 2500.00 | 2.8339 |
| 0.100 | 12.36 | 0.02 | 2500.00 | 2.8339 |
| 0.200 | 24.83 | 0.07 | 204.08 | 0.2313 |
| 0.300 | 35.91 | 0.13 | 59.17 | 0.0671 |
| 0.400 | 48.79 | 0.22 | 20.66 | 0.0234 |
| 0.500 | 60.42 | 0.33 | 9.18 | 0.0104 |

Adding together the values in the fourth column gives

$\sum_{i = 1}^{n} (s_{y_i})^{-2} = 5293.09 \nonumber$

which we use to calculate the individual weights in the last column. As a check on your calculations, the sum of the individual weights must equal the number of calibration standards, n. The sum of the entries in the last column is 6.0000, so all is well. After we calculate the individual weights, we use a second table to aid in calculating the four summation terms in the equations for the slope, $b_1$, and the y-intercept, $b_0$.
| $x_i$ | $y_i$ | $w_i$ | $w_i x_i$ | $w_i y_i$ | $w_i x_i^2$ | $w_i x_i y_i$ |
|---|---|---|---|---|---|---|
| 0.000 | 0.00 | 2.8339 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 0.100 | 12.36 | 2.8339 | 0.2834 | 35.0270 | 0.0283 | 3.5027 |
| 0.200 | 24.83 | 0.2313 | 0.0463 | 5.7432 | 0.0093 | 1.1486 |
| 0.300 | 35.91 | 0.0671 | 0.0201 | 2.4096 | 0.0060 | 0.7229 |
| 0.400 | 48.79 | 0.0234 | 0.0094 | 1.1417 | 0.0037 | 0.4567 |
| 0.500 | 60.42 | 0.0104 | 0.0052 | 0.6284 | 0.0026 | 0.3142 |

Adding the values in the last four columns gives

$\sum_{i = 1}^{n} w_i x_i = 0.3644 \quad \sum_{i = 1}^{n} w_i y_i = 44.9499 \quad \sum_{i = 1}^{n} w_i x_i^2 = 0.0499 \quad \sum_{i = 1}^{n} w_i x_i y_i = 6.1451 \nonumber$

which gives the estimated slope and the estimated y-intercept as

$b_1 = \frac {(6 \times 6.1451) - (0.3644 \times 44.9499)} {(6 \times 0.0499) - (0.3644)^2} = 122.985 \nonumber$

$b_0 = \frac{44.9499 - (122.985 \times 0.3644)} {6} = 0.0224 \nonumber$

The calibration equation is

$S_{std} = 122.98 \times C_{std} + 0.02 \nonumber$

Figure $31$ shows the calibration curve for the weighted regression determined here and the calibration curve for the unweighted regression. Although the two calibration curves are very similar, there are slight differences in the slope and in the y-intercept. Most notably, the y-intercept for the weighted linear regression is closer to the expected value of zero. Because the standard deviation for the signal, Sstd, is smaller for smaller concentrations of analyte, Cstd, a weighted linear regression gives more emphasis to these standards, allowing for a better estimate of the y-intercept.

Equations for calculating confidence intervals for the slope, the y-intercept, and the concentration of analyte when using a weighted linear regression are not as easy to define as for an unweighted linear regression [Bonate, P. J. Anal. Chem. 1993, 65, 1367–1372]. The confidence interval for the analyte's concentration, however, is at its optimum value when the analyte's signal is near the weighted centroid, yc, of the calibration curve.

$y_c = \frac {1} {n} \sum_{i = 1}^{n} w_i y_i \nonumber$
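R's lm() also handles a weighted linear regression through its weights argument; because the estimates for b0 and b1 do not change if every weight is multiplied by a constant, we can pass the unnormalized weights $1/s_{y_i}^2$ directly, as in this sketch of Example $14$.

```r
# conc and signal are the calibration data from the earlier sketches
s_std <- c(0.02, 0.02, 0.07, 0.13, 0.22, 0.33)

fit_w <- lm(signal ~ conc, weights = 1 / s_std^2)
coef(fit_w)   # y-intercept = 0.022 and slope = 122.98, matching the values above
```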
Weighted Linear Regression With Errors in x and y

If we remove our assumption that indeterminate errors affecting a calibration curve are present only in the signal (y), then we also must factor into the regression model the indeterminate errors that affect the analyte's concentration in the calibration standards (x). The solution for the resulting regression line is computationally more involved than that for either the unweighted or weighted regression lines. Although we will not consider the details in this textbook, you should be aware that neglecting the presence of indeterminate errors in x can bias the results of a linear regression.

Note

See, for example, Analytical Methods Committee, "Fitting a linear functional relationship to data with error on both variables," AMC Technical Brief, March, 2002, as well as this chapter's Additional Resources.

Curvilinear, Multivariable, and Multivariate Regression

A straight-line regression model, despite its apparent complexity, is the simplest functional relationship between two variables. What do we do if our calibration curve is curvilinear—that is, if it is a curved-line instead of a straight-line? One approach is to try transforming the data into a straight-line. Logarithms, exponentials, reciprocals, square roots, and trigonometric functions have been used in this way. A plot of log(y) versus x is a typical example. Such transformations are not without complications, of which the most obvious is that data with a uniform variance in y will not maintain that uniform variance after it is transformed.

Note

It is worth noting here that the term "linear" does not mean a straight-line. A linear function may contain more than one additive term, but each such term has one and only one adjustable multiplicative parameter. The function

$y = ax + bx^2 \nonumber$

is an example of a linear function because the terms x and x2 each include a single multiplicative parameter, a and b, respectively. The function

$y = x^b \nonumber$

is nonlinear because b is not a multiplicative parameter; it is, instead, a power. This is why you can use linear regression to fit a polynomial equation to your data. Sometimes it is possible to transform a nonlinear function into a linear function. For example, taking the log of both sides of the nonlinear function above gives a linear function.

$\log(y) = b \log(x) \nonumber$

Another approach to developing a linear regression model is to fit a polynomial equation to the data, such as $y = a + b x + c x^2$. You can use linear regression to calculate the parameters a, b, and c, although the equations are different than those for the linear regression of a straight-line (see the short R sketch after the note below). If you cannot fit your data using a single polynomial equation, it may be possible to fit separate polynomial equations to short segments of the calibration curve. The result is a single continuous calibration curve known as a spline function. The use of R for curvilinear regression is included in Chapter 8.5.

Note

For details about curvilinear regression, see (a) Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986; (b) Deming, S. N.; Morgan, S. L. Experimental Design: A Chemometric Approach, Elsevier: Amsterdam, 1987.
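As a brief illustration of fitting the polynomial $y = a + bx + cx^2$ by linear regression, here is a sketch with made-up curvilinear data; the I() wrapper protects x^2 inside lm()'s formula.

```r
# hypothetical curvilinear data, for illustration only
x <- c(0, 1, 2, 3, 4, 5)
y <- c(0.1, 2.2, 6.1, 12.3, 20.0, 30.2)

fit_poly <- lm(y ~ x + I(x^2))
coef(fit_poly)   # estimates for the parameters a, b, and c
```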
The regression models in this chapter apply only to functions that contain a single dependent variable and a single independent variable. One example is the simplest form of Beer's law in which the absorbance, $A$, of a sample at a single wavelength, $\lambda$, depends upon the concentration of a single analyte, $C_A$

$A_{\lambda} = \epsilon_{\lambda, A} b C_A \nonumber$

where $\epsilon_{\lambda, A}$ is the analyte's molar absorptivity at the selected wavelength and $b$ is the pathlength through the sample. In the presence of an interferent, $I$, however, the signal may depend on the concentrations of both the analyte and the interferent

$A_{\lambda} = \epsilon_{\lambda, A} b C_A + \epsilon_{\lambda, I} b C_I \nonumber$

where $\epsilon_{\lambda, I}$ is the interferent's molar absorptivity and CI is the interferent's concentration. This is an example of multivariable regression, which is covered in more detail in Chapter 9 when we consider the optimization of experiments where there is a single dependent variable and two or more independent variables.

In multivariate regression we have both multiple dependent variables, such as the absorbance of samples at two or more wavelengths, and multiple independent variables, such as the concentrations of two or more analytes in the samples. As discussed in Chapter 0.2, we can represent this using matrix notation

$\begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & A & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times c} = \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & \epsilon b & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times n} \times \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & C & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{n \times c} \nonumber$

where there are $r$ wavelengths, $c$ samples, and $n$ analytes. Each column in the $\epsilon b$ matrix, for example, holds the $\epsilon b$ values for one of the $n$ analytes at each of the $r$ wavelengths, and each row in the $C$ matrix holds the concentrations of one of the $n$ analytes in each of the $c$ samples. We will consider this approach in more detail in Chapter 11.

Note

For a nice discussion of the difference between multivariable regression and multivariate regression, see Hidalgo, B.; Goodman, M. "Multivariate or Multivariable Regression," Am. J. Public Health, 2013, 103, 39–40.
35.02: Single-Sided Normal Distribution

Table $1$, at the bottom of this appendix, gives the proportion, P, of the area under a normal distribution curve that lies to the right of a deviation, z

$z = \frac {X -\mu} {\sigma} \nonumber$

where X is the value for which the deviation is defined, $\mu$ is the distribution's mean value and $\sigma$ is the distribution's standard deviation. For example, the proportion of the area under a normal distribution to the right of a deviation of 0.04 is 0.4840 (see entry in red in the table), or 48.40% of the total area (see the area shaded blue in Figure $1$). The proportion of the area to the left of the deviation is 1 – P. For a deviation of 0.04, this is 1 – 0.4840, or 51.60%.

Figure $1$. Normal distribution curve showing the area under a curve greater than a deviation of +0.04 (blue) and with a deviation less than –0.04 (green).

When the deviation is negative—that is, when X is smaller than $\mu$—the value of z is negative. In this case, the values in the table give the area to the left of z. For example, if z is –0.04, then 48.40% of the area lies to the left of the deviation (see the area shaded green in Figure $1$).

To use the single-sided normal distribution table, sketch the normal distribution curve for your problem and shade the area that corresponds to your answer (for example, see Figure $2$, which is for Example 4.4.2). This divides the normal distribution curve into three regions: the area that corresponds to our answer (shown in blue), the area to the right of this, and the area to the left of this. Calculate the values of z for the limits of the area that corresponds to your answer. Use the table to find the areas to the right and to the left of these deviations. Subtract these values from 100% and, voilà, you have your answer.

Table $1$: Values for a Single-Sided Normal Distribution

| z | 0.00 | 0.01 | 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07 | 0.08 | 0.09 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.0 | 0.5000 | 0.4960 | 0.4920 | 0.4880 | 0.4840 | 0.4801 | 0.4761 | 0.4721 | 0.4681 | 0.4641 |
| 0.1 | 0.4602 | 0.4562 | 0.4522 | 0.4483 | 0.4443 | 0.4404 | 0.4365 | 0.4325 | 0.4286 | 0.4247 |
| 0.2 | 0.4207 | 0.4168 | 0.4129 | 0.4090 | 0.4052 | 0.4013 | 0.3974 | 0.3936 | 0.3897 | 0.3859 |
| 0.3 | 0.3821 | 0.3783 | 0.3745 | 0.3707 | 0.3669 | 0.3632 | 0.3594 | 0.3557 | 0.3520 | 0.3483 |
| 0.4 | 0.3446 | 0.3409 | 0.3372 | 0.3336 | 0.3300 | 0.3264 | 0.3228 | 0.3192 | 0.3156 | 0.3121 |
| 0.5 | 0.3085 | 0.3050 | 0.3015 | 0.2981 | 0.2946 | 0.2912 | 0.2877 | 0.2843 | 0.2810 | 0.2776 |
| 0.6 | 0.2743 | 0.2709 | 0.2676 | 0.2643 | 0.2611 | 0.2578 | 0.2546 | 0.2514 | 0.2483 | 0.2451 |
| 0.7 | 0.2420 | 0.2389 | 0.2358 | 0.2327 | 0.2296 | 0.2266 | 0.2236 | 0.2206 | 0.2177 | 0.2148 |
| 0.8 | 0.2119 | 0.2090 | 0.2061 | 0.2033 | 0.2005 | 0.1977 | 0.1949 | 0.1922 | 0.1894 | 0.1867 |
| 0.9 | 0.1841 | 0.1814 | 0.1788 | 0.1762 | 0.1736 | 0.1711 | 0.1685 | 0.1660 | 0.1635 | 0.1611 |
| 1.0 | 0.1587 | 0.1562 | 0.1539 | 0.1515 | 0.1492 | 0.1469 | 0.1446 | 0.1423 | 0.1401 | 0.1379 |
| 1.1 | 0.1357 | 0.1335 | 0.1314 | 0.1292 | 0.1271 | 0.1251 | 0.1230 | 0.1210 | 0.1190 | 0.1170 |
| 1.2 | 0.1151 | 0.1131 | 0.1112 | 0.1093 | 0.1075 | 0.1056 | 0.1038 | 0.1020 | 0.1003 | 0.0985 |
| 1.3 | 0.0968 | 0.0951 | 0.0934 | 0.0918 | 0.0901 | 0.0885 | 0.0869 | 0.0853 | 0.0838 | 0.0823 |
| 1.4 | 0.0808 | 0.0793 | 0.0778 | 0.0764 | 0.0749 | 0.0735 | 0.0721 | 0.0708 | 0.0694 | 0.0681 |
| 1.5 | 0.0668 | 0.0655 | 0.0643 | 0.0630 | 0.0618 | 0.0606 | 0.0594 | 0.0582 | 0.0571 | 0.0559 |
| 1.6 | 0.0548 | 0.0537 | 0.0526 | 0.0516 | 0.0505 | 0.0495 | 0.0485 | 0.0475 | 0.0465 | 0.0455 |
| 1.7 | 0.0446 | 0.0436 | 0.0427 | 0.0418 | 0.0409 | 0.0401 | 0.0392 | 0.0384 | 0.0375 | 0.0367 |
| 1.8 | 0.0359 | 0.0351 | 0.0344 | 0.0336 | 0.0329 | 0.0322 | 0.0314 | 0.0307 | 0.0301 | 0.0294 |
| 1.9 | 0.0287 | 0.0281 | 0.0274 | 0.0268 | 0.0262 | 0.0256 | 0.0250 | 0.0244 | 0.0239 | 0.0233 |
| 2.0 | 0.0228 | 0.0222 | 0.0217 | 0.0212 | 0.0207 | 0.0202 | 0.0197 | 0.0192 | 0.0188 | 0.0183 |
| 2.1 | 0.0179 | 0.0174 | 0.0170 | 0.0166 | 0.0162 | 0.0158 | 0.0154 | 0.0150 | 0.0146 | 0.0143 |
| 2.2 | 0.0139 | 0.0136 | 0.0132 | 0.0129 | 0.0125 | 0.0122 | 0.0119 | 0.0116 | 0.0113 | 0.0110 |
| 2.3 | 0.0107 | 0.0104 | 0.0102 | | 0.00964 | | 0.00914 | | 0.00866 | |
| 2.4 | 0.00820 | | 0.00776 | | 0.00734 | | 0.00695 | | 0.00657 | |
| 2.5 | 0.00621 | | 0.00587 | | 0.00554 | | 0.00523 | | 0.00494 | |
| 2.6 | 0.00466 | | 0.00440 | | 0.00415 | | 0.00391 | | 0.00368 | |
| 2.7 | 0.00347 | | 0.00326 | | 0.00307 | | 0.00289 | | 0.00272 | |
| 2.8 | 0.00256 | | 0.00240 | | 0.00226 | | 0.00212 | | 0.00199 | |
| 2.9 | 0.00187 | | 0.00175 | | 0.00164 | | 0.00154 | | 0.00144 | |
| 3.0 | 0.00135 | | | | | | | | | |
| 3.1 | 0.000968 | | | | | | | | | |
| 3.2 | 0.000687 | | | | | | | | | |
| 3.3 | 0.000483 | | | | | | | | | |
| 3.4 | 0.000337 | | | | | | | | | |
| 3.5 | 0.000233 | | | | | | | | | |
| 3.6 | 0.000159 | | | | | | | | | |
| 3.7 | 0.000108 | | | | | | | | | |
| 3.8 | 0.0000723 | | | | | | | | | |
| 3.9 | 0.0000481 | | | | | | | | | |
| 4.0 | 0.0000317 | | | | | | | | | |

35.03: Critical Values for t-Test

Assuming we have calculated texp, there are two approaches to interpreting a t-test. In the first approach we choose a value of $\alpha$ for rejecting the null hypothesis and read the value of $t(\alpha,\nu)$ from the table below. If $t_\text{exp} > t(\alpha,\nu)$, we reject the null hypothesis and accept the alternative hypothesis. In the second approach, we find the row in the table below that corresponds to the available degrees of freedom and move across the row to find (or estimate) the $\alpha$ that corresponds to $t_\text{exp} = t(\alpha,\nu)$; this establishes the largest value of $\alpha$ for which we can retain the null hypothesis. Finding, for example, that $\alpha$ is 0.10 means that we retain the null hypothesis at the 90% confidence level, but reject it at the 89% confidence level. The examples in this textbook use the first approach.

Table $1$: Critical Values of t for the t-Test

| Degrees of Freedom | 90% CI ($\alpha$ = 0.10) | 95% CI ($\alpha$ = 0.05) | 98% CI ($\alpha$ = 0.02) | 99% CI ($\alpha$ = 0.01) |
|---|---|---|---|---|
| 1 | 6.314 | 12.706 | 31.821 | 63.657 |
| 2 | 2.920 | 4.303 | 6.965 | 9.925 |
| 3 | 2.353 | 3.182 | 4.541 | 5.841 |
| 4 | 2.132 | 2.776 | 3.747 | 4.604 |
| 5 | 2.015 | 2.571 | 3.365 | 4.032 |
| 6 | 1.943 | 2.447 | 3.143 | 3.707 |
| 7 | 1.895 | 2.365 | 2.998 | 3.499 |
| 8 | 1.860 | 2.306 | 2.896 | 3.355 |
| 9 | 1.833 | 2.262 | 2.821 | 3.250 |
| 10 | 1.812 | 2.228 | 2.764 | 3.169 |
| 12 | 1.782 | 2.179 | 2.681 | 3.055 |
| 14 | 1.761 | 2.145 | 2.624 | 2.977 |
| 16 | 1.746 | 2.120 | 2.583 | 2.921 |
| 18 | 1.734 | 2.101 | 2.552 | 2.878 |
| 20 | 1.725 | 2.086 | 2.528 | 2.845 |
| 30 | 1.697 | 2.042 | 2.457 | 2.750 |
| 50 | 1.676 | 2.009 | 2.403 | 2.678 |
| $\infty$ | 1.645 | 1.960 | 2.326 | 2.576 |

The values in this table are for a two-tailed t-test. For a one-tailed test, divide the $\alpha$ values by 2. For example, the last column has an $\alpha$ value of 0.005 and a confidence interval of 99.5% when conducting a one-tailed t-test.
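A minimal sketch: R's built-in distribution functions can take the place of these table lookups.

```r
# area under the normal distribution to the right of z = 0.04
pnorm(0.04, lower.tail = FALSE)   # 0.4840, the entry highlighted in Table 1

# critical value for a two-tailed t-test at alpha = 0.05 with 10 degrees of freedom
qt(0.05 / 2, df = 10, lower.tail = FALSE)   # 2.228 = t(0.05, 10)
```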
35.04: Critical Values for F-Test

The following tables provide values for $F(0.05, \nu_\text{num}, \nu_\text{denom})$ for one-tailed and for two-tailed F-tests. To use these tables, we first decide whether the situation calls for a one-tailed or a two-tailed analysis and calculate Fexp

$F_\text{exp} = \frac {s_A^2} {s_B^2} \nonumber$

where $s_A^2$ is greater than $s_B^2$. Next, we compare Fexp to $F(0.05, \nu_\text{num}, \nu_\text{denom})$ and reject the null hypothesis if $F_\text{exp} > F(0.05, \nu_\text{num}, \nu_\text{denom})$. You may replace s with $\sigma$ if you know the population's standard deviation.

Table $1$: Critical Values of F for a One-Tailed F-Test

| $\nu_{denom}$ / $\nu_{num}$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 15 | 20 | $\infty$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 161.4 | 199.5 | 215.7 | 224.6 | 230.2 | 234.0 | 236.8 | 238.9 | 240.5 | 241.9 | 245.9 | 248.0 | 254.3 |
| 2 | 18.51 | 19.00 | 19.16 | 19.25 | 19.30 | 19.33 | 19.35 | 19.37 | 19.38 | 19.40 | 19.43 | 19.45 | 19.50 |
| 3 | 10.13 | 9.552 | 9.277 | 9.117 | 9.013 | 8.941 | 8.887 | 8.845 | 8.812 | 8.786 | 8.703 | 8.660 | 8.526 |
| 4 | 7.709 | 6.994 | 6.591 | 6.388 | 6.256 | 6.163 | 6.094 | 6.041 | 5.999 | 5.964 | 5.858 | 5.803 | 5.628 |
| 5 | 6.608 | 5.786 | 5.409 | 5.192 | 5.050 | 4.950 | 4.876 | 4.818 | 4.772 | 4.735 | 4.619 | 4.558 | 4.365 |
| 6 | 5.987 | 5.143 | 4.757 | 4.534 | 4.387 | 4.284 | 4.207 | 4.147 | 4.099 | 4.060 | 3.938 | 3.874 | 3.669 |
| 7 | 5.591 | 4.737 | 4.347 | 4.120 | 3.972 | 3.866 | 3.787 | 3.726 | 3.677 | 3.637 | 3.511 | 3.445 | 3.230 |
| 8 | 5.318 | 4.459 | 4.066 | 3.838 | 3.687 | 3.581 | 3.500 | 3.438 | 3.388 | 3.347 | 3.218 | 3.150 | 2.928 |
| 9 | 5.117 | 4.256 | 3.863 | 3.633 | 3.482 | 3.374 | 3.293 | 3.230 | 3.179 | 3.137 | 3.006 | 2.936 | 2.707 |
| 10 | 4.965 | 4.103 | 3.708 | 3.478 | 3.326 | 3.217 | 3.135 | 3.072 | 3.020 | 2.978 | 2.845 | 2.774 | 2.538 |
| 11 | 4.844 | 3.982 | 3.587 | 3.357 | 3.204 | 3.095 | 3.012 | 2.948 | 2.896 | 2.854 | 2.719 | 2.646 | 2.404 |
| 12 | 4.747 | 3.885 | 3.490 | 3.259 | 3.106 | 2.996 | 2.913 | 2.849 | 2.796 | 2.753 | 2.617 | 2.544 | 2.296 |
| 13 | 4.667 | 3.806 | 3.411 | 3.179 | 3.025 | 2.915 | 2.832 | 2.767 | 2.714 | 2.671 | 2.533 | 2.459 | 2.206 |
| 14 | 4.600 | 3.739 | 3.344 | 3.112 | 2.958 | 2.848 | 2.764 | 2.699 | 2.646 | 2.602 | 2.463 | 2.388 | 2.131 |
| 15 | 4.543 | 3.682 | 3.287 | 3.056 | 2.901 | 2.790 | 2.707 | 2.641 | 2.588 | 2.544 | 2.403 | 2.328 | 2.066 |
| 16 | 4.494 | 3.634 | 3.239 | 3.007 | 2.852 | 2.741 | 2.657 | 2.591 | 2.538 | 2.494 | 2.352 | 2.276 | 2.010 |
| 17 | 4.451 | 3.592 | 3.197 | 2.965 | 2.810 | 2.699 | 2.614 | 2.548 | 2.494 | 2.450 | 2.308 | 2.230 | 1.960 |
| 18 | 4.414 | 3.555 | 3.160 | 2.928 | 2.773 | 2.661 | 2.577 | 2.510 | 2.456 | 2.412 | 2.269 | 2.191 | 1.917 |
| 19 | 4.381 | 3.522 | 3.127 | 2.895 | 2.740 | 2.628 | 2.544 | 2.477 | 2.423 | 2.378 | 2.234 | 2.155 | 1.878 |
| 20 | 4.351 | 3.493 | 3.098 | 2.866 | 2.711 | 2.599 | 2.514 | 2.447 | 2.393 | 2.348 | 2.203 | 2.124 | 1.843 |
| $\infty$ | 3.842 | 2.996 | 2.605 | 2.372 | 2.214 | 2.099 | 2.010 | 1.938 | 1.880 | 1.831 | 1.666 | 1.570 | 1.000 |

Table $2$: Critical Values of F for a Two-Tailed F-Test

| $\nu_{denom}$ / $\nu_{num}$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 15 | 20 | $\infty$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 647.8 | 799.5 | 864.2 | 899.6 | 921.8 | 937.1 | 948.2 | 956.7 | 963.3 | 968.6 | 984.9 | 993.1 | 1018 |
| 2 | 38.51 | 39.00 | 39.17 | 39.25 | 39.30 | 39.33 | 39.36 | 39.37 | 39.39 | 39.40 | 39.43 | 39.45 | 39.50 |
| 3 | 17.44 | 16.04 | 15.44 | 15.10 | 14.88 | 14.73 | 14.62 | 14.54 | 14.47 | 14.42 | 14.25 | 14.17 | 13.90 |
| 4 | 12.22 | 10.65 | 9.979 | 9.605 | 9.364 | 9.197 | 9.074 | 8.980 | 8.905 | 8.844 | 8.657 | 8.560 | 8.257 |
| 5 | 10.01 | 8.434 | 7.764 | 7.388 | 7.146 | 6.978 | 6.853 | 6.757 | 6.681 | 6.619 | 6.428 | 6.329 | 6.015 |
| 6 | 8.813 | 7.260 | 6.599 | 6.227 | 5.988 | 5.820 | 5.695 | 5.600 | 5.523 | 5.461 | 5.269 | 5.168 | 4.894 |
| 7 | 8.073 | 6.542 | 5.890 | 5.523 | 5.285 | 5.119 | 4.995 | 4.899 | 4.823 | 4.761 | 4.568 | 4.467 | 4.142 |
| 8 | 7.571 | 6.059 | 5.416 | 5.053 | 4.817 | 4.652 | 4.529 | 4.433 | 4.357 | 4.295 | 4.101 | 3.999 | 3.670 |
| 9 | 7.209 | 5.715 | 5.078 | 4.718 | 4.484 | 4.320 | 4.197 | 4.102 | 4.026 | 3.964 | 3.769 | 3.667 | 3.333 |
| 10 | 6.937 | 5.456 | 4.826 | 4.468 | 4.236 | 4.072 | 3.950 | 3.855 | 3.779 | 3.717 | 3.522 | 3.419 | 3.080 |
| 11 | 6.724 | 5.256 | 4.630 | 4.275 | 4.044 | 3.881 | 3.759 | 3.664 | 3.588 | 3.526 | 3.330 | 3.226 | 2.883 |
| 12 | 6.554 | 5.096 | 4.474 | 4.121 | 3.891 | 3.728 | 3.607 | 3.512 | 3.436 | 3.374 | 3.177 | 3.073 | 2.725 |
| 13 | 6.414 | 4.965 | 4.347 | 3.996 | 3.767 | 3.604 | 3.483 | 3.388 | 3.312 | 3.250 | 3.053 | 2.948 | 2.596 |
| 14 | 6.298 | 4.857 | 4.242 | 3.892 | 3.663 | 3.501 | 3.380 | 3.285 | 3.209 | 3.147 | 2.949 | 2.844 | 2.487 |
| 15 | 6.200 | 4.765 | 4.153 | 3.804 | 3.576 | 3.415 | 3.293 | 3.199 | 3.123 | 3.060 | 2.862 | 2.756 | 2.395 |
| 16 | 6.115 | 4.687 | 4.077 | 3.729 | 3.502 | 3.341 | 3.219 | 3.125 | 3.049 | 2.986 | 2.788 | 2.681 | 2.316 |
| 17 | 6.042 | 4.619 | 4.011 | 3.665 | 3.438 | 3.277 | 3.156 | 3.061 | 2.985 | 2.922 | 2.723 | 2.616 | 2.247 |
| 18 | 5.978 | 4.560 | 3.954 | 3.608 | 3.382 | 3.221 | 3.100 | 3.005 | 2.929 | 2.866 | 2.667 | 2.559 | 2.187 |
| 19 | 5.922 | 4.508 | 3.903 | 3.559 | 3.333 | 3.172 | 3.051 | 2.956 | 2.880 | 2.817 | 2.617 | 2.509 | 2.133 |
| 20 | 5.871 | 4.461 | 3.859 | 3.515 | 3.289 | 3.128 | 3.007 | 2.913 | 2.837 | 2.774 | 2.573 | 2.464 | 2.085 |
| $\infty$ | 5.024 | 3.689 | 3.116 | 2.786 | 2.567 | 2.408 | 2.288 | 2.192 | 2.114 | 2.048 | 1.833 | 1.708 | 1.000 |

35.05: Critical Values for Dixon's Q-Test

The following table provides critical values for $Q(\alpha, n)$, where $\alpha$ is the probability of incorrectly rejecting the suspected outlier and $n$ is the number of samples in the data set. There are several versions of Dixon's Q-Test, each of which calculates a value for Qij where i is the number of suspected outliers on one end of the data set and j is the number of suspected outliers on the opposite end of the data set. The critical values for Q here are for a single outlier, Q10, where

$Q_\text{exp} = Q_{10} = \frac {|\text{outlier's value} - \text{nearest value}|} {\text{largest value} - \text{smallest value}} \nonumber$

The suspected outlier is rejected if Qexp is greater than $Q(\alpha, n)$. For additional information consult Rorabacher, D. B. "Statistical Treatment for Rejection of Deviant Values: Critical Values of Dixon's 'Q' Parameter and Related Subrange Ratios at the 95% Confidence Level," Anal. Chem. 1991, 63, 139–146.

Table $1$: Critical Values for Dixon's Q-Test

| n / $\alpha$ | 0.1 | 0.05 | 0.04 | 0.02 | 0.01 |
|---|---|---|---|---|---|
| 3 | 0.941 | 0.970 | 0.976 | 0.988 | 0.994 |
| 4 | 0.765 | 0.829 | 0.846 | 0.889 | 0.926 |
| 5 | 0.642 | 0.710 | 0.729 | 0.780 | 0.821 |
| 6 | 0.560 | 0.625 | 0.644 | 0.698 | 0.740 |
| 7 | 0.507 | 0.568 | 0.586 | 0.637 | 0.680 |
| 8 | 0.468 | 0.526 | 0.543 | 0.590 | 0.634 |
| 9 | 0.437 | 0.493 | 0.510 | 0.555 | 0.598 |
| 10 | 0.412 | 0.466 | 0.483 | 0.527 | 0.568 |

35.06: Critical Values for Grubb's Test

The following table provides critical values for $G(\alpha, n)$, where $\alpha$ is the probability of incorrectly rejecting the suspected outlier and n is the number of samples in the data set. There are several versions of Grubb's Test, each of which calculates a value for Gij where i is the number of suspected outliers on one end of the data set and j is the number of suspected outliers on the opposite end of the data set. The critical values for G given here are for a single outlier, G10, where

$G_\text{exp} = G_{10} = \frac {|X_{out} - \overline{X}|} {s} \nonumber$

The suspected outlier is rejected if Gexp is greater than $G(\alpha, n)$.

Table $1$: Critical Values for the Grubb's Test

| n / $\alpha$ | 0.05 | 0.01 |
|---|---|---|
| 3 | 1.155 | 1.155 |
| 4 | 1.481 | 1.496 |
| 5 | 1.715 | 1.764 |
| 6 | 1.887 | 1.973 |
| 7 | 2.020 | 2.139 |
| 8 | 2.126 | 2.274 |
| 9 | 2.215 | 2.387 |
| 10 | 2.290 | 2.482 |
| 11 | 2.355 | 2.564 |
| 12 | 2.412 | 2.636 |
| 13 | 2.462 | 2.699 |
| 14 | 2.507 | 2.755 |
| 15 | 2.549 | 2.806 |
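The critical values in these appendices also are available from R's quantile functions; for example, qf() returns critical values for the F-test, a minimal sketch of which follows.

```r
# one-tailed F(0.05, 10, 10): the upper 0.05 tail
qf(0.05, df1 = 10, df2 = 10, lower.tail = FALSE)    # 2.978

# two-tailed F(0.05, 10, 10): the upper 0.025 tail
qf(0.025, df1 = 10, df2 = 10, lower.tail = FALSE)   # 3.717
```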
Careful measurements on the metal–ligand complex Fe(SCN)2+ suggest its stability decreases in the presence of inert ions [Lister, M. W.; Rivington, D. E. Can. J. Chem. 1955, 33, 1572–1590]. We can demonstrate this by adding an inert salt to an equilibrium mixture of Fe3+ and $\text{SCN}^-$. Figure 35.7.1a shows the result of mixing together equal volumes of 1.0 mM FeCl3 and 1.5 mM KSCN, both of which are colorless. The solution's reddish–orange color is due to the formation of Fe(SCN)2+.

$\mathrm{Fe}^{3+}(a q)+\mathrm{SCN}^{-}(a q) \rightleftharpoons \mathrm{Fe}(\mathrm{SCN})^{2+}(a q) \label{6.1}$

Adding 10 g of KNO3 to the solution and stirring to dissolve the solid produces the result shown in Figure 35.7.1b. The solution's lighter color suggests that adding KNO3 shifts reaction \ref{6.1} to the left, decreasing the concentration of Fe(SCN)2+ and increasing the concentrations of Fe3+ and $\text{SCN}^-$. The result is a decrease in the complex's formation constant, K1.

$K_{1}=\frac{\left[\mathrm{Fe}(\mathrm{SCN})^{2+}\right]}{\left[\mathrm{Fe}^{3+}\right]\left[\mathrm{SCN}^{-}\right]} \label{6.2}$

Why should adding an inert electrolyte affect a reaction's equilibrium position? We can explain the effect of KNO3 on the formation of Fe(SCN)2+ if we consider the reaction on a microscopic scale. The solution in Figure 35.7.1b contains a variety of cations and anions: Fe3+, $\text{SCN}^-$, K+, $\text{NO}_3^-$, H3O+, and $\text{OH}^-$. Although the solution is homogeneous, on average, there are slightly more anions in regions near the Fe3+ ions, and slightly more cations in regions near the $\text{SCN}^-$ ions. As shown in Figure 35.7.2, each Fe3+ ion and each $\text{SCN}^-$ ion is surrounded by an ionic atmosphere of opposite charge ($\delta^–$ and $\delta^+$) that partially screens the ions from each other. Because each ion's apparent charge at the edge of its ionic atmosphere is less than its actual charge, the force of attraction between the two ions is smaller. As a result, the formation of Fe(SCN)2+ is slightly less favorable and the formation constant in Equation \ref{6.2} is slightly smaller. Higher concentrations of KNO3 increase $\delta^–$ and $\delta^+$, resulting in even smaller values for the formation constant.

Ionic Strength

To factor the concentration of ions into the formation constant for Fe(SCN)2+, we need a way to express that concentration in a meaningful way. Because both an ion's concentration and its charge are important, we define the solution's ionic strength, $\mu$, as

$\mu=\frac{1}{2} \sum_{i=1}^{n} c_{i} z_{i}^{2} \nonumber$

where ci and zi are the concentration and charge of the ith ion.

Example 35.7.1

Calculate the ionic strength of a solution of 0.10 M NaCl. Repeat the calculation for a solution of 0.10 M Na2SO4.

Solution

The ionic strength for 0.10 M NaCl is

$\mu=\frac{1}{2}\left\{\left[\mathrm{Na}^{+}\right] \times(+1)^{2}+\left[\mathrm{Cl}^{-}\right] \times(-1)^{2}\right\} = \frac{1}{2}\left\{(0.10) \times(+1)^{2}+(0.10) \times(-1)^{2}\right\}=0.10 \ \mathrm{M} \nonumber$

For 0.10 M Na2SO4 the ionic strength is

$\mu=\frac{1}{2}\left\{\left[\mathrm{Na}^{+}\right] \times(+1)^{2}+\left[\mathrm{SO}_{4}^{2-}\right] \times(-2)^{2}\right\} = \frac{1}{2}\left\{(0.20) \times(+1)^{2}+(0.10) \times(-2)^{2}\right\}=0.30 \ \mathrm{M} \nonumber$

In calculating the ionic strengths of these solutions we are ignoring the presence of H3O+ and $\text{OH}^-$, and, in the case of Na2SO4, the presence of $\text{HSO}_4^-$ from the base dissociation reaction of $\text{SO}_4^{2-}$.
In the case of 0.10 M NaCl, the concentrations of H3O+ and $\text{OH}^-$ are each $1.0 \times 10^{-7}$ M, which is significantly smaller than the concentrations of Na+ and $\text{Cl}^-$. Because $\text{SO}_4^{2-}$ is a very weak base (Kb = $1.0 \times 10^{-12}$), the solution is only slightly basic (pH = 7.5), and the concentrations of H3O+, $\text{OH}^-$, and $\text{HSO}_4^-$ are negligible. Although we can ignore the presence of H3O+, $\text{OH}^-$, and $\text{HSO}_4^-$ when we calculate the ionic strength of these two solutions, be aware that an equilibrium reaction can generate ions that might affect the solution's ionic strength.

Note that the unit for ionic strength is molarity, but that a salt's ionic strength need not match its molar concentration. For a 1:1 salt, such as NaCl, ionic strength and molar concentration are identical. The ionic strength of a 2:1 electrolyte, such as Na2SO4, is three times larger than the electrolyte's molar concentration.

Activity and Activity Coefficients

Figure 35.7.1 shows that adding KNO3 to a mixture of Fe3+ and $\text{SCN}^-$ decreases the formation constant for Fe(SCN)2+. This creates a contradiction. Earlier in this chapter we showed that there is a relationship between a reaction's standard-state free energy, ∆Go, and its equilibrium constant, K.

$\triangle G^{\circ}=-R T \ln K \nonumber$

Because a reaction has only one standard-state, its equilibrium constant must be independent of solution conditions. Although ionic strength affects the apparent formation constant for Fe(SCN)2+, reaction \ref{6.1} must have an underlying thermodynamic formation constant that is independent of ionic strength.

The apparent formation constant for Fe(SCN)2+, as shown in Equation \ref{6.2}, is a function of concentrations. In place of concentrations, we define the true thermodynamic equilibrium constant using activities. The activity of species A, aA, is the product of its concentration, [A], and a solution-dependent activity coefficient, $\gamma_A$

$a_{A}=[A] \gamma_{A} \nonumber$

The true thermodynamic formation constant for Fe(SCN)2+, therefore, is

$K_{1}=\frac{a_{\mathrm{Fe}(\mathrm{SCN})^{2+}}}{a_{\mathrm{Fe}^{3+}} \times a_{\mathrm{SCN}^-}}=\frac{\left[\mathrm{Fe}(\mathrm{SCN})^{2+}\right] \gamma_{\mathrm{Fe}(\mathrm{SCN})^{2+}}}{\left[\mathrm{Fe}^{3+}\right] \gamma_{\mathrm{Fe}^{3+}}\left[\mathrm{SCN}^{-}\right] \gamma_{\mathrm{SCN}^{-}}} \nonumber$

Unless otherwise specified, the equilibrium constants in the appendices are thermodynamic equilibrium constants.

A species' activity coefficient corrects for any deviation between its physical concentration and its ideal value. For a gas, a pure solid, a pure liquid, or a non-ionic solute, the activity coefficient is approximately one under most reasonable experimental conditions. For a gas the proper terms are fugacity and fugacity coefficient, instead of activity and activity coefficient. For a reaction that involves only these species, the difference between activity and concentration is negligible. The activity coefficient for an ion, however, depends on the solution's ionic strength, the ion's charge, and the ion's size. It is possible to estimate activity coefficients using the extended Debye-Hückel equation

$\log \gamma_{A}=\frac{-0.51 \times z_{A}^{2} \times \sqrt{\mu}}{1+3.3 \times \alpha_{A} \times \sqrt{\mu}} \label{6.3}$

where zA is the ion's charge, $\alpha_A$ is the hydrated ion's effective diameter in nanometers (Table 35.7.1), $\mu$ is the solution's ionic strength, and 0.51 and 3.3 are constants appropriate for an aqueous solution at 25oC.
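Equation \ref{6.3} is straightforward to script; the following sketch defines two hypothetical helper functions (not from the text) in R and checks them against Example 35.7.1 and the Pb2+ calculation in Example 35.7.2 below.

```r
# ionic strength from vectors of concentrations (M) and charges
ionic_strength <- function(conc, charge) 0.5 * sum(conc * charge^2)

# extended Debye-Huckel estimate of an activity coefficient
gamma_dh <- function(z, alpha, mu) {
  10^(-0.51 * z^2 * sqrt(mu) / (1 + 3.3 * alpha * sqrt(mu)))
}

ionic_strength(conc = c(0.20, 0.10), charge = c(+1, -2))  # 0.30 M for 0.10 M Na2SO4
gamma_dh(z = +2, alpha = 0.45, mu = 0.060)                # 0.431 for Pb2+
```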
A hydrated ion's effective radius is the radius of the ion plus those water molecules closely bound to the ion. The effective radius is greater for smaller, more highly charged ions than it is for larger, less highly charged ions.

Table 35.7.1. Effective Diameters ($\alpha$) for Selected Ions

| ion | effective diameter (nm) |
|---|---|
| H3O+ | 0.9 |
| Li+ | 0.6 |
| Na+, $\text{IO}_3^-$, $\text{HSO}_3^-$, $\text{HCO}_3^-$, $\text{H}_2\text{PO}_4^-$ | 0.45 |
| $\text{OH}^-$, $\text{F}^-$, $\text{SCN}^-$, $\text{HS}^-$, $\text{ClO}_3^-$, $\text{ClO}_4^-$, $\text{MnO}_4^-$ | 0.35 |
| K+, $\text{Cl}^-$, $\text{Br}^-$, $\text{I}^-$, $\text{CN}^-$, $\text{NO}_2^-$, $\text{NO}_3^-$ | 0.3 |
| Cs+, Tl+, Ag+, $\text{NH}_4^+$ | 0.25 |
| Mg2+, Be2+ | 0.8 |
| Ca2+, Cu2+, Zn2+, Sn2+, Mn2+, Fe2+, Ni2+, Co2+ | 0.6 |
| Sr2+, Ba2+, Cd2+, Hg2+, S2– | 0.5 |
| Pb2+, $\text{CO}_3^{2-}$, $\text{SO}_3^{2-}$ | 0.45 |
| $\text{Hg}_2^{2+}$, $\text{SO}_4^{2-}$, $\text{S}_2\text{O}_3^{2-}$, $\text{CrO}_4^{2-}$, $\text{HPO}_4^{2-}$ | 0.40 |
| Al3+, Fe3+, Cr3+ | 0.9 |
| $\text{PO}_4^{3-}$, $\text{Fe(CN)}_6^{3-}$ | 0.4 |
| Zr4+, Ce4+, Sn4+ | 1.1 |
| $\text{Fe(CN)}_6^{4-}$ | 0.5 |

Source: Kielland, J. J. Am. Chem. Soc. 1937, 59, 1675–1678.

Several features of Equation \ref{6.3} deserve our attention. First, as the ionic strength approaches zero an ion's activity coefficient approaches a value of one. In a solution where $\mu = 0$, an ion's activity and its concentration are identical. We can take advantage of this fact to determine a reaction's thermodynamic equilibrium constant by measuring the apparent equilibrium constant for several increasingly smaller ionic strengths and extrapolating back to an ionic strength of zero. Second, an activity coefficient is smaller, and the effect of activity is more important, for an ion with a higher charge and a smaller effective radius. Finally, the extended Debye-Hückel equation provides a reasonable estimate of an ion's activity coefficient when the ionic strength is less than 0.1. Modifications to Equation \ref{6.3} extend the calculation of activity coefficients to higher ionic strengths [Davies, C. W. Ion Association, Butterworth: London, 1962].

Including Activity Coefficients When Solving Equilibrium Problems

Earlier in this chapter we calculated the solubility of Pb(IO3)2 in deionized water, obtaining a result of $4.0 \times 10^{-5}$ mol/L. Because the only significant source of ions is from the solubility reaction, the ionic strength is very low and we can assume that $\gamma \approx 1$ for both Pb2+ and $\text{IO}_3^-$. In calculating the solubility of Pb(IO3)2 in deionized water, we do not need to account for ionic strength. But what if we need to know the solubility of Pb(IO3)2 in a solution that contains other, inert ions? In this case we need to include activity coefficients in our calculation.

Example 35.7.2

Calculate the solubility of Pb(IO3)2 in a matrix of 0.020 M Mg(NO3)2.

Solution

We begin by calculating the solution's ionic strength. Since Pb(IO3)2 is only sparingly soluble, we will assume we can ignore its contribution to the ionic strength; thus

$\mu=\frac{1}{2}\left\{(0.020)(+2)^{2}+(0.040)(-1)^{2}\right\}=0.060 \ \mathrm{M} \nonumber$

Next, we use Equation \ref{6.3} to calculate the activity coefficients for Pb2+ and $\text{IO}_3^-$.
$\log \gamma_{\mathrm{Pb}^{2+}}=\frac{-0.51 \times(+2)^{2} \times \sqrt{0.060}}{1+3.3 \times 0.45 \times \sqrt{0.060}}=-0.366 \nonumber$

$\gamma_{\mathrm{Pb}^{2+}}=0.431 \nonumber$

$\log \gamma_{\mathrm{IO}_{3}^{-}}=\frac{-0.51 \times(-1)^{2} \times \sqrt{0.060}}{1+3.3 \times 0.45 \times \sqrt{0.060}}=-0.0916 \nonumber$

$\gamma_{\mathrm{IO}_{3}^-}=0.810 \nonumber$

Defining the equilibrium concentrations of Pb2+ and $\text{IO}_3^-$ in terms of the variable x

concentrations | Pb(IO3)2 (s) | $\rightleftharpoons$ | Pb2+ (aq) | + | 2$\text{IO}_3^-$ (aq)
initial | solid | | 0 | | 0
change | solid | | +x | | +2x
equilibrium | solid | | x | | 2x

and substituting into the thermodynamic solubility product for Pb(IO3)2 leaves us with

$K_{\mathrm{sp}}=a_{\mathrm{Pb}^{2+}} \times a_{\mathrm{IO}_{3}^-}^{2}=\gamma_{\mathrm{Pb}^{2+}}\left[\mathrm{Pb}^{2+}\right] \times \gamma_{\mathrm{IO}_3^-}^{2}\left[\mathrm{IO}_{3}^{-}\right]^{2}=2.5 \times 10^{-13} \nonumber$

$K_{\mathrm{sp}}=(0.431)(x)(0.810)^{2}(2 x)^{2}=2.5 \times 10^{-13} \nonumber$

$K_{\mathrm{sp}}=1.131 x^{3}=2.5 \times 10^{-13} \nonumber$

Solving for x gives $6.0 \times 10^{-5}$ and a molar solubility of $6.0 \times 10^{-5}$ mol/L for Pb(IO3)2. If we ignore activity, as we did in our earlier calculation, we report the molar solubility as $4.0 \times 10^{-5}$ mol/L. Failing to account for activity in this case underestimates the molar solubility of Pb(IO3)2 by 33%. The solution’s equilibrium composition is

$\left[\mathrm{Pb}^{2+}\right]=6.0 \times 10^{-5} \ \mathrm{M} \nonumber$
$\left[\mathrm{IO}_{3}^{-}\right]=1.2 \times 10^{-4} \ \mathrm{M} \nonumber$
$\left[\mathrm{Mg}^{2+}\right]=0.020 \ \mathrm{M} \nonumber$
$\left[\mathrm{NO}_{3}^{-}\right]=0.040 \ \mathrm{M} \nonumber$

Because the concentrations of both Pb2+ and $\text{IO}_3^-$ are much smaller than the concentrations of Mg2+ and $\text{NO}_3^-$, our decision to ignore the contribution of Pb2+ and $\text{IO}_3^-$ to the ionic strength is reasonable.

How do we handle the calculation if we cannot ignore the concentrations of Pb2+ and $\text{IO}_3^-$ when calculating the ionic strength? One approach is to use the method of successive approximations. First, we recalculate the ionic strength using the concentrations of all ions, including Pb2+ and $\text{IO}_3^-$. Next, we recalculate the activity coefficients for Pb2+ and $\text{IO}_3^-$ using this new ionic strength and then recalculate the molar solubility. We continue this cycle until two successive calculations yield the same molar solubility within an acceptable margin of error. (A short sketch of this iterative cycle appears at the end of this section.)

Exercise 35.7.1

Calculate the molar solubility of Hg2Cl2 in 0.10 M NaCl, taking into account the effect of ionic strength. Compare your answer to that from Exercise 6.7.2, in which you ignored the effect of ionic strength.

Answer

We begin by calculating the solution’s ionic strength. Because NaCl is a 1:1 ionic salt, the ionic strength is the same as the concentration of NaCl; thus $\mu$ = 0.10 M. This assumes, of course, that we can ignore the contributions of $\text{Hg}_2^{2+}$ and Cl– from the solubility of Hg2Cl2. Next we use Equation \ref{6.3} to calculate the activity coefficients for $\text{Hg}_2^{2+}$ and Cl–.
$\log \gamma_{\mathrm{Hg}_{2}^{2+}}=\frac{-0.51 \times(+2)^{2} \times \sqrt{0.10}}{1+3.3 \times 0.40 \times \sqrt{0.10}}=-0.455 \nonumber$

$\gamma_{\mathrm{Hg}_{2}^{2+}}=0.351 \nonumber$

$\log \gamma_{\mathrm{Cl}^{-}}=\frac{-0.51 \times(-1)^{2} \times \sqrt{0.10}}{1+3.3 \times 0.3 \times \sqrt{0.10}}=-0.12 \nonumber$

$\gamma_{\mathrm{Cl}^-}=0.75 \nonumber$

Defining the equilibrium concentrations of $\text{Hg}_2^{2+}$ and Cl– in terms of the variable x

concentrations | Hg2Cl2 (s) | $\rightleftharpoons$ | $\text{Hg}_2^{2+}$ (aq) | + | 2Cl– (aq)
initial | solid | | 0 | | 0.10
change | solid | | +x | | +2x
equilibrium | solid | | x | | 0.10 + 2x

and substituting into the thermodynamic solubility product for Hg2Cl2 leaves us with

$K_{\mathrm{sp}}=a_{\mathrm{Hg}_{2}^{2+}}\left(a_{\mathrm{Cl}^-}\right)^{2} = \gamma_{\mathrm{Hg}_{2}^{2+}}\left[\mathrm{Hg}_{2}^{2+}\right]\left(\gamma_{\mathrm{Cl}^{-}}\right)^{2}\left[\mathrm{Cl}^{-}\right]^{2}=1.2 \times 10^{-18} \nonumber$

Because the value of x likely is small, let’s simplify this equation by replacing 0.10 + 2x with 0.10

$(0.351)(x)(0.75)^{2}(0.10)^{2}=1.2 \times 10^{-18} \nonumber$

Solving for x gives its value as $6.1 \times 10^{-16}$. Because x is the concentration of $\text{Hg}_2^{2+}$ and 2x is the additional concentration of Cl–, our decision to ignore their contributions to the ionic strength is reasonable. The molar solubility of Hg2Cl2 in 0.10 M NaCl is $6.1 \times 10^{-16}$ mol/L. In Exercise 6.7.2, where we ignored ionic strength, we determined that the molar solubility of Hg2Cl2 is $1.2 \times 10^{-16}$ mol/L, a result that is 5× smaller than its actual value.

As Example 35.7.2 and Exercise 35.7.1 show, failing to correct for the effect of ionic strength can lead to a significant error in an equilibrium calculation. Nevertheless, it is not unusual to ignore activities and to assume that the equilibrium constant is expressed in terms of concentrations. There is a practical reason for this: in an analysis we rarely know the exact composition, much less the ionic strength, of aqueous samples or of solid samples brought into solution. Equilibrium calculations are a useful guide when we develop an analytical method; however, it is only when we complete an analysis and evaluate the results that we can judge whether our theory matches reality. In the end, work in the laboratory is the most critical step in developing a reliable analytical method.

This is a good place to revisit the meaning of pH. In Chapter 2 we defined pH as

$\mathrm{pH}=-\log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \nonumber$

Now we see that the correct definition is

$\mathrm{pH}=-\log a_{\mathrm{H}_{3} \mathrm{O}^{+}} = -\log \gamma_{\mathrm{H}_{3} \mathrm{O}^{+}}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \nonumber$

Failing to account for the effect of ionic strength can lead to a significant error in the reported concentration of H3O+. For example, if the pH of a solution is 7.00 and the activity coefficient for H3O+ is 0.90, then the concentration of H3O+ is $1.11 \times 10^{-7}$ M, not $1.00 \times 10^{-7}$ M, an error of +11%. Fortunately, when we develop and carry out an analytical method, we are more interested in controlling pH than in calculating [H3O+]. As a result, the difference between the two definitions of pH rarely is of significant concern.
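The method of successive approximations described above is easy to automate. Below is a minimal Python sketch of the iterative cycle for Example 35.7.2 (Pb(IO3)2 in 0.020 M Mg(NO3)2); the Ksp and $\alpha$ values come from the text, while the loop structure is an illustration added here, not code from the original.

```python
import math

KSP = 2.5e-13                              # thermodynamic Ksp for Pb(IO3)2 (from the text)
BACKGROUND = [(0.020, +2), (0.040, -1)]    # Mg2+ and NO3- from the matrix

def gamma(z, alpha_nm, mu):
    """Extended Debye-Hueckel activity coefficient (aqueous, 25 C)."""
    return 10 ** ((-0.51 * z**2 * math.sqrt(mu)) /
                  (1 + 3.3 * alpha_nm * math.sqrt(mu)))

x = 0.0   # current estimate of the molar solubility
for _ in range(50):
    # ionic strength from the matrix ions plus Pb2+ (x) and IO3- (2x)
    ions = BACKGROUND + [(x, +2), (2 * x, -1)]
    mu = 0.5 * sum(c * z**2 for c, z in ions)
    g_pb = gamma(+2, 0.45, mu)    # alpha(Pb2+) = 0.45 nm (Table 35.7.1)
    g_io3 = gamma(-1, 0.45, mu)   # alpha(IO3-) = 0.45 nm
    # Ksp = g_pb * x * (g_io3 * 2x)^2  ->  solve for x
    x_new = (KSP / (4 * g_pb * g_io3**2)) ** (1 / 3)
    if abs(x_new - x) < 1e-12:    # two successive cycles agree
        break
    x = x_new

print(f"molar solubility of Pb(IO3)2 = {x:.2e} mol/L")   # about 6.0e-05
```

Because Pb2+ and IO3– contribute almost nothing to the ionic strength here, the loop converges after the first couple of cycles; in a matrix where the analyte's own ions matter, the same code simply takes a few more iterations.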
Standard/Formal Reduction Potentials

The following table provides E° and E°′ values for selected reduction reactions. Values are from the following sources (primarily the first two):

• Bard, A. J.; Parsons, R.; Jordan, J., eds. Standard Potentials in Aqueous Solution, Dekker: New York, 1985
• Milazzo, G.; Caroli, S.; Sharma, V. K. Tables of Standard Electrode Potentials, Wiley: London, 1978
• Swift, E. H.; Butler, E. A. Quantitative Measurements and Chemical Equilibria, Freeman: New York, 1972
• Bratsch, S. G. "Standard Electrode Potentials and Temperature Coefficients in Water at 298.15 K," J. Phys. Chem. Ref. Data, 1989, 18, 1–21
• Latimer, W. M. Oxidation Potentials, 2nd ed., Prentice-Hall: Englewood Cliffs, NJ, 1952

Solids, gases, and liquids are identified; all other species are aqueous. Reduction reactions in acidic solution are written using H+ in place of H3O+. You may rewrite a reaction by replacing H+ with H3O+ and adding to the opposite side of the reaction one molecule of H2O per H+; thus

H3AsO4 + 2H+ + 2e– $\rightleftharpoons$ HAsO2 + 2H2O

becomes

H3AsO4 + 2H3O+ + 2e– $\rightleftharpoons$ HAsO2 + 4H2O

Conditions for formal potentials (E°′) are listed next to the potential. For most of the reduction half-reactions gathered here, there are minor differences in the values provided by the references above. In most cases these differences are small and will not affect calculations. In a few cases the differences are significant and the user may find discrepancies in calculations. For example, Bard, Parsons, and Jordan report an E° value of –1.285 V for

$\text{Zn(OH)}_4^{2-} + 2e^- \rightleftharpoons \text{Zn}(s) + 4\text{OH}^-\nonumber$

while Milazzo, Caroli, and Sharma report the value as –1.214 V, Swift reports the value as –1.22 V, Bratsch reports the value as –1.199 V, and Latimer reports the value as –1.216 V.
Aluminum
$\text{Al}^{3+} + 3e^- \rightleftharpoons \text{Al}(s)$: E° = –1.676 V
$\text{Al(OH)}_4^- + 3e^- \rightleftharpoons \text{Al}(s) + 4\text{OH}^-$: E° = –2.310 V
$\text{AlF}_6^{3-} + 3e^- \rightleftharpoons \text{Al}(s) + 6\text{F}^-$: E° = –2.07 V

Antimony
$\text{Sb}(s) + 3\text{H}^+ + 3e^- \rightleftharpoons \text{SbH}_3(g)$: E° = –0.510 V
$\text{Sb}_2\text{O}_5 + 6\text{H}^+ + 4e^- \rightleftharpoons 2\text{SbO}^+ + 3\text{H}_2\text{O}(l)$: E° = 0.605 V
$\text{SbO}^+ + 2\text{H}^+ + 3e^- \rightleftharpoons \text{Sb}(s) + \text{H}_2\text{O}(l)$: E° = 0.212 V

Arsenic
$\text{As}(s) + 3\text{H}^+ + 3e^- \rightleftharpoons \text{AsH}_3(g)$: E° = –0.225 V
$\text{H}_3\text{AsO}_4 + 2\text{H}^+ + 2e^- \rightleftharpoons \text{HAsO}_2 + 2\text{H}_2\text{O}(l)$: E° = 0.560 V
$\text{HAsO}_2 + 3\text{H}^+ + 3e^- \rightleftharpoons \text{As}(s) + 2\text{H}_2\text{O}(l)$: E° = 0.240 V

Barium
$\text{Ba}^{2+} + 2e^- \rightleftharpoons \text{Ba}(s)$: E° = –2.92 V
$\text{BaO}(s) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{Ba}(s) + \text{H}_2\text{O}(l)$: E° = –2.166 V

Beryllium
$\text{Be}^{2+} + 2e^- \rightleftharpoons \text{Be}(s)$: E° = –1.99 V

Bismuth
$\text{Bi}^{3+} + 3e^- \rightleftharpoons \text{Bi}(s)$: E° = 0.317 V
$\text{BiCl}_4^- + 3e^- \rightleftharpoons \text{Bi}(s) + 4\text{Cl}^-$: E° = 0.199 V

Boron
$\text{B(OH)}_3 + 3\text{H}^+ + 3e^- \rightleftharpoons \text{B}(s) + 3\text{H}_2\text{O}(l)$: E° = –0.890 V
$\text{B(OH)}_4^- + 3e^- \rightleftharpoons \text{B}(s) + 4\text{OH}^-$: E° = –1.811 V

Bromine
$\text{Br}_2(l) + 2e^- \rightleftharpoons 2\text{Br}^-$: E° = 1.087 V
$\text{HOBr} + \text{H}^+ + 2e^- \rightleftharpoons \text{Br}^- + \text{H}_2\text{O}(l)$: E° = 1.341 V
$\text{HOBr} + \text{H}^+ + e^- \rightleftharpoons \frac{1}{2} \text{Br}_2 + \text{H}_2\text{O}(l)$: E° = 1.604 V
$\text{BrO}^- + \text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{Br}^- + 2\text{OH}^-$: E°′ = 0.76 V in 1 M NaOH
$\text{BrO}_3^- + 6\text{H}^+ + 5e^- \rightleftharpoons \frac{1}{2} \text{Br}_2(l) + 3\text{H}_2\text{O}(l)$: E° = 1.5 V
$\text{BrO}_3^- + 6\text{H}^+ + 6e^- \rightleftharpoons \text{Br}^- + 3\text{H}_2\text{O}(l)$: E° = 1.478 V

Cadmium
$\text{Cd}^{2+} + 2e^- \rightleftharpoons \text{Cd}(s)$: E° = –0.4030 V
$\text{Cd(CN)}_4^{2-} + 2e^- \rightleftharpoons \text{Cd}(s) + 4\text{CN}^-$: E° = –0.943 V
$\text{Cd(NH}_3)_4^{2+} + 2e^- \rightleftharpoons \text{Cd}(s) + 4\text{NH}_3$: E° = –0.622 V

Calcium
$\text{Ca}^{2+} + 2e^- \rightleftharpoons \text{Ca}(s)$: E° = –2.84 V

Carbon
$\text{CO}_2(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{CO}(g) + \text{H}_2\text{O}(l)$: E° = –0.106 V
$\text{CO}_2(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{HCO}_2\text{H}$: E° = –0.20 V
$2\text{CO}_2(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{C}_2\text{O}_4$: E° = –0.481 V
$\text{HCHO} + 2\text{H}^+ + 2e^- \rightleftharpoons \text{CH}_3\text{OH}$: E° = 0.2323 V

Cerium
$\text{Ce}^{3+} + 3e^- \rightleftharpoons \text{Ce}(s)$: E° = –2.336 V
$\text{Ce}^{4+} + e^- \rightleftharpoons \text{Ce}^{3+}$: E° = 1.72 V; E°′ = 1.70 V in 1 M HClO4, 1.44 V in 1 M H2SO4, 1.61 V in 1 M HNO3, 1.28 V in 1 M HCl

Chlorine
$\text{Cl}_2(g) + 2e^- \rightleftharpoons 2\text{Cl}^-$: E° = 1.396 V
$\text{ClO}^- + \text{H}_2\text{O}(l) + e^- \rightleftharpoons \frac{1}{2} \text{Cl}_2(g) + 2\text{OH}^-$: E°′ = 0.421 V in 1 M NaOH
$\text{ClO}^- + \text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{Cl}^- + 2\text{OH}^-$: E°′ = 0.890 V in 1 M NaOH
$\text{HClO}_2 + 2\text{H}^+ + 2e^- \rightleftharpoons \text{HOCl} + \text{H}_2\text{O}(l)$: E° = 1.64 V
$\text{ClO}_3^- + 2\text{H}^+ + e^- \rightleftharpoons \text{ClO}_2(g) + \text{H}_2\text{O}(l)$: E° = 1.175 V
$\text{ClO}_3^- + 3\text{H}^+ + 2e^- \rightleftharpoons \text{HClO}_2 + \text{H}_2\text{O}(l)$: E° = 1.181 V
$\text{ClO}_4^- + 2\text{H}^+ + 2e^- \rightleftharpoons \text{ClO}_3^- + \text{H}_2\text{O}(l)$: E° = 1.201 V

Chromium
$\text{Cr}^{3+} + 3e^- \rightleftharpoons \text{Cr}(s)$: E° = –0.424 V
$\text{Cr}^{2+} + 2e^- \rightleftharpoons \text{Cr}(s)$: E° = –0.90 V
$\text{Cr}_2\text{O}_7^{2-} + 14\text{H}^+ + 6e^- \rightleftharpoons 2\text{Cr}^{3+} + 7\text{H}_2\text{O}(l)$: E° = 1.36 V
$\text{CrO}_4^{2-} + 4\text{H}_2\text{O}(l) + 3e^- \rightleftharpoons \text{Cr(OH)}_4^- + 4\text{OH}^-$: E°′ = –0.13 V in 1 M NaOH

Cobalt
$\text{Co}^{2+} + 2e^- \rightleftharpoons \text{Co}(s)$: E° = –0.277 V
$\text{Co}^{3+} + e^- \rightleftharpoons \text{Co}^{2+}$: E° = 1.92 V
$\text{Co(NH}_3)_6^{3+} + e^- \rightleftharpoons \text{Co(NH}_3)_6^{2+}$: E° = 0.1 V
$\text{Co(OH)}_3(s) + e^- \rightleftharpoons \text{Co(OH)}_2(s) + \text{OH}^-$: E° = 0.17 V
$\text{Co(OH)}_2(s) + 2e^- \rightleftharpoons \text{Co}(s) + 2\text{OH}^-$: E° = –0.746 V

Copper
$\text{Cu}^+ + e^- \rightleftharpoons \text{Cu}(s)$: E° = 0.520 V
$\text{Cu}^{2+} + e^- \rightleftharpoons \text{Cu}^+$: E° = 0.159 V
$\text{Cu}^{2+} + 2e^- \rightleftharpoons \text{Cu}(s)$: E° = 0.3419 V
$\text{Cu}^{2+} + \text{I}^- + e^- \rightleftharpoons \text{CuI}(s)$: E° = 0.86 V
$\text{Cu}^{2+} + \text{Cl}^- + e^- \rightleftharpoons \text{CuCl}(s)$: E° = 0.559 V

Fluorine
$\text{F}_2(g) + 2\text{H}^+ + 2e^- \rightleftharpoons 2\text{HF}(g)$: E° = 3.053 V
$\text{F}_2(g) + 2e^- \rightleftharpoons 2\text{F}^-$: E° = 2.87 V

Gallium
$\text{Ga}^{3+} + 3e^- \rightleftharpoons \text{Ga}(s)$: E° = –0.529 V

Gold
$\text{Au}^+ + e^- \rightleftharpoons \text{Au}(s)$: E° = 1.83 V
$\text{Au}^{3+} + 2e^- \rightleftharpoons \text{Au}^+$: E° = 1.36 V
$\text{Au}^{3+} + 3e^- \rightleftharpoons \text{Au}(s)$: E° = 1.52 V
$\text{AuCl}_4^- + 3e^- \rightleftharpoons \text{Au}(s) + 4\text{Cl}^-$: E° = 1.002 V

Hydrogen
$2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2(g)$: E° = 0.00000 V
$\text{H}_2\text{O}(l) + e^- \rightleftharpoons \frac{1}{2} \text{H}_2(g) + \text{OH}^-$: E° = –0.828 V

Iodine
$\text{I}_2(s) + 2e^- \rightleftharpoons 2\text{I}^-$: E° = 0.5355 V
$\text{I}_3^- + 2e^- \rightleftharpoons 3\text{I}^-$: E° = 0.536 V
$\text{HIO} + \text{H}^+ + 2e^- \rightleftharpoons \text{I}^- + \text{H}_2\text{O}(l)$: E° = 0.985 V
$\text{IO}_3^- + 6\text{H}^+ + 5e^- \rightleftharpoons \frac{1}{2} \text{I}_2(s) + 3\text{H}_2\text{O}(l)$: E° = 1.195 V
$\text{IO}_3^- + 3\text{H}_2\text{O}(l) + 6e^- \rightleftharpoons \text{I}^- + 6\text{OH}^-$: E° = 0.257 V

Iron
$\text{Fe}^{2+} + 2e^- \rightleftharpoons \text{Fe}(s)$: E° = –0.44 V
$\text{Fe}^{3+} + 3e^- \rightleftharpoons \text{Fe}(s)$: E° = –0.037 V
$\text{Fe}^{3+} + e^- \rightleftharpoons \text{Fe}^{2+}$: E° = 0.771 V; E°′ = 0.70 V in 1 M HCl, 0.767 V in 1 M HClO4, 0.746 V in 1 M HNO3, 0.68 V in 1 M H2SO4, 0.44 V in 0.3 M H3PO4
$\text{Fe(CN)}_6^{3-} + e^- \rightleftharpoons \text{Fe(CN)}_6^{4-}$: E° = 0.356 V
$\text{Fe(phen)}_3^{3+} + e^- \rightleftharpoons \text{Fe(phen)}_3^{2+}$: E° = 1.147 V

Lanthanum
$\text{La}^{3+} + 3e^- \rightleftharpoons \text{La}(s)$: E° = –2.38 V

Lead
$\text{Pb}^{2+} + 2e^- \rightleftharpoons \text{Pb}(s)$: E° = –0.126 V
$\text{PbO}_2(s) + 4\text{H}^+ + 2e^- \rightleftharpoons \text{Pb}^{2+} + 2\text{H}_2\text{O}(l)$: E° = 1.46 V
$\text{PbO}_2(s) + \text{SO}_4^{2-} + 4\text{H}^+ + 2e^- \rightleftharpoons \text{PbSO}_4(s) + 2\text{H}_2\text{O}(l)$: E° = 1.690 V
$\text{PbSO}_4(s) + 2e^- \rightleftharpoons \text{Pb}(s) + \text{SO}_4^{2-}$: E° = –0.356 V

Lithium
$\text{Li}^+ + e^- \rightleftharpoons \text{Li}(s)$: E° = –3.040 V

Magnesium
$\text{Mg}^{2+} + 2e^- \rightleftharpoons \text{Mg}(s)$: E° = –2.356 V
$\text{Mg(OH)}_2(s) + 2e^- \rightleftharpoons \text{Mg}(s) + 2\text{OH}^-$: E° = –2.687 V

Manganese
$\text{Mn}^{2+} + 2e^- \rightleftharpoons \text{Mn}(s)$: E° = –1.17 V
$\text{Mn}^{3+} + e^- \rightleftharpoons \text{Mn}^{2+}$: E° = 1.5 V
$\text{MnO}_2(s) + 4\text{H}^+ + 2e^- \rightleftharpoons \text{Mn}^{2+} + 2\text{H}_2\text{O}(l)$: E° = 1.23 V
$\text{MnO}_4^- + 4\text{H}^+ + 3e^- \rightleftharpoons \text{MnO}_2(s) + 2\text{H}_2\text{O}(l)$: E° = 1.70 V
$\text{MnO}_4^- + 8\text{H}^+ + 5e^- \rightleftharpoons \text{Mn}^{2+} + 4\text{H}_2\text{O}(l)$: E° = 1.51 V
$\text{MnO}_4^- + 2\text{H}_2\text{O}(l) + 3e^- \rightleftharpoons \text{MnO}_2(s) + 4\text{OH}^-$: E° = 0.60 V

Mercury
$\text{Hg}^{2+} + 2e^- \rightleftharpoons \text{Hg}(l)$: E° = 0.8535 V
$2\text{Hg}^{2+} + 2e^- \rightleftharpoons \text{Hg}_2^{2+}$: E° = 0.911 V
$\text{Hg}_2^{2+} + 2e^- \rightleftharpoons 2\text{Hg}(l)$: E° = 0.7960 V
$\text{Hg}_2\text{Cl}_2(s) + 2e^- \rightleftharpoons 2\text{Hg}(l) + 2\text{Cl}^-$: E° = 0.2682 V
$\text{HgO}(s) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{Hg}(l) + \text{H}_2\text{O}(l)$: E° = 0.926 V
$\text{Hg}_2\text{Br}_2(s) + 2e^- \rightleftharpoons 2\text{Hg}(l) + 2\text{Br}^-$: E° = 0.1392 V
$\text{Hg}_2\text{I}_2(s) + 2e^- \rightleftharpoons 2\text{Hg}(l) + 2\text{I}^-$: E° = –0.0405 V

Molybdenum
$\text{Mo}^{3+} + 3e^- \rightleftharpoons \text{Mo}(s)$: E° = –0.2 V
$\text{MoO}_2(s) + 4\text{H}^+ + 4e^- \rightleftharpoons \text{Mo}(s) + 2\text{H}_2\text{O}(l)$: E° = –0.152 V
$\text{MoO}_4^{2-} + 4\text{H}_2\text{O}(l) + 6e^- \rightleftharpoons \text{Mo}(s) + 8\text{OH}^-$: E° = –0.913 V

Nickel
$\text{Ni}^{2+} + 2e^- \rightleftharpoons \text{Ni}(s)$: E° = –0.257 V
$\text{Ni(OH)}_2(s) + 2e^- \rightleftharpoons \text{Ni}(s) + 2\text{OH}^-$: E° = –0.72 V
$\text{Ni(NH}_3)_6^{2+} + 2e^- \rightleftharpoons \text{Ni}(s) + 6\text{NH}_3$: E° = –0.49 V

Nitrogen
$\text{N}_2(g) + 5\text{H}^+ + 4e^- \rightleftharpoons \text{N}_2\text{H}_5^+$: E° = –0.23 V
$\text{N}_2\text{O}(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{N}_2(g) + \text{H}_2\text{O}(l)$: E° = 1.77 V
$2\text{NO}(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{N}_2\text{O}(g) + \text{H}_2\text{O}(l)$: E° = 1.59 V
$\text{HNO}_2 + \text{H}^+ + e^- \rightleftharpoons \text{NO}(g) + \text{H}_2\text{O}(l)$: E° = 0.996 V
$2\text{HNO}_2 + 4\text{H}^+ + 4e^- \rightleftharpoons \text{N}_2\text{O}(g) + 3\text{H}_2\text{O}(l)$: E° = 1.297 V
$\text{NO}_3^- + 3\text{H}^+ + 2e^- \rightleftharpoons \text{HNO}_2 + \text{H}_2\text{O}(l)$: E° = 0.94 V

Oxygen
$\text{O}_2(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{O}_2$: E° = 0.695 V
$\text{O}_2(g) + 4\text{H}^+ + 4e^- \rightleftharpoons 2\text{H}_2\text{O}(l)$: E° = 1.229 V
$\text{H}_2\text{O}_2 + 2\text{H}^+ + 2e^- \rightleftharpoons 2\text{H}_2\text{O}(l)$: E° = 1.763 V
$\text{O}_2(g) + 2\text{H}_2\text{O}(l) + 4e^- \rightleftharpoons 4\text{OH}^-$: E° = 0.401 V
$\text{O}_3(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{O}_2(g) + \text{H}_2\text{O}(l)$: E° = 2.07 V

Phosphorus
$\text{P}(s, white) + 3\text{H}^+ + 3e^- \rightleftharpoons \text{PH}_3(g)$: E° = –0.063 V
$\text{H}_3\text{PO}_3 + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_3\text{PO}_2 + \text{H}_2\text{O}(l)$: E° = –0.499 V
$\text{H}_3\text{PO}_4 + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_3\text{PO}_3 + \text{H}_2\text{O}(l)$: E° = –0.276 V

Platinum
$\text{Pt}^{2+} + 2e^- \rightleftharpoons \text{Pt}(s)$: E° = 1.188 V
$\text{PtCl}_4^{2-} + 2e^- \rightleftharpoons \text{Pt}(s) + 4\text{Cl}^-$: E° = 0.758 V

Potassium
$\text{K}^+ + e^- \rightleftharpoons \text{K}(s)$: E° = –2.924 V

Ruthenium
$\text{Ru}^{3+} + 3e^- \rightleftharpoons \text{Ru}(s)$: E° = 0.249 V
$\text{RuO}_2(s) + 4\text{H}^+ + 4e^- \rightleftharpoons \text{Ru}(s) + 2\text{H}_2\text{O}(l)$: E° = 0.68 V
$\text{Ru(NH}_3)_6^{3+} + e^- \rightleftharpoons \text{Ru(NH}_3)_6^{2+}$: E° = 0.10 V
$\text{Ru(CN)}_6^{3-} + e^- \rightleftharpoons \text{Ru(CN)}_6^{4-}$: E° = 0.86 V

Selenium
$\text{Se}(s) + 2e^- \rightleftharpoons \text{Se}^{2-}$: E°′ = –0.67 V in 1 M NaOH
$\text{Se}(s) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{Se}(g)$: E° = –0.115 V
$\text{H}_2\text{SeO}_3 + 4\text{H}^+ + 4e^- \rightleftharpoons \text{Se}(s) + 3\text{H}_2\text{O}(l)$: E° = 0.74 V
$\text{SeO}_4^{2-} + 4\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{SeO}_3 + \text{H}_2\text{O}(l)$: E° = 1.151 V

Silicon
$\text{SiF}_6^{2-} + 4e^- \rightleftharpoons \text{Si}(s) + 6\text{F}^-$: E° = –1.37 V
$\text{SiO}_2(s) + 4\text{H}^+ + 4e^- \rightleftharpoons \text{Si}(s) + 2\text{H}_2\text{O}(l)$: E° = –0.909 V
$\text{SiO}_2(s) + 8\text{H}^+ + 8e^- \rightleftharpoons \text{SiH}_4(g) + 2\text{H}_2\text{O}(l)$: E° = –0.516 V

Silver
$\text{Ag}^+ + e^- \rightleftharpoons \text{Ag}(s)$: E° = 0.7996 V
$\text{AgBr}(s) + e^- \rightleftharpoons \text{Ag}(s) + \text{Br}^-$: E° = 0.071 V
$\text{Ag}_2\text{C}_2\text{O}_4(s) + 2e^- \rightleftharpoons 2\text{Ag}(s) + \text{C}_2\text{O}_4^{2-}$: E° = 0.47 V
$\text{AgCl}(s) + e^- \rightleftharpoons \text{Ag}(s) + \text{Cl}^-$: E° = 0.2223 V
$\text{AgI}(s) + e^- \rightleftharpoons \text{Ag}(s) + \text{I}^-$: E° = –0.152 V
$\text{Ag}_2\text{S}(s) + 2e^- \rightleftharpoons 2\text{Ag}(s) + \text{S}^{2-}$: E° = –0.71 V
$\text{Ag(NH}_3)_2^+ + e^- \rightleftharpoons \text{Ag}(s) + 2\text{NH}_3$: E° = 0.373 V

Sodium
$\text{Na}^+ + e^- \rightleftharpoons \text{Na}(s)$: E° = –2.713 V

Strontium
$\text{Sr}^{2+} + 2e^- \rightleftharpoons \text{Sr}(s)$: E° = –2.89 V

Sulfur
$\text{S}(s) + 2e^- \rightleftharpoons \text{S}^{2-}$: E° = –0.407 V
$\text{S}(s) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{S}(g)$: E° = 0.144 V
$\text{S}_2\text{O}_6^{2-} + 4\text{H}^+ + 2e^- \rightleftharpoons 2\text{H}_2\text{SO}_3$: E° = 0.569 V
$\text{S}_2\text{O}_8^{2-} + 2e^- \rightleftharpoons 2\text{SO}_4^{2-}$: E° = 1.96 V
$\text{S}_4\text{O}_6^{2-} + 2e^- \rightleftharpoons 2\text{S}_2\text{O}_3^{2-}$: E° = 0.080 V
$2\text{SO}_3^{2-} + 2\text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{S}_2\text{O}_4^{2-} + 4\text{OH}^-$: E° = –1.13 V
$2\text{SO}_3^{2-} + 3\text{H}_2\text{O}(l) + 4e^- \rightleftharpoons \text{S}_2\text{O}_3^{2-} + 6\text{OH}^-$: E°′ = –0.576 V in 1 M NaOH
$2\text{SO}_4^{2-} + 4\text{H}^+ + 2e^- \rightleftharpoons \text{S}_2\text{O}_6^{2-} + 2\text{H}_2\text{O}(l)$: E° = –0.25 V
$\text{SO}_4^{2-} + \text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{SO}_3^{2-} + 2\text{OH}^-$: E° = –0.936 V
$\text{SO}_4^{2-} + 4\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{SO}_3 + \text{H}_2\text{O}(l)$: E° = 0.172 V

Thallium
$\text{Tl}^{3+} + 2e^- \rightleftharpoons \text{Tl}^+$: E°′ = 1.25 V in 1 M HClO4, 0.77 V in 1 M HCl
$\text{Tl}^{3+} + 3e^- \rightleftharpoons \text{Tl}(s)$: E° = 0.742 V

Tin
$\text{Sn}^{2+} + 2e^- \rightleftharpoons \text{Sn}(s)$: E°′ = –0.19 V in 1 M HCl
$\text{Sn}^{4+} + 2e^- \rightleftharpoons \text{Sn}^{2+}$: E° = 0.154 V; E°′ = 0.139 V in 1 M HCl

Titanium
$\text{Ti}^{2+} + 2e^- \rightleftharpoons \text{Ti}(s)$: E° = –0.163 V
$\text{Ti}^{3+} + e^- \rightleftharpoons \text{Ti}^{2+}$: E° = –0.37 V

Tungsten
$\text{WO}_2(s) + 4\text{H}^+ + 4e^- \rightleftharpoons \text{W}(s) + 2\text{H}_2\text{O}(l)$: E° = –0.119 V
$\text{WO}_3(s) + 6\text{H}^+ + 6e^- \rightleftharpoons \text{W}(s) + 3\text{H}_2\text{O}(l)$: E° = –0.090 V

Uranium
$\text{U}^{3+} + 3e^- \rightleftharpoons \text{U}(s)$: E° = –1.66 V
$\text{U}^{4+} + e^- \rightleftharpoons \text{U}^{3+}$: E° = –0.52 V
$\text{UO}_2^+ + 4\text{H}^+ + e^- \rightleftharpoons \text{U}^{4+} + 2\text{H}_2\text{O}(l)$: E° = 0.27 V
$\text{UO}_2^{2+} + e^- \rightleftharpoons \text{UO}_2^+$: E° = 0.16 V
$\text{UO}_2^{2+} + 4\text{H}^+ + 2e^- \rightleftharpoons \text{U}^{4+} + 2\text{H}_2\text{O}(l)$: E° = 0.327 V

Vanadium
$\text{V}^{2+} + 2e^- \rightleftharpoons \text{V}(s)$: E° = –1.13 V
$\text{V}^{3+} + e^- \rightleftharpoons \text{V}^{2+}$: E° = –0.255 V
$\text{VO}^{2+} + 2\text{H}^+ + e^- \rightleftharpoons \text{V}^{3+} + \text{H}_2\text{O}(l)$: E° = 0.337 V
$\text{VO}_2^{+} + 2\text{H}^+ + e^- \rightleftharpoons \text{VO}^{2+} + \text{H}_2\text{O}(l)$: E° = 1.000 V

Zinc
$\text{Zn}^{2+} + 2e^- \rightleftharpoons \text{Zn}(s)$: E° = –0.7618 V
$\text{Zn(OH)}_4^{2-} + 2e^- \rightleftharpoons \text{Zn}(s) + 4\text{OH}^-$: E° = –1.285 V
$\text{Zn(NH}_3)_4^{2+} + 2e^- \rightleftharpoons \text{Zn}(s) + 4\text{NH}_3$: E° = –1.04 V
$\text{Zn(CN)}_4^{2-} + 2e^- \rightleftharpoons \text{Zn}(s) + 4\text{CN}^-$: E° = –1.34 V

Polarographic Half-Wave Potentials

The following table provides E1/2 values for selected reduction reactions. Values are from Dean, J. A. Analytical Chemistry Handbook, McGraw-Hill: New York, 1995.

reaction | $E_{1/2}$ (volts vs. SCE) | matrix
$\ce{Al^{3+}}(aq) + \ce{3 e-} \ce{<=>} \ce{Al}(s)$ | –0.5 | 0.2 M acetate (pH 4.5–4.7)
$\ce{Cd^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Cd}(s)$ | –0.6 | 0.1 M KCl; 0.050 M H2SO4; 1 M HNO3
$\ce{Cr^{3+}}(aq) + \ce{3 e-} \ce{<=>} \ce{Cr}(s)$ | –0.35 $(+3 \ce{->} +2)$; –1.70 $(+2 \ce{->} 0)$ | 1 M NH4Cl plus 1 M NH3; 1 M NH4+/NH3 buffer (pH 8–9)
$\ce{Co^{3+}}(aq) + \ce{3 e-} \ce{<=>} \ce{Co}(s)$ | –0.5 $(+3 \ce{->} +2)$; –1.3 $(+2 \ce{->} 0)$ | 1 M NH4Cl plus 1 M NH3
$\ce{Co^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Co}(s)$ | –1.03 | 1 M KSCN
$\ce{Cu^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Cu}(s)$ | 0.04; –0.22 | 0.1 M KSCN; 0.1 M NH4ClO4; 1 M Na2SO4; 0.5 M potassium citrate (pH 7.5)
$\ce{Fe^{3+}}(aq) + \ce{3 e-} \ce{<=>} \ce{Fe}(s)$ | –0.17 $(+3 \ce{->} +2)$; –1.52 $(+2 \ce{->} 0)$ | 0.5 M sodium tartrate (pH 5.8)
$\ce{Fe^{3+}}(aq) + \ce{e-} \ce{<=>} \ce{Fe^{2+}}(aq)$ | –0.27 | 0.2 M Na2C2O4 (pH < 7.9)
$\ce{Pb^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Pb}(s)$ | –0.405; –0.435 | 1 M HNO3; 1 M KCl
$\ce{Mn^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Mn}(s)$ | –1.65 | 1 M NH4Cl plus 1 M NH3
$\ce{Ni^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Ni}(s)$ | –0.70; –1.09 | 1 M KSCN; 1 M NH4Cl plus 1 M NH3
$\ce{Zn^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Zn}(s)$ | –0.995; –1.33 | 0.1 M KCl; 1 M NH4Cl plus 1 M NH3
35.9: Recommended Primary Standards

All compounds are of the highest available purity. Metals are cleaned with dilute acid to remove any surface impurities and rinsed with distilled water. Unless otherwise indicated, compounds are dried to a constant weight at 110 °C. Most of these compounds are soluble in dilute acid (1:1 HCl or 1:1 HNO3), with gentle heating if necessary; some of the compounds are water soluble.

element | compound | FW (g/mol) | comments
aluminum | Al metal | 26.982 |
antimony | Sb metal | 121.760 |
antimony | \(\ce{KSbOC4H4O6}\) | 324.92 | prepared by drying \(\ce{KSbC4H4O6 * 1/2H2O}\) at 100 °C and storing in a desiccator
arsenic | As metal | 74.922 |
arsenic | \(\ce{As2O3}\) | 197.84 | toxic
barium | \(\ce{BaCO3}\) | 197.34 | dry at 200 °C for 4 h
bismuth | Bi metal | 208.98 |
boron | \(\ce{H3BO3}\) | 61.83 | do not dry
bromine | KBr | 119.01 |
cadmium | Cd metal | 112.411 |
cadmium | CdO | 128.40 |
calcium | \(\ce{CaCO3}\) | 100.09 |
cerium | Ce metal | 140.116 |
cerium | \(\ce{(NH4)2Ce(NO3)6}\) | 548.23 |
cesium | \(\ce{Cs2CO3}\) | 325.82 |
cesium | \(\ce{Cs2SO4}\) | 361.87 |
chlorine | NaCl | 58.44 |
chromium | Cr metal | 51.996 |
chromium | \(\ce{K2Cr2O7}\) | 294.19 |
cobalt | Co metal | 58.933 |
copper | Cu metal | 63.546 |
copper | CuO | 79.54 |
fluorine | NaF | 41.99 | do not store solutions in glass containers
iodine | KI | 166.00 |
iodine | \(\ce{KIO3}\) | 214.00 |
iron | Fe metal | 55.845 |
lead | Pb metal | 207.2 |
lithium | \(\ce{Li2CO3}\) | 73.89 |
magnesium | Mg metal | 24.305 |
manganese | Mn metal | 54.938 |
mercury | Hg metal | 200.59 |
molybdenum | Mo metal | 95.94 |
nickel | Ni metal | 58.693 |
phosphorus | \(\ce{KH2PO4}\) | 136.09 |
phosphorus | \(\ce{P2O5}\) | 141.94 |
potassium | KCl | 74.56 |
potassium | \(\ce{K2CO3}\) | 138.21 |
potassium | \(\ce{K2Cr2O7}\) | 294.19 |
potassium | \(\ce{KHC8H4O4}\) | 204.23 |
silicon | Si metal | 28.085 |
silicon | \(\ce{SiO2}\) | 60.08 |
silver | Ag metal | 107.868 |
silver | \(\ce{AgNO3}\) | 169.87 |
sodium | NaCl | 58.44 |
sodium | \(\ce{Na2CO3}\) | 106.00 |
sodium | \(\ce{Na2C2O4}\) | 134.00 |
strontium | \(\ce{SrCO3}\) | 147.63 |
sulfur | elemental S | 32.066 |
sulfur | \(\ce{K2SO4}\) | 174.27 |
sulfur | \(\ce{Na2SO4}\) | 142.04 |
tin | Sn metal | 118.710 |
titanium | Ti metal | 47.867 |
tungsten | W metal | 183.84 |
uranium | U metal | 238.029 |
uranium | \(\ce{U3O8}\) | 842.09 |
vanadium | V metal | 50.942 |
zinc | Zn metal | 65.39 |
zinc | ZnO | 81.37 |

Sources:
• Smith, B. W.; Parsons, M. L. J. Chem. Educ. 1973, 50, 679–681
• Moody, J. R.; Greenberg, R. R.; Pratt, K. W.; Rains, T. C. Anal. Chem. 1988, 60, 1203A–1218A
35.10: Acronyms and Abbreviations

acronym or abbreviation | name
AC | alternating current
ADC | analog-to-digital converter
AES | atomic emission spectroscopy; Auger electron spectroscopy
AFM | atomic force microscopy
AFS | atomic fluorescence spectroscopy
ASV | anodic stripping voltammetry
ATR | attenuated total reflectance
BPC | binary pulse counter
CCD | charge-coupled device
CE | capillary electrophoresis
CEC | capillary electrochromatography
CGE | capillary gel electrophoresis
CI | chemical ionization
CID | charge injection device
COSY | correlation spectroscopy
CZE | capillary zone electrophoresis
CSV | cathodic stripping voltammetry
CV | cyclic voltammetry
DAC | digital-to-analog converter
DAS | diode array spectrometer
DC | direct current
DCP | direct current plasma
DCS | differential centrifugal separation
DL | detection limit
DME | dropping mercury electrode
DPP | differential pulse polarography
DRIFT | diffuse reflectance infrared Fourier transform
DSC | differential scanning calorimetry
DTA | differential thermal analysis
ECD | electron capture detector; electrochemical detector
EDL | electrical double layer
EM | electromagnetic
ESR | electron spin resonance
FAAS | flame atomic absorption spectrometry
FIA | flow-injection analysis
FID | flame ionization detector; free induction decay
FIR | far infrared
FT | Fourier transform
GC | gas chromatography
GC-MS | gas chromatography-mass spectrometry
GDMS | glow-discharge mass spectrometry
HCL | hollow cathode lamp
HETCOR | heteronuclear correlation spectroscopy
HETP | height equivalent of a theoretical plate
HMDE | hanging mercury drop electrode
HPLC | high performance liquid chromatography
ICP | inductively coupled plasma
IEC | ion-exchange chromatography
IR | infrared
IS | internal standard
ISE | ion-selective electrode
LC | liquid chromatography
LOI | limit-of-identification
LOQ | limit-of-quantification
LSV | linear sweep voltammetry
MALDI | matrix-assisted laser desorption ionization
MEKC | micellar electrokinetic capillary chromatography
MS | mass spectrometry
MW | molecular weight
m/z | mass-to-charge ratio
NAA | neutron activation analysis
NHE | normal hydrogen electrode
NIR | near infrared
NMR | nuclear magnetic resonance
NOESY | nuclear Overhauser and exchange spectroscopy
NP | normal polarography
NPP | normal pulse polarography
ORD | optical rotary dispersion
PDA | photodiode array
PLOT | porous-layer open tubular column
PP | pulse polarography
RPC | reverse phase chromatography
RRS | resonance Raman spectroscopy
SCE | saturated calomel electrode
SCOT | support-coated open tubular column
SEC | size-exclusion chromatography
SEM | scanning electron microscopy
SERS | surface-enhanced Raman spectroscopy
SFC | supercritical fluid chromatography
SHE | standard hydrogen electrode
SIMS | secondary ion mass spectrometry
SMDE | static mercury drop electrode
S/N (or SNR) | signal-to-noise ratio
SSMS | spark-source mass spectrometry
STM | scanning tunneling microscopy
TCD | thermal conductivity detector
TGA | thermal gravimetric analysis
TOCSY | total correlation spectroscopy
TOF | time-of-flight
UV | ultraviolet
UV/Vis | ultraviolet/visible
Vis | visible
WCOT | wall-coated open tubular column
XPS | x-ray photoelectron spectroscopy
XRF | x-ray fluorescence
Learning Objectives

After completing this unit the student will be able to:

• Explain what it means to use spectroscopic methods for qualitative and quantitative analysis.
• Identify the terms in and describe deviations to Beer’s Law.
• Describe the effect of changing the slit width and the impact it will have on qualitative and quantitative analyses.
• Qualitatively determine the relative error in absorbance measurements and determine the optimal range for measurement purposes.
• Describe the desirable features of a radiation source.
• Explain the advantages of a dual- versus single-beam spectrophotometer.
• Explain the difference between a 3- and 4-level laser and why it is not possible to have a 2-level laser.
• Compare the output of and advantages of prisms and gratings as dispersing elements.
• Explain how a photomultiplier tube works.
• Explain how an array detector works and describe the advantages of using an array detector.

1: General Background on Molecular Spectroscopy

Molecular spectroscopy relates to the interactions that occur between molecules and electromagnetic radiation. Electromagnetic radiation is a form of radiation in which the electric and magnetic fields simultaneously vary. One well known example of electromagnetic radiation is visible light. Electromagnetic radiation can be characterized by its energy, intensity, frequency and wavelength.

What is the relationship between the energy (E) and frequency ($\nu$) of electromagnetic radiation?

The fundamental discoveries of Max Planck, who explained the emission of light by a blackbody radiator, and Albert Einstein, who explained the observations in the photoelectric effect, led to the realization that the energy of electromagnetic radiation is proportional to its frequency. The proportionality expression can be converted to an equality through the use of Planck’s constant.

$\mathrm{E = h\nu} \nonumber$

What is the relationship between the energy and wavelength ($\lambda$) of electromagnetic radiation?

Using the knowledge that the speed of electromagnetic radiation (c) is the frequency times the wavelength ($\mathrm{c = \lambda\nu}$), we can solve for the frequency and substitute into the expression above to get the following.

$\mathrm{E = \dfrac{hc}{\lambda}} \nonumber$

Therefore the energy of electromagnetic radiation is inversely proportional to the wavelength. Long wavelength electromagnetic radiation will have low energy. Short wavelength electromagnetic radiation will have high energy.

Write the types of radiation observed in the electromagnetic spectrum going from high to low energy. Also include what types of processes occur in atoms or molecules for each type of radiation.

High E, high $\nu$, short $\lambda$:
$\gamma$-rays – nuclear energy transitions
X-rays – inner-shell electron transitions
Ultraviolet – valence electron transitions
Visible – valence electron transitions
Infrared – molecular vibrations
Microwaves – molecular rotations, electron spin transitions
Low E, low $\nu$, long $\lambda$:
Radiofrequency – nuclear spin transitions

Atoms and molecules have the ability to absorb or emit electromagnetic radiation. A species absorbing radiation undergoes a transition from the ground to some higher energy excited state. A species emitting radiation undergoes a transition from a higher energy excited state to a lower energy state. Spectroscopy in analytical chemistry is used in two primary manners: (1) to identify a species and (2) to quantify a species.
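As a quick numerical check of the relationships E = h$\nu$ and E = hc/$\lambda$ introduced above, the following Python sketch (an illustration added to this text, not part of the original) computes the photon energy at a few representative wavelengths:

```python
# Photon energy from wavelength: E = h*c/lambda
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon with the given wavelength (in nm)."""
    return H * C / (wavelength_nm * 1e-9)

for region, wavelength_nm in [("ultraviolet", 250), ("visible", 500), ("infrared", 5000)]:
    e = photon_energy_joules(wavelength_nm)
    print(f"{region:12s} {wavelength_nm:6.0f} nm -> {e:.2e} J")
# Longer wavelength -> lower energy, as the inverse relationship requires.
```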
Identification of a species involves recording the absorption or emission of a species as a function of the frequency or wavelength to obtain a spectrum (the spectrum is a plot of the absorbance or emission intensity as a function of wavelength). The features in the spectrum provide a signature for a molecule that may be used for purposes of identification. The more unique the spectrum for a species, the more useful it is for compound identification. Some spectroscopic methods (e.g., NMR spectroscopy) are especially useful for compound identification, whereas others provide spectra that are all rather similar and therefore not as useful. Among methods that provide highly unique spectra, there are some that are readily open to interpretation and structure assignment (e.g., NMR spectra), whereas others (e.g., infrared spectroscopy) are less open to interpretation and structure assignment. Since molecules do exhibit unique infrared spectra, an alternative means of compound identification is to use a computer to compare the spectrum of the unknown compound to a library of spectra of known compounds and identify the best match. In this case, identification is only possible if the spectrum of the unknown compound is in the library. Quantification of a species using a spectroscopic method involves measuring the magnitude of the absorbance or intensity of the emission and relating that to the concentration. At this point, we will focus on the use of absorbance measurements for quantification. Consider a sample through which you will send radiation of a particular wavelength as shown in Figure $1$. You measure the power from the radiation source (Po) using a blank solution (a blank is a sample that does not have any of the absorbing species you wish to measure). You then measure the power of radiation that makes it through the sample (P). The ratio P/Po is a measure of how much radiation passed through the sample and is defined as the transmittance (T). $\mathrm{T = \dfrac{P}{P_o} \hspace{20px} and \hspace{20px} \%T = \left(\dfrac{P}{P_o}\right)\times 100} \nonumber$ The higher the transmittance, the more similar P is to Po. The absorbance (A) is defined as: $\mathrm{A = -\log T \textrm{ or } \log\left(\dfrac{P_o}{P}\right).} \nonumber$ The higher the absorbance, the lower the value of P, and the less light that makes it through the sample and to the detector.
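The conversions among P, Po, T, %T, and A defined above are easy to mis-handle, so here is a minimal sketch (my illustration; the power values are made up) of the relationships:

```python
import math

def transmittance(P, P0):
    """T = P/P0, the fraction of incident radiation reaching the detector."""
    return P / P0

def absorbance(P, P0):
    """A = -log10(T) = log10(P0/P)."""
    return -math.log10(transmittance(P, P0))

P0 = 100.0   # power measured with the blank (arbitrary units)
P = 10.0     # power after passing through the sample

T = transmittance(P, P0)
print(f"T = {T:.2f}, %T = {100*T:.0f}%, A = {absorbance(P, P0):.2f}")
# T = 0.10, %T = 10%, A = 1.00 -- only 10% of the radiation reaches the detector
```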
What factors influence the absorbance that you would measure for a sample? Is each factor directly or inversely proportional to the absorbance?

One factor that influences the absorbance of a sample is the concentration (c). The expectation would be that, as the concentration goes up, more radiation is absorbed and the absorbance goes up. Therefore, the absorbance is directly proportional to the concentration.

A second factor is the path length (b). The longer the path length, the more molecules there are in the path of the beam of radiation, and the more radiation is absorbed. Therefore, the path length is directly proportional to the absorbance.

The third factor is the molar absorptivity ($\varepsilon$), the proportionality constant that applies when the concentration is reported in moles/liter and the path length is reported in centimeters. In some fields of work, it is more common to refer to this as the extinction coefficient. When we use a spectroscopic method to measure the concentration of a sample, we select out a specific wavelength of radiation to shine on the sample. As you likely know from other experiences, a particular chemical species absorbs some wavelengths of radiation and not others. The molar absorptivity is a measure of how well the species absorbs the particular wavelength of radiation that is being shined on it. The process of absorption of electromagnetic radiation involves the excitation of a species from the ground state to a higher energy excited state. This process is described as an excitation transition, and excitation transitions have probabilities of occurrence. It is appropriate to talk about the degree to which possible energy transitions within a chemical species are allowed. Some transitions are more allowed, or more favorable, than others. Transitions that are highly favorable or highly allowed have high molar absorptivities. Transitions that are only slightly favorable or slightly allowed have low molar absorptivities. The higher the molar absorptivity, the higher the absorbance. Therefore, the molar absorptivity is directly proportional to the absorbance.

If we return to the experiment in which a spectrum (recording the absorbance as a function of wavelength) is recorded for a compound for the purpose of identification, the concentration and path length are constant at every wavelength of the spectrum. The only difference is the molar absorptivities at the different wavelengths, so a spectrum represents a plot of the relative molar absorptivity of a species as a function of wavelength.

Since the concentration, path length and molar absorptivity are all directly proportional to the absorbance, we can write the following equation, which is known as the Beer-Lambert law (often referred to as Beer’s Law), to show this relationship.

$\mathrm{A = \varepsilon bc} \nonumber$

Note that Beer’s Law is the equation for a straight line with a y-intercept of zero.

If you wanted to measure the concentration of a particular species in a sample, describe the procedure you would use to do so.

Measuring the concentration of a species in a sample involves a multistep process. One important consideration is the wavelength of radiation to use for the measurement. Remember that the higher the molar absorptivity, the higher the absorbance. What this also means is that the higher the molar absorptivity, the lower the concentration of species that still gives a measurable absorbance value.
Therefore, the wavelength that has the highest molar absorptivity ($\lambda$max) is usually selected for the analysis because it will provide the lowest detection limits. If the species you are measuring is one that has been commonly studied, literature reports or standard analysis methods will provide the $\lambda$max value. If it is a new species with an unknown $\lambda$max value, then it is easily measured by recording the spectrum of the species. The wavelength that has the highest absorbance in the spectrum is $\lambda$max.

The second step of the process is to generate a standard curve. The standard curve is generated by preparing a series of solutions (usually 3–5) with known concentrations of the species being measured. Every standard curve is generated using a blank. The blank is some appropriate solution that is assumed to have an absorbance value of zero. It is used to zero the spectrophotometer before measuring the absorbance of the standard and unknown solutions. The absorbance of each standard sample at $\lambda$max is measured and plotted as a function of concentration. The plot of the data should be linear and should go through the origin as shown in the standard curve in Figure $2$. If the plot is not linear or if the y-intercept deviates substantially from the origin, it indicates that the standards were improperly prepared, that the samples deviate in some way from Beer’s Law, or that there is an unknown interference in the sample that is complicating the measurements. Assuming a linear standard curve is obtained, the equation that provides the best linear fit to the data is generated. Note that the slope of the line of the standard curve in Figure $2$ is ($\varepsilon$b) in the Beer’s Law equation. If the path length is known, the slope of the line can then be used to calculate the molar absorptivity.

The third step is to measure the absorbance in the sample with an unknown concentration. The absorbance of the sample is used with the equation for the standard curve to calculate the concentration.

Suppose a small amount of stray radiation ($P_S$) always leaked into your instrument and made it to your detector. This stray radiation would add to your measurements of Po and P. Would this cause any deviations to Beer's law? Explain.

The way to think about this question is to consider the expression we wrote earlier for the absorbance.

$\mathrm{A = \log\left(\dfrac{P_o}{P}\right)} \nonumber$

Since stray radiation always leaks into the detector and presumably is a fixed or constant quantity, we can rewrite the expression for the absorbance including terms for the stray radiation. It is important to recognize that Po, the power from the radiation source, is considerably larger than $P_S$. Also, the numerator (Po + $P_S$) is a constant at a particular wavelength.

$\mathrm{A = \log\left(\dfrac{P_o + P_S}{P + P_S}\right)} \nonumber$

Now let’s examine what happens to this expression under the two extremes of low concentration and high concentration. At low concentration, not much of the radiation is absorbed and P is not that much different from Po. Since $P_o \gg P_S$, $P$ will also be much greater than $P_S$. If the sample is now made a little more concentrated so that a little more of the radiation is absorbed, P is still much greater than $P_S$. Under these conditions the amount of stray radiation is a negligible contribution to the measurements of Po and P and has a negligible effect on the linearity of Beer’s Law.
As the concentration is raised, P, the radiation reaching the detector, becomes smaller. If the concentration is made high enough, much of the incident radiation is absorbed by the sample and P becomes much smaller. If we consider the denominator (P + $P_S$) at increasing concentrations, P gets small and $P_S$ remains constant. At its limit, the denominator approaches $P_S$, a constant. Since Po + $P_S$ is a constant and the denominator approaches the constant $P_S$, the absorbance approaches a constant. A plot of what would occur is shown in Figure $3$. The ideal plot is the straight line. The curvature that occurs at higher concentrations because of the presence of stray radiation represents a negative deviation from Beer’s Law.

The derivation of Beer's Law assumes that the molecules absorbing radiation don't interact with each other (remember that these molecules are dissolved in a solvent). If the analyte molecules interact with each other, they can alter their ability to absorb the radiation. Where would this assumption break down? Guess what this does to Beer's law?

The sample molecules are more likely to interact with each other at higher concentrations, thus the assumption used to derive Beer’s Law breaks down at high concentrations. The effect, which we will not explain in any more detail in this document, also leads to a negative deviation from Beer’s Law at high concentration.

Beer's law also assumes purely monochromatic radiation. Describe an instrumental setup that would allow you to shine monochromatic radiation on your sample. Is it possible to get purely monochromatic radiation using your setup? Guess what this does to Beer's law.

Spectroscopic instruments typically have a device known as a monochromator. There are two key features of a monochromator. The first is a device to disperse the radiation into distinct wavelengths. You are likely familiar with the dispersion of radiation that occurs when radiation of different wavelengths is passed through a prism. The second is a slit that blocks the wavelengths that you do not want to shine on your sample and only allows $\lambda$max to pass through to your sample as shown in Figure $4$.

An examination of Figure $4$ shows that the slit has to allow some “packet” of wavelengths through to the sample. The packet is centered on $\lambda$max, but clearly nearby wavelengths of radiation pass through the slit to the sample. The term effective bandwidth defines the packet of wavelengths and it depends on the slit width and the ability of the dispersing element to divide the wavelengths. Reducing the width of the slit reduces the packet of wavelengths that make it through to the sample, meaning that smaller slit widths lead to more monochromatic radiation and less deviation from linearity in Beer’s Law.

Is there a disadvantage to reducing the slit width?

The important thing to consider is the effect that this has on the power of radiation making it through to the sample (Po). Reducing the slit width will lead to a reduction in Po and hence P. An electronic measuring device called a detector is used to monitor the magnitude of Po and P. All electronic devices have a background noise associated with them (rather analogous to the static noise you may hear on a speaker, and to the discussion of stray radiation from earlier, which represents a form of noise). Po and P represent measurements of signal over the background noise. As Po and P become smaller, the background noise becomes a more significant contribution to the overall measurement.
Ultimately the background noise restricts the signal that can be measured and the detection limit of the spectrophotometer. Therefore, it is desirable to have a large value of Po. Since reducing the slit width reduces the value of Po, it also degrades the detection limit of the device. Selecting the appropriate slit width for a spectrophotometer is therefore a balance or tradeoff of the desire for high source power and the desire for high monochromaticity of the radiation.

It is not possible to get purely monochromatic radiation using a dispersing element with a slit. Usually the sample has a slightly different molar absorptivity for each wavelength of radiation shining on it. The net effect is that the total absorbance added over all the different wavelengths is no longer linear with concentration. Instead a negative deviation occurs at higher concentrations due to the polychromaticity of the radiation. Furthermore, the deviation is more pronounced the greater the difference in the molar absorptivities. Figure $5$ compares the deviation for two wavelengths of radiation with molar absorptivities that are (a) both 1,000, (b) 500 and 1,500, and (c) 250 and 1,750. As the molar absorptivities become further apart, a greater negative deviation is observed. Therefore, it is preferable to perform the absorbance measurement in a region of the spectrum that is relatively broad and flat. The hypothetical spectrum in Figure $6$ shows a species with two wavelengths that have the same molar absorptivity. The peak at approximately 250 nm is quite sharp whereas the one at 330 nm is rather broad. Given such a choice, the broader peak will have less deviation from the polychromaticity of the radiation and is less prone to errors caused by slight misadjustments of the monochromator.

Consider the relative error that would be observed for a sample as a function of the transmittance or absorbance. Is there a preferable region in which to measure the absorbance? What do you think about measuring absorbance values above 1?

It is important to consider the error that occurs at the two extremes (high concentration and low concentration). Our discussion above about deviations to Beer’s Law showed that several problems ensued at higher concentrations of the sample. Also, the point where only 10% of the radiation is transmitted through the sample corresponds to an absorbance value of 1. Because of the logarithmic relationship between absorbance and transmittance, the absorbance values rise rather rapidly over the last 10% of the radiation that is absorbed by the sample. A relatively small change in the transmittance can lead to a rather large change in the absorbance at high concentrations. Because of the substantial negative deviation to Beer’s law and the lack of precision in measuring absorbance values above 1, it is reasonable to assume that the error in the measurement of absorbance would be high at high concentrations.

At very low sample concentrations, we observe that Po and P are quite similar in magnitude. If we lower the concentration a bit more, P becomes even more similar to Po. The important realization is that, at low concentrations, we are measuring a small difference between two large numbers. For example, suppose we wanted to measure the weight of a captain of an oil tanker. One way to do this is to measure the combined weight of the tanker and the captain, then have the captain leave the ship and measure the weight again. The difference between these two large numbers would be the weight of the captain.
If we had a scale that was accurate to many, many significant figures, then we could possibly perform the measurement in this way. But you likely realize that this is an impractical way to accurately measure the weight of the captain and most scales do not have sufficient precision for an accurate measurement. Similarly, trying to measure a small difference between two large signals of radiation is prone to error since the difference in the signals might be on the order of the inherent noise in the measurement. Therefore, the degree of error is expected to be high at low concentrations. The discussion above suggests that it is best to measure the absorbance somewhere in the range of 0.1 to 0.8. Solutions of higher and lower concentrations have higher relative error in the measurement. Low absorbance values (high transmittance) correspond to dilute solutions. Often, other than taking steps to concentrate the sample, we are forced to measure samples that have low concentrations and must accept the increased error in the measurement. It is generally undesirable to record absorbance measurements above 1 for samples. Instead, it is better to dilute such samples and record a value that will be more precise with less relative error. Another question that arises is whether it is acceptable to use a non-linear standard curve. As we observed earlier, standard curves of absorbance versus concentration will show a non-linearity at higher concentrations. Such a non-linear plot can usually be fit using a higher order equation and the equation may predict the shape of the curve quite accurately. Whether or not it is acceptable to use the non-linear portion of the curve depends in part on the absorbance value where the non-linearity starts to appear. If the non-linearity occurs at absorbance values higher than one, it is usually better to dilute the sample into the linear portion of the curve because the absorbance value has a high relative error. If the non-linearity occurs at absorbance values lower than one, using a non-linear higher order equation to calculate the concentration of the analyte in the unknown may be acceptable. One thing that should never be done is to extrapolate a standard curve to higher concentrations. Since non-linearity will occur at some point, and there is no way of knowing in advance when it will occur, the absorbance of any unknown sample must be lower than the absorbance of the highest concentration standard used in the preparation of the standard curve. It is also not desirable to extrapolate a standard curve to lower concentrations. There are occasions when non-linear effects occur at low concentrations. If an unknown has an absorbance that is below that of the lowest concentration standard of the standard curve, it is preferable to prepare a lower concentration standard to ensure that the curve is linear over such a concentration region. Another concern that always exists when using spectroscopic measurements for compound quantification or identification is the potential presence of matrix effects. The matrix is everything else that is in the sample except for the species being analyzed. A concern can occur when the matrix of the unknown sample has components in it that are not in the blank solution and standards. Components of the matrix can have several undesirable effects. What are some examples of matrix effects and what undesirable effect could each have that would compromise the absorbance measurement for a sample with an unknown concentration? 
One concern is that a component of the matrix may absorb radiation at the same wavelength as the analyte, giving a false positive signal. Particulate matter in a sample will scatter the radiation, thereby reducing the intensity of the radiation at the detector. Scattered radiation will be confused with absorbed radiation and result in a higher concentration than actually occurs in the sample. Another concern is that some species have the ability to change the value of $\lambda$max. For some species, the value of $\lambda$max can show a pronounced dependence on pH. If this is a consideration, then all of the standard and unknown solutions must be appropriately buffered. Species that can hydrogen bond or metal ions that can form donor-acceptor complexes with the analyte may alter the position of $\lambda$max. Changes in the solvent can affect $\lambda$max as well.
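The standard-curve procedure described in this section reduces to a simple least-squares fit. The sketch below (my illustration with made-up absorbance data, assuming the curve is linear and the blank reads zero) fits the standards and then uses the slope to convert an unknown's absorbance into a concentration:

```python
# Least-squares fit of Beer's Law standards; fitting both slope and intercept
# (rather than forcing the line through zero) lets a bad blank show up.
concentrations = [2.0e-5, 4.0e-5, 6.0e-5, 8.0e-5, 1.0e-4]   # mol/L (made up)
absorbances    = [0.101, 0.205, 0.304, 0.398, 0.502]         # at lambda_max (made up)

n = len(concentrations)
mean_c = sum(concentrations) / n
mean_a = sum(absorbances) / n
slope = sum((c - mean_c) * (a - mean_a)
            for c, a in zip(concentrations, absorbances)) \
        / sum((c - mean_c) ** 2 for c in concentrations)
intercept = mean_a - slope * mean_c   # should be close to zero for a good blank

# slope = epsilon * b; with b = 1.00 cm the slope is the molar absorptivity
print(f"slope (eps*b) = {slope:.3e} L/mol, intercept = {intercept:.3e}")

A_unknown = 0.350   # must fall between the lowest and highest standards
c_unknown = (A_unknown - intercept) / slope
print(f"unknown concentration = {c_unknown:.2e} mol/L")
```

Note that the guard on A_unknown mirrors the warning above: the fit is never extrapolated beyond the concentration range covered by the standards.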
A spectrophotometer has five major components: a source, a monochromator, a sample holder, a detector, and a readout device. Most spectrophotometers in use today are linked to and operated by a computer and the data recorded by the detector is displayed in some form on the computer screen.

1.3: Instrumental Setup of a Spectrophotometer

Describe the desirable features of a radiation source for a spectrophotometer.

An obvious feature is that the source must cover the region of the spectrum that is being monitored. Beyond that, one important feature is that the source has high power or intensity, meaning that it gives off more photons. Since any detector senses signal above some noise, having more signal increases what is known as the signal-to-noise ratio and improves the detection limit. The second important feature is that the source be stable. Instability in the power output from a source can contribute to noise and can contribute to inaccuracy in the readings between standards and unknown samples.

Plot the relative intensity of light emitted from an incandescent light bulb (y-axis) as a function of wavelength (x-axis). This plot is a classic observation known as blackbody radiation. On the same graph, show the output from a radiation source that operates at a hotter temperature.

As shown in Figure \(7\), the emission from a blackbody radiator has a specific wavelength that exhibits maximum intensity or power. The intensity diminishes at shorter and longer wavelengths. The output from a blackbody radiator is a function of temperature. As seen in Figure \(7\), at hotter temperatures, the wavelength with maximum intensity moves toward the ultraviolet region of the spectrum.

Examining the plots in Figure \(7\), what does this suggest about the power that exists in radiation sources for the infrared portion of the spectrum?

The intensity of radiation in the infrared portion of the spectrum diminishes considerably for most blackbody radiators, especially at the far infrared portions of the spectrum. That means that infrared sources do not have high power, which ultimately has an influence on the detection limit when using infrared absorption for quantitative analysis.

Blackbody radiators are known as continuous sources. An examination of the plots in Figure \(7\) shows that a blackbody radiator emits radiation over a large continuous band of wavelengths. A monochromator can then be used to select out a single wavelength needed for the quantitative analysis. Alternatively, it is possible to scan through the wavelengths of radiation from a blackbody radiator and record the spectrum for the species under study.

Explain the advantages of a dual- versus single-beam spectrophotometer.

One way to set up a dual-beam spectrophotometer is to split the beam of radiation from the source and send half through a sample cell and half through a reference cell. The reference cell has a blank solution in it. The detector is set up to compare the two signals. Instability in the source output will show up equally in the sample and reference beams and can therefore be accounted for in the measurement. Remember that the intensity of radiation from the source varies with wavelength and drops off toward the high and low energy regions of the spectrum. The changes in relative intensity can be accounted for in a dual-beam configuration.

A laser (LASER = Light Amplification by Stimulated Emission of Radiation) is a monochromatic source of radiation that emits one specific frequency or wavelength of radiation.
A laser (LASER = Light Amplification by Stimulated Emission of Radiation) is a monochromatic source of radiation that emits one specific frequency or wavelength of radiation. Because lasers put out a specific frequency of radiation, they cannot be used as a source to obtain an absorbance spectrum. However, lasers are important sources for many spectroscopic techniques, as will be seen at different points as we further develop the various spectroscopic methods. What you probably know about lasers is that they are often high-powered radiation sources. They emit a highly focused and coherent beam. Coherency refers to the observation that the photons emitted by a laser have identical frequencies and waves that are in phase with each other.

A laser relies on two important processes. The first is the formation of a population inversion. A population inversion occurs for an energy transition when more of the species are in the excited state than in the ground state. The second is the process of stimulated emission. Emission occurs when an excited state species emits radiation (Figure \(8\)a). Absorption occurs when a photon with exactly the same energy as the difference in energy between the ground and excited state of a species interacts with and transfers its energy to the species to promote it to the excited state (Figure \(8\)c). Stimulated emission occurs when an incident photon that has exactly the same energy as the difference in energy between the ground and excited state of a transition interacts with the species in the excited state. In this case, the extra energy that the species has is converted to a photon that is emitted; the incident photon is emitted as well rather than being absorbed. One final point is that the two photons in the stimulated emission process have their waves in phase with each other (are coherent) (Figure \(8\)b). In absorption, one incident photon comes in and no photons come out. In stimulated emission, one incident photon comes in and two photons come out.

Why is it impossible to create a 2-level laser?

A 2-level laser involves a process with only two energy states, the ground and excited state. In a resting state, the system will have a large population of species in the ground state (essentially 100% as seen in Figure \(9\)) and only a few or none in the excited state. Incident radiation of an energy that matches the transition is then applied, and ground state species absorb photons and become excited. The general transition process is illustrated in Figure \(9\)a. Species in the excited state will give up the excess energy either as an emitted photon or as heat to the surroundings. We will discuss this in more detail later on, but for now it is acceptable to realize that excited state species have a finite lifetime before they lose their energy and return to the ground state. Without worrying about the excited state lifetime, let’s assume that the excited species remain in that state and incident photons can continue to excite additional ground state species into the excited state. As this occurs, the number of species in the excited state (e.g., the excited state population) will grow and the number in the ground state will diminish. The key point to consider is the system where 50% of the species are in the excited state and 50% of the species are in the ground state, as shown in Figure \(9\)b. For a system with exactly equal populations of the ground and excited state, incident photons from the radiation source have an equal probability of interacting with a species in the ground or excited state. If a photon interacts with a species in the ground state, absorption of the photon occurs and the species becomes excited.
However, if another photon interacts with a species in the excited state, stimulated emission occurs: the species returns to the ground state and two photons are emitted. The net result is that for every ground state species that absorbs a photon and becomes excited, there is a corresponding excited species that undergoes stimulated emission and returns to the ground state. Therefore it is not possible to get beyond a 50-50 population, and a population inversion can never be achieved. A 2-level system with a 50-50 population is said to be a saturated transition.

Using your understanding of a 2-level system, explain what is meant by a 3-level and 4-level system. 3- and 4-level systems can function as a laser. How is it possible to achieve a population inversion in a 3- and 4-level system?

The diagrams for a 3-level and 4-level laser system are shown in Figures \(10\) and \(11\), respectively. There are certain important features that are necessary for something to function as a 3- or 4-level laser. One is that there has to be a favorable relaxation process in which the species converts or transitions between the second and third levels in the diagrams. The transition from level 2 to level 3 must be more favorable than a transition from level 2 to level 1. The other relates to the relative lifetimes of the excited state levels: the lifetime of the species in level 3 must be longer than the lifetime of the species in level 2.

Assuming the two features described above are met, it is now possible to excite species from level 1 to level 2 using the radiation source. Species then transition to level 3 but, because of the longer lifetime, are effectively “stuck” there before returning to the ground state (level 1). For the 3-level system, if they are stuck in level 3 long enough, it may be possible to deplete enough of the population from level 1 that the population in level 3 becomes higher than the population in level 1. The level 3 to level 1 transition is the lasing transition; note that the incident photons from the source have a different energy than this transition, so they do not cause stimulated emission from level 3. When the population inversion is achieved, a photon emitted from a species in level 3 can interact with another species that is excited to level 3, causing the stimulated emission of two photons. These emitted photons can interact with additional excited state species in level 3 to cause more stimulated emission, and the result is a cascade of stimulated emission. The photons in this large cascade or pulse all have the same frequency and are coherent. The process of populating level 3 in either the 3- or 4-level system using energy from the incident photons from the radiation source is referred to as optical pumping. For the 4-level laser, the lasing transition is from level 3 to level 4, meaning that a population inversion is needed between levels 3 and 4 rather than between levels 3 and 1.

Which of the two (3- or 4-level system) is generally preferred in a laser and why?

Since the population of level 4 is much lower than the population of level 1, it is much easier to achieve a population inversion in a 4-level laser compared to a 3-level laser. Therefore, the 4-level laser is generally preferred and more common than a 3-level laser.
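Returning to the 2-level argument above, the saturation behavior can be illustrated with a toy rate-equation calculation. This is a minimal sketch under simplifying assumptions that are not from the text: a single pump rate couples absorption and stimulated emission with equal probability, spontaneous decay is neglected, and the rate and step values are arbitrary. No matter how long the pumping continues, the excited-state fraction approaches 50% and never exceeds it.

```python
# Toy rate equation for a pumped 2-level system (illustrative values).
W, dt = 1.0, 0.01          # pump rate (1/s) and integration time step (s)
N1, N2 = 1.0, 0.0          # fractional ground / excited populations

for _ in range(2000):
    dN2 = W * (N1 - N2) * dt   # net transfer: absorption minus stimulated emission
    N1, N2 = N1 - dN2, N2 + dN2

print(f"excited-state fraction after prolonged pumping: {N2:.3f}")  # -> 0.500
```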
The two most common ways of achieving monochromatic radiation from a continuous radiation source are to use either a prism or a grating.

Explain in general terms the mechanism in a prism and grating that leads to the attainment of monochromatic radiation. Compare the advantages and disadvantages of each type of device. What is meant by second order radiation in a grating? Describe the difference between a grating that would be useful for the infrared region of the spectrum and one that would be useful for the ultraviolet region of the spectrum.

A prism disperses radiation because different wavelengths of radiation have different refractive indices in the material that makes up the prism. That causes different angles of refraction that disperse the radiation as it moves through the prism (Figure \(12\)).

A grating is a device that consists of a series of identically shaped, angled grooves as shown in Figure \(13\). The grating illustrated in Figure \(13\) is a reflection grating. Incoming light represented as A and B is collimated and appears as a plane wave. Therefore, as seen in Figure \(13\)a, the crest of the wave for A strikes a face of the grating before the crest of the wave for B strikes the adjoining face. Light that strikes the surface of the grating is scattered in all directions, one direction of which is shown in Figure \(13\)b for A and B. An examination of the paths for A and B in Figure \(13\) shows that B travels a greater distance than A. For monochromatic radiation, if B travels an integral number of wavelengths farther than A, the two constructively interfere. If not, destructive interference results. Diffraction of polychromatic radiation off the grating leads to an interference pattern in which different wavelengths of radiation constructively and destructively interfere at different points in space.

The advantage of a grating over a prism is that the dispersion is linear (Figure \(14\)). This means that a particular slit width allows an identical packet of wavelengths of radiation through to the sample. The dispersion of radiation with a prism is non-linear and, for visible radiation, there is less dispersion of the radiation toward the red end of the spectrum. See Figure \(14\) for a comparison of a glass and a quartz prism. Note that the glass prism absorbs ultraviolet radiation in the range of 200-350 nm. The non-linear dispersion of a prism means that the resolution (the ability to distinguish two nearby peaks) in a spectrum will diminish toward the red end of the spectrum. Linear dispersion is preferable. The other disadvantage of a prism is that it must transmit the radiation, whereas gratings usually rely on a reflection process.

An important aspect of a grating is that more than one wavelength of radiation will exhibit constructive interference at a given position. Without incorporating other specific design features into the monochromator, all wavelengths that constructively interfere will be incident on the sample. For example, second-order radiation with a wavelength of 300 nm will constructively interfere at the same position as first-order radiation with a wavelength of 600 nm. This is referred to as order overlap, and a short numerical illustration follows below. There are a variety of procedures that can be used to eliminate order overlap, details of which can be found at the following: Diffraction Gratings.
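Order overlap follows directly from the grating equation; for normal incidence a commonly used simplified form is \(m\lambda = d\sin\theta\), where \(m\) is the diffraction order and \(d\) the groove spacing. The sketch below is illustrative only (the 1200 groove/mm spacing is an assumed value, not one from the text); it shows first-order 600 nm and second-order 300 nm radiation emerging at exactly the same angle.

```python
import math

# Grating equation at normal incidence: m * wavelength = d * sin(theta)
d = 1.0 / 1200 * 1e-3          # groove spacing of a 1200 groove/mm grating, m

def diffraction_angle(wavelength_m, order):
    """Angle (degrees) at which a given order constructively interferes."""
    return math.degrees(math.asin(order * wavelength_m / d))

print(f"{diffraction_angle(600e-9, 1):.2f} deg")  # first order, 600 nm  -> ~46 deg
print(f"{diffraction_angle(300e-9, 2):.2f} deg")  # second order, 300 nm -> same angle
```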
The difference between gratings that are useful for the ultraviolet and visible regions and those that are useful for the infrared region involves the distance between the grooves: gratings for the infrared region have a much wider spacing between the grooves.

Explain the significance of the slit width of a monochromator. What are the advantages of making the slit width smaller? What are the disadvantages?

As discussed earlier, the advantage of making the slit width smaller is that it lets a smaller packet of wavelengths through to the sample. This improves the resolution in the spectrum, which means that it is easier to identify and distinguish nearby peaks. The disadvantage of making the slit width smaller is that it allows fewer photons (less power) through to the sample. This decreases the signal-to-noise ratio and raises the detection limit for the species being analyzed.

1.3C: Detectors

Explain how a photomultiplier tube works. What are the advantages or disadvantages of a photomultiplier tube?

A photomultiplier tube is commonly used to measure the intensity of ultraviolet and visible radiation. The measurement is based initially on the photoelectric effect and then on the amplification of the signal through a series of dynodes (Figure \(15\)). The detection process begins with radiation striking a photoactive surface and dislodging electrons. Electrons dislodged from this surface are accelerated toward the first dynode. This acceleration is accomplished by having the first dynode at a high voltage. Because of the acceleration, each electron released from the photoactive surface dislodges several electrons when it strikes the surface of the first dynode. Electrons emitted from the first dynode are accelerated toward the second dynode, and so on, eventually creating a cascade of electrons that produces a large current.

The advantage of the photomultiplier tube is its ability to measure relatively small amounts of electromagnetic radiation because of the amplification process that occurs. A disadvantage is that any spurious signal such as stray radiation is also amplified in the process, leading to an enhancement of the noise. The noise can be reduced by cooling the photomultiplier tube, which is done with some instruments. A caution when using a photomultiplier tube is that it must not be exposed to too high an intensity of radiation, since high intensity radiation can damage the photoelectric surface. Photomultiplier tubes are useful for the measurement of radiation that produces a current through the photoelectric effect – primarily ultraviolet and visible radiation. They are not useful for measuring the intensity of low-energy radiation in the infrared and microwave portions of the spectrum.

Describe a photodiode array detector. What advantages does it offer over other detection devices?

A photodiode array detector consists of an array or series of adjacent photosensitive diodes (Figure \(16\)). Radiation striking a diode causes a charge buildup that is proportional to the intensity of the radiation. The individual members of the array are known as pixels and are quite small in size. Since many pixels or array elements can be fit onto a small surface area, it is possible to build an array of these pixels and shine dispersed light from a monochromator onto it, thereby measuring the intensity of radiation for an entire spectrum. The advantage of the photodiode array detector is the ability to measure multiple wavelengths simultaneously, thereby recording the entire spectrum of a species at once. Unfortunately, photodiode arrays are not that sensitive.
A more sensitive array device uses a charge-transfer process. These are often two-dimensional arrays with many more pixels than a photodiode array. Radiation striking pixels in the array builds up a charge that is measured in either a charge-injection device (CID) or charge-coupled device (CCD).
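Returning to the photomultiplier tube, the dynode amplification described above compounds multiplicatively, so the overall gain is roughly the number of secondary electrons released per dynode raised to the power of the number of dynodes. The numbers in this back-of-the-envelope sketch are illustrative assumptions, not specifications from the text.

```python
# Rough photomultiplier gain estimate (illustrative values only).
secondary_electrons_per_dynode = 4   # typical order of magnitude per stage
n_dynodes = 9                        # a common tube geometry

gain = secondary_electrons_per_dynode ** n_dynodes
print(f"overall gain ~ {gain:.1e} electrons per photoelectron")  # ~2.6e+05
```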
Learning Objectives

After completing this unit the student will be able to:

• Compare and contrast atomic and molecular spectra.
• Explain why atomic spectra consist of lines whereas molecular spectra at room temperature are broad and continuous.
• Justify the difference in molecular spectra at room temperature and 10 K.
• Describe the cause of Doppler broadening.
• Determine the effect of conjugation on a UV/Vis absorption spectrum.
• Determine the effect of non-bonding electrons on a UV/Vis absorption spectrum.
• Determine the effect of solvent on the energy of $n-\pi^*$ and $\pi-\pi^*$ transitions.
• Evaluate the utility of UV/Vis spectroscopy as a qualitative and quantitative method.
• Describe a procedure by which UV/Vis spectroscopy can be used to determine the pKa of a weak acid.

2: Ultraviolet Visible Absorption Spectroscopy

Compare and contrast the absorption of ultraviolet (UV) and visible (VIS) radiation by an atomic substance (something like helium) with that of a molecular substance (something like ethylene). Do you expect different absorption peaks or bands from an atomic or molecular substance to have different intensities? If so, what does this say about the transitions?

UV/VIS radiation has the proper energy to excite valence electrons of chemical species and cause electronic transitions. For atoms, the only process we need to think about is the excitation of electrons (i.e., electronic transitions) from one atomic orbital to another. Since the atomic orbitals have discrete or specific energies, transitions among them have discrete or specific energies. Therefore, atomic absorption spectra consist of a series of “lines” at the wavelengths of radiation (or frequencies of radiation) that correspond in energy to each allowable electronic transition. The diagram in Figure \(1\) represents the energy level diagram of any multielectron atom. The different lines in the spectrum will have different intensities. As we have already discussed, different transitions have different probabilities or different molar absorptivities, which accounts for the different intensities. The process of absorption for helium is shown in Figure \(2\), in which one electron is excited to a higher energy orbital. Several possible absorption transitions are illustrated in the diagram. The illustration in Figure \(3\) represents the atomic emission spectrum of helium and clearly shows the “line” nature of an atomic spectrum.

For molecules, there are two other important processes to consider besides the excitation of electrons from one molecular orbital to another. The first is that molecules vibrate. Molecular vibrations or vibrational transitions occur in the infrared portion of the spectrum and are therefore lower in energy than electronic transitions. The second is that molecules can rotate. Molecular rotations or rotational transitions occur in the microwave portion of the spectrum and are therefore lower in energy than electronic and vibrational transitions. The diagram in Figure \(4\) represents the energy level diagram for a molecule. The arrows in the diagram represent possible transitions from the ground to excited states. Note that the vibrational and rotational energy levels in a molecule are superimposed over the electronic transitions.
An important question to consider is whether an electron in the ground state (lowest energy electronic, vibrational and rotational state) can only be excited to the first excited electronic state (no extra vibrational or rotational energy), or whether it can also be excited to vibrationally and/or rotationally excited states in the first excited electronic state. It turns out that molecules can be excited to vibrationally and/or rotationally excited levels of the first excited electronic state, as shown by arrows in Figure \(4\). Molecules can also be excited to the second and higher excited electronic states. Therefore, we can speak of a molecule as existing in the second excited rotational state of the third excited vibrational state of the first excited electronic state. One consequence of comparing atomic and molecular absorption spectra is that molecular absorption spectra ought to have many more transitions or lines in them than atomic spectra because of all the vibrational and rotational excited states that exist.

Compare a molecular absorption spectrum of a dilute species dissolved in a solvent at room temperature versus the same sample at 10 K.

The difference to consider here is that the sample at 10 K will be frozen into a solid whereas the sample at room temperature will be a liquid. In the liquid state, the solute and solvent molecules move about via diffusion and undergo frequent collisions with each other. In the solid state, collisions are reduced considerably.

What is the effect of collisions of solvent and solute molecules?

Collisions between molecules cause distortions of the electrons. Since molecules in a mixture move with a distribution of different speeds, the collisions occur with different degrees of distortion of the electrons. Since the energy of electrons depends on their locations in space, distortion of the electrons causes slight changes in the energy of the electrons. Slight changes in the energy of an electron mean there will be a slight change in the energy of its transition to a higher energy state. The net effect of collisions is to cause a broadening of the lines in the spectrum. The spectrum at room temperature will show significant collisional broadening whereas the spectrum at 10 K will have minimal collisional broadening. The collisional broadening at room temperature in a solvent such as water is significant enough to cause a blurring together of the energy differences between the different rotational and vibrational states, such that the spectrum consists of broad absorption bands instead of discrete lines. By contrast, the spectrum at 10 K will consist of numerous discrete lines that distinguish between the different rotationally and vibrationally excited levels of the excited electronic states. The diagrams in Figure \(5\) show the difference between the spectrum at room temperature and at 10 K, although the one at 10 K does not contain nearly the number of lines that would be observed in the actual spectrum.

Are there any other general processes that contribute to broadening in an absorption spectrum?

The other general contribution to broadening comes from something known as the Doppler Effect. The Doppler Effect occurs because the species absorbing or emitting radiation is moving relative to the detector. Perhaps the easiest way to think about this is to consider a species moving away from the detector that emits a specific frequency of radiation heading toward the detector.
The frequency of radiation corresponds to that of the energy of the transition, so the emitted radiation has a specific, fixed frequency. The picture in Figure \(6\) shows two species emitting waves of radiation toward a detector. It is worth focusing on the highest amplitude portion of each wave. Also, in Figure \(6\), assume that the detector is on the right side of the diagram, to the right of the two emitting spheres. The emission process to produce the wave of radiation requires some finite amount of time. If the species is moving away from the detector, even though the frequency is fixed, to the detector it will appear as if each of the highest amplitude regions of the wave is lagging behind where it would be if the species were stationary (see the upper sphere in Figure \(6\)). The result is that the wavelength of the radiation appears longer, meaning that the frequency appears lower. For visible radiation, we say that the radiation from the emitting species is red-shifted. The lower sphere in Figure \(6\) is moving toward the detector. Now the highest amplitude regions of the wave are appearing at the detector faster than expected. This radiation is blue-shifted.

In a solution, different species are moving in different directions relative to the detector. Some exhibit no Doppler shift. Some would be blue-shifted whereas others would be red-shifted, and the degree of red- and blue-shift varies among different species. The net effect is that the emission peak is broadened. The same process occurs with the absorption of radiation as well. The emission spectrum in Figure \(7\) represents the Doppler broadening that would occur for a gas phase atomic species where the atoms are not moving (top) and then moving with random motion (bottom).

A practical application of the Doppler Effect is the measurement of the distances of galaxies from the earth. The universe is expanding, and Hubble’s Law is the observation that the farther a galaxy is from an observer, the faster it moves away. There is also a precise formula, \(v = H_0 d\) (with \(H_0\) the Hubble constant), that relates a galaxy's speed of recession to its distance. More distant galaxies therefore show a larger red shift in their radiation due to the Doppler Effect than closer galaxies. Measurements of the red-shift are used to determine the placement of galaxies in the universe.
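The size of a Doppler shift follows from the non-relativistic relation \(\lambda_{obs} \approx \lambda_{emit}(1 + v/c)\), where a positive \(v\) denotes motion away from the detector. The sketch below uses illustrative speeds typical of thermal motion (assumed values, not data from the text) to show how small the resulting red and blue shifts are.

```python
# Non-relativistic Doppler shift along the line of sight.
c = 2.998e8                 # speed of light, m/s
lam_emit = 500e-9           # emitted wavelength, m (illustrative)

for v in (-1000.0, 0.0, 1000.0):   # m/s; negative = moving toward the detector
    lam_obs = lam_emit * (1 + v / c)
    shift_pm = (lam_obs - lam_emit) * 1e12   # shift in picometers
    print(f"v = {v:+7.0f} m/s -> shift = {shift_pm:+.2f} pm")
# +1000 m/s red-shifts a 500 nm line by only ~1.7 pm; random thermal motion
# therefore produces a small but measurable broadening of the line.
```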
Compare the UV absorption spectrum of 1-butene to 1,3-butadiene.

In organic compounds, the bonding orbitals are almost always filled and the anti-bonding orbitals are almost always empty. The important consideration becomes the ordering of the molecular orbitals in an energy level diagram. Figure $9$ shows the typical ordering that would occur for an organic compound with $\pi$ orbitals. The most important energy transition to consider in Figure $9$ is the one from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO). This will be the lowest energy transition. In the case of 1-butene, the lowest energy transition would be the $\pi$-$\pi$* transition. The UV/VIS absorption spectrum for 1-butene is shown in Figure $10$. The $\lambda$max value is about 176 nm, which is in the vacuum ultraviolet portion of the spectrum.

1,3-Butadiene has two double bonds and is said to be a conjugated system. There are two ways we could consider what happens in butadiene. The first, shown in Figure $11$, is to consider each double bond separately, showing how the p-orbitals overlap to form the $\pi$ and $\pi$* orbitals (Figure $11$a). Each of these double bonds and energy level diagrams is comparable to the double bond in 1-butene. However, because of the conjugation in 1,3-butadiene, you can think of the $\pi$ and $\pi$* orbitals from each double bond as further overlapping to create the energy level diagram in the bottom picture (Figure $11$b). Because of the additional overlap, the lowest energy transition in butadiene is lower in energy than that in 1-butene. Therefore, the spectrum is expected to shift toward the red.

A better way to consider the situation is to examine all the possible orientations of the p-orbitals in 1,3-butadiene. The picture in Figure $12$ provides a representation of 1,3-butadiene showing how the four p-orbitals are all positioned side-by-side to overlap with each other.

Using representations of the p-orbitals in which the dark color indicates the positive region of the wave function and a light color indicates the negative region of the wave function, draw all of the possible ways in which the wave functions of the four p-orbitals can overlap with each other.

The four pictures in Figure $13$ represent the possible alignments of the signs of the wave functions in 1,3-butadiene. In Figure $13$a, all four p-orbitals constructively overlap with each other. In Figure $13$b, two adjacent pairs of p-orbitals constructively overlap with each other. In Figure $13$c, only the pair of p-orbitals in the center has constructive overlap. In Figure $13$d, there is no constructive overlap and only destructive overlap occurs.

Rank these from high to low energy.

The orbital in which all four p-orbitals overlap would be the lowest in energy (Figure $14$). The next has two regions of overlap. The third has only one region of overlap, and the highest energy orbital has no regions of overlap. Because there are four electrons to put in the orbitals (one from each of the contributing p-orbitals), the bottom two orbitals are filled and the top two are empty. The lowest energy HOMO to LUMO transition will be lower in energy than that observed in 1-butene. The UV/VIS spectrum of 1,3-butadiene is shown in Figure $15$. In this case, the $\lambda$max value is at about 292 nm, a significant difference from the value of 176 nm in 1-butene. The effect of increasing conjugation is to shift the spectrum toward longer wavelength (lower frequency, lower energy) absorptions.
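A crude way to see why a longer conjugated system absorbs at longer wavelength is the free-electron ("particle in a box") model, in which the $\pi$ electrons occupy the levels of a one-dimensional box whose length grows with the conjugated chain. The sketch below is purely illustrative: the box lengths are assumed values and the model is far too simple to reproduce the measured spectra above, but it does capture the red shift on going from one double bond to two conjugated double bonds.

```python
# Free-electron model: HOMO->LUMO wavelength of pi electrons in a 1-D box.
h, m_e, c = 6.626e-34, 9.109e-31, 2.998e8   # SI constants

def homo_lumo_wavelength(n_pi_electrons, box_length_m):
    n_homo = n_pi_electrons // 2             # two electrons per level
    n_lumo = n_homo + 1
    dE = h**2 * (n_lumo**2 - n_homo**2) / (8 * m_e * box_length_m**2)
    return h * c / dE                         # transition wavelength, m

# One isolated double bond (2 pi electrons, short box) vs two conjugated
# double bonds (4 pi electrons, longer box); box lengths are assumptions:
print(f"{homo_lumo_wavelength(2, 0.30e-9)*1e9:.0f} nm")  # ~99 nm
print(f"{homo_lumo_wavelength(4, 0.58e-9)*1e9:.0f} nm")  # ~220 nm (red-shifted)
```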
Another comparative set of conjugated systems occurs with fused-ring polycyclic aromatic hydrocarbons such as naphthalene, anthracene and pentacene. The spectra in Figure $16$ are for benzene, naphthalene, anthracene and pentacene. Note that as more rings and more conjugation are added, the spectrum shifts further toward and into the visible region of the spectrum.

Figure $16$. UV/VIS absorption spectra of benzene, naphthalene, anthracene and pentacene.
Compare the UV absorption spectrum of benzene and pyridine.

Benzene has a set of conjugated $\pi$-bonds and the lowest energy transition would be a $\pi$-$\pi$* transition, as shown in Figure $17$. The UV/VIS absorption spectrum for benzene is shown in Figure $18$. Benzene absorbs radiation in the vacuum ultraviolet over the range from 160-208 nm with a $\lambda$max value of about 178 nm. Pyridine has a conjugation of double bonds comparable to what occurs in benzene. For pyridine, however, the lowest energy transition involves the n-$\pi$* orbitals, and this will be much lower in energy than the $\pi$-$\pi$* transition in pyridine or benzene. The UV/VIS absorption spectrum of pyridine is shown in Figure $20$. The shift toward higher wavelengths when compared to benzene is quite noticeable in the spectrum of pyridine, where the peaks from 320-380 nm represent the n-$\pi$* transition and the peak at about 240 nm is a $\pi$-$\pi$* transition. Note that the intensity, and therefore the molar absorptivity, of the n-$\pi$* transition is lower than that of the $\pi$-$\pi$* transition. This is usually the case with organic compounds.

Dye molecules absorb in the visible portion of the spectrum. They absorb wavelengths complementary to the color of the dye. Most $\pi$-$\pi$* transitions in organic molecules are in the ultraviolet portion of the spectrum unless the system is highly conjugated. Visible absorption is achieved in dye molecules by having a combination of conjugation and non-bonding electrons. Azo dyes with the N=N group are quite common, one example of which is shown in Figure $21$.

2.4: Effect of Solvent

The peaks in the 320-380 nm portion of the UV absorption spectrum of pyridine shift noticeably toward the blue (high energy) portion of the spectrum on changing the solvent from hexane to methanol. Account for this change.

These are the lowest energy peaks in the spectrum and correspond to the n-$\pi$* transition in pyridine. Hexane (C$_6$H$_{14}$) is a non-polar hydrocarbon. Methanol (CH$_3$OH) is a polar solvent with the ability to form hydrogen bonds. For pyridine, the hydrogen atom of the hydroxyl group of methanol will form hydrogen bonds with the lone pair on the nitrogen atom, as shown in Figure $22$. Hexane cannot form such hydrogen bonds. In order to account for the blue-shift in the spectrum, we need to consider what, if anything, will happen to the energies of the n, $\pi$, and $\pi$* orbitals. Bonding between two atomic orbitals leads to the formation of a bonding and an anti-bonding molecular orbital, one of which drops in energy and the other of which rises in energy. The electrostatic attraction between a positively charged hydrogen atom and the negatively charged lone pair of electrons in a hydrogen bond (as illustrated in Figure $22$ for methanol and pyridine) is a stabilizing interaction. Therefore, the energy of the non-bonding electrons will be lowered. The picture in Figure $23$ shows representations of a $\pi$- and a $\pi$*-orbital. Electrons in a $\pi$-orbital may be able to form a weak dipole-dipole interaction with the hydroxyl hydrogen atom of methanol. This weak interaction may cause a very slight drop in energy of the $\pi$-orbital, but it will not be nearly as pronounced as that of the non-bonding electrons. Similarly, if an electron has been excited to the $\pi$*-orbital, it has the ability to form a weak dipole-dipole interaction with the hydroxyl hydrogen atom of methanol.
This weak interaction will cause a drop in energy of the $\pi$*-orbital, but it will not be nearly as pronounced as that of the non-bonding electrons. However, the drop in energy of the $\pi$*-orbital will be larger than that of the $\pi$-orbital because the $\pi$*-orbital points out from the C=C bond and is more accessible for interaction with the hydroxyl hydrogen atom of methanol than the $\pi$-orbital. The diagram in Figure $24$ shows the relative changes in the energies of the n, $\pi$, and $\pi$* orbitals that would occur on changing the solvent from hexane to methanol, with stabilization occurring in the order n > $\pi$* > $\pi$. An examination of the relative energies between hexane and methanol shows that both the n and $\pi$* levels drop in energy, but the drop of the n level is greater than the drop of the $\pi$* level. Therefore, the n-$\pi$* transition moves to higher energy, and a blue-shift is observed for the peaks in the 320-380 nm range of the pyridine spectrum. The blue-shift that is observed is referred to as a hypsochromic shift.

The peaks in the UV spectrum of benzene shift slightly toward the red (low energy) portion of the spectrum on changing the solvent from hexane to methanol. Account for this change.

The absorption in benzene corresponds to the $\pi$-$\pi$* transition. Using the diagram in Figure $24$, the drop in energy of the $\pi$*-orbital is more than that of the $\pi$-orbital. Therefore, the $\pi$-$\pi$* transition is slightly lower in energy and the peaks shift toward the red. The red-shift is referred to as a bathochromic shift. Note as well that the change in the position of the peak for the $\pi$-$\pi$* transition of benzene would be less than that for the n-$\pi$* transition of pyridine because the stabilization of the non-bonding electrons is greater than the stabilization of the electrons in the $\pi$*-orbital.
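These solvent shifts can be put on an energy scale with $E = hc/\lambda$. The sketch below converts a hypothetical hypsochromic shift of an n-$\pi$* band into a molar energy difference; the 370 nm and 350 nm values are assumed for illustration, not measurements from the text.

```python
# Convert a solvent-induced peak shift into a molar transition energy.
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23   # SI constants

def transition_energy_kj_per_mol(lambda_nm):
    return h * c / (lambda_nm * 1e-9) * N_A / 1000.0

E_hexane   = transition_energy_kj_per_mol(370.0)  # band in a non-polar solvent
E_methanol = transition_energy_kj_per_mol(350.0)  # band after a blue shift
print(f"blue shift corresponds to +{E_methanol - E_hexane:.1f} kJ/mol")  # ~+18.5
```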
Is UV/VIS spectroscopy useful as a qualitative tool?

The answer depends in part on what type of system you are examining. Ultraviolet-absorbing organic molecules usually involve n-$\pi$* and $\pi$-$\pi$* transitions. Since UV/VIS absorption spectra are usually recorded at room temperature in solution, collisional broadening leads to a blurring together of all of the individual lines that would correspond to excitations to the different vibrational and rotational states of a given electronic state. As such, UV/VIS absorption spectra of organic compounds are not all that different and distinct from each other. Whereas we can reliably assign unique structures to molecules using the spectra that are obtained in NMR spectroscopy, the spectra in UV/VIS spectroscopy do not possess enough detail for such an analysis. Therefore, UV/VIS spectroscopy is not that useful a tool for qualitative analysis of organic compounds. However, a particular organic compound does have a specific UV/VIS absorption spectrum, as seen in the various examples provided above. If the spectrum of an unknown compound exactly matches that of a known compound (provided both have been recorded under the same conditions – in the same solvent, at the same pH, etc.), it is strong evidence that the compounds are the same. However, because of the featureless nature of many UV/VIS spectra, such a conclusion must be reached with caution. The use of a UV-diode array detector as a liquid chromatographic detection method is quite common. In this case, the match of identical spectra combined with the match in retention time between a known and an unknown can be used to confirm an assignment of the identity of the compound.

Many transition metal ions have distinct UV/VIS absorption spectra that involve d-d electron transitions. The position of peaks in the spectra can vary significantly depending on the ligand, and there is something known as the spectrochemical series that can be used to predict certain changes that will be observed as the ligands are varied. UV/VIS spectroscopy can oftentimes be used to reliably confirm the presence of a particular metal species in solution. Some metal species also have absorption processes that result from a charge transfer process. In a charge transfer process, the electron goes from the HOMO on one species to the LUMO on the other. In metal complexes, this can involve a ligand-to-metal transition or a metal-to-ligand transition. The ligand-to-metal transition is more common, and the process effectively represents an internal electron transfer or redox reaction. Certain pairs of organic compounds also associate in solution and exhibit charge-transfer transitions. An important aspect of charge transfer transitions is that they tend to have very high molar absorptivities.

Is UV/VIS spectroscopy useful as a quantitative tool?

We have the ability to sensitively measure UV/VIS radiation using devices like photomultiplier tubes or array detectors. Provided the molar absorptivity is high enough, UV/VIS absorption is a highly sensitive detection method and is a useful tool for quantitative analysis. Since many substances absorb broad regions of the spectrum, however, it is prone to possible interferences from other components of the matrix. Therefore, UV/VIS absorption spectroscopy is not that selective a method. The compound under study must often be separated from other constituents of the sample prior to analysis. The coupling of liquid chromatography with ultraviolet detection is one of the more common analysis techniques.
In addition to the high sensitivity, the use of UV/VIS absorption for quantitative analysis has wide applicability, is accurate, and is easy to use.

If you were using UV spectroscopy for quantitative analysis, what criteria would you use in selecting a wavelength for the analysis?

The best wavelength to use is the one with the highest molar absorptivity ($\lambda$max), provided there are no interfering substances that absorb at the same wavelength. If there are, then there either needs to be a separation step or it may be possible to use a different wavelength that has a high enough molar absorptivity but no interference from components of the matrix.

What variables influence the recording of UV/VIS absorption spectra and need to be accounted for when performing qualitative and quantitative analyses?

We have discussed several of these already in the unit. The solvent can have an effect and cause bathochromic and hypsochromic shifts. Species in the matrix that may form dipole-dipole interactions, including hydrogen bonds, can alter the spectra as well. Metal ions that can form donor-acceptor complexes can have the same effect. Temperature can have an effect on the spectrum. The electrolyte concentration can have an effect as well. As discussed above, the possibility that the sample has interferences that absorb the same radiation must always be considered. Finally, pH can have a pronounced effect because the spectra of protonated and deprotonated acids and bases can be markedly different from each other. In fact, UV/VIS spectroscopy is commonly used to measure the pKa of new substances. The reaction below shows a generalized dissociation of a weak acid (HA) into its conjugate base.

$\mathrm{HA + H_2O = A^- + H_3O^+} \nonumber$

Provided the UV/VIS absorption spectra of HA and A– differ from each other, describe a method that you could use to measure the pKa of the acid.

The dissociation reaction represented above is slow on the time scale of absorption (the absorption of a photon occurs over a time scale of $10^{-14}$ to $10^{-15}$ seconds). Because the reaction rate is slow relative to this, during the absorption of a photon the species is only in one of the two forms (either HA or A–). Therefore, if the solution is at a pH where both species are present, peaks for both will show up in the spectrum. To measure the pKa, standards must first be analyzed in a strongly acidic solution, such that all of the species is in the HA form, and a standard curve for HA can be generated. Then standards must be analyzed in a strongly basic solution, such that all of the species is in the A– form, to generate a standard curve for A–. At intermediate pH values close to the pKa, both HA and A– will be present, and the two standard curves can be used to calculate the concentration of each species. The pH and the two concentrations can then be substituted into the Henderson-Hasselbalch equation to determine the pKa value (a numerical sketch follows below).

$\mathrm{pH = pK_a + \log\left(\dfrac{[A^-]}{[HA]}\right)} \nonumber$
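A minimal sketch of the final arithmetic in the procedure above. The pH and the two concentrations are hypothetical values that would come from the two standard curves at an intermediate pH; they are not data from the text.

```python
import math

# Henderson-Hasselbalch rearranged: pKa = pH - log10([A-]/[HA])
pH = 5.20            # measured pH of the intermediate solution (assumed)
conc_HA = 2.4e-5     # mol/L from the HA standard curve (assumed)
conc_A  = 3.1e-5     # mol/L from the A- standard curve (assumed)

pKa = pH - math.log10(conc_A / conc_HA)
print(f"pKa = {pKa:.2f}")   # -> 5.09 for these illustrative numbers
```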
2.6: Evaporative Light Scattering Detection

Evaporative light scattering detection is a specialized technique in which UV radiation is used to detect non-UV-absorbing compounds separated by liquid chromatography. The column effluent is passed through a heated chamber that evaporates the mobile phase solvent. Non-volatile analyte compounds, which is usually the case for compounds separated by liquid chromatography, form solid particulates when the solvent is evaporated. The solid particulates scatter UV radiation, which leads to a reduction in the UV power at the detector (i.e., photomultiplier tube) when a compound elutes from the chromatographic column. The method is more commonly used to determine the presence and retention time of non-UV-absorbing species in a chromatographic analysis rather than their concentration. It is common in liquid chromatographic separations to employ a buffer to control the pH of the mobile phase. Many buffers will form particulates on evaporation of the solvent and interfere with evaporative light scattering detection.

Evaporative light scattering detection is encompassed more broadly within a technique known as turbidimetry. In turbidimetric measurements, the detector is placed in line with the source and the decrease in power from scattering by particulate matter is measured. Nephelometry is another technique based on scattering, except that the detector is placed at 90° to the source and the power of the scattered radiation is measured. Turbidimetry can be measured using a standard UV/VIS spectrophotometer; nephelometry can be measured using a standard fluorescence spectrophotometer (discussed in Chapter 3). Turbidimetry is better for samples that have a high concentration of scattering particles, where the power reaching the detector will be significantly less than the power of the source. Nephelometry is preferable for samples with only a low concentration of scattering particles. Turbidimetry and nephelometry are widely used to determine the clarity of solutions such as water, beverages, and food products.
Learning Objectives

After completing this unit the student will be able to:

• Describe the difference between a singlet and triplet state.
• Draw an energy level diagram and identify the transitions that correspond to absorption, fluorescence, internal conversion, radiationless decay, intersystem crossing and phosphorescence.
• Explain why phosphorescence emission is weak in most substances.
• Draw a diagram that shows the layout of the components of a fluorescence spectrophotometer.
• Describe the difference between a fluorescence excitation and emission spectrum.
• Draw representative examples of fluorescence excitation and emission spectra.
• Describe a procedure for measuring phosphorescence free of any interference from fluorescence.
• Justify why fluorescence measurements are often more sensitive than absorption measurements.
• Describe the meaning and consequences of self-absorption.
• Identify variables including the effect of pH that can influence the intensity of fluorescence.
• Identify the features that occur in organic molecules that are likely to have high fluorescent quantum yields.
• Compare two molecules and determine which one will undergo more collisional deactivation.

Luminescent methods refer to a family of techniques in which excited state species emit electromagnetic radiation. Among luminescent methods are various sub-categories that include the processes of fluorescence, phosphorescence, chemiluminescence, bioluminescence and triboluminescence. Among these different sub-categories, fluorescence spectroscopy is by far the most common technique used for analysis purposes. You are no doubt familiar with fluorescent lights. This unit will allow you to understand how such a light works.

3: Molecular Luminescence

Fluorescence only occurs after a chemical species has first been excited by electromagnetic radiation. The emission of radiation by a solid object heated in a flame (e.g., a piece of iron) is not fluorescence because the excitation has occurred thermally rather than through the absorption of electromagnetic radiation. Fluorescence can occur from species that have been excited by UV/VIS radiation. To consider what happens in the process of fluorescence, we need to think of the possible energy states for a ground and excited state system.

Draw an energy level diagram for a typical organic compound with $\pi$ and $\pi$* orbitals.

Figure $1$ represents the energy levels for a typical organic compound in which the $\pi$ orbitals are full and the $\pi$* orbitals are empty.

Now consider the electron spin possibilities for the ground and excited state. Are there different possible ways to orient the spins? If so, these represent different spin states.

The ground state, which is shown on the left in Figure $1$, has two electrons in the $\pi$-orbital. These two electrons must have opposite spins or else they would have the same four quantum numbers. Therefore, there is only one possible way to align the spins of the two electrons in the $\pi$-orbital. The excited state has one electron in the $\pi$-orbital and one electron in the $\pi$*-orbital, as shown in Figure $1$. In this case, there are two possible ways we might align the spins. In one case, the electron in the $\pi$*-orbital could have the opposite spin of the electron in the $\pi$-orbital (e.g., the electrons have paired spins, even though they are in different orbitals – see Figure $2$, middle diagram).
In the other case, the electron in the $\pi$*-orbital could have a spin that is parallel with the electron in the $\pi$-orbital (see Figure $2$ – far right diagram). In both cases, it does not matter which electron has spin-up and which has spin-down; the only important point is that in one case the two spins are opposite and in the other they are parallel. The energy level diagram in Figure $2$ shows representations for the two possibilities.

Do you think these different spin states have different energies?

Since they are different from each other (i.e., spins parallel versus spins paired), it makes sense that they would have different energies.

Which one do you expect to be lower in energy?

To answer this question, we have to think back to a rule we established with placing electrons into atomic or molecular orbitals that have the same energy (i.e., are degenerate). We learned that electrons go into degenerate orbitals with parallel spins and only pair up their spins when forced to do so (e.g., an atomic $p^3$ configuration has three unpaired electrons with parallel spins; only when we add a fourth electron to make a $p^4$ configuration do two of the electrons have paired spins). The rationale we gave for this observation is that configurations with parallel spins in degenerate orbitals are lower in energy than configurations with paired spins (i.e., it takes energy to pair up electron spins). Applying this general concept to the situation above, we can reason that the configuration in which the electrons in the $\pi$- and $\pi$*-orbitals have parallel spins is lower in energy than the configuration in which the two electrons have paired spins. The energy level diagrams in Figure $3$ show the lower energy of the configuration where the electrons have parallel spins.

If the spin state is defined as (2S + 1), where S represents the total electronic spin for the system, try to come up with names for the ground and possible excited states for the system that are based on their spin state. Remember that spin quantum numbers are either +½ or –½.

S, the total electronic spin for the system, is the sum of the individual spin quantum numbers for all of the electrons. In the case of the ground state, for every electron with a spin of +½ there is an electron with a spin of –½. Therefore, the value of S is zero and the spin state, which is 2S + 1, has a value of 1. In the case of the excited state in which the electrons have paired spins (+½ and –½), the value of S is also zero, so the spin state again has a value of 1. In the case of the excited state in which the electrons have parallel spins (+½ and +½; by convention, we use the positive value of the spin for parallel spins when determining the spin state), the value of S is now one. Therefore, the spin state, which is 2S + 1, has a value of 3. The name we use to signify a system with a spin state of one is a singlet state. The name we use to signify a system with a spin state of three is a triplet state. Note that the ground state is a singlet state and that one of the excited states is a singlet state as well. We differentiate these by denoting the energy level with a number subscript. So the ground singlet state is denoted as $S_0$ whereas the first excited singlet state is denoted as $S_1$. It is possible to excite a molecular species to higher electronic states, so higher energy $S_2$, $S_3$, etc. singlet states exist as well. The triplet state is denoted as $T_1$, and there are $T_2$, $T_3$, etc. states as well.
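The multiplicity arithmetic above is easy to tabulate. A minimal sketch that simply restates the 2S + 1 bookkeeping, with each unpaired parallel electron contributing a spin of +½:

```python
# Spin multiplicity 2S + 1 from the number of unpaired (parallel) electrons.
def multiplicity(n_unpaired):
    S = 0.5 * n_unpaired        # total electronic spin
    return int(2 * S + 1)

print(multiplicity(0))  # 1 -> singlet (all spins paired)
print(multiplicity(2))  # 3 -> triplet (two parallel spins)
```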
Now we can draw a more complex energy diagram for the molecule that shows the different singlet and triplet levels (Figure $4$).

Draw a diagram of the energy levels for such a molecule. Draw arrows for the possible transitions that could occur for the molecule.

Note in Figure $4$ how a triplet state is slightly lower in energy than the corresponding singlet state. Note as well that there are vibrational and rotational levels superimposed within the electronic states, as we observed before when considering UV/VIS spectroscopy. The energy level diagram in Figure $4$ shows the transitions that can occur within this manifold of energy states for an organic molecule. The transitions are numbered to facilitate our discussion of them.

Transition 1 (Absorption)

The transitions labeled with the number (1) in Figure $4$ represent the process of absorption of incident radiation that promotes the molecule to an excited electronic state. The diagram shows the absorption process to the $S_1$ and $S_2$ states. It is also possible to excite the molecule to higher vibrational and rotational levels within the excited electronic states, so there are many possible absorption transitions. The following equations show the absorption of the different frequencies of radiation needed to excite the molecule to $S_1$ and $S_2$.

$\mathrm{S_0 + h\nu = S_1} \nonumber$

$\mathrm{S_0 + h\nu ' = S_2} \nonumber$

It is reasonable at first to think that there is an absorption transition that goes directly from the $S_0$ to the $T_1$ state. However, this transition involves a spin-flip, and it turns out that transitions that involve a spin-flip or change in spin state are forbidden, meaning that they do not happen (although, as we will soon see, sometimes transitions that are forbidden do happen). What is important here is that you will not get direct excitation from the $S_0$ level to a higher energy triplet state. These absorption transitions are truly forbidden and do not happen.

Transition 2 (Internal Conversion)

Internal conversion is the process in which an electron crosses over to another electronic state of the same spin multiplicity (e.g., singlet-to-singlet, triplet-to-triplet). The internal conversion in Figure $4$ is from $S_2$ to $S_1$ and involves a crossover into a higher energy vibrational state of $S_1$. It is also possible to have internal conversion from $S_1$ to a higher vibrational level of $S_0$.

Transition 3 (Radiationless decay – loss of energy as heat)

The transitions labeled with the number (3) in Figure $4$ are known as radiationless decay or external conversion. These generally correspond to the loss of energy as heat to surrounding solvent or other solute molecules.

$\mathrm{S_1 = S_0 + heat} \nonumber$

$\mathrm{T_1 = S_0 + heat} \nonumber$

Note that systems in $S_1$ and $T_1$ can lose their extra energy as heat. Also, systems excited to higher energy vibrational and rotational states lose that extra energy as heat. The energy level diagram in Figure $4$ shows systems excited to higher vibrational levels of $S_1$, and all of these will rapidly lose some of the extra energy as heat and drop down to the $S_1$ level that is only electronically excited.

An important consideration that affects the various processes that take place for excited state systems is the lifetimes of the different excited states. The lifetime of a particular excited state (e.g.,
the $S_1$ state) depends to some degree on the specific molecular species being considered and the orbitals involved, but measurements of excited state lifetimes for many different compounds allow us to provide ballpark numbers for the lifetimes of different excited states. The lifetime of an electron in an $S_2$ state is typically on the order of $10^{-15}$ second. The lifetime of an electron in an $S_1$ state depends on the energy levels involved. For a $\pi$-$\pi$* system, the lifetimes range from $10^{-7}$ to $10^{-9}$ second. For an n-$\pi$* system, the lifetimes range from $10^{-5}$ to $10^{-7}$ second. Since $\pi$-$\pi$* molecules are more commonly studied by fluorescence spectroscopy, $S_1$ lifetimes are typically on the order of $10^{-8}$ second. While this is a small number on an absolute scale, note that it is a large number compared to the lifetime of the $S_2$ state. The lifetime of a vibrational state is typically on the order of $10^{-12}$ second. Note that the lifetime of an electron in the $S_1$ state is significantly longer than the lifetime of an electron in a vibrationally excited state of $S_1$. That means that systems excited to vibrationally excited states of $S_1$ rapidly lose the extra energy as heat (in about $10^{-12}$ second) until reaching $S_1$, where they then “pause” for about $10^{-8}$ second.

Transition 4 (Fluorescence)

The transition labeled (4) in Figure $4$ denotes the loss of energy from $S_1$ as radiation. This process is known as fluorescence.

$\mathrm{S_1 = S_0 + h\nu} \nonumber$

Therefore, molecular fluorescence is a term used to describe a singlet-to-singlet transition in a system where the chemical species was first excited by absorption of electromagnetic radiation. Note that the diagram in Figure $4$ does not show molecular fluorescence occurring from the $S_2$ level. Fluorescence from the $S_2$ state is extremely rare in molecules, and there are only a few known systems where it occurs. Instead, what happens is that most molecules excited to energy states higher than $S_1$ quickly (in about $10^{-15}$ second) undergo an internal conversion to a high energy vibrational state of $S_1$. They then rapidly lose the extra vibrational energy as heat and “pause” in the $S_1$ state. From $S_1$, they can either undergo fluorescence or undergo another internal conversion to a high energy vibrational state of $S_0$ and then lose the energy as heat. The extent to which fluorescence or loss of heat occurs from $S_1$ depends on particular features of the molecule and solution that we will discuss in more detail later in this unit. An important aspect of fluorescence from the $S_1$ state is that the molecule can end up in vibrationally excited states of $S_0$, as shown in the diagram above. Therefore, fluorescence emission from an excited state molecule can occur at a variety of different wavelengths. Just as we talked about with absorbance and the probability of different transitions (reflected in the magnitude of the molar absorptivity), fluorescent transitions have different probabilities as well. In some molecules, the $S_1$-to-$S_0$ fluorescent transition is the most probable, whereas in other molecules the most probable fluorescent transition may involve a higher vibrational level of $S_0$. A molecule ending up in a higher vibrational level of $S_0$ after a fluorescent emission will quickly lose the extra energy as heat and drop down to the lowest vibrational level of $S_0$.

So how do fluorescent light bulbs work? Inside the tube that makes up the bulb is a gas comprised of argon and a small amount of mercury. An electrical current that flows through the gas excites the mercury atoms, causing them to emit light.
This light is not fluorescence because the gaseous species was excited by an electrical current rather than by electromagnetic radiation. The light emitted by the mercury strikes the white powdery coating on the inside of the glass tube and excites it. This coating then emits light. Since the coating was excited by light and emits light, it is a fluorescence emission.

Transition 5 (Intersystem crossing)

The transition labeled (5) in Figure $4$ is referred to as intersystem crossing. Intersystem crossing involves a spin-flip of the excited state electron. Remember that the electron has “paused” in $S_1$ for about $10^{-8}$ second. While there, it is possible for the species to interact with things in the matrix (e.g., collide with a solvent molecule) that can cause the electron in the ground and/or excited state to flip its spin. If the spin flip occurs, the molecule is now in a vibrationally excited level of $T_1$, and it rapidly loses the extra vibrational energy as heat to drop down to the $T_1$ electronic level.

What do you expect for the lifetime of an electron in the $T_1$ state?

Earlier we mentioned that transitions that involve a change in spin state are forbidden. Theoretically that means that an electron in the $T_1$ state ought to be trapped there, because the only place for it to go on losing energy is to the $S_0$ state. The effect of this is that electrons in the $T_1$ state have a long lifetime, which can be on the order of $10^{-4}$ to 100 seconds.

There are two possible routes for an electron in the $T_1$ state. One is that another spin flip can occur for one of the two electrons, causing the spins to be paired. If this happens, the system is now in a high-energy vibrational state of $S_0$ and the extra energy is rapidly lost as radiationless decay (transition 3), releasing heat to the surroundings.

Transition 6 (Phosphorescence)

The other possibility for a system in $T_1$ is to emit a photon of radiation. Although theoretically a forbidden process, it does happen for some molecules. This emission, which is labeled (6) in Figure $4$, is known as phosphorescence. There are two common occasions where you have likely seen phosphorescence emission. One is from glow-in-the-dark stickers. The other is if you have ever turned off your television in a dark room and observed that the screen has a glow that takes a few seconds to die down. Phosphorescence is usually a weak emission from most substances.

Why is phosphorescence emission weak in most substances?

One reason why phosphorescence is usually weak is that it requires intersystem crossing and population of the $T_1$ state. In many compounds, radiationless decay and/or fluorescence from the $S_1$ state is favored over intersystem crossing, and not many of the species ever make it to the $T_1$ state. Systems that happen to have a close match between the energy of the $S_1$ state and a higher vibrational level of the $T_1$ state may have relatively high rates of intersystem crossing. Compounds with non-bonding electrons often have higher degrees of intersystem crossing because the energy difference between the $S_1$ and $T_1$ states in these molecules is less. Paramagnetic substances such as oxygen gas (O$_2$) promote intersystem crossing because the magnetic dipole of the unpaired electrons of oxygen can interact with the magnetic spin dipole of the electrons in the species under study, although the paramagnetism also diminishes phosphorescence from $T_1$.
A second reason why phosphorescence is often weak has to do with the long lifetime of the T1 state. The longer the species is in the excited state, the more collisions it has with surrounding molecules. Collisions tend to promote the loss of excess energy as radiationless decay. Such collisions are said to quench fluorescence or phosphorescence. Observable levels of phosphorescent emission will require that collisions in the sample be reduced to a minimum. Hence, phosphorescence is usually measured on solid substances. Glow-in-the-dark stickers are a solid material. Chemical substances dissolved in solution are usually cooled to the point that the sample is frozen into a solid glass to reduce collisions before recording the phosphorescence spectrum. This requires a solvent that freezes to a clear glass, something that can be difficult to achieve with water as it tends to expand and crack when frozen. Which transition ($\pi$*-$\pi$ or $\pi$*-n) would have a higher fluorescent intensity? Justify your answer. There are two reasons why you would expect the $\pi$*-n transition to have a lower fluorescent intensity. The first is that the molar absorptivity of n-$\pi$* transitions is less than that of $\pi$-$\pi$* transitions. Fewer molecules are excited in the n-$\pi$* case, so fewer are available to fluoresce. The second is that the excited state lifetime of the n-$\pi$* state ($10^{-5}$ to $10^{-7}$ second) is longer than that of the $\pi$-$\pi$* state ($10^{-7}$ to $10^{-9}$ second). The longer lifetime means that more collisions and more collisional deactivation will occur for the n-$\pi$* system than for the $\pi$-$\pi$* system. Now that we understand the transitions that occur in a system to produce fluorescence and phosphorescence, we can examine the instrumental setup of a fluorescence spectrophotometer.
3.3: Instrumentation
What would constitute the basic instrumental design of a fluorescence spectrophotometer? In many ways the design of a fluorescence spectrophotometer is similar to a UV/VIS absorption spectrophotometer. We need a source of radiation and a monochromator to select out the desired wavelength of light. The device needs a sample holder and a detector to measure the intensity of the radiation. Just like UV/VIS absorption spectroscopy, radiation is used to excite the sample. Unlike absorption spectroscopy, a fluorescent sample emits radiation, and the emission goes from the S1 level to either the S0 level or higher vibrational states of the S0 level. Since fluorescence involves an excitation and an emission process, and the wavelengths at which these two processes occur will almost always be different, a fluorescence spectrophotometer requires both an excitation and an emission monochromator. Also, since the emitted radiation leaves the sample in all directions, the detector does not need to be at 180° relative to the source as in an absorption instrument. Usually the detector is set at 90° to the incident beam, and mirrors are placed around the sample cell at 180° to the source and 180° to the detector to reflect the source beam back through the sample and to reflect emitted radiation toward the detector. A diagram of the components of a fluorescence spectrophotometer is shown in Figure \(1\).

3.4: Excitation and Emission Spectra

What would be the difference between an excitation and emission spectrum in fluorescence spectroscopy? In an excitation spectrum, the emission monochromator is set to some wavelength where the sample is known to emit radiation and the excitation monochromator is scanned through the different wavelengths. The excitation spectrum will look similar, if not identical, to the absorption spectrum obtained in UV/VIS spectroscopy. In an emission spectrum, the excitation monochromator is set to some wavelength known to excite the sample and the emission monochromator is scanned through the different wavelengths. Draw representative examples of the excitation and emission spectra for a molecule. The important point to realize is that the only peak that overlaps between the excitation and emission spectrum is the S0-S1 transition. Otherwise, all the excitation peaks occur at higher frequencies or shorter wavelengths and all of the emission peaks occur at lower frequencies or longer wavelengths. The spectra in Figure \(6\) show the excitation and emission spectra of anthracene. Note that the only overlap occurs at 380 nm, which corresponds to the S0-S1 transition. Describe a way to measure the phosphorescence spectrum of a species that is not compromised by the presence of any fluorescence emission. The important thing to consider in addressing this question is that the lifetime of the S1 state from which fluorescence occurs is approximately $10^{-8}$ second whereas the lifetime of the T1 state from which phosphorescence occurs is on the order of $10^{-4}$ to 100 seconds. Because of these different lifetimes, fluorescence emission will decay away rather quickly while phosphorescence emission will decay away more slowly. The diagram in Figure \(7\) shows representations for the decay of fluorescence versus phosphorescence as a function of time if the radiation source was turned off. The two can be distinguished by using a pulsed source. A pulsed source is turned on for a brief instant and then turned off. Many fluorescence spectrophotometers use a pulsed source.
The electronics on the detector can be coordinated with the source pulsing. When measuring fluorescence, the detector reads the signal while the pulse is on. When measuring phosphorescence, a delay time during which the detector is turned off occurs after the pulse ends. Then the detector is turned on for some period of time, which is referred to as the gate time. Figure \(7\) also shows where the delay and gate times might be set for the sample represented in the decay curves. The proper gate time depends in part on how slowly the phosphorescence decays. You want a reasonable length of time to measure enough signal, but if the gate time is so long that little to no phosphorescence is occurring at its end, the detector is mostly measuring noise and the signal-to-noise ratio will be reduced. If performing quantitative analysis in fluorescence spectroscopy, which wavelengths would you select from the spectra you drew in the problem above? The two best wavelengths would be those that produced the maximum signal on the excitation and emission spectra. That will lead to the highest sensitivity and lowest detection limits in the analysis. For the spectra of anthracene drawn in Figure \(6\), that would correspond to an excitation wavelength of 360 nm and an emission wavelength of 402 nm. The one exception is if the S0-S1 transition is the maximum on both spectra, which would mean having the excitation and emission monochromators set to the same wavelength. The problem that occurs here is that the excitation beam of radiation will always exhibit some scatter as it passes through the sample. Scattered radiation appears in all directions and the detector has no way to distinguish it from fluorescence. Usually the excitation and emission wavelengths must be offset by some suitable value (often 30 nm) to keep the scatter to acceptable levels.
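To make the time-gating idea concrete, here is a minimal sketch in Python, assuming simple exponential decays; the lifetimes are the ballpark values quoted earlier, while the delay and gate settings are invented, illustrative values.

```python
import numpy as np

# Representative excited-state lifetimes from the discussion above.
TAU_F = 1e-8   # fluorescence lifetime (S1 state), s
TAU_P = 1e-1   # phosphorescence lifetime (T1 state), s -- an assumed value

def integrated_emission(tau, delay, gate):
    """Fraction of the total decay collected in a detector window that opens
    `delay` seconds after the excitation pulse ends and stays open for
    `gate` seconds, assuming a simple exponential decay exp(-t/tau)."""
    return np.exp(-delay / tau) - np.exp(-(delay + gate) / tau)

delay, gate = 1e-6, 1e-1   # 1 microsecond delay, 0.1 s gate (assumed values)

print(f"Fluorescence collected:    {integrated_emission(TAU_F, delay, gate):.3e}")
print(f"Phosphorescence collected: {integrated_emission(TAU_P, delay, gate):.3e}")
# With a 1 microsecond delay, the fluorescence (10 ns lifetime) has decayed to
# essentially nothing, while roughly 63% of the phosphorescence is still collected.
```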
3.5: Quantum Yield of Fluorescence ($\varphi_F$)
The quantum yield ($\varphi_F$) is a ratio that expresses the number of species that fluoresce relative to the total number of species that were excited. Earlier we said that anything that reduces the number of excited state species that undergo fluorescence is said to quench the fluorescence. The expression for the quantum yield will depend on the rate constants for the different processes that can occur for excited state species. Referring back to our original drawing of the different processes that can occur, we can write the following expression for the quantum yield, where $k_F$ is the rate constant for fluorescence, $k_{IC}$ is the rate constant for internal conversion, $k_{EC}$ is the rate constant for external conversion, $k_{ISC}$ is the rate constant for intersystem crossing and $k_C$ is the rate constant for any other competing processes, including photodecomposition of the sample. $\mathrm{\varphi_F = \dfrac{k_F}{k_F + k_{IC} + k_{EC} + k_{ISC} + k_C}} \nonumber$ Excited state species sometimes have sufficient energy to decompose through processes of dissociation or predissociation. In dissociation, the electron is excited to a high enough vibrational level that the bond ruptures. In predissociation, the molecule undergoes internal conversion from a higher electronic state to an upper vibrational level of a lower electronic state prior to bond rupture. When putting a sample into a fluorescence spectrophotometer, it is usually desirable to block the excitation beam until just before making the measurement to minimize photodecomposition. Since the quantum yield is a ratio, the limits of $\varphi_F$ are from 0 to 1. Species with quantum yields of 0.01 or higher (at least 1 out of every 100 excited species actually undergoes fluorescence) are useful for analysis purposes. Which method is more sensitive, absorption or fluorescence spectroscopy? On first consideration it might seem reasonable to think that absorption spectroscopy is more sensitive than fluorescence spectroscopy. As stated above, for some compounds that we measure by fluorescence, only one out of every 100 excited species undergoes fluorescence emission. In this case, 100 photons are absorbed but only one is emitted. The answer, though, requires a different consideration. The measurement of absorption involves a comparison of $P$ to $P_o$. At low concentrations, these two values are large and similar in magnitude. Therefore, at low concentrations, absorption involves the measurement of a small difference between two large signals. Fluorescence, on the other hand, is measured at 90° to the source. In the absence of fluorescence, as in a blank solution, there ought to be no signal reaching the detector (however, there is still some scattered and stray light that may reach the detector as noise). At low concentrations, fluorescence involves the measurement of a small signal over essentially no background. For comparison, suppose you tried to use your eyes to distinguish the difference between a 100 and a 99 Watt light bulb and the difference between complete darkness and a 1 Watt light bulb. Your eyes would have a much better ability to discern the small 1 Watt signal over darkness than the difference between the two large 100 and 99 Watt signals. The same occurs for the electronic measurements in a spectrophotometer. Therefore, because emission involves the measurement of a small signal over no background, any type of emission spectroscopy has an inherent sensitivity advantage of one to three orders of magnitude over measurements of absorption.
Fluorescence spectroscopy is an especially sensitive analysis method for those compounds that have suitable quantum yields.
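As a minimal numerical sketch of the quantum yield expression above, the snippet below plugs invented, order-of-magnitude rate constants into the equation; none of these values come from the text beyond requiring that $k_F$ be consistent with an S1 lifetime of roughly $10^{-8}$ second.

```python
# All rate constants (s^-1) are assumed, illustrative values.
k_F   = 1e8   # fluorescence, consistent with a ~10^-8 s S1 lifetime
k_IC  = 1e7   # internal conversion
k_EC  = 5e7   # external conversion
k_ISC = 1e7   # intersystem crossing
k_C   = 1e6   # other competing processes, e.g., photodecomposition

phi_F = k_F / (k_F + k_IC + k_EC + k_ISC + k_C)
print(f"Quantum yield phi_F = {phi_F:.2f}")   # ~0.58 for these assumed values
```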
3.6: Variables that Influence Fluorescence Measurements
What variables influence fluorescence measurements? For each variable, describe its relationship to the intensity of fluorescence emission. There are a variety of variables that influence the signal observed in fluorescence spectroscopy. As seen in the original diagram showing the various energy levels and transitions that can occur, anything that can quench the fluorescent transition will affect the intensity of the fluorescence. When discussing absorption spectroscopy, an important consideration is Beer's Law. A similar relationship exists for fluorescence spectroscopy, as shown below, in which $I$ is the fluorescence intensity, $\varepsilon$ is the molar absorptivity, $b$ is the path length, $c$ is the concentration, and $P_o$ is the source power. $\mathrm{I = 2.303K'\varepsilon bcP_o} \nonumber$ Not surprisingly, fluorescence intensity varies linearly with the path length and with the concentration. $K'$ is a constant that is dependent on the geometry and other factors and includes the fluorescence quantum yield. Since $\varphi_F$ is a constant for a given system, $K'$ can be written as $K''\varphi_F$. Of particular interest is that the fluorescence intensity relates directly to the source power. It stands to reason that the higher the source power, the more species that absorb photons and become excited, and therefore the more that eventually emit fluorescence radiation. This suggests that high-powered lasers, provided they emit at the proper wavelength of radiation to excite a system, have the potential to be excellent sources for fluorescence spectroscopy. The equation above predicts a linear relationship between fluorescence intensity and concentration. However, the utility of this equation breaks down at absorbance values of 0.05 or higher, leading to a negative deviation of the standard curve. Something else that can possibly occur with fluorescence or other emission processes is that emitted photons can be reabsorbed by ground state molecules. This is a particular problem if the S1-S0 emission transition is the one being monitored. In this situation, at high concentrations of analyte, the fluorescence intensity measured at the detector may actually start to drop as shown in the standard curve in Figure $8$.
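The linear expression above can be viewed as the leading term of the fuller relationship $I = K''\varphi_F P_o(1 - 10^{-\varepsilon bc})$. The short sketch below, using an arbitrary value for the lumped constant, compares the two forms and shows how the negative deviation grows once the absorbance ($\varepsilon bc$) exceeds roughly 0.05.

```python
import numpy as np

K_Po = 1.0                                     # lumped constant K'' * phi_F * Po (arbitrary)
absorbance = np.array([0.01, 0.05, 0.1, 0.5])  # A = epsilon * b * c

linear = 2.303 * K_Po * absorbance             # linear approximation from the text
full = K_Po * (1 - 10.0 ** (-absorbance))      # fuller expression

for A, lin, f in zip(absorbance, linear, full):
    print(f"A = {A:4.2f}:  linear {lin:.4f}   full {f:.4f}   "
          f"deviation {100 * (lin - f) / lin:.1f}%")
# At A = 0.01 the two forms agree to about 1%; by A = 0.5 the linear prediction
# overestimates the true intensity by roughly 40%, the negative deviation described above.
```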
Any changes in the system that will affect the number and force of collisions taking place in the solution will influence the magnitude of the fluorescence emission. Collisions promote radiationless decay and loss of extra energy as heat, so more collisions or more forceful collisions will promote radiationless decay and reduce fluorescence emission. Therefore, fluorescence intensity is dependent on the temperature of the solution. Higher temperatures will speed up the movement of the molecules (i.e., higher translational energy), leading to more collisions and more forceful collisions, thereby reducing the fluorescence intensity. Ensuring that all the measurements are done at the same temperature is important. Reducing the temperature of the sample will also increase the signal-to-noise ratio. Another factor that will affect the number of collisions is the solvent viscosity. More viscous solutions will have fewer collisions, less collisional deactivation, and higher fluorescence intensity. The solvent can have other effects as well, similar to what we previously discussed in the section on UV/VIS absorption spectroscopy. For example, a hydrogen-bonding solvent can influence the value of $\lambda_{max}$ in the excitation and emission spectra by altering the energy levels of non-bonding electrons and electrons in $\pi$* orbitals. Other species in the solution (e.g., metal ions) may also associate with the analyte and change the $\lambda_{max}$ values. Many metal ions and dissolved oxygen are paramagnetic. We already mentioned that paramagnetic species promote intersystem crossing, thereby quenching the fluorescence. Removal of paramagnetic metal ions from a sample is not necessarily a trivial matter. Removing dissolved oxygen gas is easily done by purging the sample with a diamagnetic, inert gas such as nitrogen, argon or helium. All solution-phase samples should be purged of oxygen gas prior to the analysis. Another concern that can distinguish sample solutions from the blank and standards is the possibility that the unknown solutions have impurities that can absorb the fluorescent emission from the analyte. Comparing the fluorescence excitation and emission spectra of the unknown samples to those of the standards may provide an indication of whether the unknown has impurities that are interfering with the analysis. The pH will also have a pronounced effect on the fluorescence spectrum for organic acids and bases. An interesting example is to consider the fluorescence emission spectrum of the compound 2-naphthol. The hydroxyl hydrogen atom is acidic and the compound has a pKa of 9.5. At a pH of 1, the compound exists almost exclusively as the protonated 2-naphthol. At a pH of 13, the compound exists almost exclusively as the deprotonated 2-naphtholate ion. At a pH equal to the pKa value, the solution would consist of a 50-50 mixture of the protonated and deprotonated forms. The most obvious thing to note is the large difference in the $\lambda_{max}$ value for the neutral 2-naphthol (355 nm) and the anionic 2-naphtholate ion (415 nm). The considerable difference between the two emission spectra occurs because the presence of more resonance forms leads to stabilization (i.e., lower energy) of the excited state. As shown in Figure $10$, the 2-naphtholate species has multiple resonance forms involving the oxygen atom whereas the neutral 2-naphthol species only has a single resonance form. Therefore, the emission spectrum of the 2-naphtholate ion is red-shifted relative to that of the 2-naphthol species. Consider the reaction shown below for the dissociation of 2-naphthol. This reaction may be either slow (slow exchange) or fast (fast exchange) on the time scale of fluorescence spectroscopy. Draw the series of spectra that would result for an initial concentration of 2-naphthol of $10^{-6}$ M if the pH was adjusted to 2, 8.5, 9.5, 10.5, and 13 and slow exchange occurred. Draw the spectra at the same pH values when the exchange rate is fast. If slow exchange occurs, an individual 2-naphthol or 2-naphtholate species stays in its protonated or deprotonated form during the entire excitation-emission process and emits its characteristic spectrum. Therefore, when both species are present in appreciable concentrations, two peaks occur in the spectrum, one for each of the individual species. On the left side of Figure $11$, at pH 2, all of the compound is in the neutral 2-naphthol form, whereas at pH 13 it is all in the anionic 2-naphtholate form. At pH 9.5, which equals the pKa value, there is a 50-50 mixture of the two and the peaks for both species are equal in intensity. At pH 8.5 and 10.5, one of the forms predominates.
The intensity of each peak is proportional to the concentration of the corresponding species. If fast exchange occurs, as seen on the right side of Figure $11$, a particular species rapidly changes between its protonated and deprotonated forms during the excitation and emission process. Now the emission is a weighted time average of the two forms. If the pH is such that more neutral 2-naphthol is present in solution, the maximum is closer to 355 nm (pH = 8.5). If the pH is such that more anionic 2-naphtholate is present in solution, the maximum is closer to 415 nm (pH = 10.5). At the pKa value (9.5), the peak appears in the middle of the two extremes. What actually happens: is the exchange fast or slow? The observation is that the exchange of protons that occurs in the acid-base reaction is slow on the time scale of fluorescence spectroscopy. Remember that the lifetime of an excited state is about $10^{-8}$ second. This means that proton exchange among the species in solution takes longer than $10^{-8}$ second, so the fluorescence emission spectrum has peaks for both the 2-naphthol and 2-naphtholate species. Devise a procedure that might allow you to determine the pKa of a weak acid such as 2-naphthol. The pKa value of an acid is incorporated into an expression called the Henderson-Hasselbalch equation, which is shown below, where HA represents the protonated form of any weak acid and A$^-$ is its conjugate base. $\mathrm{pH = pK_a + \log \dfrac{[A^-]}{[HA]}} \nonumber$ If a standard curve was prepared for 2-naphthol at a highly acidic pH and for 2-naphtholate at a highly basic pH, the concentration of each species could be determined at different intermediate pH values where both are present. These concentrations, along with the known pH, can be substituted into the Henderson-Hasselbalch equation to calculate pKa. As described earlier, this same process is used quite often in UV/VIS spectroscopy to determine the pKa of acids, so long as the acid and base forms of the conjugate pair have substantially different absorption spectra. If you do this with the fluorescence spectra of 2-naphthol, however, you get a rather perplexing set of results in that slightly different pKa values are calculated at different pH values where appreciable amounts of the neutral and anionic forms are present. This occurs because the pKa of excited state 2-naphthol is different from the pKa of the ground state. Since the fluorescence emission occurs from the excited state, this difference will influence the calculated pKa values. A more complicated set of calculations can be done to determine the excited state pKa values. UV/VIS spectroscopy is therefore often an easier way to measure the pKa of a species than fluorescence spectroscopy. Because many compounds are weak acids or bases, and the fluorescence spectra of the conjugate pairs might therefore vary considerably, it is important to adjust the pH to ensure that all of the compound is in either the protonated or deprotonated form.
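The sketch below works through the procedure just described using invented peak intensities that are assumed to be proportional to the concentrations of the two forms; it is meant only to illustrate the arithmetic, not to reproduce real 2-naphthol data.

```python
import math

# (pH, intensity at 355 nm ~ [HA], intensity at 415 nm ~ [A-]); values invented
measurements = [
    (8.5, 0.91, 0.09),
    (9.5, 0.50, 0.50),
    (10.5, 0.09, 0.91),
]

for pH, i_ha, i_a in measurements:
    pKa = pH - math.log10(i_a / i_ha)   # rearranged Henderson-Hasselbalch equation
    print(f"pH {pH:4.1f}:  estimated pKa = {pKa:.2f}")
# Each measurement returns ~9.5 here because the fake data were built that way;
# with real fluorescence data the values drift with pH because the excited-state
# pKa differs from the ground-state pKa, as described in the text.
```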
Which compound will have a higher quantum yield: anthracene or diphenylmethane? Answering this question involves a consideration of the effect that collisions of the molecules will have in causing radiationless decay. Note that anthracene is quite a rigid molecule. Diphenylmethane is rather floppy because of the methylene bridge between the two phenyl rings. Hopefully it is reasonable to see that collisions of the floppy diphenylmethane are more likely to lead to radiationless decay than collisions of the rigid anthracene molecules. Another way to think of this is to consider the consequences of a crash between a Greyhound bus (i.e., anthracene) and a car towing a boat (i.e., diphenylmethane). It might be reasonable to believe that under most circumstances, the car would suffer more damage in the collision. Molecules that are suitable for analysis by fluorescence spectroscopy are therefore rigid species, often with conjugated $\pi$ systems, that undergo less collisional deactivation. As such, fluorescence spectroscopy is a much more selective method than UV/VIS absorption spectroscopy. In many cases, a suitable fluorescent chromophore is first attached to the compound under study. For example, a fluorescent derivatization agent is commonly used to analyze amino acids that have been separated by high performance liquid chromatography. The advantage of performing such a derivatization step stems from the high sensitivity of fluorescence spectroscopy. That same sensitivity makes it all the more important to control the variables described above, as they will have a more pronounced effect and the potential to cause errors in the measurement.

3.7: Other Luminescent Methods

Two other important forms of luminescence are chemiluminescence and bioluminescence. Chemiluminescence refers to a process in which a chemical reaction forms a product molecule that is in an excited state. The excited state product then emits radiation. The classic example of a chemiluminescent process involves the reaction of luminol with hydrogen peroxide (H2O2) in the presence of a catalyst as shown below. The reaction generates 3-aminophthalate in an excited state and it emits a bluish light. The luminol reaction is used in forensics to detect the presence of blood. In this case, the iron from the hemoglobin serves as the catalyst. Another important example of a chemiluminescent reaction involves the reaction of nitric oxide (NO) with ozone (O3) to produce excited state nitrogen dioxide (NO2*) and oxygen gas. Nitric oxide is an important compound in atmospheric chemistry and, with the use of an ozone generator, it is possible to use the chemiluminescent reaction as a sensitive way of measuring NO. $\mathrm{NO + O_3 \rightarrow NO_2^* + O_2} \nonumber$ $\mathrm{NO_2^* \rightarrow NO_2 + h\nu} \nonumber$ An important feature of both chemiluminescent reactions above is that peroxide and ozone, which are strong oxidants, have an unstable or energetic chemical bond. Chemiluminescence is a rare process, occurring in only a limited number of chemical reactions. Bioluminescence refers to a situation in which living organisms use a chemiluminescent reaction to produce a luminescent emission. The classic example is fireflies. There are also a number of bioluminescent marine organisms. Triboluminescence is a form of luminescence caused by friction. Breaking or crushing a wintergreen-flavored lifesaver in the dark produces triboluminescence. The friction of the crushing action excites sugar molecules, which emit ultraviolet radiation that cannot be seen by our eyes. However, the ultraviolet radiation emitted by the sugar is absorbed by fluorescent methyl salicylate molecules that account for the wintergreen flavor. The methyl salicylate molecules emit light that can be seen by our eyes. Finally, light sticks also rely on a fluorescent process. Bending the light stick breaks a vial that leads to the mixing of phenyl oxalate ester and hydrogen peroxide.
Two subsequent decomposition reactions occur, the last of which releases energy that excites a fluorescent dye. Emission from the dye accounts for the glow from the light stick.
Learning Objectives
After completing this unit the student will be able to:
• Describe the selection rule for infrared-active transitions.
• Determine the vibrations for a triatomic molecule and identify whether they are infrared-active.
• Draw the design of a non-dispersive infrared spectrophotometer and describe how it functions.
• Describe the difference between time and frequency domain spectra.
• Explain how a Michelson interferometer can be used to obtain a time domain spectrum.
• Explain the advantages of Fourier transform infrared spectroscopy over conventional infrared spectroscopy.

4: Infrared Spectroscopy

Infrared radiation is the proper energy to excite vibrations in molecules. The IR spectrum consists of near (4,000-12,800 cm$^{-1}$), mid (200-4,000 cm$^{-1}$) and far (10-200 cm$^{-1}$) regions. The mid-IR region is most commonly used for analysis purposes. Vibrational excitations correspond to changes in the internuclear distances within molecules. You have likely recorded infrared spectra in your organic chemistry course. Thinking back to the instrument you used to record the spectrum, consider the following question. Can infrared spectra be recorded in air? If so, what does this say about the major constituents of air? Thinking back to the instrument you used in your organic chemistry course, you presumably realize that no attempt was made to remove air from the system. The beam of infrared radiation passed through the air, indicating that the major constituents of air (nitrogen gas, N2, and oxygen gas, O2) either do not absorb infrared radiation or absorb in another region of the spectrum. You likely know that double and triple bonds have strong absorptions in the mid-IR region of the spectrum. N2 and O2 have triple and double bonds, respectively, so it turns out that N2 and O2 do not absorb infrared radiation. There are certainly minor constituents of the air (e.g., carbon dioxide) that do absorb infrared radiation, and these are accounted for by either using a dual beam configuration on a continuous wave infrared spectrophotometer or by recording a background spectrum on a Fourier transform infrared spectrophotometer. Why don't the major constituents of air absorb infrared radiation? It might be worth noting that a molecule such as hydrogen chloride (HCl) does absorb infrared light. In order for a vibration to absorb infrared radiation and become excited, the molecule must change its dipole moment during the vibration. Homonuclear diatomic molecules such as N2 and O2 do not have dipole moments. If the molecule undergoes a stretching motion as shown in Figure \(1\), where the spheres represent the two nuclei, there is no change in the dipole moment during the vibrational motion; therefore, N2 and O2 do not absorb infrared radiation. HCl does have a dipole moment. Stretching the HCl bond leads to a change in the dipole moment. If we stretched the bond so far as to break it and produce the two original neutral atoms, there would be no dipole moment. Therefore, as we lengthen the bond in HCl, the dipole moment gets smaller. Because the dipole moment of HCl changes during a stretching vibration, it absorbs infrared radiation. Describe the vibrations of carbon dioxide (CO2) and determine which ones absorb infrared radiation. The number of possible vibrations for a molecule is determined by its vibrational degrees of freedom. The vibrational degrees of freedom for a non-linear molecule are (3N – 6), where N is the number of atoms. The vibrational degrees of freedom for a linear molecule are (3N – 5).
Carbon dioxide is a linear molecule, so it has four vibrational degrees of freedom and four possible vibrations. One vibration is the symmetrical stretch (Figure \(2\)). Each bond dipole, which is represented by the arrows, does change on stretching, but the overall molecular dipole is zero throughout. Since there is no net change in the molecular dipole, this vibration is not IR active. A second vibration is the asymmetrical stretch (Figure \(3\)). Each bond dipole does change on stretching and the molecule now has a net dipole. Since the molecular dipole changes during an asymmetrical stretch, this vibration is IR active. The third vibration is the bending vibration (Figure \(4\)). There are two bending vibrations that occur in two different planes. Both are identical, so both have the same energy and are degenerate. The bending motion does lead to a net molecular dipole. Since the molecular dipole changes during the bending motion, these vibrations are IR active. A stretching vibration can be represented by a potential energy diagram as shown in Figure \(5\) (also referred to as a potential energy well). The x-axis is the internuclear distance. Note that different vibrational energy levels, which are shown on the diagram as a series of parallel lines, are superimposed onto the potential well. Also note that, if the bond gets to too high a vibrational state, it can be ruptured. IR spectra are recorded in reciprocal wavenumbers (cm$^{-1}$) and there are certain parts of the mid-IR spectrum that correspond to specific vibrational modes of organic compounds.
2700-3700 cm$^{-1}$: Hydrogen stretching
1950-2700 cm$^{-1}$: Triple bond stretching
1550-1950 cm$^{-1}$: Double bond stretching
700-1500 cm$^{-1}$: Fingerprint region
An important consideration is that, as molecules get more complex, the various vibrational modes become coupled together and the infrared (IR) absorption spectrum becomes quite complex and difficult to assign accurately. Therefore, while each compound has a unique IR spectrum (suggesting that IR spectroscopy ought to be especially useful for qualitative analysis, i.e., compound identification), interpreting IR spectra is not an easy process. When using IR spectra for compound identification, usually a computer is used to compare the spectrum of the unknown compound to a library of spectra of known compounds to find the best match. IR spectroscopy can also be used for quantitative analysis. One limitation to the use of IR spectroscopy for quantitative analysis is that IR sources have weak power, which enhances the noise relative to the signal and reduces the sensitivity of the method relative to UV/VIS absorption spectroscopy. Also, IR detectors are much less sensitive than those for the UV/VIS region of the spectrum. IR bands are narrower than those observed in UV/VIS spectra, so instrumental deviations from Beer's Law (e.g., polychromatic radiation) are of more concern. Fourier transform methods are often used to enhance the sensitivity of infrared methods, and there are some specialized IR techniques that are used as well.
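The (3N – 6) and (3N – 5) counting rules are easy to capture in a short function; the sketch below applies them to a few of the molecules discussed in this section.

```python
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Vibrational degrees of freedom: 3N - 5 for a linear molecule,
    3N - 6 for a non-linear one."""
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(3, linear=True))    # CO2: 4 vibrations
print(vibrational_modes(3, linear=False))   # H2O: 3 vibrations
print(vibrational_modes(2, linear=True))    # HCl, N2, O2: 1 vibration
```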
4.2: Specialized Infrared Methods
Non-Dispersive Infrared Spectroscopy

One technique is called non-dispersive infrared (NDIR) spectroscopy. NDIR is usually used to measure a single constituent of an air sample. Think about what the name implies and consider how such an instrument might be designed. The word non-dispersive implies that the instrument does not use a monochromator. The design of an NDIR instrument is illustrated in Figure \(6\). Common things that are often measured using NDIR are the amounts of carbon monoxide and hydrocarbons in automobile exhaust. The device either splits the beam or uses two identical sources, one of which goes through a reference cell and the other of which goes through the sample cell. The sample of air (e.g., auto exhaust) is continually drawn through the sample cell during the measurement. The reference cell is filled with a non-absorbing gas. The detector cell is filled with the analyte (i.e., carbon monoxide, which has an IR absorption band in the region from 2050-2250 cm$^{-1}$). If the system is designed to measure carbon monoxide, the reference cell does not absorb any radiation from 2050-2250 cm$^{-1}$. The sample cell absorbs an amount of radiation from 2050-2250 cm$^{-1}$ proportional to the concentration of carbon monoxide in the sample. The two detector cells, which are filled with carbon monoxide, absorb all of the radiation from 2050-2250 cm$^{-1}$ that reaches them. The infrared energy absorbed by the detector cells is converted to heat, meaning that the molecules in the cell move faster and exert a greater pressure. Because the reference cell did not absorb any of the radiation from 2050-2250 cm$^{-1}$, the detector cell on the reference side will have a higher temperature and pressure than the detector cell on the sample side. A flexible metal diaphragm is placed between the two cells and forms part of an electronic device known as a capacitor. Note that the capacitor has a gap between the two metal plates, and the measured capacitance varies according to the distance between the two plates. Therefore, the capacitance is a measure of the pressure difference between the two cells, which can be related back to the amount of carbon monoxide in the sample cell. The device is calibrated using a sealed sample cell with a known amount of carbon monoxide. When measuring hydrocarbons, methane (CH4) is used for the calibration since it is a compound that has a C-H stretch of similar energy to the C-H stretching modes of other hydrocarbons. Another common application of NDIR is as a monitoring device for lethal levels of carbon monoxide in a coal mine. Another specialty application is known as attenuated total reflectance spectroscopy (ATR). ATR involves the use of an IR transparent crystal in which the sample is either coated onto or flows over both sides of the crystal. A representation of the ATR device is shown in Figure \(7\). The radiation enters the crystal in such a way that it undergoes complete internal reflection inside the crystal. The path is such that many reflections occur as the radiation passes through the crystal. At each reflection, the radiation slightly penetrates the coated material and a slight absorption occurs. The reason for multiple reflections is to increase the path length of the radiation through the sample. The method can be used to analyze opaque materials that do not transmit infrared radiation. An inconvenience when recording IR spectra is that glass cells cannot be used since glass absorbs IR radiation. Liquid samples are often run neat between two salt plates.
Since solvents absorb IR radiation, IR cells usually have rather narrow path lengths to keep solvent absorption to acceptable levels. Solid samples are often mixed with KBr and pressed into an IR transparent pellet. Another way to record an IR spectrum of a solid sample is to perform a diffuse reflectance measurement. The beam strikes the surface of a fine powder and, as in ATR, some of the radiation is absorbed. Suitable signal-to-noise for diffuse reflectance IR usually requires the use of Fourier transform IR methods.
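Returning to the NDIR detector described above, here is a minimal sketch, assuming an ideal parallel-plate capacitor ($C = \varepsilon_0 A/d$) with invented plate dimensions, of how a small pressure-driven deflection of the diaphragm translates into a measurable capacitance change.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
AREA = 1e-4        # plate area, m^2 (1 cm^2; an assumed value)

def capacitance(gap_m: float) -> float:
    """Ideal parallel-plate capacitance for a given plate separation."""
    return EPS0 * AREA / gap_m

# The diaphragm deflects toward the fixed plate as the pressure difference grows;
# the gap values below are invented for illustration.
for gap_um in (100.0, 99.0, 95.0):
    c = capacitance(gap_um * 1e-6)
    print(f"gap {gap_um:5.1f} um -> C = {c * 1e12:5.2f} pF")
# A 1% change in the gap produces roughly a 1% change in capacitance, which the
# instrument electronics convert into a carbon monoxide reading.
```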
4.3: Fourier-Transform Infrared Spectroscopy (FT-IR)
Up until this point, when recording a spectrum, we have described methods in which a monochromator is used to systematically scan through the different wavelengths or frequencies while recording either the absorbance or emission intensity. Spectra recorded in such a mode are said to be in the frequency domain. Fourier transform methods are designed in such a way that they record the spectra in the time domain. The plot in Figure $8$ represents a particular wavelength or frequency of radiation in its time domain. What we observe in the time domain is the oscillation of the amplitude of the wave as a function of time. The waveform drawn above has a certain amplitude as well as a single, specific frequency. If a species in a sample could absorb this particular frequency of radiation, we would observe that the amplitude of this wave diminishes. We could then convert this to a frequency domain spectrum, which would consist of a single line as shown in Figure $9$. The frequency domain spectrum would have a single line at the same frequency as before, but its amplitude would be reduced. Suppose we have a frequency domain spectrum that consists of two single lines, each with a different frequency. The time domain spectrum of this would now consist of two waves, one for each of the frequencies. The net time domain spectrum would be the addition of those two waves. If there were many frequencies, then the time domain waveform would be a complex pattern. A Fourier transform (FT) is a mathematical procedure that can be used to determine the individual frequency components and their amplitudes that are used to construct a composite wave. The Fourier transform allows you to convert a time domain spectrum to a frequency domain spectrum. Note that time domain spectra are difficult to interpret for either qualitative or quantitative analysis. Frequency domain spectra are more readily interpreted and used for qualitative and quantitative analysis. Yet there are certain advantages to recording a spectrum in the time domain using FT methods. The two most common spectroscopic techniques that are done in an FT mode are IR and NMR spectroscopy. These are two methods that are not the most sensitive among the various spectroscopic techniques that are available, and one advantage of FT methods is that they can be used to improve the signal-to-noise ratio. Recording an FT-IR spectrum requires a process in which the radiation from the source is somehow converted to the time domain. The most common way of achieving this with IR radiation is to use a device known as a Michelson interferometer. A diagram of a Michelson interferometer is shown in Figure $10$. In the Michelson interferometer, radiation from the source is collimated and sent to the beam splitter. At the splitter, half of the radiation is reflected and goes to the fixed mirror. The other half is transmitted through and goes to the moveable mirror. The two beams of radiation reflect off of the two mirrors and meet back up at the beam splitter. Half of the light from the fixed mirror and half of the light from the moveable mirror recombine and go to the sample. When the moveable mirror is at position 0, it is exactly the same distance from the beam splitter as the fixed mirror. Knowing the exact location of the zero position is essential to the proper functioning of a Michelson interferometer. The critical factor is to consider what happens to particular wavelengths of light as the moveable mirror is moved to different positions.
Plot the intensity of radiation at the sample versus the position of the moveable mirror for monochromatic radiation of wavelength x, 2x or 4x. An important thing to recognize in drawing these plots is that, if the mirror is at –½x, the radiation that goes to the moveable mirror travels an extra distance x compared to the radiation that goes to the fixed mirror (it travels an extra ½x to get to the moveable mirror and an extra ½x to get back to the zero position). If the two beams of radiation recombine at the beam splitter in phase with each other, they will constructively interfere. If the two beams of radiation recombine at the beam splitter out of phase with each other, they will destructively interfere. Using this information, we can then determine what mirror positions will lead to constructive and destructive interference for radiation of wavelengths x, 2x and 4x. The plots that are obtained for wavelengths x, 2x and 4x are shown in Figure $11$. There are two important consequences of the plots in Figure $11$. The first is that for each of these wavelengths, the intensity of the radiation at the sample oscillates from full amplitude to zero amplitude as the mirror is moved. In a Michelson interferometer, the moveable mirror is moved at a fixed speed from one extreme (e.g., the +x extreme) to the other (e.g., the –x extreme). After the relatively slow movement in one direction, the moveable mirror is then rapidly reset to the original position (in the example we are using, it is reset back to the +x extreme), and then moved again to record a second spectrum that is added to the first. Because the mirror moves at a set, fixed rate, the intensity of any one of these three wavelengths varies as a function of time. Each wavelength now has a time domain property associated with it. The second important consequence is that the time domain property of radiation with wavelengths x, 2x and 4x is different. An examination of the plots in Figure $11$ shows that the pattern of when the radiation is at full and zero amplitude is different for the radiation with wavelength x, 2x or 4x. The aggregate plot of all of these wavelengths added together is called an interferogram. If a sample could absorb infrared radiation of wavelength x, the intensity of light at this wavelength would drop after the sample and the drop would be reflected in the interferogram. The usual process of recording an FT-IR spectrum is to record a background interferogram with no sample in the cell. The interferogram with a sample in the cell is then recorded and subtracted from the background interferogram. The difference is an interferogram reflecting the radiation absorbed by the sample. This time domain infrared spectrum can then be converted to a frequency domain infrared spectrum using the Fourier transform. It is common to record several interferograms involving repetitive scans of the moveable mirror and then add them together. An advantage of using multiple scans is that the signal of each scan is additive. Noise is a random process, so adding together several scans leads to partial cancellation of the noise. Therefore, adding together multiple scans will lead to an improvement in the signal-to-noise ratio. The improvement in the signal-to-noise ratio goes up as the square root of the number of scans: for example, 16 scans improve the signal-to-noise ratio by a factor of 4. This means that recording twice as many scans, which takes twice as long, does not double the signal-to-noise ratio.
As such, there are diminishing returns to running excessively large numbers of scans if the sample has an especially weak signal (e.g., due to a low concentration), because the time for the experiment can become excessive. Two important characteristics of an FT-IR spectrophotometer are an accurately known zero position and a highly reproducible movement of the mirror. Identifying the exact location of the zero position and controlling the mirror movement is usually accomplished in FT-IR spectrophotometers using a laser system. With regard to mirror movement, since position is equated with time, it is essential that the mirror move at exactly the same speed over the entire scan, and that the speed remain identical for each scan. More expensive FT-IR spectrophotometers have better control of the mirror movement. What are the advantages of FT-IR spectrophotometers over conventional IR spectrophotometers that use a monochromator? We have already mentioned one, which is the ease of recording multiple spectra and adding them together. Whereas a conventional scanning spectrophotometer that uses a monochromator takes several minutes to scan through the wavelengths, the mirror movement in an FT-IR occurs over a few seconds. Another advantage is that an FT-IR has no slits and therefore has a high throughput of radiation. Essentially all of the photons from the source are used in the measurement and there are no losses of power because of the monochromator. Since IR sources have weaker power than UV and visible sources, this is an important advantage of FT-IR instruments. This is especially so in the far IR region, where the source power drops off considerably. The ability to add together multiple scans combined with the higher throughput of radiation leads to a significant sensitivity advantage of FT-IR over conventional IR spectrophotometers that use a monochromator. As such, FT-IR instruments can be used with much lower concentrations of substances. An FT-IR will also have much better resolution than a conventional scanning IR, especially if there is reproducible movement of the mirror. Resolution is the ability to distinguish two nearby peaks in the spectrum. The more reproducible the mirror movement, the better the resolution. Distinguishing nearby frequencies is more readily accomplished by a Fourier transform of a composite time domain wave than it is using a monochromator comprised of a grating and slits.
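A short numerical sketch ties these ideas together: each monochromatic component contributes an intensity that oscillates with mirror position, the components sum to an interferogram, and a Fourier transform recovers the individual frequencies. The wavelengths and sampling grid below are arbitrary, illustrative choices.

```python
import numpy as np

x = 1.0                                  # base wavelength, arbitrary units
wavelengths = [x, 2 * x, 4 * x]
delta = np.linspace(0, 64 * x, 4096)     # optical path difference (mirror travel)

# For each wavelength, the intensity at the sample oscillates between full and
# zero amplitude as the path difference changes: I = cos^2(pi * delta / lambda).
interferogram = sum(np.cos(np.pi * delta / wl) ** 2 for wl in wavelengths)

# Fourier transform the path-difference (time) domain signal into the frequency domain.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
freqs = np.fft.rfftfreq(delta.size, d=delta[1] - delta[0])

# The three strongest peaks should sit at 1/x, 1/(2x) and 1/(4x).
peaks = freqs[np.argsort(spectrum)[-3:]]
print(sorted(round(float(f), 3) for f in peaks))   # -> approximately [0.25, 0.5, 1.0]
```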
5: Raman Spectroscopy
Learning Objectives
After completing this unit the student will be able to:
• Determine whether the molecular vibrations of a triatomic molecule are Raman active.
• Explain the difference between Stokes and anti-Stokes lines in a Raman spectrum.
• Justify the difference in intensity between Stokes and anti-Stokes lines.
• Draw the Stokes and anti-Stokes lines in a Raman spectrum of a compound when given the energies of the different transitions.

Raman spectroscopy is an alternative way to get information about the vibrational transitions within a molecule. In order for a vibrational transition to be Raman active, the molecule must undergo a change in polarizability during the vibration. Polarizability refers to the ease of distorting electrons from their original position. The polarizability of a molecule decreases with increasing electron density, increasing bond strength, and decreasing bond length. Consider the molecular vibrations of carbon dioxide and determine whether or not they are Raman active. The symmetric stretch of carbon dioxide is not IR active because there is no change in the net molecular dipole (Figure \(1\)). Since both bonds are stretched (i.e., lengthened), both bonds are more easily polarizable. The overall molecular polarizability changes and the symmetric stretch is Raman active. The asymmetric stretch of carbon dioxide is IR active because there is a change in the net molecular dipole (Figure \(2\)). In the asymmetric stretch, one bond is stretched and is now more polarizable while the other bond is compressed and is less polarizable. The change in polarizability of the longer bond is exactly offset by the change in the shorter bond such that the overall polarizability of the molecule does not change. Therefore, the asymmetric stretch is not Raman active. The bending motion of carbon dioxide is IR active because there is a change in the net molecular dipole (Figure \(3\)). Since the bending motion involves no changes in bond length, there is no change in the polarizability of the molecule. Therefore, the bending motion is not Raman active. Note that the IR active vibrations of carbon dioxide (asymmetric stretch, bend) are Raman inactive and the IR inactive vibration (symmetric stretch) is Raman active. This does not occur with all molecules, but oftentimes the IR and Raman spectra provide complementary information about many of the vibrations of molecular species. Raman spectra are usually less complex than IR spectra. An intriguing aspect of Raman spectroscopy is that information about the vibrational transitions is obtained using visible radiation. The process involves shining monochromatic visible radiation on the sample. The visible radiation interacts with the molecule and creates something that is known as a virtual state. From this virtual state it is possible to have a modulated scatter known as Raman scatter. Raman scatter occurs when there is a momentary distortion of the electrons in a bond of a molecule. The momentary distortion means that the molecule has an induced dipole and is temporarily polarized. As the bond returns to its normal state, the radiation is reemitted as Raman scatter. One form of the modulated scatter produces Stokes lines. The other produces anti-Stokes lines. Stokes lines are scattered photons that are reduced in energy relative to the incident photons that interacted with the molecule. The reductions in energy of the scattered photons are proportional to the energies of the vibrational levels of the molecule.
Anti-Stokes lines are scattered photons that are increased in energy relative to the incident photons that interacted with the molecule. The increases in energy of the scattered photons are proportional to the energies of the vibrational levels of the molecule. The energy level diagram in Figure \(4\) shows representations for IR absorption, Rayleigh scatter, Stokes Raman scatter and anti-Stokes Raman scatter. For Stokes lines, the incident photons interact with a ground state molecule and form a virtual state. The scattered photons come from molecules that end up in excited vibrational states of the ground state, thereby explaining why they are lower in energy than the incident photons. For anti-Stokes lines, the incident photons interact with a molecule that is vibrationally excited. The virtual state produced by this interaction has more energy than the virtual state produced when the incident photon interacted with a ground state molecule. The scattered photons come from molecules that end up in the ground state, thereby explaining why they are higher in energy than the incident photons. It is important to recognize that, while the processes in Figure \(4\) responsible for Raman scatter might look similar to the process of fluorescence, the process in Raman spectroscopy involves a modulated scatter that is different from fluorescence. How do we know this? One reason is that Raman scatter occurs when the incident radiation has an energy well away from any absorption band of the molecule. Therefore, the molecule is not excited to some higher electronic state but instead exists in a virtual state that corresponds to a high energy vibrational state of the ground state. Another is that Raman scatter has a lifetime of $10^{-14}$ second, which is much faster than fluorescent emission. Which set of lines, Stokes or anti-Stokes, is weaker? The anti-Stokes lines will be much weaker than the Stokes lines because there are many more molecules in the ground state than in excited vibrational states. What effect would raising the temperature have on the intensity of Stokes and anti-Stokes lines? Raising the temperature would decrease the population of the ground state and increase the population of higher energy vibrational states. Therefore, with increased temperature, the intensity of the Stokes lines would decrease and the intensity of the anti-Stokes lines would increase. However, the Stokes lines would still have a higher intensity than the anti-Stokes lines. Because scatter occurs in all directions, the scattered photons are measured at 90° to the incident radiation. Also, Raman scatter is generally a rather unfavorable process resulting in a weak signal. What would be the ideal source to use for measuring Raman spectra? The more incident photons sent into the sample, the more chance there is to produce molecules in the proper virtual state to produce Raman scattering. Since the signal is measured over no background, this suggests that we want a high power source. That means that a laser would be preferable as a source for measuring Raman spectra. The highly monochromatic emission from a laser also means that we can more accurately measure the frequency of the Stokes lines in the resulting spectrum. Also, an array detector is preferable as it enables the simultaneous measurement of all of the scattered radiation. The molecule carbon tetrachloride (CCl4) has three Raman-active vibrations that produce bands at 218, 314 and 459 cm$^{-1}$ away from the laser line.
Draw a representation of the Raman spectrum of CCl4 that includes both the Stokes and anti-Stokes lines. The spectrum in Figure \(5\) shows a representation of the complete Raman spectrum for carbon tetrachloride and includes the Stokes and anti-Stokes lines. The laser line undergoes an elastic scattering known as Rayleigh scatter, and a complete spectrum has a peak at the laser line that is far more intense than the Raman scatter. Note that the anti-Stokes lines are lower in intensity and higher in energy than the Stokes lines. Note as well that the two sets of lines appear as mirror images of each other with regards to the placement of the bands at 218, 314 and 459 cm$^{-1}$ away from the Rayleigh scatter peak. The energy level diagram in Figure \(6\) shows the origin of all of the lines, and inspection of it should rationalize why the placements of the Stokes and anti-Stokes lines are mirror images of each other. The relative intensity of the three Stokes lines depends on the probability of each scattering process and is something we could not readily predict ahead of time. Why do the anti-Stokes lines of carbon tetrachloride have the following order of intensity: 218 > 314 > 459 cm$^{-1}$? The intensity of the three anti-Stokes lines drops going from the 218 to the 314 to the 459 cm$^{-1}$ band. Anti-Stokes scatter requires an interaction of the incident photon with vibrationally excited molecules. Heat in the system causes some molecules to be vibrationally excited. The drop in intensity is predictable because, as the vibrational levels increase in energy, they have lower populations and therefore fewer molecules to produce Raman scatter at that transition. Raman spectroscopy is an important tool used in the characterization of many compounds. As we have already seen, because the selection rule for Raman spectroscopy (a change in polarizability) is different from that for infrared spectroscopy (a change in the dipole moment), there are some vibrations that are active in one technique but not the other. Water is a weak Raman scatterer and, unlike in infrared spectroscopy, where water has strong absorptions, water can be used as a solvent. Glass cells can be used with the visible laser radiation, which is more convenient than the salt plates that need to be used in infrared spectroscopy. Because Raman spectroscopy involves the measurement of vibrational energy states with visible light, it is especially useful for measurements of vibrational processes that occur in the far IR portion of the spectrum. Finally, since Raman spectroscopy involves a scattering process, it can be used for remote monitoring such as atmospheric monitoring. A pulsed laser can be passed through the atmosphere or the effluent from a smoke stack and the Raman scattered radiation measured by remote detectors. One disadvantage of Raman spectroscopy is that Raman scatter is an unfavorable process and the signals are weak compared to many other spectroscopic methods. There are two strategies that have been found to significantly increase the probability of Raman scatter and lower the detection limits. One is a technique known as surface-enhanced Raman spectroscopy (SERS). It is observed that compounds on surfaces consisting of roughened silver, gold or copper have a much higher probability of producing Raman scatter. The other involves the use of resonance Raman spectroscopy. If the molecule is excited using a laser line close to an electronic absorption band, large enhancements in the Raman bands of symmetrical vibrations occur.
As noted earlier, the $10^{-14}$ second lifetime of Raman scatter indicates that the increased signal is not from a fluorescent transition.
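The intensity ordering of the anti-Stokes lines follows directly from the Boltzmann distribution; the sketch below estimates the excited-to-ground population ratio for the three CCl4 shifts at an assumed temperature of 298 K.

```python
import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e10     # speed of light in cm/s, so the shifts can stay in cm^-1
KB = 1.381e-23   # Boltzmann constant, J/K
T = 298.0        # assumed temperature, K

for shift_cm in (218.0, 314.0, 459.0):   # CCl4 Raman shifts from the text
    ratio = math.exp(-H * C * shift_cm / (KB * T))
    print(f"{shift_cm:5.0f} cm^-1: excited/ground population ratio = {ratio:.3f}")
# Output: ~0.35, ~0.22, ~0.11 -- the highest-energy vibration has the smallest
# excited-state population and therefore the weakest anti-Stokes line.
```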
Learning Objectives
After completing this unit the student will be able to:
• Compare and contrast the advantages of flame, furnace and inductively coupled plasma atomization sources.
• Justify why continuum radiation sources are usually not practical to use for atomic absorption spectroscopy.
• Describe the design of a hollow cathode lamp and justify the reasons for a hollow cathode configuration and a low pressure of argon filler gas.
• Devise an instrumental procedure to account for flame noise in atomic absorption spectroscopy.
• Devise an instrumental procedure to account for molecular absorption and scatter from particulate matter in atomic absorption spectroscopy.
• Describe three possible strategies that can be used to overcome the problem of nonvolatile metal complexes.
• Devise a procedure to overcome excessive ionization of an analyte.
• Devise a procedure to account for matrix effects.

It is likely that most people studying chemistry have seen a demonstration where a solution of a metal salt was sprayed into a Bunsen burner and gave off a color that depended on the particular metal in the salt. Metal salts are used to create the different colors observed in firework displays. Analysis of the emission from the flame using a device called a spectroscope would further show the characteristic line emission spectrum of the metal species. The atomic emission observed in the flame involves a process whereby the metal ions in the salt are converted into neutral, excited atoms. These atoms then emit electromagnetic radiation corresponding to valence electron transitions.

6: Atomic Spectroscopy

Earlier we discussed the difference between atomic spectra, which consist only of electronic transitions and therefore appear as sharp lines, and molecular spectra, which, because of the presence of lower energy vibrational and rotational energy states, appear as a broad continuum. Provided we have atoms present in a sample, it is possible to analyze them spectroscopically using either absorption or emission measurements. One problem is that most samples we analyze do not consist of atoms but instead consist of molecules with covalent or ionic bonds. Therefore, performing atomic spectroscopy on most samples involves the utilization of an atomization source, which is a device that has the ability to convert molecules to atoms. It is also important to recognize that the absorption or emission spectrum of a neutral atom will be different from that of its ions (e.g., Cr$^0$, Cr$^{3+}$ and Cr$^{6+}$ all have different lines in their absorption or emission spectra). Atomic absorbance measurements are performed on neutral, ground-state atoms. Atomic emission measurements can be performed on either neutral atoms or ions, but are usually performed on neutral atoms as well. It is important to recognize that certain metal species exist in nature in various ionic forms. For example, chromium is commonly found as its +3 or +6 ion. Furthermore, Cr$^{3+}$ is relatively benign, whereas Cr$^{6+}$ is a carcinogen. In this case, an analysis of the particular chromium species might be especially important to determine the degree of hazard of a sample containing chromium. The methods we will describe herein cannot be used to distinguish the different metal species in samples. They will provide a measurement of the total metal concentration. Metal speciation would require a pre-treatment step involving the use of suitable chemical reagents that selectively separate one species from the other without altering their distribution.
Metal speciation is usually a complex analytical process, and it is far more common to analyze total metal concentrations. Many environmental regulations that restrict the amounts of metals in samples (e.g., standards for drinking water, food products and sludge from wastewater treatment plants) specify total metal concentrations instead of concentrations of specific species. The measurement of atomic absorption or emission requires selection of a suitable wavelength. As with molecular spectroscopic measurements, provided there are no interfering substances, the optimal wavelength in atomic spectroscopic measurements is the wavelength of maximum absorbance or emission intensity.
There are a variety of strategies that can be used to create atoms from molecular substances. The three main methods involve the use of a flame, a device known as a graphite furnace, or a plasma. These three atomization methods are commonly used with liquid samples. While various plasma devices have been developed, only the most common one – the inductively coupled plasma – will be discussed herein. Some specialized techniques that have been designed for especially important elements (e.g., mercury, arsenic) will be described as well. Since many samples do not come in liquid form (e.g., soils, sludges, foods, plant matter), liquid samples suitable for introduction into flame, furnace, or plasma instruments are often obtained by digestion of the sample. Digestion usually involves heating the sample in concentrated acids to solubilize the metal species. Digestion can be done in an appropriate vessel on a hotplate or using a microwave oven. Microwave digesters are specialized instruments designed to measure the temperature and pressure in sealed chambers so that the digestion is completed under optimal conditions. In some cases it is desirable to measure a sample in its solid form. There are arc or spark sources that can be used for the analysis of solid samples.

6.2: Atomization Sources

As alluded to earlier, flames can be used as an atomization source for liquid samples. The sample is introduced into the flame as an aerosol mist. The process of creating the aerosol is referred to as nebulization. Common nebulizer designs include pneumatic and ultrasonic devices, the details of which we will not go into here. The most common flame atomization device, which is illustrated in Figure $1$, is known as a laminar flow or pre-mix burner. Note the unusual design of the burner head, which, instead of having the shape of a common Bunsen burner, produces a long, thin flame 10 cm in length. Radiation from the source passes through the 10 cm distance of the flame. Often the monochromator is placed after the flame and before the detector. If atomic emission is being measured, there is no light source. The burner design provides a much longer path length, which increases the sensitivity of the method.

A flame requires a fuel and an oxidant. In the laminar flow burner, the fuel and oxidant are pre-mixed at the bottom of a chamber. The force created by the flowing gases draws sample up through a thin piece of tubing, where it is nebulized into the bottom of the chamber. The chamber has a series of baffles in it that create an obstructed pathway up to the burner head. The purpose of the baffles is to allow only the finest aerosol particles to reach the flame. Larger particles strike the baffles, collect, and empty out through the drain tube. Even using the best nebulizers that have been developed, only about 2% of the sample actually makes it through the baffles and to the flame. The remaining 98% empties out the drain. At first it might seem counterintuitive to discard 98% of the sample, and it might instead seem preferable to introduce the entire sample into the flame, but we must consider what happens to an aerosol droplet after it is created and as it enters the flame. The solution contains molecules, but we need atoms, and several steps are required to complete this transformation. The first involves evaporating the solvent (Equation \ref{eq1}). Many metal complexes form hydrates, and the next step involves dehydration (Equation \ref{eq2}).
The metal complexes must be volatilized (Equation \ref{eq3}) and then decomposed (Equation \ref{eq4}). Finally, the metal ions must be reduced to neutral atoms (Equation \ref{eq5}). Only now are we able to measure the absorbance by the metal atoms. If the measurement involves atomic emission, then a sixth step (Equation \ref{eq6}) involves the excitation of the atoms.

\begin{align} \ce{ML(aq)} &= \ce{ML*xH2O(s)} \label{eq1}\\[4pt] \ce{ML*xH2O(s)} &= \ce{ML(s)} \label{eq2}\\[4pt] \ce{ML(s)} &= \ce{ML(g)} \label{eq3}\\[4pt] \ce{ML(g)} &= \ce{M+ + L-} \label{eq4}\\[4pt] \ce{M+ + e-} &= \ce{M} \label{eq5}\\[4pt] \ce{M + heat} &= \ce{M^{*}} \label{eq6} \end{align} \nonumber

The problem with large aerosol droplets is that they will not make it through all of the necessary steps during their lifetime in the flame. These drops will contribute little to the signal, but their presence in the flame will create noise and instability in the flame that will compromise the measurement. Hence, only the finest aerosol droplets will lead to atomic species, and only those are introduced into the flame.

The various steps outlined in Equations \ref{eq1}-\ref{eq6} also imply that there will be a distinct profile to the flame. Profiles result from the efficiency with which neutral and excited atoms are formed in a flame. Therefore, a specific section of the flame will have the highest concentration of ground state atoms for the metal being analyzed. The absorbance profile that shows the concentration of ground state atoms in the flame is likely to be different from the emission profile that shows the concentration of excited state atoms in the flame. Figure $2$ shows representative absorption profiles for chromium, magnesium and silver. Magnesium shows a peak in its profile. The increase in the lower part of the flame occurs because exposure to the heat creates more neutral ground state atoms. The decrease in the upper part of the flame occurs due to the formation of magnesium oxide species that do not absorb the atomic line. Silver is not as easily oxidized, and its concentration continually increases the longer the sample is exposed to the heat of the flame. Chromium forms very stable oxides, and the concentration of ground state atoms decreases the longer it is exposed to the heat of the flame. When performing atomic absorbance or emission measurements using a flame atomization source, it is important to measure the section of the flame with the highest concentration of the atoms being monitored. There are controls in the instrument to raise and lower the burner head to ensure that the light beam passes through the optimal part of the flame.

An important factor in the characteristics of a flame is the identity of the fuel and oxidant. Standard Bunsen burner flames use methane as the fuel and air as the oxidant and have a temperature in the range of 1,700–1,900 °C. A flame with acetylene as the fuel and air as the oxidant has a temperature in the range of 2,100–2,400 °C. For most elements, the methane/air flame is too cool to provide suitable atomization efficiencies for atomic absorbance or emission measurements, and an acetylene/air flame must be used. For some elements, the use of a flame with acetylene as the fuel and nitrous oxide (N2O) as the oxidant is recommended. The acetylene/nitrous oxide flame has a temperature range of about 2,600–2,800 °C. There are standard reference books on atomic methods that specify the type of flame that is best suited for the analysis of particular elements.
It is also important to recognize that some elements do not atomize well in flames. Flame and other atomization methods are most suitable for the measurement of metals. Non-metallic elements rarely atomize with enough efficiency to permit analysis of trace levels. Metalloids such as arsenic and selenium have intermediate atomization efficiencies and may require specialized atomization methods for certain samples with trace levels of the elements. Mercury is another element that does not atomize well and often requires the use of a specialized atomization procedure. Flame methods are usually used for atomic absorbance measurements because most elements do not produce high enough concentrations of excited atoms to facilitate sensitive detection based on atomic emission. Alkali metals can be measured in a flame by atomic emission. Alkaline earth metals can also be measured by flame emission provided the concentration is high enough.
The graphite furnace, which is pictured in Figure $3$, is a small, hollow graphite tube about 2 inches long by ¼ inch in diameter with a hole in the top. Graphite furnaces are used for atomic absorbance measurements. Radiation from the source shines through the tube to the detector. A small volume of sample (typically $0.5$ to $10\ \mu L$) is introduced through the hole into the tube either through the use of a micropipette or a spray system. The entire furnace system is maintained under an argon atmosphere. After introduction of the sample into the furnace, a three-step heating process is followed. The first step (heating to about 100 °C) evaporates the solvent. The second (heating to about 800 °C) ashes the sample to a metal powder or metal oxide. The third (heating to between 2,000–3,000 °C) atomizes the sample. The first two steps are on the order of seconds to a minute. The third step occurs over a few milliseconds to seconds. The atomization step essentially creates a "puff" of gas phase atoms in the furnace, and the absorbance is measured during this time, yielding a signal similar to what is shown in Figure $4$. This "puff" of atoms only lasts a second or so before the sample is swept from the furnace. The area under the curve is integrated and related back to the concentration through the use of a standard curve.

What are the relative advantages and disadvantages of using a flame or furnace as an atomization source?

Sample size: One obvious difference is the amount of sample needed for the analysis. Use of the flame requires establishing a steady state system in which sample is continuously introduced into the flame. A flame analysis usually requires about 3–5 mL of sample for the measurement. Triplicate measurements on a furnace require less than 50 µL of sample. In cases where only small amounts of sample are available, the furnace is the obvious choice.

Sensitivity: The furnace has a distinct advantage over the flame with regard to sensitivity and limits of detection. One reason is that the entire sample is put into the furnace, whereas only 2% of the sample makes it into the flame. Another is that the furnace integrates the signal over the "puff" of atoms, whereas the flame involves establishment of a steady state reading. A disadvantage of the flame is that atoms only spend a brief amount of time (about $10^{-4}$ seconds) in the optical path. Finally, for certain elements, the atomization efficiency (the percentage of the element that ends up as ground state atoms suitable for absorption of energy) is higher for the furnace than the flame.

Reproducibility: The flame has a distinct advantage over the furnace in terms of reproducibility of measurements. Remember that more reproducible measurements mean better precision. One concern is whether the amount of sample being introduced to the atomization source is reproducible. Even though we often use micropipettes and do not question their accuracy and reproducibility, they can get out of calibration and have some degree of irreproducibility from injection to injection. Introduction of the sample into the flame tends to be a more reproducible process.

Another concern with atomic methods is the presence of matrix effects. The matrix is everything else in the sample besides the species being analyzed. Atomic methods are highly susceptible to matrix effects. Matrix effects can enhance or diminish the response in atomic methods.
For example, when using a flame, the response for the same concentration of a metal in a sample where water is the solvent may be different when compared to a sample with a large percentage of alcohol as the solvent (e.g., a hard liquor). One difference is that alcohol burns, so it may alter the temperature of the flame. Another is that alcohol has a different surface tension than water, so the nebulization efficiency and production of smaller aerosol particles may change. Another example of a matrix effect would be the presence of a ligand in the sample that leads to the formation of a non-volatile metal complex. This complex may not be as easy to vaporize and then atomize. While it is somewhat sample dependent, matrix effects are more variable with a furnace than with a flame.

An issue that comes up with the furnace that does not exist with the flame is the condition of the interior walls of the furnace. These walls "age" as repeated samples are taken through the evaporation/ash/atomize steps, and the atomization efficiency changes as the walls age. The furnace may also exhibit memory effects from run to run because not all of the material may be completely removed from the furnace. Evaporation of the solvent in the furnace may lead to the formation of salt crystals that rupture with enough force to spew material out the openings in the furnace during the ashing step. This observation is why some manufacturers have developed spray systems that spread the sample in a thinner film over more of the interior surface than would occur if adding a drop from a micropipette. These various processes that can occur in the furnace often lead to less reproducibility and reduced precision (relative precision on the order of 5–10%) when compared to flame (relative precision of 1% or better) atomization.

6.2C: Specialized Atomization Methods

There are a few elements where the atomization efficiencies with other sources are diminished to the point that trace analysis sometimes requires specialized procedures. The most common element where this is done is mercury. Mercury is important because of its high toxicity. The procedure is referred to as a cold vapor method. One design of a cold vapor system consists of a closed loop where there is a pump to circulate air flow, a reaction vessel, and a gas cell. The sample is placed in the reaction vessel and all of the mercury is first oxidized to the +2 state through the addition of strong acids. When the oxidation is complete, tin(II) chloride is added as a reducing agent to reduce the mercury to neutral mercury atoms. Mercury has sufficient vapor pressure at room temperature that enough atoms enter the gas phase and distribute throughout the system, including the gas cell. A mercury hollow cathode lamp shines radiation through the gas cell, and absorbance by atomic mercury is measured.

Two other toxic elements that are sometimes measured using specialized techniques are arsenic and selenium. In this process, sodium borohydride is added to generate arsine (AsH3) and selenium hydride (SeH2). These compounds are volatile and are introduced into the flame. The volatile nature of the complexes leads to a much higher atomization efficiency. Commercial vendors sell special devices that have been developed for the cold vapor or hydride generation processes.
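Returning to the transient furnace signal of Figure $4$: the step of integrating the absorbance "puff" and relating the area to concentration through a standard curve is easy to illustrate. The following is a minimal sketch, with the transient shape, sampling interval, and calibration values all invented for illustration (this is not vendor software):

```python
import numpy as np

# Hypothetical furnace transient: absorbance sampled every 10 ms during the
# atomization "puff"; the Gaussian-like shape and values are invented.
t = np.arange(0.0, 2.0, 0.01)                        # time, s
absorbance = 0.4 * np.exp(-((t - 0.6) / 0.15) ** 2)  # transient peak

# Integrated peak area (absorbance * seconds) by the trapezoidal rule
area = float(np.sum(0.5 * (absorbance[1:] + absorbance[:-1]) * np.diff(t)))

# Linear standard curve built from integrated areas of standards: area = m*C + b
std_conc = np.array([0.0, 0.5, 1.0, 2.0])            # µg/L standards (invented)
std_area = np.array([0.005, 0.032, 0.059, 0.113])    # measured areas (invented)
m, b = np.polyfit(std_conc, std_area, 1)

conc = (area - b) / m
print(f"peak area = {area:.3f} A*s -> concentration ~= {conc:.2f} ug/L")
```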
A plasma is a gaseous mixture in which a significant proportion of the gas-phase species are ionized. An illustration of an inductively coupled plasma (ICP) is shown in Figure \(5\). The device consists of a quartz tube (about ¾ inch in diameter), the end of which is wrapped in a high power radiofrequency (RF) induction coil. Argon gas flows down the quartz tube at a high rate (about 15 liters/minute). A current of electricity is run through the RF coil, which produces a magnetic field inside the end of the quartz tube. Sparking the argon creates some Ar+ ions, which are paramagnetic and absorb energy from the magnetic field. The argon ions absorb enough energy that a plasma is created in the area of the tube covered by the RF induction coil. The nature of the magnetic field causes the plasma to flow in a closed annular path (basically a donut shape). What is especially impressive is that enough energy is absorbed from the magnetic field to heat the plasma up to a temperature of about 6,000 K. As a comparison, this temperature is about the same as the temperature of the surface of the sun. The hot temperature means that new argon flowing into the plasma is ionized, which maintains the plasma. The plasma is kept from melting the walls of the quartz tube by an additional tangential flow of argon along the walls of the tube. Finally, the sample is nebulized and sprayed as an aerosol mist into the center of the plasma.

An ICP offers several advantages over flame and furnace atomization sources. One is that it is so hot that it produces a more complete atomization and forms many excited state atoms. Because sufficient numbers of atoms are excited, they can be detected by emission instead of absorbance. The illustration in Figure \(5\) shows the plume that forms in an ICP above the RF coil. Above the plasma is a zone in which argon regeneration occurs. A continuum background emission is given off in this zone. Above this zone in the plume are excited atoms that emit the characteristic lines of each particular element in the sample. In our discussion of fluorescence spectroscopy, we learned that emission methods have an inherent sensitivity advantage over absorbance methods. This occurs because emission entails measuring a small signal over no background, whereas absorbance entails measuring a small difference between two large signals. This same sensitivity advantage exists for measurements of atomic emission over atomic absorbance. Light emitted by atoms in the plume can be measured either radially (off to the side of the plume) or axially (looking down into the plume). Axial measurements are often more sensitive because of the increase in path length. However, in some cases, depending on the element profile in the plasma, radial measurements may be preferable. Instruments today often allow for either axial or radial measurements.

A second advantage of an ICP is that all of the elements can be measured simultaneously. All metals in the sample are atomized at the same time and all are emitting light. Some instruments measure elements in a sequential arrangement. In this case, the operator programs in the elements to be measured, and the monochromator moves one-by-one through the specific wavelengths necessary for the measurement of each element. Other instruments use an array detector with photoactive pixels that can measure all of the elements at once. Array instruments are preferable, as the analysis is faster and less sample is consumed.
Figure \(6\) shows the readout of the pixels on an array detector that include and surround the lead emission line at 220.353 nm. The peak due to the lead emission from the four different samples is apparent. Also note that there is a background emission on the neighboring pixels, and the intensity of this background emission must be subtracted from the overall emission occurring at the lead wavelength.

An observation to be aware of with emission spectroscopy is the possibility of self-absorption. We already discussed this in the unit on fluorescence spectroscopy. Self-absorption refers to the situation in which an excited state atom emits a photon that is then absorbed by another atom in the ground state. If the photon was headed toward the detector, it will not be detected. Self-absorption becomes more of a problem at higher concentrations, as the emitted photons are more likely to encounter a ground state atom. The presence of self-absorption can lead to a diminishment of the response in a calibration curve at high concentrations, as shown in Figure \(7\). Atomic emission transitions always correspond with absorption transitions for the element being analyzed, so the likelihood of observing self-absorption is higher in atomic emission spectroscopy than in fluorescence spectroscopy. For a set of samples with unknown concentrations of analyte, it may be desirable to test one or two after dilution to ensure that the concentration decreases by a proportional factor and that the samples are not so high in concentration as to be in the self-absorption portion of the standard curve.

Another advantage is that the high number of Ar+ ions and free electrons suppresses the ionization of other elements being measured, thereby increasing the number of neutral atoms whose emission is being measured. The argon used to generate the plasma is chemically inert compared to the chemical species that make up a flame, which increases the atomization efficiency. The inductively coupled plasma tends to be quite stable and reproducible. The combination of high temperature with a chemically inert environment reduces matrix effects in the plasma relative to other atomization sources, but it does not eliminate them, and matrix effects must always be considered. Some elements (e.g., mercury, arsenic, phosphorus) that are impractical to analyze on a flame or furnace instrument without specialized atomization techniques can often be measured on an ICP. A final advantage of the plasma is that there are now methods to introduce the atoms into a mass spectrometer (MS). The use of the mass spectrometer may further reduce certain matrix effects. Also, mass spectrometry usually provides more sensitive detection than emission spectroscopy.

6.2E: Arcs and Sparks

Arc and spark devices can be used as atomization sources for solid samples. Figure \(8\) illustrates the setup for an arc device. A high voltage applied across a gap between two conducting electrodes causes an arc or spark to form. As the electrical arc or spark strikes the positively charged electrode, it can create a "puff" of gas phase atoms, and emission from the atoms can be measured. The arc also creates a plasma between the two electrodes. Depending on the nature of the solid material to be measured, it can either be molded into an electrode or coated onto a carbon electrode.
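Looking back at the array-detector readout in Figure \(6\): the background subtraction described there amounts to simple arithmetic on the pixel intensities. A minimal sketch, with all counts invented for illustration:

```python
import numpy as np

# Hypothetical array-detector readout around the Pb 220.353 nm line
# (cf. Figure 6); counts are invented for illustration.
pixel_counts = np.array([102, 99, 104, 101, 612, 103, 100, 98, 102])
peak_index = 4  # pixel aligned with the lead emission line

# Estimate the background from pixels well away from the peak,
# excluding the peak pixel and its immediate shoulders, then subtract it.
flanking = np.r_[pixel_counts[:peak_index - 1], pixel_counts[peak_index + 2:]]
background = flanking.mean()
net_emission = pixel_counts[peak_index] - background
print(f"background ~= {background:.0f} counts, net Pb emission ~= {net_emission:.0f} counts")
```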
While ICP devices offer certain advantages over flame atomic absorption (AA) spectrophotometers, flame AAs are still widely used for measurement purposes. They are cheaper to purchase and operate than an ICP, and, for someone only needing to measure a few specific elements on a regular basis, a flame AA may be the better choice. There are a variety of instrumental design features on AA spectrophotometers that are worth consideration. One of these concerns the radiation source. Atomic absorption spectrophotometers require a separate source lamp, called a hollow cathode lamp, for each individual element that you wish to measure. An illustration of a hollow cathode lamp is shown in Figure \(9\). The hollow cathode is coated with the element you wish to measure. The interior is filled with a relatively low pressure (1 Torr) of an inert gas such as argon or helium. A voltage is applied across the anode and cathode. The filler gas (e.g., argon) is ionized to Ar+ at the anode. The Ar+ ions are drawn toward the cathode and, when they strike the surface, sputter some of the coated atoms into the gas phase. In the sputtering process, some of the atoms are excited and emit the characteristic lines of radiation of the element. Hollow cathode lamps cost about \$200 apiece, so buying lamps for many elements can get a bit expensive.

Why is the cathode designed with a hollow configuration? There are two reasons for the hollow cathode design. One is that the configuration helps to focus the light beam, allowing a higher intensity of photons to be directed toward the flame or furnace. The second is that it helps prolong the lifetime of the lamp. It is desirable to have sputtered atoms coat back onto the cathode, since it is only those atoms that can be excited by collisions with the Ar+ ions. Over time the number of atoms coated onto the cathode will diminish and the intensity of the lamp will decrease. The lamps also have an optimal current at which they should be operated. The higher the current, the more Ar+ ions strike the cathode. While a higher current will provide a higher intensity, it will also reduce lamp lifetime. (Note: There is another reason not to use high currents that we will explore later, after developing some other important concepts about the instrument design.) The lifetime of a hollow cathode lamp run at the recommended current is about 500 hours.

The need to use a separate line source for each element raises the following question. Why is it apparently not feasible to use a broadband continuum source with a monochromator when performing atomic absorption spectroscopy? One thing you might consider is whether continuum lamps have enough power in the part of the electromagnetic spectrum absorbed by elements. In what part of the electromagnetic spectrum do most atoms absorb (or emit) light? Recollecting the emission of metal salts in flames, or the light given off in firework displays, it turns out that atoms emit, and hence absorb, electromagnetic radiation in the visible and ultraviolet portions of the spectrum. Do powerful enough continuum sources exist in the ultraviolet and visible region of the spectrum? Yes. We routinely use continuum sources to measure the ultraviolet/visible spectrum of molecules at low concentrations, so these sources certainly have enough power to measure corresponding concentrations of atomic species. Another thing to consider is the width of an atomic line. What are two contributions to the broadening of atomic lines?
(Hint: We went over both of these earlier in the course.) Earlier in the course we discussed collisional and Doppler broadening as two general contributions to line broadening in spectroscopic methods. When these contributions to line broadening are considered, the width of an atomic line is observed to be in the range of 0.002–0.005 nm.

Using this information about the width of an atomic line, explain why a continuum source will not be suitable for measuring atomic absorption. The information provided above indicates that atomic lines are extremely narrow. If we examine the effective bandwidth of a common continuum ultraviolet/visible source/monochromator system, it will be a wavelength packet on the order of 1 nm wide. Figure \(10\) superimposes the atomic absorption line onto the overall output from a continuum source. What should be apparent is that the reduction in power due to the atomic absorbance is only a small fraction of the overall radiation emitted by the continuum source. In fact, it is such a small portion that it is essentially non-detectable and lost in the noise of the system.

What is the problem with reducing the slit width of the monochromator to get a narrower line? The problem with reducing the slit width is that it reduces the number of photons, or source power, reaching the sample. Reducing the slit width on a continuum source to a level that would provide a narrow enough line to respond to atomic absorption would reduce the power so much that the signal would not be much above the noise. Therefore, hollow cathode lamps, which emit intense narrow lines of radiation specific to the element being analyzed, are needed for atomic absorption measurements.

With this understanding we can ask why the hollow cathode lamp has a low pressure of argon filler gas. The pressure of the argon is low to minimize collisions of argon atoms with sputtered atoms. Collisions of excited state sputtered atoms with argon atoms will broaden the output of the hollow cathode lamp and potentially lead to the same problem described above with the use of a continuum source. A low pressure of argon in the lamp ensures that the line width from the hollow cathode lamp is less than the line width of the absorbing species.
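The argument above is easy to make quantitative. Even in the extreme case where the atomic line absorbed every photon within its own width (taking 0.003 nm as a representative line width from the range given above), the fractional reduction in power across a 1 nm bandpass, and hence the apparent absorbance, would be tiny. A short illustrative calculation:

```python
import math

# Representative numbers from the text: atomic line width 0.002-0.005 nm,
# effective bandwidth of a continuum source/monochromator system ~1 nm.
line_width_nm = 0.003
bandpass_nm = 1.0

# Extreme case: the line absorbs essentially all photons within its width.
fraction_absorbed = line_width_nm / bandpass_nm
apparent_absorbance = -math.log10(1.0 - fraction_absorbed)
print(f"apparent absorbance ~= {apparent_absorbance:.4f}")  # ~0.0013, lost in the noise
```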
Background signal from the flame is measured at the detector and is indistinguishable from the source power. Flame noise, in the form of emission from the flame or changes in the flame background as a sample is introduced, can cause a significant interference in atomic methods. Can you design a feature that could be incorporated into a flame atomic absorption spectrophotometer to account for flame noise?

We can account for flame noise and changes in the flame noise by using a device called a chopper. A chopper is a spinning wheel that alternately lets source light through to the flame and then blocks the source light from reaching the flame. Figure \(11\) illustrates several chopper designs. Figure \(12\) shows the output from the detector when using a chopper. When the chopper blocks the source, the detector only reads the background flame noise. When the chopper lets the light through, both the flame background and the source signal are detected. The magnitudes of Po and P are shown on the diagram. By subtracting the flame background from the combined source/flame signal, it is possible to measure the magnitudes of Po and P and to determine whether the introduction of the sample is altering the magnitude of the flame background.

6.3C: Spectral Interferences

Particulate matter in a flame will scatter light from the hollow cathode lamp. Some metals are prone to forming solid refractory oxides in the flame that scatter radiation. Organic matter in a flame may lead to carbonaceous particles that scatter radiation. This is a problem since the detector cannot distinguish between light that is scattered and light that is absorbed. Similarly, molecular species in a flame exhibit broadband absorption of light. Figure \(13\) shows a plot of an atomic absorption line superimposed over molecular absorption. As with scattered radiation, the detector cannot distinguish broadband absorption by molecular species from line absorption by atomic species. Can you design a feature that could be incorporated into an atomic absorption spectrophotometer that can account for both scattered light and light absorbed by molecular species?

To address this question, we need to think back to the previous discussion of the source requirement for atomic absorption spectrophotometers. Earlier we saw that it was not possible to use a continuum source with a monochromator, since the atomic absorption was so negligible as to be non-detectable. However, a continuum source will measure molecular absorption and will respond to any scattered radiation. The answer is to alternately send the output from the hollow cathode lamp and a continuum source (the common one used in AA instruments is a deuterium lamp) through the flame. The output of the hollow cathode lamp will be diminished by atomic absorption, molecular absorption and scatter. The continuum lamp will only be diminished by molecular absorption and scatter, since any contribution from atomic absorption is negligible. By comparing these, it is possible to correct the signal measured when the hollow cathode lamp passes through the flame for scattered radiation and molecular absorption. In atomic absorption spectroscopy, this process is referred to as background correction.

An alternative way of getting a broadened source signal to pass through the flame is known as the Smith–Hieftje method (named after the investigators who devised this method). The Smith–Hieftje method only uses a hollow cathode lamp.
Earlier, when we discussed hollow cathode lamps, we learned that the argon pressure inside the lamp is kept low to avoid collisional broadening. We also learned that the current is not set to a high value because it would sputter off too many atoms and shorten the lamp lifetime. Another observation when running a hollow cathode lamp at a high current is that the lamp emission lines broaden. This occurs because, at a high current, so many atoms get sputtered off into the hollow cathode that they collide with each other and broaden the wavelength distribution of the emitted light. The Smith–Hieftje method relies on using a pulsed lamp current. For most of the time, the lamp is run at its optimal current and emits narrow lines that are diminished when passing through the flame by atomic absorption, molecular absorption and scatter. For a brief pulse of time, the current is set to a very high value such that the lamp emits a broadened signal. When this broadened signal passes through the flame, atomic absorption is negligible and only molecular absorption and scatter decrease the intensity of the beam.

A third strategy is to use what is known as the "two-line" method. This can be used in a situation where you have a source that emits two narrow atomic lines, one of which is your analysis wavelength and the other of which is close by. Looking back at Figure \(13\), the analysis wavelength is diminished in intensity by atomic absorption, molecular absorption and scattering. A nearby line does not have any atomic absorption and is only reduced in intensity by molecular absorption and scattering. While it might at first seem difficult to see how it is possible to get nearby atomic lines for many elements, there is something known as the Zeeman effect that can be used for this purpose. Without going into the details of the Zeeman effect, what is important to know is that exposing an atomic vapor to a strong magnetic field causes a slight splitting of the energy levels of the atom, producing a series of closely spaced lines for each electronic transition. The neighboring lines are about 0.01 nm from each other, making them ideal for monitoring background molecular absorption and scatter. Corrections using the Zeeman effect are more reliable than those using a continuum source. The magnetic field can be applied either to the hollow cathode lamp or to the atomization source. The method is useful in flame and graphite furnace measurements.
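The arithmetic behind the chopper measurement and background correction described in the last two subsections can be sketched in a few lines. All detector readings below, including the hypothetical continuum-lamp absorbance, are invented for illustration:

```python
import math

# Hypothetical detector readings (arbitrary units), invented for illustration.
# Chopper blocked -> flame background only; chopper open -> source + flame.
blank_open, blank_blocked = 1000.0, 40.0     # blank aspirated
sample_open, sample_blocked = 820.0, 42.0    # sample aspirated

# Subtract the flame background (chopper blocked) from the combined signal
P0 = blank_open - blank_blocked    # source power with no analyte
P = sample_open - sample_blocked   # source power attenuated by the sample
A_total = math.log10(P0 / P)       # atomic + molecular absorption + scatter

# Background correction: repeat the measurement with a continuum (D2) lamp,
# which responds only to molecular absorption and scatter.
A_background = 0.012               # hypothetical continuum-lamp absorbance
A_atomic = A_total - A_background
print(f"A_total = {A_total:.3f}, corrected atomic absorbance = {A_atomic:.3f}")
```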
It is also possible to have chemical processes that interfere with atomic absorption and emission measurements. It is important to realize that the chemical interferences described herein can potentially occur in flame, furnace and plasma devices. One example of a chemical interference occurs for metal complexes that have low volatility. These are often difficult to analyze at trace concentrations because the atomization efficiency is reduced to unacceptably low levels. Can you devise a strategy or strategies for eliminating the problem of a non-volatile metal complex?

One possibility is to use a higher temperature flame. Switching from an acetylene/air flame to an acetylene/nitrous oxide flame may overcome the volatility limitations of the metal complex and produce sufficient atomization efficiencies. Another strategy is to add a chemical that eliminates the undesirable metal–ligand complex. One possibility is to add a ligand that preferentially binds to the metal to form a more volatile complex. This is referred to as a protecting agent. The sensitivity of calcium measurements is reduced by the presence of aluminum, silicon, phosphate and sulfate. Ethylenediaminetetraacetic acid (EDTA) complexes with the calcium and eliminates these interferences. The other strategy is to add another metal ion that preferentially binds to the undesirable ligand to free up the desired metal. This is known as a releasing agent. The presence of phosphate ion decreases the sensitivity of measurements of calcium. Excess strontium or lanthanum ions will complex with the phosphate and improve the sensitivity of the calcium measurement.

Another potential problem that can occur in flames and plasmas is having too high a fraction of the analyte metal present in an ionic form. Since neutral atoms are usually being measured (although when using an ICP it may actually be preferable to measure emission from an ionic species), the presence of ionic species reduces the sensitivity and detection limits. Can you devise a strategy to overcome unwanted ionization of the analyte?

One possibility might be to use a cooler atomization source, although there are limits on the range over which this is feasible. The RF power used in an inductively coupled plasma does influence the temperature of the plasma, and there are recommendations about the source power for specific elements. Similarly, changes in the fuel/oxidant ratio cause changes in the temperature of a flame. A more common strategy is to add something to the sample known as an ionization suppression agent. An ionization suppressor is a species that is easily ionized. Common ionization suppressors include alkali metals such as potassium. Thinking of Le Chatelier's principle, ionization of the suppressor produces a large concentration of free electrons, which shifts the ionization equilibrium of the analyte back toward the neutral atom.

6.4B: Accounting for Matrix Effects

Flame noise, spectral interferences and chemical interferences are all examples of matrix effects. Atomic methods are among the analysis methods most sensitive to matrix effects. The previous sections have described ways of trying to account for some types of matrix effects. Even with these methods, there is still the possibility that some aspect of the matrix (remember that the matrix is everything except what is being analyzed) either enhances or decreases the signal measured at the detector.
A concern is that standard solutions often have a different matrix than the unknowns that are being analyzed. Devise a general method that can be used to account for the presence of unknown matrix effects.

A process called standard addition can often be used to assess whether a sample has a matrix effect. If the sample does have a matrix effect, the standard addition procedure will provide a more accurate measurement of the concentration of analyte in the sample than the use of a standard curve. The process involves adding a series of small increments of the analyte to the sample and measuring the signal. The assumption is that the additional analyte experiences the same matrix effects as the species already in the sample. The additional increments are kept small to minimize the chance that they swamp out the matrix and no longer experience the same matrix effects. The signal for each increment is plotted against the concentration that was added, as shown in Figure \(1\). Included in Figure \(1\) are plots for two different samples, both of which have the exact same concentration of analyte. One of the samples has a matrix that enhances the signal relative to the other. An examination of the plots shows that the sample with an enhancing matrix produces a linear plot with a higher slope than the linear plot obtained for the other sample. The plot is then extrapolated back to the x-intercept; the magnitude of the x-intercept corresponds to the concentration of analyte in the original sample, since it is the concentration that would need to be added to the matrix to obtain the signal measured in the original sample.

The experimental steps involved in conducting a standard addition are more complex than those involving the use of a standard curve. If someone is testing a series of samples with similar properties that have similar matrices, it is desirable to use the standard addition procedure on one or a few samples and compare the concentration to that obtained using a standard curve. If the two results are similar, this indicates that the matrix effects are minimal and the use of a standard curve is justified.
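As a concrete illustration of the extrapolation, the sketch below fits a line to hypothetical standard-addition data and reads the analyte concentration from the magnitude of the x-intercept. All concentrations and signals are invented for illustration:

```python
import numpy as np

# Hypothetical standard-addition data: signal vs concentration of analyte
# added to equal aliquots of the sample (values invented).
added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])       # added analyte, mg/L
signal = np.array([0.21, 0.33, 0.46, 0.57, 0.70])

# Fit signal = m*added + b; extrapolating to signal = 0 gives the x-intercept.
m, b = np.polyfit(added, signal, 1)
x_intercept = -b / m                 # negative value on the concentration axis
conc_in_sample = -x_intercept        # magnitude = analyte concentration
print(f"estimated analyte concentration ~= {conc_in_sample:.2f} mg/L")
```

Because the added analyte experiences the same matrix as the analyte already present, the slope of this line carries the matrix effect, and the extrapolated intercept is (to a first approximation) free of it.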
Solving the kinetic problem consists of several steps. In the first step, the inverse kinetic problem is solved by evaluating the Arrhenius parameters. The solution requires a series of experiments with different heating rates and a determination of optimal experimental conditions. Consider a reaction of the type:

$\ce{A_{s} -> B_{s} + C_{g}} \label{1.1}$

where As is the initial powdered solid reagent forming a flat layer, Bs is the solid reaction product located on the grains of the initial reagent, and Cg is the gaseous reaction product released into the environment. This process is referred to as quasi-one-stage, because reactions of the type of Equation \ref{1.1} have at least three stages, comprising the chemical reaction stage, heat transfer, and mass transfer. Depending on the experimental conditions, however, one of the stages can be limiting. In our case, we believe the chemical reaction to be the rate-limiting stage. Let us assume that the dependence of this process on time and temperature gives a single-mode thermoanalytical curve. For such a process, the change in the reaction rate as a function of temperature can be described as follows:

$-\frac{d \alpha}{d t}=A e^{\frac{-E}{R T}} f(\alpha) \label{1.2}$

where A and E are the Arrhenius parameters, T is the temperature, and f(α) is some function of the conversion of the reaction characterizing its mechanism. The conversion $\alpha$, according to Equation \ref{1.1}, is defined as the fraction of the initial reagent As that has reacted by time ti and changes from 0 to 1. It is worth noting that the conversion $\alpha$ can be calculated from both the TG and DSC data, as well as from differential thermogravimetry (DTG) data.

The so-called non-isothermal kinetic techniques are widely used owing to the apparent simplicity of processing experimental data according to the formal kinetic model described by Equation \ref{1.2}. Equation \ref{1.2} characterizes a single measurement curve. As applied to experimental TA data, the kinetic model can be represented in the following form:

$\left(-\frac{\mathrm{d} \alpha}{\mathrm{d} T}\right)_{T=T_{i}}=\frac{A}{\beta_{i}} \exp \left(-\frac{E}{R T_{i}}\right) \alpha_{i}^{m}\left(1-\alpha_{i}\right)^{n} \label{1.3}$

where (1 − αi) is the experimentally measured degree of reaction incompleteness, Ti is the current temperature in kelvins (K), βi is the instantaneous heating rate (in our experiment, βi = β = constant), А is the preexponential factor, Е is the activation energy, and m and n are the kinetic equation parameters [1]. Equation \ref{1.3} is easily linearized:

$\ln \left(-\frac{\mathrm{d} \alpha}{\mathrm{d} T}\right)_{T=T_{i}}=\ln \left(\frac{A}{\beta_{i}}\right)-\frac{E}{R T_{i}}+m \cdot \ln \alpha_{i}+n \cdot \ln \left(1-\alpha_{i}\right) \label{1.4}$

that is, it reduces to a linear least-squares problem. The least-squares problem for Equation \ref{1.4} reduces to solving the following set of equations:

$C \vec{x}=\vec{b} \label{1.5}$

where C is the matrix of coefficients of Equation \ref{1.4} (Ci1 = 1/βi, Ci2 = −1/Ti, Ci3 = ln αi, Ci4 = ln(1 − αi)) and $\vec{x}$ is the vector of the sought parameters. Since the experimental data are random numbers, they are determined with a certain error. Solving the problem in Equation \ref{1.5} involves certain difficulties. The 1/T value changes insignificantly in the temperature range of the reaction; therefore, the first and second columns of matrix C are practically identical (up to a constant multiplier), and as a result matrix C is almost degenerate. A detailed description of this problem can be found in the literature.
To minimize the uncertainty, one should perform several experiments with different heating rates and calculate the kinetic parameters using all of the experimental data. The above arguments suggest that the calculation of Arrhenius parameters using a single thermoanalytical curve is incorrect. The NETZSCH Thermokinetics software uses a different approach, in which the Arrhenius parameters are estimated using model-free calculation methods applied to a series of experiments at different heating rates (see below).

The inverse kinetic problem thus solved does not guarantee an adequate description of the experimental data. To verify the adequacy of the solution, it is necessary to solve the direct problem, that is, to integrate Equation \ref{1.3} and compare the calculated and experimental dependences over the entire range of heating rates. This procedure has been implemented in the NETZSCH Thermokinetics, Peak Separation, and Thermal Simulation program packages. These programs and their applications are discussed in detail below.

Joint processing of experimental results obtained at different heating rates necessitates the constancy of the mechanism of a process, that is, the constancy of the type of the f(α) function at different heating rates. Whether this condition is met can be verified by affine transformation of the experimental curves, that is, by using reduced coordinates. To do this, one should select variables on the abscissa and ordinate so that each of them changes independently of the process under consideration. In addition, it is desirable that the relationship between the selected variables and the experimental values be simple. These requirements are met by reduced coordinates. A reduced quantity is defined as the ratio of some variable experimental quantity to another experimental quantity of the same nature. As one of the variables, the conversion α is used, which is defined as the fraction of the initial amount of the reagent that has converted at a given moment of time. In heterogeneous kinetics, this variable is, as a rule, the conversion of the initial solid reagent. If it is necessary to reflect the relationship between the conversion and time or temperature in the thermoanalytical experiment (at various heating rates), then α is used as the ordinate, and the abscissa is the reduced quantity equal to the ratio of the current time t (or the temperature T corresponding to this time) to the time tα* (or temperature Tα*) it takes to achieve a chosen conversion α*. For example, if the time or temperature required to achieve 50 or 90% conversion (α* = 0.5 or 0.9) is selected, the reduced quantities will be t/t0.5 (T/T0.5) or t/t0.9 (T/T0.9).

The above formalism pertains to the chemical stage of a heterogeneous process [4,5]. However, it should be taken into account that in the general case a heterogeneous process may involve heat and mass transfer, that is, the process may never be strictly one-step. The multistage character of a process significantly complicates the solution of the kinetic problem. In this case, a set of at least three partial differential equations should be solved, which often makes the problem unsolvable. At the same time, experimental conditions can be found under which one of the stages, most frequently the chemical stage, is the rate-limiting stage of the process. Such experimental conditions are found in a special kinetic experiment.
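The near-degeneracy of the matrix C in Equation \ref{1.5} is easy to demonstrate numerically. The sketch below builds C for a single heating rate from a synthetic first-order conversion curve, following the column definitions given above, and prints the condition number; all values are invented for illustration:

```python
import numpy as np

# Sketch of the design matrix C from Equation 1.5 for ONE heating rate,
# using a synthetic conversion curve (all values illustrative).
beta = 10.0 / 60.0                     # heating rate: 10 K/min in K/s
T = np.linspace(500.0, 560.0, 60)      # narrow reaction interval, K
alpha = np.clip(1.0 - np.exp(-0.05 * (T - 500.0)), 1e-6, 1.0 - 1e-6)

C = np.column_stack([
    np.full_like(T, 1.0 / beta),       # Ci1 = 1/beta  (constant for one run)
    -1.0 / T,                          # Ci2 = -1/T    (nearly constant here)
    np.log(alpha),                     # Ci3 = ln(alpha)
    np.log(1.0 - alpha),               # Ci4 = ln(1 - alpha)
])
print(f"condition number of C: {np.linalg.cond(C):.2e}")
# Over this narrow temperature range the first two columns are nearly
# proportional, so C is almost degenerate: a single-curve fit is ill-posed,
# which is why several heating rates are required.
```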
2.1 Heat Transfer Conditions

As is known, the thermoanalytical experiment is carried out under variable temperature conditions, most frequently at a constant heating rate. Thereby, a so-called quasi-stationary temperature gradient appears in the bottom-heated sample. The temperature at every point of a thermally inert cylindrical sample with radius R and height H ≤ 4R is described by the following equation:

$T_{i}\left(r_{i}, t\right)=T_{0}+\beta t-\frac{\beta R^{2}}{4 a}\left[1+\frac{2 \lambda}{h R}-\frac{r_{i}^{2}}{R^{2}}\right] \label{2.1}$

where Ti(ri,t) is the temperature at the ith point of the sample, T0 is the starting temperature of the experiment, β is the temperature change rate dT/dt = constant, t is time, R is the radius of the cylindrical sample, ri is the radius vector of a point of the sample, a is the thermal diffusivity, λ is the thermal conductivity, and h is the heat emission coefficient in the sample–holder system. Equation \ref{2.1}, which is an analytical representation of the solution to the heat transfer equation under certain assumptions, shows that a so-called quasi-stationary temperature regime is established in the sample, corresponding to a parabolic temperature field in the sample–holder system that is identical at any moment in time before the onset of thermal processes. Hence, the "conversion field" has the same shape, that is, each point of the sample is in its own state, different from that of a neighboring one. Thus, different processes can occur at different points of the sample. In a chemical reaction accompanied by heat release or absorption (exo- and endothermic reactions), the temperature field can change significantly, and temperature gradients can be as large as several tens of kelvins. To avoid this, conditions should be created under which the temperature gradients in the reacting system do not exceed the quasi-stationary gradient within the error of determination. This requirement is fulfilled under heat dilution conditions, when the temperature field and heat exchange conditions are dictated by the thermophysical properties of the sample holder. This occurs in studying small amounts of a substance when the sample holder is made of a metal with high heat conductance and its weight significantly exceeds the weight of the sample. Under these conditions, a so-called degenerate regime is realized, and heat exchange conditions have little effect on the kinetics of the process.

2.2 Mass Transfer Conditions

The mathematical description of mass transfer events accompanying heterogeneous processes is beyond the scope of this section. Rather, the aim is to show, at the qualitative level, how they can be affected experimentally. Let us consider the simplest heterogeneous process, described by Equation 1.1. In this process, several possible diffusion steps can be discerned. First, there is diffusion of gaseous products through the solid surface–environment interface. This mass transfer step can be controlled by purging the reaction volume with an inert gas. Figure 2.1 shows the results of studying the dehydration of CuSO4·5H2O. The curve reflecting a three-stage process was obtained under dynamic environment conditions. The air flow rate was 40 mL/min. The curve indicates the loss of five water molecules; the water is removed stepwise, two molecules at a time in the 40–180 °C region, with the fifth molecule released at 210–270 °C. The TG curve of a two-stage process pertains to the process in a static atmosphere.
Figure 2.1 demonstrates that the process in a static atmosphere follows a different mechanism compared with the process in an open crucible in the air flow. It is believed that diffusion hindrances arising in a static atmosphere are responsible for a significant effect of the back reaction. Since the CuSO4·5H2O dehydration is a reversible process, its kinetics changes noticeably. If changing the flow rate does not change the process rate, this step of mass transfer has no effect on the overall rate.

Second, the mass transfer in the porous medium of the initial reagent is worthy of consideration. The simplest way to verify the significance of this step is to carry out a series of experiments with samples of different thickness at the same external surface area. If the change in the layer height has no effect on the process rate, the diffusion in the porous reagent can be approximately considered to have little effect on the process as a whole.

The step of mass transfer in the layer of the solid reaction product is the hardest to identify. A possible way to reveal diffusion limitations at this step is to determine how the composition changes in different parts of the solid reagent at various depths. However, this procedure is rather laborious and necessitates the use of appropriate analytical methods and special sample preparation. To determine the role of diffusion limitations in the product layer, it is common practice to compare the propagation kinetics of the interface measured at different conversions. If diffusion hindrances exist, the Arrhenius parameters decrease with an increase in the product layer thickness. If the temperature coefficient of the reaction rate E/R remains constant at different conversions, it can be stated that this type of diffusion is not a rate-limiting stage. (In the general case it is better to use E/R instead of E, because E is expressed in J/mol, while for thermoanalytical data on an arbitrary material or blend the use of the "mole" unit is meaningless.)

Thus, diffusion hindrances can manifest themselves at different steps of the process under consideration and can depend on both the design of the equipment and the nature of the substances involved in the process. A conclusion that can be drawn from the above is that, to mitigate a noticeable effect of transfer processes on experimental results, small amounts of the initial reagent (a few milligrams) with minimal porosity, or lower heating rates, should be used. In addition, it is important that the sample be spread over a rather large surface and that purge gases at a rather high flow rate be used.

2.3 Nucleation

If our experiment is carried out under conditions such that transfer phenomena have no effect on the shape of the thermoanalytical curves, the reaction can be thought of, to a first approximation, as a quasi-one-stage process representing the chemical transformations of reaction 1.1. However, the experimental results also depend on a change in the morphology of the initial reagent, i.e., on the formation of the reaction product, first of all on the reagent surface. In this case, the conversion kinetics is dominated by the nucleation of the new phase and the subsequent growth of its nuclei. For heterogeneous processes, we are usually not aware of what atomic or molecular transformations lead to the nucleation of the product phase, so the process is represented by a set of formally geometric transformations. Non-isothermal kinetics is aimed at finding the forms of the functions, and their parameters, describing these transformations.
Nucleation is related to the chemical stage of the process. However, because of the complexity and diversity of nucleation processes, we believe it is necessary to dwell briefly on this phenomenon, without going into theoretical descriptions of the different steps of these processes. Consideration focuses on the manifestations of nucleation processes in the thermoanalytical experiment and on the proper design of the latter. In the case of heterogeneous topochemical processes, the interface between the initial solid reagent and the solid reaction product is most frequently formed via nucleation processes. The reaction can begin simultaneously over the entire surface. In addition, nucleation can occur at separate sites of the surface, or by the branched chain mechanism, or another one. Possible mechanisms of these processes have been well documented (see, e.g., [1,4,5]). Here, we do not intend to go into the details of all possible mechanisms; we will consider these phenomena in more detail when describing the NETZSCH software.

For carrying out a kinetic experiment and obtaining reproducible results, it is necessary to standardize the surface of the initial reagent and create a definite number of nuclei prior to the kinetic experiment. In the framework of a thermoanalytical study, we can measure only the conversion or the overall reaction rate. Here, we do not consider the use of other physical methods for determining the number of nuclei on the reagent surface, for example, direct nucleus counting under the microscope. In thermal analysis, the most accessible and efficient method is natural nucleation under standard conditions. In this method, prior to the kinetic experiment, a noticeable amount of the initial reagent is heat treated up to a certain conversion. As a rule, the conversion amounts to several percent. The method is based on the fact that the last nucleation stages have little effect on the development of the reaction interface, since a large part of the potential centers has already been activated. The sample thus standardized is used in all kinetic experiments, that is, at different heating rates.

Using the CuSO4·5H2O dehydration as an example, let us consider how the thermoanalytical curves change after natural nucleation under standard conditions as compared with the nonstandardized sample. Figure 2.2 shows the TG and weight loss rate (DTG) curves for the initial untreated copper sulfate pentahydrate (curve 2) and for the sample subjected to natural nucleation under standard conditions (curve 1). To this end, the powder of the initial reagent was heated at T = 70 °C until 10% H2O was lost. As is seen, the shapes of the TG and DTG curves of the treated reagent differ from those of the initial reagent. Hence, the dehydration kinetics changes. Thus, using non-isothermal kinetics methods necessitates carrying out a special experiment involving a series of runs at various heating rates, using the methods of separation of the rate-limiting stage, small amounts of the solid reagent, purge gases, crucibles of appropriate size, and so forth.
The NETZSCH software suite intended for kinetic calculations from thermoanalytical data is based on the above principles; naturally, it has its own specific features and computational procedures. Let us consider the software design philosophy and operational principles. In this workbook, NETZSCH Proteus® 4.8.3 and NETZSCH Thermokinetics® 3.0 are used to demonstrate the main operating procedures. As NETZSCH continuously improves its software, we strongly recommend using the current versions of Proteus® and Thermokinetics®; new versions always include the basic procedures presented in this workbook, as well as new useful functions.

3.1 Inverse Kinetic Problem

The inverse kinetic problem is solved with the model-free Friedman [6] and Ozawa–Flynn–Wall [7,8] methods. The model-free methods (Friedman analysis, Ozawa–Flynn–Wall analysis, evaluation according to ASTM E698) are applied to non-isothermal kinetic analysis when the experimental data are represented as a set of measurements at different heating rates. The model-free methods provide information on kinetic parameters, such as the activation energy and preexponential factor, without specifying a concrete kinetic model. The Arrhenius parameters obtained by these methods are used as starting approximations in solving the direct kinetic problem; that solution makes it possible to find the type of function approximating the experimental data and to refine the Arrhenius parameters. In thermal analysis, the concept of conversion is used. The NETZSCH Thermokinetics software operates with the partial mass loss (for thermogravimetry) or partial area (for DSC, DTA, and mass spectrometry) rather than with the common term conversion degree. For integral measurements (thermogravimetry, dilatometry), the measured curve is converted to the plot of conversion αi versus time ti by Equation 3.1:

$\alpha_{i}=\frac{m\left(t_{s}\right)-m\left(t_{i}\right)}{m\left(t_{s}\right)-m\left(t_{f}\right)} \label{3.1}$

where m(ts) is the signal at the starting moment of time ts, m(ti) is the signal at the ith moment of time ti, and m(tf) is the signal at the final moment of time tf. For differential measurements (DSC, DTA, mass spectrometry), the conversion is calculated by Equation 3.2:

$\alpha_{i}=\frac{\int_{t_{s}}^{t_{i}}[S(t)-B(t)] d t}{\int_{t_{s}}^{t_{f}}[S(t)-B(t)] d t} \label{3.2}$

where S(t) is the signal at the moment of time t and B(t) is the baseline at the moment of time t.

3.1.1 Friedman Method

The Friedman method is a differential one, in which the initial experimental parameter is the instantaneous rate dαi/dti. Given several measurements at different heating rates, one can plot a linear dependence of the logarithm of the rate on inverse temperature for a given αi. As noted above, Equation 1.4 is easily linearized for any f(α) (Equation 3.3), yielding a linear dependence of the logarithm of the rate on inverse temperature for a given αi. In the Friedman method, the slope m = −E/R of this line is found. Thus, the activation energy for each conversion value can be calculated from the slope of the ln(dαi/dT) vs 1/T plot. The conversion rate on the left-hand side of the equation is found directly from the initial measured curve (e.g., thermogravimetric) by differentiation with respect to time. This procedure is performed with the NETZSCH Proteus software used for processing the experimental data.
$\ln \left(\frac{d \alpha}{d T}\right)_{T=T_{i}}=\ln \left(\frac{A}{\beta_{i}}\right)-\frac{E}{R T_{i}}+\ln \left(f\left(\alpha_{i}\right)\right) \label{3.3}$

The second Arrhenius parameter, the logarithm of the preexponential factor, is also calculated from Equation 3.3. Thus, the software allows the calculation of both Arrhenius parameters: the activation energy and the logarithm of the preexponential factor. The calculation results are given in tabulated form, as the dependence of the Arrhenius parameters on the conversion, as well as in graphical form.

3.1.2 Ozawa–Flynn–Wall Method

The Ozawa method uses the integral form of Equation 1.3. Integration of the Arrhenius equation leads to Equation 3.4:

$g(\alpha)=\int_{0}^{\alpha} \frac{d \alpha}{f(\alpha)}=\frac{A}{\beta} \int_{T_{0}}^{T} e^{-E / R T} d T \label{3.4}$

If T0 is lower than the temperature at which the reaction occurs actively, the lower integration limit can be taken as zero, T0 = 0:

$g(\alpha)=\frac{A}{\beta} \int_{0}^{T} e^{-E / R T} d T \label{3.5}$

and, after the substitution z = E/RT, Equation 3.5 takes the form of Equation 3.6:

$g(\alpha)=\frac{A E}{\beta R}\, p(z), \quad p(z)=\int_{z}^{\infty} \frac{e^{-u}}{u^{2}}\, d u \label{3.6}$

Analytical calculation of the integral p(z) in Equation 3.6 is impossible; therefore, it is approximated. Using the Doyle approximation [7], ln p(z) ≈ −5.3305 − 1.052z, we reduce Equation 3.6 to Equation 3.7:

$\ln \beta=\ln \left(\frac{A E}{R\, g(\alpha)}\right)-5.3305-1.052 \frac{E}{R T} \label{3.7}$

It follows from Equation 3.7 that, for a series of measurements with different heating rates βi at a fixed conversion value α = αk, the plot of ln βi against 1/Ti is a straight line with slope m = −1.052E/R, from which the activation energy is found. If E, αk, and zi are known, ln A can be calculated by Equation 3.9 (with g(α) specified; in practice a first-order model is commonly assumed):

$\ln A=\ln g\left(\alpha_{k}\right)+\ln \beta_{i}-\ln \frac{E}{R}-\ln p\left(z_{i}\right) \label{3.9}$

The presence of several extrema on the experimental TA curves is unambiguous evidence of the multistep character of the process. In this case, the NETZSCH Peak Separation program makes it possible to separate the individual stages and to estimate the Arrhenius parameters for each stage. The Peak Separation program is discussed below.

3.2 Direct Kinetic Problem

The direct kinetic problem is solved by the linear least-squares method for one-stage reactions or by the nonlinear least-squares method for multistage processes. For one-stage reactions, it is necessary to choose the type of function that best approximates (from the statistical viewpoint) the experimental curves for all heating rates used. The NETZSCH Thermokinetics software includes a set of basic equations describing the macrokinetics of the processes to be analyzed [10]. Each stage of a process can correspond to one (or several) of the equations listed in Table 3.1. The type of f(α) function depends on the nature of the process and is usually selected a priori. For the user's convenience, the notation of parameters and variables in Table 3.1 is the same as in the Thermokinetics software; the p parameter corresponds to the conversion, p = α, and e = 1 − α. If the type of function corresponding to the process under consideration is unknown, the program performs calculations for the entire set of functions presented in Table 3.1. Then, on the basis of statistical criteria, the function is selected that best approximates the experimental data. This approach is a formal statistical-geometric method, and, to a first approximation, the type of function approximating the experimental curves for all heating rates has no physical meaning. Even for quasi-one-stage processes in which the chemical conversion stage has been separated, the equations presented in Table 3.1 can be correlated with a change in the morphology of the initial reagent, but no unambiguous conclusions can be drawn about the types of chemical transformations responsible for the nucleation of the reaction product.
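Both model-free constructions above are easy to verify numerically. The sketch below (ours, not part of the NETZSCH software) simulates first-order TG conversion curves at three heating rates with assumed Arrhenius parameters, then recovers the activation energy from the Friedman slope −E/R (Equation 3.3) and the Ozawa–Flynn–Wall slope −1.052E/R (Equation 3.7); all parameter values are illustrative, and the OFW estimate typically deviates by a few percent because of the Doyle approximation.

```python
# A sketch (not NETZSCH code) of the model-free step: synthetic first-order TG
# curves at three heating rates, then Friedman (Eq. 3.3) and Ozawa-Flynn-Wall
# (Eq. 3.7) estimates of E at a fixed conversion. All parameters are invented.
import numpy as np

R = 8.314                      # J/(mol K)
E_true, A_true = 80e3, 1e6     # assumed Arrhenius parameters, J/mol and 1/s

def simulate(beta, T):
    """Euler integration of dalpha/dT = (A/beta) exp(-E/RT) (1 - alpha)."""
    alpha = np.zeros_like(T)
    for i in range(1, len(T)):
        rate = (A_true / beta) * np.exp(-E_true / (R * T[i])) * (1 - alpha[i-1])
        alpha[i] = min(alpha[i-1] + rate * (T[i] - T[i-1]), 1 - 1e-12)
    return alpha

T = np.linspace(450, 700, 4000)          # temperature grid, K
betas = [5/60, 10/60, 20/60]             # heating rates in K/s (5, 10, 20 K/min)
curves = [simulate(b, T) for b in betas]

alpha_k = 0.5                            # fixed conversion for the analysis
invT, y_friedman, y_ofw = [], [], []
for b, a in zip(betas, curves):
    i = np.searchsorted(a, alpha_k)      # first point with alpha >= alpha_k
    invT.append(1 / T[i])
    y_friedman.append(np.log(b * np.gradient(a, T)[i]))  # ln(dalpha/dt)
    y_ofw.append(np.log(b))                              # ln(beta)

slope_f = np.polyfit(invT, y_friedman, 1)[0]   # Friedman slope = -E/R
slope_o = np.polyfit(invT, y_ofw, 1)[0]        # OFW slope = -1.052 E/R (Doyle)
print(f"Friedman: E = {-slope_f * R / 1e3:.1f} kJ/mol")
print(f"OFW:      E = {-slope_o * R / 1.052 / 1e3:.1f} kJ/mol")
```

Repeating the fit at several values of αk reproduces the tabulated E(α) dependence that the software reports; a roughly constant E(α) is the usual indication that a one-stage description is reasonable.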
It often occurs that several functions adequately describe the experiment according to the statistical criteria. The choice of the function is then based on the search for the physical meaning of the resulting relation. In this context, some a priori ideas are used concerning the mechanisms of possible processes in the system under consideration: literature data, results of other physicochemical studies, or general considerations based on the theories of heterogeneous processes. Nevertheless, such kinetic analysis provides better insight into the effect of various external factors on the change in the morphology of the initial reagent and on the course of the process as a whole.

Table 3.1: Reaction types and the corresponding functions f(e, p) in Equation 1.2 [10].

Model abbreviation | f(e, p) | Reaction type
F1 | e | First order
F2 | e^2 | Second order
Fn | e^n | nth order
D1 | 0.5/(1 − e) | One-dimensional diffusion
D2 | −1/ln(e) | Two-dimensional diffusion
D3 | 1.5e^(1/3)/(e^(−1/3) − 1) | Jander three-dimensional diffusion
D4 | 1.5/(e^(−1/3) − 1) | Ginstling–Brounshtein three-dimensional diffusion
R2 | 2e^(1/2) | Reaction on the two-dimensional interface
R3 | 3e^(2/3) | Reaction on the three-dimensional interface
B1 | ep | Autocatalysis according to the Prout–Tompkins equation
Bna | e^n p^a | nth-order autocatalysis according to the Prout–Tompkins equation
C1-X | e(1 + Kcat·X) | First-order autocatalysis; X is the product in a complex model, often X = s
Cn-X | e^n(1 + Kcat·X) | nth-order autocatalysis
A2 | 2e(−ln(e))^(1/2) | Two-dimensional nucleation (Avrami–Erofeev)
A3 | 3e(−ln(e))^(2/3) | Three-dimensional nucleation (Avrami–Erofeev)
An | ne(−ln(e))^((n−1)/n) | n-dimensional nucleation (Avrami–Erofeev)

Let us consider in detail the procedure of kinetic analysis based on thermoanalytical experimental data for the dehydration of calcium oxalate monohydrate (CaC2O4·H2O); a quick illustration of Table 3.1 in code is given below.
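Before turning to that example, the models of Table 3.1 can be encoded as plain functions and evaluated at a given conversion. This is our illustration, not the software itself; the autocatalytic C1-X and Cn-X models are omitted because they require the product concentration X, and the exponents n and a below are illustrative.

```python
# Table 3.1 as callables of e = 1 - alpha and p = alpha (Thermokinetics notation).
import numpy as np

def f_models(n=1.0, a=1.0):
    return {
        "F1":  lambda e, p: e,
        "F2":  lambda e, p: e**2,
        "Fn":  lambda e, p: e**n,
        "D1":  lambda e, p: 0.5 / (1 - e),
        "D2":  lambda e, p: -1.0 / np.log(e),
        "D3":  lambda e, p: 1.5 * e**(1/3) / (e**(-1/3) - 1),
        "D4":  lambda e, p: 1.5 / (e**(-1/3) - 1),
        "R2":  lambda e, p: 2 * e**0.5,
        "R3":  lambda e, p: 3 * e**(2/3),
        "B1":  lambda e, p: e * p,
        "Bna": lambda e, p: e**n * p**a,
        "A2":  lambda e, p: 2 * e * (-np.log(e))**0.5,
        "A3":  lambda e, p: 3 * e * (-np.log(e))**(2/3),
        "An":  lambda e, p: n * e * (-np.log(e))**((n - 1) / n),
    }

alpha = 0.3
for name, f in f_models(n=0.34, a=0.5).items():
    print(f"{name:3s}  f(alpha=0.3) = {f(1 - alpha, alpha):.4f}")
```

Scanning a measured dα/dt against each of these shapes is, in essence, what the "calculate for all models" option of the software does before the statistical ranking.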
4.1 Dehydration of Calcium Oxalate Monohydrate

Let us consider the dehydration as a reaction that occurs by the scheme of reaction 1.1, A(solid) → B(solid) + C(gas), that is,

$\ce{CaC2O4*H2O(s) -> CaC2O4(s) + H2O(g)} \nonumber$

Figure 4.1 shows the TG curves of the dehydration of calcium oxalate monohydrate. The CaC2O4·H2O dehydration was studied on a NETZSCH TG 209 F3 Tarsus thermo-microbalance. The experiment was run at three heating rates: 5, 7.5, and 10 K/min. Three measurements were taken at each heating rate, other conditions being identical. Standard aluminum crucibles without lids were used as holders. The process was carried out in a dry air flow at a rate of 200 mL/min. The initial reagent was freshly precipitated calcium oxalate with a particle size of 15–20 μm. Weighed portions of the reagent were 5–6 mg for each heating rate.

4.2 Computational Procedure. Solution of the Inverse and Direct Kinetic Problems. Quasi-One-Stage Process

As follows from Figure 4.1, the dehydration in the given temperature range can be considered a quasi-one-stage reaction at all heating rates used. The experimental data obtained on NETZSCH equipment are processed with the NETZSCH Proteus software. For further work with the experimental data in the NETZSCH Thermokinetics software, it is necessary to export the data from the Proteus program in tabulated form (measured signal as a function of temperature or time) as an ASCII file. To do this, the user selects the desired curve and clicks the Extras → Export data button in the Proteus toolbar, then enters the lower and upper limits of the data range to be exported. To correctly specify the limits, the derivative of the selected curve is used: the left- and right-hand limits are chosen in the ranges where the derivative becomes zero (Figure 4.2). Remember that the derivative of the selected curve can be obtained by clicking the corresponding icon in the NETZSCH Proteus program window.

4.3 Analysis of Computation Results

Let us consider the computation results obtained by the linear regression method for the CaC2O4·H2O dehydration (Figure 4.19). Figure 4.19 presents the Arrhenius parameters and the form and characteristics of the function best fitting the experimental results (from the statistical viewpoint). For the reaction under consideration, the best-fitting function is the Prout–Tompkins equation with autocatalysis (the Bna code), which is indicated at the top left of the table. However, before discussing the meaning of these results, let us consider the F-test: Fit Quality window (Figure 4.20).

4.3.1 The F-test

The F-test: Fit quality and F-test: Step significance windows present the statistical analysis of the fit quality for different models. This allows us to determine, using statistical methods, which of the models provides the best fit for the experimental data. To perform such an analysis, Fisher's test (the F-test) is used. In general, Fisher's test is a variance ratio that makes it possible to verify whether the difference between two independent estimates of the variance of some data samples is significant. To do this, the ratio of the two variances is compared with the corresponding tabulated value of the Fisher distribution for a given number of degrees of freedom and significance level. If the ratio of the two variances exceeds the corresponding theoretical Fisher test value, the difference between the variances is significant. In the Thermokinetics software, Fisher's test is used for comparing the fit qualities provided by different models.
The best-fit model, that is, the model with the minimal sum of squared deviations, is taken as a reference (conventionally denoted as model 1). Then, each model (model 2) is compared with the reference model. If the Fisher test value does not exceed the critical value, the difference between the current model 2 and the reference model 1 is insignificant, and there is no reason to believe that model 1 describes the experiment more adequately than model 2. The Fexp value is estimated by means of Fisher's test:

$F_{exp}=\frac{L S Q_{2} / f_{2}}{L S Q_{1} / f_{1}} \label{4.2}$

The Fexp value is compared with the Fisher distribution value Fcrit(0.95) for the 0.95 confidence level and the corresponding numbers of degrees of freedom. The parameters to be refined by nonlinear regression are specified in the corresponding table (Figure 4.21). In the Const. column, the option 'false' is set for the parameters that should be varied, and the option 'true' is chosen for the parameters that remain constant. The three columns to the right are intended for imposing constraints on the selected values. The computation results are presented in Figure 4.25. The following conclusions can be drawn from Table 4.1: first, the autocatalysis parameters for the Bna and CnB functions are almost zero, that is, everything reduces to the Fn function; second, the errors of the Arrhenius parameters for Fn are minimal. Hence, the calcium oxalate dehydration is better described by the nth-order function. The reaction order can be considered to be 1/3, that is, the process is described by the "contracting sphere equation." This means that the sample consists of spherical particles of the same size and that the dehydration is a homothetic process, that is, the particles undergo a self-similar decrease in size during decomposition. Such a mechanism is inherent in the thermolysis of inorganic crystal hydrates. Thus, the problem of the CaC2O4·H2O dehydration macrokinetics can be thought of as solved.

Table 4.1: Kinetic parameters of the CaC2O4·H2O dehydration process.

Function code | log A | E, kJ/mol | Reaction order n | log Kcat 1 | Exp a1
Bna | 6.8±0.6 | 73±6 | 0.34±0.25 | — | 0.06±0.14
CnB | 6.8±1 | 74±8 | 0.41±0.48 | −0.45±1.9 | —
Fn | 7.0±0.1 | 75.2±0.8 | 0.34±0.25 | — | —

The project created in the NETZSCH Thermokinetics software is saved by clicking the corresponding toolbar button (Figure 4.27).
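The comparison of Equation 4.2 is straightforward to reproduce. The sketch below is ours; the LSQ values and degrees of freedom are illustrative placeholders, not the values from Figure 4.20.

```python
# A sketch of the model comparison in Equation 4.2 using SciPy's F distribution.
from scipy.stats import f as f_dist

def f_test(lsq2, f2, lsq1, f1, conf=0.95):
    """Compare a candidate model (2) against the best-fit reference model (1)."""
    F_exp = (lsq2 / f2) / (lsq1 / f1)
    F_crit = f_dist.ppf(conf, f2, f1)
    return F_exp, F_crit

F_exp, F_crit = f_test(lsq2=14.2, f2=120, lsq1=12.9, f1=120)
verdict = "insignificant" if F_exp <= F_crit else "significant"
print(f"F_exp = {F_exp:.3f}, F_crit = {F_crit:.3f}: difference is {verdict}")
```

With these placeholder numbers Fexp ≈ 1.10 lies below Fcrit ≈ 1.35, so the two fits would be treated as statistically equivalent, exactly the situation discussed for Table 4.1.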
The procedure of kinetic analysis of a reaction based on DSC data can be exemplified by the curing of an epoxy resin. The curing reaction involves the opening of the epoxy ring by an amine and is accompanied by an exotherm, which is recorded on a differential scanning calorimeter. The fraction of the cured resin is directly proportional to the evolved heat. Knowing how the conversion (that is, the fraction of the reacted resin) depends on time and temperature, as provided by DSC, enables one, in studying actual epoxy binders, to optimize the conditions of their treatment and the forming of products, for example, a polymer composite material. It is worth noting that curing occurs without weight change; TG measurements are therefore inapplicable in this case. It is well known [11] that, depending on the composition of the reagents and the process conditions, curing can occur as either a one-stage or a two-stage process. In the present section both variants are considered. Let us consider a classical system consisting of an epoxy diane resin based on 4,4'-dihydroxydiphenylpropane (bisphenol A) (Figure 5.1) and a curing agent, meta-phenylenediamine (Figure 5.2). The curing of this system was studied on a NETZSCH DSC 204 Phoenix analyzer. Measurements were taken at five heating rates: 2.5, 5, 7.5, 10, and 15 K/min. Samples were placed in NETZSCH aluminum crucibles with a lid, in which a hole was made beforehand. The process was carried out in an argon flow at a flow rate of 100 mL/min. A mixture of the resin components was freshly prepared before taking measurements. The samples were 5–5.5 mg for each of the heating rates.

5.1 Computation Procedure. Solution of the Inverse and Direct Kinetic Problems. Quasi-One-Stage Process

Experimental data acquired using the NETZSCH equipment are processed with the NETZSCH Proteus program. Figure 5.3 demonstrates that the curing of an epoxy resin in the given temperature range can be considered quasi-one-stage at all heating rates used. The procedure of kinetic analysis for DSC data is analogous to that for the TG measurements described above; here, we focus on the differences between these procedures. First of all, when loading data into the NETZSCH Thermokinetics software, the user selects Differential Scanning Calorimetry as the type of measurement (Figure 5.4). The DSC results not only provide information on the transformation of an analyte but also depend on the heat-exchange conditions in the analyzer–sample system. Because of the thermal resistance between the sample and the sensor, the heat flow released by processes in the sample is smeared in time (Figure 5.3). To perform kinetic analysis correctly, the true signal shape should be recovered, that is, the data should be corrected for the time constant of the instrument and for the thermal resistance. These corrections can be applied with the DSC Correction routine. As a rule, the melting peak of a pure metal (indium in this case) serves as the calibration measurement for determining the correction parameters; this metal is chosen because it melts in the same temperature range in which the epoxy resin is cured. The path to the file with the preliminarily calculated correction parameters is specified in the same window (Figure 5.6). The data are loaded by clicking the Load ASCII file icon (see Figure 4.9), analogously to the procedure with TG data. If necessary, the user corrects the evaluation range limits (Figure 5.7).
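For DSC data the conversion follows Equation 3.2: the running partial area above the baseline divided by the total peak area. A minimal sketch of this bookkeeping (ours, with a synthetic exotherm, a linear drift, and a straight-line baseline through the evaluation-range limits; all numbers are illustrative):

```python
# A sketch of Equation 3.2: conversion from a DSC peak by partial areas over a
# linear baseline. The drifting synthetic exotherm is invented, not real data.
import numpy as np

t = np.linspace(0, 600, 2000)                             # time, s
signal = 0.02 * t / 600 + np.exp(-((t - 300) / 60)**2)    # drift + peak, mW/mg

# Linear baseline through the signal values at the evaluation-range limits
baseline = np.interp(t, [t[0], t[-1]], [signal[0], signal[-1]])

excess = signal - baseline
steps = 0.5 * (excess[1:] + excess[:-1]) * np.diff(t)     # trapezoid slices
partial = np.concatenate(([0.0], np.cumsum(steps)))
alpha = partial / partial[-1]                             # Equation 3.2
print(f"alpha at the peak maximum: {alpha[np.argmax(signal)]:.3f}")
```

The choice of baseline directly changes the computed α(t), which is why the software asks the user to select the baseline type explicitly, as described next.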
For the further calculation, the type of baseline should be selected (Figure 5.8); in this case, we use a linear baseline. The loaded data are checked, and model-free analysis is performed (Figure 5.9). The resulting activation energies and preexponential factors are used as a zero approximation in solving the direct kinetic problem.

5.2 Analysis of Computation Results

Let us consider, first of all, the computation results (Figure 5.11) obtained by the linear regression method under the assumption of a one-stage process. Figure 5.11 presents the Arrhenius parameters and the form and characteristics of the function best fitting the experimental results (from the statistical viewpoint). The calculation was performed for all models. As expected, however, the only relevant model turned out to be the model of a reaction with autocatalysis described by the Prout–Tompkins equation (Bna code) (Figure 5.12), which is indicated at the top left of the table shown in Figure 5.11. Refinement of the model parameters by the nonlinear regression method led to results analogous to those in Figure 5.11. The statistical characteristics of the model are shown in Figure 5.13. With the inclusion of the Durbin–Watson test value, the resulting model parameters are as follows: E = 47 ± 3 kJ/mol, log A = 3.7 ± 0.4, n = 1.2 ± 0.15, a1 = 0.50 ± 0.05. As is known, the epoxy resin curing reaction occurs by an autocatalytic mechanism [12]: the reaction is promoted by the hydroxyl groups formed upon cross-linking [11]. At the early stage of the reaction, the reaction rate increases with increasing conversion because of the catalytic effect of the reaction product; later, the amount of the starting reagent decreases, and so does the reaction rate. Thus, the plot of the reaction rate versus time passes through a maximum. In the simplest case, this mechanism is described by the Prout–Tompkins function, which also follows from our preliminary calculation. However, the curing process is known to comprise at least two independent steps, so this function is often unable to describe the curing process adequately. It is preferable to use the following equation (of the Kamal type) for calculation of the curing process [11]:

$\frac{d \alpha}{d t}=\left(k_{1}+k_{2} \alpha^{m}\right)(1-\alpha)^{n} \label{5.1}$

In the software, this equation is represented as a model with two parallel reactions: a reaction with autocatalysis described by the Prout–Tompkins equation (Bna) and an nth-order reaction (Fn). The kinetic parameters corresponding to this two-step model with two parallel reactions are presented in Figure 5.14. However, the difference between the fits provided by the one- and two-stage models is statistically insignificant, as follows from Figure 5.15: in both cases, the Fexp value does not exceed the critical value. We may assume that the one- and two-step descriptions are equally probable. This can be explained by the presence of hydroxyl groups in the initial resin: even at zero conversion, catalytic sites exist, and the autocatalytic mechanism prevails during the entire course of the reaction. Table 5.1 presents the calculation results for both models. As is seen, the errors of the Fn function parameters for the two-stage process are significant, whereas the average parameter values do not differ from those of the Bna function. Hence, we can state that the above calculation does not confirm the occurrence of a two-stage curing process.
Table 5.1: Parameters for the one- and two-stage models.

Function code | log A | E, kJ/mol | Reaction order n | Exp a1
Bna | 7.7±0.4 | 47±3 | 1.2±0.15 | 0.50±0.05
Bna | 3.75±0.7 | 47±6 | 1.3±0.2 | 0.6±0.2
Fn | 2.4±5 | 48±26 | 0.4±0.6 | —

5.3 Plotting the Conversion–Time Curves

The above calculation can be used to select optimal conditions (temperature and time) for the curing process. Let us consider how to obtain the conversion-versus-time plot at a given temperature from the calculated data. To do this, we open the Predictions toolbar. The curves are shown in Figure 5.16. In this case, the calculations are performed assuming kinetic control of the reaction. In reality, once a certain curing degree has been achieved, the initially liquid sample is converted to the viscous-flow and then to the solid state. The diffusion of resin and curing-agent molecules thereby slows down, and the process becomes diffusion-controlled. Thus, the kinetic calculation should be supplemented with measurements of the rheological properties of the system. The resulting relationships show that the resin is completely cured within 16 min at 153 °C.
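Such a prediction is easy to reproduce outside the software. The sketch below (ours, not the Predictions module) integrates the one-stage Bna rate equation dα/dt = A·exp(−E/RT)·(1−α)^n·α^a at 153 °C, using the refined parameters quoted in Section 5.2 (log A = 3.7, E = 47 kJ/mol, n = 1.2, a = 0.50). Two assumptions are made: A is taken to be in s⁻¹, and a small seed α(0) = 10⁻⁴ stands in for the initial catalytic sites, because the α^a factor makes the rate vanish at exactly α = 0.

```python
# A sketch of an isothermal cure prediction with the one-stage Bna model.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314
logA, E, n, a = 3.7, 47e3, 1.2, 0.50   # refined parameters from Section 5.2
T = 153 + 273.15                        # isothermal temperature, K
k = 10**logA * np.exp(-E / (R * T))     # assumed units: 1/s

def rhs(t, y):
    alpha = min(y[0], 1 - 1e-9)
    return [k * (1 - alpha)**n * alpha**a]

sol = solve_ivp(rhs, (0, 1800), [1e-4], max_step=1.0, dense_output=True)
for minutes in (5, 10, 16):
    print(f"alpha({minutes:2d} min) = {float(sol.sol(minutes * 60)[0]):.3f}")
```

With these numbers the conversion approaches unity near the 16 min mark, consistent with Figure 5.16; in reality vitrification would slow the final stage, as noted above.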
For multistage processes, it is recommended to perform kinetic analysis using the following algorithm. Kinetic analysis is performed separately for each stage of the process, as for a one-stage reaction, and the corresponding kinetic model and kinetic parameters are determined. Then, kinetic analysis is performed for the entire process using the data obtained for the model of each stage; in so doing, the kinetic parameters of each stage are refined. Thus, when the process is studied as a whole, there is no longer any need to try different kinetic models, since they have already been chosen for each stage. However, for most multistage processes, the effects overlap. To separate quasi-one-stage processes, the Peak Separation software can be used. In the general case the reaction steps are dependent, so the separation procedure is formal: the obtained peaks do not refer to single reactions and may be used only as an initial approximation of the Arrhenius parameters. In the case of independent reactions, the separated peaks do refer to single reactions and thus may be used for calculation of the Arrhenius parameters.

6.1 Peak Separation Software

In multistage processes with competing or parallel reactions, the separate stages, as a rule, overlap, which can lead to considerable errors in the calculated Arrhenius parameters and to an incorrect choice of the scheme of the processes. This in turn leads to significant errors in the nonlinear regression method because of the nonlinearity of the problem. To solve this problem, the experimental curve is represented by a superposition of separate one-stage processes. This procedure is implemented in the NETZSCH Peak Separation software (Figure 6.1). The NETZSCH Peak Separation software [13] fits experimental data by a superposition of separate peaks, each of which can be described by one of the following functions:

1. Gaussian function,
2. Cauchy function,
3. pseudo-Voigt function (the sum of the Gaussian and Cauchy functions with corresponding weights),
4. Fraser–Suzuki function (an asymmetric Gaussian function),
5. Pearson function (a monotonic transformation from the Gaussian to the Cauchy function),
6. modified Laplace function.

In thermal analysis, chemical reaction steps are described in most cases by the Fraser–Suzuki function; for other processes, e.g., polymer melting, the modified Laplace function must be used. Figure 6.1 shows the decomposition of a multimodal curve with the use of this function. With amplitude Ampl, position Pos, half-width at half-maximum Hwhm, and asymmetry Asym, the Fraser–Suzuki profile and the area under it are

$y(x)=A m p l \cdot \exp \left[-\frac{\ln 2}{A s y m^{2}} \ln ^{2}\left(1+2\, A s y m \frac{x-P o s}{H w h m}\right)\right] \nonumber$

$A_{\text {Fraser }}=0.5 \cdot \sqrt{\pi / \ln 2} \cdot A m p l \cdot H w h m \cdot \exp \left[\frac{A s y m^{2}}{4 \ln 2}\right] \nonumber$

(a numerical sketch of this profile is given below). The software outputs the following parameters:

1. the optimal curve parameters and their standard deviations,
2. statistical parameters characterizing the fit quality (correlation coefficient, etc.),
3. the initial and simulated curves with the graphs of the separate peaks,
4. the calculated areas under the separate peaks and their contributions to the total area under the curve.

6.2 Multiple-Step Reaction Analysis as Exemplified by the Carbonization of Oxidized PAN Fiber

First, the multimodal curves are decomposed into separate components, and the parameters of each process are found by the above procedures, assuming that all processes are quasi-one-stage reactions. Then, the phenomenon is described as a whole.
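Returning to the Fraser–Suzuki profile of Section 6.1, the sketch below (ours, not the Peak Separation code) evaluates the profile for illustrative parameter values and verifies the analytical area expression by trapezoidal integration.

```python
# A sketch of the Fraser-Suzuki (asymmetric Gaussian) peak profile and a
# numerical check of its analytical area; all parameter values are illustrative.
import numpy as np

def fraser_suzuki(x, ampl, pos, hwhm, asym):
    """Peak profile; set to zero where the logarithm's argument is not positive."""
    arg = 1 + 2 * asym * (x - pos) / hwhm
    y = np.zeros_like(x)
    ok = arg > 0
    y[ok] = ampl * np.exp(-np.log(2) / asym**2 * np.log(arg[ok])**2)
    return y

ampl, pos, hwhm, asym = 1.0, 500.0, 40.0, 0.3   # e.g., x as temperature in K
x = np.linspace(300, 900, 20001)
y = fraser_suzuki(x, ampl, pos, hwhm, asym)

area_num = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))        # trapezoid rule
area_ana = 0.5 * np.sqrt(np.pi / np.log(2)) * ampl * hwhm \
           * np.exp(asym**2 / (4 * np.log(2)))
print(f"numerical area = {area_num:.4f}, analytical area = {area_ana:.4f}")
```

The peak areas reported by the software, and their contributions to the total area, are exactly these analytical areas for the fitted parameter sets.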
For multiple-step processes, the program suggests a list of schemes of such transformations, and the appropriate schemes are selected from this list; the schemes are presented in Figure 1 of the Appendix. The choice of the corresponding scheme is based on a priori ideas about the character of the stages of the process under consideration. As an example of the kinetic analysis of multistage processes, let us consider the carbonization of oxidized polyacrylonitrile fiber yielding carbon fiber. The results of thermogravimetric analysis are convenient to use as input data (Figure 6.3). The resulting peaks are sorted by stage number, and the inverse kinetic problem is solved. As a result, we obtain the type of model for the given stage and approximate Arrhenius parameters. As a rule, several models of comparable statistical significance satisfy the solution of this problem; if no information is available on the true mechanism of the process, both variants are used for solving the direct kinetic problem. The estimated parameters and types of models calculated at this step are listed in Table 6.1, and the statistical data of the calculation are presented in Table 6.3. In these tables, the "apparent" activation energy is expressed in kelvins, since the notion of the mole has no physical meaning for the fiber; the constant Ea/R is the temperature coefficient of the reaction rate.

Table 6.1: Estimated kinetic models and parameters of the separate stages of the carbonization process.

N | Kinetic model | Ea/R × 10³, K | t·S | log A | t·S | n | t·S
1 | Avrami–Erofeev equation | 25.0 | 0.6 | 14.8 | 0.4 | 0.28 | 0.3×10⁻³
2 | nth-order equation | 17.2 | 0.8 | 7.2 | 6.5 | 3.0 | 0.13
3 | Avrami–Erofeev equation | 41.1 | 2.3 | 16.2 | 1.0 | 0.29 | 0.1×10⁻³
4 | Avrami–Erofeev equation | 54.8 | 1.3 | 16.0 | 0.4 | 0.32 | 3.3×10⁻³

Table 6.2: Calculated kinetic models and parameters of the separate stages of the carbonization process.

N | Kinetic model | Ea/R × 10³, K | t·S | log A | t·S | n | t·S
1 | Avrami–Erofeev equation | 26.1 | 0.6 | 15.7 | 0.4 | 0.25 | 4.2×10⁻³
2 | nth-order equation | 35.5 | 2.1 | 18.8 | 1.2 | 1.98 | 0.14
3 | Avrami–Erofeev equation | 38.3 | 2.1 | 16.7 | 1.0 | 0.28 | 0.1×10⁻³
4 | Avrami–Erofeev equation | 56.8 | 4.7 | 16.8 | 1.6 | 0.19 | 0.1×10⁻³

Table 6.3: Statistical data of the calculation for the separate stages.

Parameter | Stage 1 | Stage 2 | Stage 3 | Stage 4
Least-squares value | 22.3 | 32.0 | 2.6 | 7.1
Correlation coefficient | 0.9988 | 0.9926 | 0.9956 | 0.9979
Average difference | 2.2×10⁻³ | 21.0×10⁻³ | 8.6×10⁻³ | 14.5×10⁻³
Durbin–Watson test value | 2.8×10⁻³ | 1.4×10⁻³ | 1.6×10⁻³ | 1.4×10⁻³
Durbin–Watson ratio | 19.0 | 26.9 | 25.0 | 27.0

Table 6.4: Statistical data of the calculation for the process as a whole.

Parameter | Value
Least-squares value | 1448.7
Correlation coefficient | 0.9998
Average difference | 0.21
Durbin–Watson test value | 5.09×10⁻³
Durbin–Watson ratio | 14.4

Table 6.5: Calculated kinetic models and parameters of the separate stages of the carbonization process for the successive-parallel scheme.

N | Kinetic model | Ea/R × 10³, K | t·S | log A | t·S | n | t·S
1 | Avrami–Erofeev equation | 24 | 3 | 11 | 1.5 | 0.23 | 1.2×10⁻²
2 | nth-order equation | 34 | 7 | 10 | 3 | 5 | 1
3 | Avrami–Erofeev equation | 63 | 5 | 25 | 5 | 0.02 | 0.1
4 | Avrami–Erofeev equation | 99 | 230 | 53 | 116 | 0.8 | 10

According to the authors of [14], the carbonization process may be represented by a set of successive-parallel processes. Let us consider this situation (the qffc scheme in Figure 3 of the Appendix). The results shown in Figure 6.3 and summarized in Table 6.5 have been obtained with the use of the same set of functions. The F-test value for the latter scheme with respect to the former one is Fexp = 58.6, Ftheor = 1.11.
Comparison of the calculation results for the two schemes reliably shows that, from the formal kinetic viewpoint, the scheme of successive stages best describes the thermoanalytical data as a whole. Thus, formal kinetic analysis makes it possible to choose the type of function best fitting the experimental curves (TG, DTG, DSC), to evaluate the kinetic parameters, and to determine the number of stages and their sequence (successive, parallel, etc.). In addition, the statistically optimal kinetic parameters allow one to model temperature programs that result in a constant rate of weight loss or enthalpy change, and to obtain the corresponding dependences under isothermal conditions or other temperature programs.
The purpose of elemental analysis is to determine the quantity of a particular element within a molecule or material.

• 1.1: Introduction to Elemental Analysis Elemental analysis can be subdivided in two ways: Qualitative: determining what elements are present or the presence of a particular element. Quantitative: determining how much of a particular or each element is present. In either case elemental analysis is independent of structural unit or functional group.
• 1.2: Spot Tests Spot tests are simple chemical procedures that uniquely identify a substance. They can be performed on small samples, even microscopic samples of matter, with no preliminary separation. The first report of a spot test was by Hugo Schiff, for the detection of uric acid. In a typical spot test, a drop of chemical reagent is added to a drop of an unknown mixture. If the substance under study is present, it produces a chemical reaction characterized by one or more unique observables, e.g., a color change.
• 1.3: Introduction to Combustion Analysis Combustion analysis is a standard method of determining a chemical formula of a substance that contains hydrogen and carbon. First, a sample is weighed and then burned in a furnace in the presence of excess oxygen. All of the carbon is converted to carbon dioxide, and the hydrogen is converted to water in this way. Each of these is absorbed in a separate compartment, which is weighed before and after the reaction. From these measurements, the chemical formula can be determined.
• 1.4: Introduction to Atomic Absorption Spectroscopy There are many applications of atomic absorption spectroscopy (AAS) due to its specificity. These can be divided into the broad categories of biological analysis, environmental and marine analysis, and geological analysis.
• 1.5: ICP-AES Analysis of Nanoparticles ICP-AES is a spectral technique that is used to determine both the presence of a metal analyte and its concentration. The ICP-AES method is introduced and a practical example is presented. This will help the reader to use this method for their own research work.
• 1.6: ICP-MS for Trace Metal Analysis Inductively coupled plasma mass spectrometry (ICP-MS) is an analytical technique for determining trace multi-elemental and isotopic concentrations in liquid, solid, or gaseous samples. It combines an ion-generating argon plasma source with the sensitive detection limit of mass spectrometric detection. Although ICP-MS is used for many different types of elemental analysis, including pharmaceutical testing and reagent manufacturing, this module will focus on mineral and water studies.
• 1.7: Ion Selective Electrode Analysis Ion selective electrode (ISE) analysis is a technique used to determine the activity of ions in aqueous solution by measuring the electrical potential. ISE has many advantages compared to other techniques and, based on these, a wide variety of applications, which is reasonable considering the importance of measuring ion activity.
• 1.8: A Practical Introduction to X-ray Absorption Spectroscopy X-ray absorption spectroscopy is a technique that uses synchrotron radiation to provide information about the electronic, structural, and magnetic properties of certain elements in materials. This information is obtained when X-rays are absorbed by an atom at energies near and above the core level binding energies of that atom.
Therefore, a brief description of X-rays, synchrotron radiation, and X-ray absorption is provided prior to a description of sample preparation for powdered materials.

• 1.9: Neutron Activation Analysis (NAA) Neutron activation analysis (NAA) is a non-destructive analytical method commonly used to determine the identities and concentrations of elements within a variety of materials. Unlike many other analytical techniques, NAA is based on nuclear rather than electronic transitions. In NAA, samples are subjected to neutron radiation (i.e., bombarded with neutrons), which causes the elements in the sample to capture free neutrons and form radioactive isotopes.
• 1.10: Total Carbon Analysis An introductory module on the theory and application of carbon analysis: it discusses techniques used to measure Total Organic Carbon, Total Inorganic Carbon, and Total Carbon, and the importance of such techniques.
• 1.11: Fluorescence Spectroscopy Atomic fluorescence spectroscopy (AFS) is a method that was invented by Winefordner and Vickers in 1964 as a means to analyze the chemical concentration of a sample. The idea is to excite a sample vapor with the appropriate UV radiation and, by measuring the emitted radiation, quantify the amount of the specific element being measured.
• 1.12: An Introduction to Energy Dispersive X-ray Spectroscopy Energy-dispersive X-ray spectroscopy (EDX or EDS) is an analytical technique used to probe the composition of solid materials. Several variants exist, but they all rely on exciting electrons near the nucleus, causing more distant electrons to drop energy levels to fill the resulting "holes."
• 1.13: X-ray Photoelectron Spectroscopy X-ray photoelectron spectroscopy (XPS), also known as electron spectroscopy for chemical analysis (ESCA), is one of the most widely used surface techniques in materials science and chemistry. It allows the determination of the atomic composition of the sample in a non-destructive manner, as well as other chemical information, such as binding energies, oxidation states, and speciation.
• 1.14: Auger Electron Spectroscopy Auger electron spectroscopy (AES) is one of the most commonly employed surface analysis techniques. It uses the energy of emitted electrons to identify the elements present in a sample, similar to X-ray photoelectron spectroscopy (XPS). The main difference is that XPS uses an X-ray beam to eject an electron while AES uses an electron beam to eject an electron.
• 1.15: Rutherford Backscattering of Thin Films One of the main research interests of the semiconductor industry is to improve the performance of semiconducting devices and to construct new materials with reduced size or thickness that have potential application in transistors and microelectronic devices. However, the most significant challenge regarding thin film semiconductor materials is measurement.
• 1.16: An Accuracy Assessment of the Refinement of Crystallographic Positional Metal Disorder in Molecular Solid Solutions Crystallographic positional disorder is evident when a position in the lattice is occupied by two or more atoms, the average of which constitutes the bulk composition of the crystal. If a particular atom occupies a certain position in one unit cell and another atom occupies the same position in other unit cells, the resulting electron density will be a weighted average of the situation in all the unit cells throughout the crystal.
• 1.17: Principles of Gamma-ray Spectroscopy and Applications in Nuclear Forensics Gamma-ray (γ-ray) spectroscopy is a quick and nondestructive analytical technique that can be used to identify various radioactive isotopes in a sample. In gamma-ray spectroscopy, the energy of incident gamma-rays is measured by a detector.

01: Elemental Analysis

The purpose of elemental analysis is to determine the quantity of a particular element within a molecule or material. Elemental analysis can be subdivided in two ways:

• Qualitative: determining what elements are present or the presence of a particular element.
• Quantitative: determining how much of a particular or each element is present.

In either case elemental analysis is independent of structural unit or functional group, i.e., the determination of the carbon content in toluene (\(\ce{C6H5CH3}\)) does not differentiate between the aromatic \(sp^2\) carbon atoms and the methyl \(sp^3\) carbon. Elemental analysis can be performed on a solid, liquid, or gas. However, depending on the technique employed, the sample may have to be pre-reacted, e.g., by combustion or acid digestion. The amounts required for elemental analysis range from a few grams (g) to a few milligrams (mg) or less. Elemental analysis can also be subdivided into general categories related to the approach involved in determining quantities.

• Classical analysis relies on stoichiometry through a chemical reaction or on comparison with a known reference sample.
• Modern methods rely on the nuclear structure or size (mass) of a particular element and are generally limited to solid samples.

Many classical methods can be further classified into the following categories:

• Gravimetric, in which a sample is separated from solution as a solid precipitate and weighed. This is generally used for alloys, ceramics, and minerals.
• Volumetric, the most frequently employed, involves determination of the volume of a substance that combines with another substance in known proportions. This is also called titrimetric analysis and is frequently employed using a visual end point or potentiometric measurement.
• Colorimetric (spectroscopic) analysis requires the addition of an organic complexing agent. This is commonly used in medical laboratories as well as in the analysis of industrial wastewater treatment.

The biggest limitation in classical methods is most often due to sample manipulation rather than equipment error, i.e., operator error in weighing a sample or observing an end point. In contrast, the errors in modern analytical methods are almost entirely computer sourced and inherent in the software that analyzes and fits the data.

1.02: Spot Tests

Spot tests (spot analysis) are simple chemical procedures that uniquely identify a substance. They can be performed on small samples, even microscopic samples of matter, with no preliminary separation. The first report of a spot test was in 1859 by Hugo Schiff for the detection of uric acid. In a typical spot test, a drop of chemical reagent is added to a drop of an unknown mixture. If the substance under study is present, it produces a chemical reaction characterized by one or more unique observables, e.g., a color change.

Detection of Chlorine

A typical example of a spot test is the detection of chlorine in the gas phase by exposure to paper impregnated with 0.1% 4,4'-bis-dimethylamino-thiobenzophenone (thio-Michler's ketone) dissolved in benzene. In the presence of chlorine the paper will change from yellow to blue.
The mechanism involves the zwitterionic form of the thioketone. This, in turn, undergoes an oxidation reaction and subsequent disulfide coupling.

Bibliography

• L. Ben-Dor and E. Jungreis, Microchimica Acta, 1964, 52, 100.
• F. Feigl, Spot Tests in Organic Analysis, 7th Ed., Elsevier, New York, 2012.
• N. MacInnes, A. R. Barron, R. S. Soman, and T. R. Gilbert, J. Am. Ceram. Soc., 1990, 73, 3696.
• H. Schiff, Ann. Chim. Acta, 1859, 109, 67.
Applications of Combustion Analysis

Combustion, or burning as it is more commonly known, is simply the mixing and exothermic reaction of a fuel and an oxidizer. It has been used since prehistoric times in a variety of ways, such as a source of direct heat, as in furnaces, boilers, stoves, and metal forming, or in piston engines, gas turbines, jet engines, rocket engines, guns, and explosives. Automobile engines use internal combustion in order to convert chemical energy into mechanical energy. Combustion is currently utilized in the production of large quantities of $\ce{H2}$. Coal or coke is combusted at 1000 °C in the presence of water in a two-step reaction. The first step involves the partial oxidation of carbon to carbon monoxide.

$\ce{C(g) + H2O(g) -> CO(g) + H2(g)} \nonumber$

The second step involves the reaction of the produced carbon monoxide with water to produce hydrogen and is commonly known as the water gas shift reaction.

$\ce{CO(g) + H2O(g) -> CO2(g) + H2(g)} \nonumber$

Although combustion provides a multitude of uses, it was not employed as a scientific analytical tool until the late 18th century.

History of Combustion

In the 1780's, Antoine Lavoisier (Figure $1$) was the first to analyze organic compounds with combustion, using an extremely large and expensive apparatus (Figure $2$) that required over 50 g of the organic sample and a team of operators. The method was simplified and optimized throughout the 19th and 20th centuries, first by Joseph Gay-Lussac (Figure $3$), who began to use copper oxide in 1815, which is still used as the standard catalyst. William Prout (Figure $4$) invented a new method of combustion analysis in 1827 by heating a mixture of the sample and $\ce{CuO}$ using a multiple-flame alcohol lamp (Figure $5$) and measuring the change in gaseous volume. In 1831, Justus von Liebig (Figure $6$) simplified the method of combustion analysis into a "combustion train" system (Figure $7$ and Figure $8$) that linearly heated the sample using coal, absorbed water using calcium chloride, and absorbed carbon dioxide using potash (KOH). This new method required only 0.5 g of sample and a single operator, and Liebig moved the sample through the apparatus by sucking on an opening at the far right end of the apparatus. Jean-Baptiste André Dumas (Figure $9$) used a combustion train similar to Liebig's; however, he added a U-shaped aspirator that prevented atmospheric moisture from entering the apparatus (Figure $10$). In 1923, Fritz Pregl (Figure $11$) received the Nobel Prize for inventing a micro-analysis method of combustion. This method required only 5 mg or less, which is 0.01% of the amount required in Lavoisier's apparatus. Today, combustion analysis of an organic or organometallic compound requires only about 2 mg of sample. Although this method of analysis destroys the sample and is not as sensitive as other techniques, it is still considered a necessity for characterizing an organic compound.

Categories of Combustion

Basic flame types

There are several categories of combustion, which can be identified by their flame types (Table $1$). At some point in the combustion process, the fuel and oxidant must be mixed together. If these are mixed before being burned, the flame type is referred to as a premixed flame, and if they are mixed simultaneously with combustion, it is referred to as a nonpremixed flame. In addition, the flow of the flame can be categorized as either laminar (streamlined) or turbulent (Figure $12$).
Table $1$: Types of combustion systems with examples. Adapted from J. Warnatz, U. Maas, and R. W. Dibble, Combustion: Physical and Chemical Fundamentals, Modeling and Simulation, Experiments, Pollutant Formation, 3rd Ed., Springer, Berlin (2001).

Fuel/oxidizer mixing | Fluid motion | Examples
Premixed | Turbulent | Spark-ignited gasoline engine, low-NOx stationary gas turbine
Premixed | Laminar | Flat flame, Bunsen flame (followed by a nonpremixed candle for Φ > 1)
Nonpremixed | Turbulent | Pulverized coal combustion, aircraft turbine, diesel engine, H2/O2 rocket motor
Nonpremixed | Laminar | Wood fire, radiant burners for heating, candle

The amount of oxygen in the combustion system can alter the flow of the flame and its appearance. As illustrated in Figure $13$, a flame with no oxygen tends to have a very turbulent flow, while a flame with an excess of oxygen tends to have a laminar flow.

Stoichiometric combustion and calculations

A combustion system is referred to as stoichiometric when all of the fuel and oxidizer are consumed and only carbon dioxide and water are formed. On the other hand, a fuel-rich system has an excess of fuel, and a fuel-lean system has an excess of oxygen (Table $2$).

Table $2$: Examples of stoichiometric, fuel-rich, and fuel-lean systems.

Combustion type | Reaction example
Stoichiometric | $\ce{2H2 + O2 -> 2H2O}$
Fuel-rich ($\ce{H2}$ left over) | $\ce{3H2 + O2 -> 2H2O + H2}$
Fuel-lean ($\ce{O2}$ left over) | $\ce{CH4 + 3O2 -> 2H2O + CO2 + O2}$

If the reaction of a stoichiometric mixture is written to describe the reaction of exactly 1 mol of fuel ($\ce{H2}$ in this case), then the mole fraction of the fuel content can be easily calculated as follows, where $ν$ denotes the mole number of $\ce{O2}$ in the combustion reaction equation for a complete reaction to $\ce{H2O}$ and $\ce{CO2}$:

$x_{\text{fuel, stoich}} = \dfrac{1}{1+v} \nonumber$

For example, in the reaction

$\ce{H2 + 1/2 O2 -> H2O} \nonumber$

we have $v = \frac{1}{2}$, so the stoichiometry is calculated as

$x_{\ce{H2}, \text{stoich}}= \dfrac{1}{1+0.5} = 2/3 \nonumber$

However, this calculation applies to the reaction in an environment of pure oxygen. Air, on the other hand, contains only 21% oxygen (78% nitrogen, 1% noble gases). Therefore, if air is used as the oxidizer, this must be taken into account in the calculations, i.e.

$x_{\ce{N2}} = 3.762 (x_{\ce{O2}}) \nonumber$

The mole fractions for a stoichiometric mixture in air are therefore calculated in the following way:

$x_{\text{fuel, stoich}} = \dfrac{1}{1+v(4.762)} \label{eq:xfuel}$

$x_{\ce{O2},\text{stoich}} = v(x_{\text{fuel, stoich}}) \nonumber$

$x_{\ce{N2},\text{stoich}} = 3.762(x_{\ce{O2}, \text{stoich}}) \nonumber$

Example $1$:

Calculate the fuel mole fraction ($x_{\text{fuel}}$) for the stoichiometric reaction:

$\ce{CH4 + 2O2} + (2 \times 3.762)\ce{N2 -> CO2 + 2H2O} + (2 \times 3.762)\ce{N2} \nonumber$

Solution

In this reaction $ν$ = 2, as 2 moles of oxygen are needed to fully oxidize methane into $\ce{H2O}$ and $\ce{CO2}$.
$x_{\text{fuel, stoich}} = \dfrac{1}{1+2 \times 4.762} = 0.09502 = 9.502~\text{mol} \% \nonumber$

Exercise $1$

Calculate the fuel mole fraction for the stoichiometric reaction:

$\ce{C3H8 + 5O2} + (5 \times 3.762)\ce{N2 -> 3CO2 + 4H2O} + (5 \times 3.762)\ce{N2} \nonumber$

Answer

The fuel mole fraction is 4.03%.

Premixed combustion reactions can also be characterized by the air equivalence ratio, $\lambda$:

$\lambda = \dfrac{x_{\text{air}}/x_{\text{fuel}}}{x_{\text{air, stoich}}/x_{\text{fuel,stoich}}} \nonumber$

The fuel equivalence ratio, $Φ$, is the reciprocal of this value:

$Φ = 1/\lambda \nonumber$

Rewriting \ref{eq:xfuel} in terms of the fuel equivalence ratio gives:

$x_{\text{fuel}} = \frac { 1 } { 1 + v( 4.762 / \Phi ) } \nonumber$

$x_{\text{air}} = 1 - x_{\text{fuel}} \nonumber$

$x_{\ce{O2}} = x_{\text{air}}/4.762 \nonumber$

$x_{\ce{N2}} = 3.762(x_{\ce{O2}}) \nonumber$

The premixed combustion processes can also be identified by their air and fuel equivalence ratios (Table $3$).

Table $3$: Identification of combustion type by Φ and λ values.

Type of combustion | Φ | λ
Rich | >1 | <1
Stoichiometric | =1 | =1
Lean | <1 | >1

With a premixed type of combustion, there is much greater control over the reaction. If performed under lean conditions, then high temperatures, the pollutant nitric oxide, and the production of soot can be minimized or even avoided, allowing the system to combust efficiently. However, a premixed system requires large volumes of premixed reactants, which pose a fire hazard. As a result, nonpremixed combustion, while not as efficient, is more commonly used.

Instrumentation

Though the instrumentation of combustion analysis has greatly improved, the basic components of the apparatus (Figure $14$) have not changed much since the late 18th century. The sample of an organic compound, such as a hydrocarbon, is contained within a furnace or exposed to a flame and burned in the presence of oxygen, creating water vapor and carbon dioxide gas (Figure $15$). The sample first moves through the apparatus to a chamber in which $\ce{H2O}$ is absorbed by a hydrophilic substance and then through a chamber in which $\ce{CO2}$ is absorbed. The change in weight of each chamber is determined to calculate the weight of $\ce{H2O}$ and $\ce{CO2}$. After the masses of $\ce{H2O}$ and $\ce{CO2}$ have been determined, they can be used to characterize and calculate the composition of the original sample.

Calculations and determining chemical formulas

Hydrocarbons

Combustion analysis is a standard method of determining the chemical formula of a substance that contains hydrogen and carbon. First, a sample is weighed and then burned in a furnace in the presence of excess oxygen. All of the carbon is converted to carbon dioxide, and the hydrogen is converted to water. Each of these is absorbed in a separate compartment, which is weighed before and after the reaction. From these measurements, the chemical formula can be determined. Generally, the following reaction takes place in combustion analysis:

$\ce{C_{a}H_{b} + O2(xs) -> aCO2 + b/2 H2O} \nonumber$

Example $2$:

After burning 1.333 g of a hydrocarbon in a combustion analysis apparatus, 1.410 g of $\ce{H2O}$ and 4.305 g of $\ce{CO2}$ were produced. Separately, the molar mass of this hydrocarbon was found to be 204.35 g/mol. Calculate the empirical and molecular formulas of this hydrocarbon.

Step 1: Using the molar masses of water and carbon dioxide, determine the moles of hydrogen and carbon that were produced.
$1.410~\text{g}~\ce{H2O} \times \dfrac{1~\text{mol}~\ce{H2O}}{18.015~\text{g}~\ce{H2O}} \times \dfrac{2~\text{mol H}}{1~\text{mol}~\ce{H2O}} = 0.1565~\text{mol H} \nonumber$

$4.305~\text{g}~\ce{CO2} \times \dfrac{1~\text{mol}~\ce{CO2}}{44.010~\text{g}~\ce{CO2}} \times \dfrac{1~\text{mol C}}{1~\text{mol}~\ce{CO2}} = 0.09782~\text{mol C} \nonumber$

Step 2: Divide the larger molar amount by the smaller molar amount. In some cases, the ratio is not made up of two integers. Convert the numerator of the ratio to an improper fraction and rewrite the ratio in whole numbers as shown:

$\frac { 0.1565~\mathrm { mol~H } } { 0.09782~\mathrm{ mol~C} } = \frac { 1.600~\mathrm { mol~H } } { 1~\mathrm { mol~C } } = \frac { 16 / 10~\mathrm { mol~H } } { 1~\mathrm { mol~C } } = \frac { 8 / 5~\mathrm { mol~H } } { 1~\mathrm { mol~C } } = \frac { 8~\mathrm { mol~H } } { 5~\mathrm { mol~C } } \nonumber$

Therefore, the empirical formula is $\ce{C5H8}$.

Step 3: To get the molecular formula, divide the experimental molar mass of the unknown hydrocarbon by the empirical formula weight.

$\frac { \text { Molar mass } } { \text { Empirical formula weight } } = \frac { 204.35~\mathrm { g } / \mathrm { mol } } { 68.114~\mathrm { g } / \mathrm { mol } } = 3 \nonumber$

Therefore, the molecular formula is $\ce{(C5H8)3}$ or $\ce{C15H24}$.

Exercise $2$

After burning 1.082 g of a hydrocarbon in a combustion analysis apparatus, 1.583 g of $\ce{H2O}$ and 3.315 g of $\ce{CO2}$ were produced. Separately, the molar mass of this hydrocarbon was found to be 258.52 g/mol. Calculate the empirical and molecular formulas of this hydrocarbon.

Answer

The empirical formula is $\ce{C3H7}$, and the molecular formula is $\ce{(C3H7)6}$ or $\ce{C18H42}$.

Compounds containing carbon, hydrogen, and oxygen

Combustion analysis can also be utilized to determine the empirical and molecular formulas of compounds containing carbon, hydrogen, and oxygen. However, as the reaction is performed in an environment of excess oxygen, the amount of oxygen in the sample must be determined from the sample mass rather than from the combustion data.

Example $3$:

A 2.0714 g sample containing carbon, hydrogen, and oxygen was burned in a combustion analysis apparatus; 1.928 g of $\ce{H2O}$ and 4.709 g of $\ce{CO2}$ were produced. Separately, the molar mass of the sample was found to be 116.16 g/mol. Determine the empirical formula, molecular formula, and identity of the sample.

Step 1: Using the molar masses of water and carbon dioxide, determine the moles of hydrogen and carbon that were produced.

$1.928~\text{g}~\ce{H2O} \times \dfrac{1~\text{mol}~\ce{H2O}}{18.015~\text{g}~\ce{H2O}} \times \dfrac{2~\text{mol H}}{1~\text{mol}~\ce{H2O}} = 0.2140~\text{mol H} \nonumber$

$4.709~\text{g}~\ce{CO2} \times \dfrac{1~\text{mol}~\ce{CO2}}{44.010~\text{g}~\ce{CO2}} \times \dfrac{1~\text{mol C}}{1~\text{mol}~\ce{CO2}} = 0.1070~\text{mol C} \nonumber$

Step 2: Using the molar amounts of carbon and hydrogen, calculate the masses of each in the original sample.

$0.2140~\mathrm { mol~H } \times \frac { 1.008~\mathrm { g~H } } { 1~\mathrm { mol~H } } = 0.2157~\mathrm { g~H } \nonumber$

$0.1070~\mathrm{mol~C} \times \frac { 12.011~\mathrm{g~C} } { 1~\mathrm{ mol~C} } = 1.285~\mathrm{g~C} \nonumber$

Step 3: Subtract the masses of carbon and hydrogen from the sample mass. Now that the mass of oxygen is known, use this to calculate the molar amount of oxygen in the sample.
$2.0714~\mathrm { g~sample } - 0.2157~\mathrm{ g~H} - 1.285~\mathrm{ g~C} = 0.5707~\mathrm{ g~O } \nonumber$

$0.5707~\mathrm{g~O} \times \frac { 1~\mathrm{mol~O} } { 16.00~\mathrm{ g~O} } = 0.03567~\mathrm{mol~O} \nonumber$

Step 4: Divide each molar amount by the smallest molar amount in order to determine the ratio between the three elements.

$\frac { 0.03567~\mathrm { mol~O } } { 0.03567 } = 1.00~\mathrm { mol~O } = 1~\mathrm { mol~O } \nonumber$

$\frac { 0.1070~\mathrm { mol~C } } { 0.03567 } = 3.00~\mathrm { mol~C } = 3~\mathrm { mol~C } \nonumber$

$\frac { 0.2140~\mathrm { mol~H } } { 0.03567 } = 5.999~\mathrm { mol~H } = 6~\mathrm { mol~H } \nonumber$

Therefore, the empirical formula is $\ce{C3H6O}$.

Step 5: To get the molecular formula, divide the experimental molar mass of the unknown compound by the empirical formula weight.

$\frac { \text { Molar mass } } { \text { Empirical formula weight } } = \frac { 116.16~\mathrm { g /mol } } { 58.08~\mathrm { g /mol } } = 2 \nonumber$

Therefore, the molecular formula is $\ce{(C3H6O)2}$ or $\ce{C6H12O2}$.

Structures of possible compounds with the molecular formula $\ce{C6H12O2}$: (a) butyl acetate, (b) sec-butyl acetate, (c) tert-butyl acetate, (d) ethyl butyrate, (e) hexanoic acid, (f) isobutyl acetate, (g) methyl pentanoate, and (h) propyl propanoate.

Exercise $3$

A 4.846 g sample containing carbon, hydrogen, and oxygen was burned in a combustion analysis apparatus; 4.843 g of $\ce{H2O}$ and 11.83 g of $\ce{CO2}$ were produced. Separately, the molar mass of the sample was found to be 144.22 g/mol. Determine the empirical formula, molecular formula, and identity of the sample.

Answer

The empirical formula is $\ce{C4H8O}$, and the molecular formula is $\ce{(C4H8O)2}$ or $\ce{C8H16O2}$.

Structures of possible compounds with the molecular formula $\ce{C8H16O2}$: (a) octanoic acid (caprylic acid), (b) hexyl acetate, (c) pentyl propanoate, (d) 2-ethylhexanoic acid, (e) valproic acid (VPA), (f) cyclohexanedimethanol (CHDM), and (g) 2,2,4,4-tetramethyl-1,3-cyclobutanediol (CBDO).

Binary compounds

By using combustion analysis, the chemical formula of a binary compound containing oxygen can also be determined. This is particularly helpful in the case of the combustion of a metal, which can result in oxides of multiple possible oxidation states.

Example $4$:

A sample of iron weighing 1.7480 g is combusted in the presence of excess oxygen. A metal oxide ($\ce{Fe_{x}O_{y}}$) is formed with a mass of 2.4982 g. Determine the chemical formula of the oxide product and the oxidation state of Fe.

Step 1: Subtract the mass of Fe from the mass of the oxide to determine the mass of oxygen in the product.

$2.4982~\mathrm { g~Fe } _ { \mathrm { x } } \mathrm { O } _ { \mathrm { y } } - 1.7480~\mathrm { g~Fe } = 0.7502~\mathrm { g~O } \nonumber$

Step 2: Using the molar masses of Fe and O, calculate the molar amounts of each element.

$1.7480~\mathrm { g~Fe } \times \frac { 1 \text { mol Fe } } { 55.845 \text { g Fe } } = 0.031301 \text { mol Fe } \nonumber$

$0.7502~\text { g~O } \times \frac { 1 \text { mol O }} { 16.00~\text { g~O} } = 0.04689~\text { mol O } \nonumber$

Step 3: Divide the larger molar amount by the smaller molar amount. In some cases, the ratio is not made up of two integers. Convert the numerator of the ratio to an improper fraction and rewrite the ratio in whole numbers as shown.
Binary compounds

By using combustion analysis, the chemical formula of a binary compound containing oxygen can also be determined. This is particularly helpful in the case of the combustion of a metal, which can result in potential oxides of multiple oxidation states.

Example $4$:

A sample of iron weighing 1.7480 g is combusted in the presence of excess oxygen. A metal oxide ($\ce{Fe_{x}O_{y}}$) is formed with a mass of 2.4982 g. Determine the chemical formula of the oxide product and the oxidation state of Fe.

Step 1: Subtract the mass of Fe from the mass of the oxide to determine the mass of oxygen in the product.

$2.4982~\mathrm { g~Fe } _ { \mathrm { x } } \mathrm { O } _ { \mathrm { y } } - 1.7480~\mathrm { g~Fe } = 0.7502~\mathrm { g~O } \nonumber$

Step 2: Using the molar masses of Fe and O, calculate the molar amounts of each element.

$1.7480~\mathrm { g~Fe } \times \frac { 1 \text { mol Fe } } { 55.845 \text { g Fe } } = 0.031301 \text { mol Fe } \nonumber$

$0.7502~\text { g O } \times \frac { 1 \text { mol O }} { 16.00~\text { g O} } = 0.04689~\text { mol O } \nonumber$

Step 3: Divide the larger molar amount by the smaller molar amount. In some cases the ratio will not be made up of two integers; convert the decimal portion of the ratio to a fraction and rewrite the ratio in whole numbers, as shown.

$\frac { 0.031301~\text{ mol Fe } } { 0.04689~\mathrm { mol~O } } = \frac { 0.6675~\mathrm { mol~Fe } } { 1~\mathrm { mol~O } } = \frac{ \frac{2}{3}~\mathrm { mol~Fe } } { 1~\mathrm { mol~O } } = \frac { 2~\mathrm { mol~Fe } } { 3~\mathrm { mol~O } } \nonumber$

Therefore, the chemical formula of the oxide is $\ce{Fe2O3}$, and Fe has a 3+ oxidation state.

Exercise $4$

A sample of copper weighing 7.295 g is combusted in the presence of excess oxygen. A metal oxide ($\ce{Cu_{x}O_{y}}$) is formed with a mass of 8.2131 g. Determine the chemical formula of the oxide product and the oxidation state of Cu.

Answer

The chemical formula is $\ce{Cu2O}$, and Cu has a 1+ oxidation state.

Bibliography

• J. A. Dumas, Ann. Chem. Pharm., 1841, 38, 141.
• H. Goldwhite, J. Chem. Edu., 1978, 55, 366.
• A. Lavoisier, Traité Élémentaire de Chimie, 1789, 2, 493.
• J. Von Liebig, Annalen der Physik und Chemie, 1831, 21, 1.
• A. Linan and F. A. Williams, Fundamental Aspects of Combustion, Oxford University Press, New York (1993).
• J. M. McBride, "Combustion Analysis," Chemistry 125, Yale University.
• W. Prout, Philos. T. R. Soc. Lond., 1827, 117, 355.
• D. Shriver and P. Atkins, Inorganic Chemistry, 5th Ed., W. H. Freeman and Co., New York (2009).
• W. Vining et al., General Chemistry, 1st Ed., Cengage, Brooks/Cole Cengage Learning, University of Massachusetts Amherst (2014).
• J. Warnatz, U. Maas, and R. W. Dibble, Combustion: Physical and Chemical Fundamentals, Modeling and Simulation, Experiments, Pollutant Formation, 3rd Ed., Springer, Berlin (2001).
Brief overview of atomic absorption spectroscopy

History of atomic absorption spectroscopy

The earliest spectroscopy was first described by Marcus Marci von Kronland in 1648, who analyzed sunlight as it passed through water droplets, creating a rainbow. Further analysis of sunlight by William Hyde Wollaston (Figure $1$) led to the discovery of black lines in the spectrum, which in 1820 Sir David Brewster (Figure $2$) explained as absorption of light in the sun's atmosphere. Robert Bunsen (Figure $3$) and Gustav Kirchhoff (Figure $4$) studied the sodium spectrum and came to the conclusion that every element has its own unique spectrum that can be used to identify elements in the vapor phase. Kirchhoff further explained the phenomenon by stating that if a material can emit radiation of a certain wavelength, it may also absorb radiation of that wavelength. Although Bunsen and Kirchhoff took a large step in defining the technique of atomic absorption spectroscopy (AAS), it was not widely utilized as an analytical technique, except in the field of astronomy, due to many practical difficulties. In 1953, Alan Walsh (Figure $5$) drastically improved the AAS methods. He advocated AAS to many instrument manufacturers, but to no avail: although he had improved the methods, he had not yet shown how they could be useful in any application. In 1957, he discovered uses for AAS that convinced manufacturers to market the first commercial AAS spectrometers. Since that time, the popularity of AAS has fluctuated as other analytical techniques have emerged and as improvements to the methods have been made.

Theory of atomic absorption spectroscopy

In order to understand how atomic absorption spectroscopy works, some background information is necessary. Atomic theory began with John Dalton (Figure $6$) in the 18th century, when he proposed the concept of atoms, that all atoms of an element are identical, and that atoms of different elements can combine to form molecules. In 1913, Niels Bohr (Figure $7$) revolutionized atomic theory by proposing quantum numbers, a positively charged nucleus, and electrons orbiting around the nucleus, in what became known as the Bohr model of the atom. Soon afterward, Louis de Broglie (Figure $8$) proposed quantized energy of electrons, which is an extremely important concept in AAS. Wolfgang Pauli (Figure $9$) then elaborated on de Broglie's theory by stating that no two electrons can share the same four quantum numbers. These landmark discoveries in atomic theory are necessary in understanding the mechanism of AAS.

Atoms have valence electrons, which are the outermost electrons of the atom. Atoms can be excited when irradiated, which creates an absorption spectrum. When an atom is excited, a valence electron moves up an energy level. The energies of the various stationary states, or restricted orbits, can then be determined from these emission lines. The resonance line is then defined as the specific radiation absorbed to reach the excited state. The Maxwell-Boltzmann equation gives the relative number of atoms in a given energy level; it relates the distribution to the thermal temperature of the system (as opposed to electronic temperature, vibrational temperature, or rotational temperature). Planck proposed that radiation is emitted in discrete packets of energy (quanta):

$E= h \nu \label{eq:quanta}$

an expression for the energy of light that is complementary to Einstein's equation for the energy of matter:

$E=mc^2 \label{eq:mc2}$

Both atomic emission and atomic absorption spectroscopy can be used to analyze samples. Atomic emission spectroscopy measures the intensity of light emitted by excited atoms, while atomic absorption spectroscopy measures the light absorbed by ground-state atoms. This light is typically in the visible or ultraviolet region of the electromagnetic spectrum. The fraction of light absorbed or emitted is then compared to a calibration curve to determine the amount of material in the sample. The energy of the transition can be used to find the frequency of the radiation, and thus the wavelength, through the combination of Equations \ref{eq:quanta} and \ref{eq:ncl}.

$\nu = c/\lambda \label{eq:ncl}$

Because the energy levels are quantized, only certain wavelengths are allowed, and each atom has a unique spectrum; a numerical illustration is sketched below.
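As a rough numerical illustration of Equations \ref{eq:quanta} and \ref{eq:ncl}, and of the thermal population statement above, the short Python sketch below converts a transition energy into its wavelength and estimates the fraction of thermally excited atoms from a Boltzmann factor. The sodium D-line energy and the flame temperature are approximate values, and the degeneracy ratio is left as a parameter.

```python
from math import exp

h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def emission_wavelength_nm(delta_E):
    """lambda = h*c/E, combining E = h*nu with nu = c/lambda."""
    return h * c / delta_E * 1e9

def excited_fraction(delta_E, T, g_ratio=1.0):
    """Approximate N_excited/N_ground = (g1/g0)*exp(-dE/kT)."""
    return g_ratio * exp(-delta_E / (k * T))

dE_Na = 3.37e-19                       # J, approx. sodium D-line transition energy
print(emission_wavelength_nm(dE_Na))   # ~589 nm, the familiar yellow sodium line
print(excited_fraction(dE_Na, 2500))   # ~6e-5 at a 2500 K flame: almost all atoms
                                       # remain in the ground state
```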
There are many variables that can affect the system. For example, if the sample is changed in a way that increases the population of atoms, there will be an increase in both emission and absorption, and vice versa. There are also variables that affect the ratio of excited to unexcited atoms, such as an increase in the temperature of the vapor.

Applications of Atomic Absorption Spectroscopy

There are many applications of atomic absorption spectroscopy (AAS) due to its specificity. These can be divided into the broad categories of biological analysis, environmental and marine analysis, and geological analysis.

Biological analysis

Biological samples can include both human tissue samples and food samples. In human tissue samples, AAS can be used to determine the levels of various metals and other electrolytes within the tissue. These tissue samples include, but are not limited to, blood, bone marrow, urine, hair, and nails. Sample preparation is dependent upon the sample. This is extremely important in that many elements are toxic at certain concentrations in the body, and AAS can determine the concentrations at which they are present. Some examples of trace elements that samples are analyzed for are arsenic, mercury, and lead.

An example of an application of AAS to human tissue is the measurement of the electrolytes sodium and potassium in plasma. This measurement is important because the values can be indicative of various diseases when outside of the normal range. The typical method used for this analysis is atomization of a 1:50 dilution in strontium chloride ($\ce{SrCl2}$) using an air-hydrogen flame. The sodium is detected at its secondary line (330.2 nm) because detection at the primary line would require further dilution of the sample due to signal intensity. The reason that strontium chloride is used is that it reduces ionization of the potassium and sodium ions while eliminating the interference of phosphate and calcium.

In the food industry, AAS provides analysis of vegetables, animal products, and animal feeds. These kinds of analyses are some of the oldest applications of AAS. An important consideration that needs to be taken into account in food analysis is sampling. The sample should be an accurate representation of what is being analyzed; because of this it must be homogeneous, and it is often necessary to run several samples. Food samples are most often run in order to determine mineral and trace element amounts so that consumers know if they are consuming an adequate amount. Samples are also analyzed to determine heavy metals, which can be detrimental to consumers.

Environmental and marine analysis

Environmental and marine analysis typically refers to water analysis of various types.
Water analysis includes many things, ranging from drinking water to waste water to sea water. Unlike biological samples, the preparation of water samples is governed more by laws than by the sample itself. The analytes that can be measured also vary greatly and often include lead, copper, nickel, and mercury.

An example of water analysis is an analysis of the leaching of lead and zinc from tin-lead solder into water. The solder is what binds the joints of copper pipes. In this particular experiment, soft water, acidic water, and chlorinated water were all analyzed. The sample preparation consisted of exposing the various water samples to copper plates with solder for various intervals of time. The samples were then analyzed for copper and zinc with air-acetylene flame AAS; a deuterium lamp was used for background correction. For the samples that had copper levels below 100 µg/L, the method was changed to graphite furnace electrothermal AAS due to its higher sensitivity.

Geological analysis

Geological analysis encompasses both mineral reserves and environmental research. When prospecting mineral reserves, the method of AAS used needs to be cheap, fast, and versatile, because the majority of prospects end up being of no economic use. When studying rocks, preparation can include acid digestion or leaching; if the silicon content of the sample needs to be analyzed, acid digestion is not a suitable preparation method.

An example is the analysis of lake and river sediment for lead and cadmium. Because this experiment involves a solid sample, more preparation is needed than for the other examples. The sediment was first dried, then ground into a powder, and then decomposed in a bomb with nitric acid ($\ce{HNO3}$) and perchloric acid ($\ce{HClO4}$). Standards of lead and cadmium were prepared. Ammonium sulfate ($\ce{(NH4)2SO4}$) and ammonium phosphate ($\ce{(NH4)3PO4}$) were added to the samples to correct for the interferences caused by the sodium and potassium present in the sample. The standards and samples were then analyzed with electrothermal AAS.

Instrumentation

Atomizer

In order for the sample to be analyzed, it must first be atomized. This is an extremely important step in AAS because it determines the sensitivity of the reading. The most effective atomizers create a large number of homogeneous free atoms. There are many types of atomizers, but only two are commonly used: flame and electrothermal atomizers.

Flame atomizer

Flame atomizers (Figure $10$) are widely used for a multitude of reasons, including their simplicity, low cost, and the long length of time that they have been utilized. Flame atomizers accept an aerosol from a nebulizer into a flame that has enough energy to both volatilize and atomize the sample. When this happens, the sample is dried, vaporized, atomized, and ionized. Within this category of atomizers there are many subcategories, determined by the chemical composition of the flame, which is often chosen based on the sample being analyzed. The flame itself should meet several requirements, including sufficient energy, a long path length, low turbulence, and safe operation.

Electrothermal atomizer

Although electrothermal atomizers were developed before flame atomizers, they did not become popular until more recently, due to improvements made to the detection level. They employ graphite tubes that increase temperature in a stepwise manner. Electrothermal atomization first dries the sample and evaporates much of the solvent and impurities, then atomizes the sample, and then raises it to an extremely high temperature to clean the graphite tube. Some requirements for this form of atomization are the ability to maintain a constant temperature during atomization, rapid atomization, holding a large volume of solution, and emitting minimal radiation. Electrothermal atomization is much less harsh than the method of flame atomization.
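By way of illustration only, the stepwise heating of a graphite furnace might be represented as a simple program like the sketch below. The stage names follow the description above (dry, pyrolyze, atomize, clean), but the temperatures, ramps, and hold times are hypothetical ballpark values, not settings for any particular instrument, element, or matrix.

```python
# (stage, target temperature in °C, ramp time in s, hold time in s)
furnace_program = [
    ("dry",       110, 10, 30),   # evaporate the solvent
    ("pyrolyze",  800, 15, 20),   # burn off matrix and impurities
    ("atomize",  2400,  0,  5),   # generate free atoms; absorbance is read here
    ("clean",    2600,  1,  3),   # bake residue out of the graphite tube
]

for stage, temp_c, ramp_s, hold_s in furnace_program:
    print(f"{stage:>8}: ramp {ramp_s:2d} s to {temp_c} °C, hold {hold_s} s")
```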
Radiation source

The radiation source then irradiates the atomized sample. The sample absorbs some of the radiation, and the rest passes through the spectrometer to a detector. Radiation sources can be separated into two broad categories: line sources and continuum sources. Line sources emit the line spectrum of a specific element, which the analyte atoms absorb. Hollow cathode lamps and electrodeless discharge lamps are the most commonly used examples of line sources. Continuum sources, on the other hand, emit radiation that spreads out over a wider range of wavelengths. These sources are typically only used for background correction. Deuterium lamps and halogen lamps are often used for this purpose.

Spectrometer

Spectrometers are used to separate the different wavelengths of light before they pass to the detector. The spectrometer used in AAS can be either single-beam or double-beam. Single-beam spectrometers only require radiation that passes directly through the atomized sample, while double-beam spectrometers (Figure $12$), as implied by the name, require two beams of light: one that passes directly through the sample, and one that does not pass through the sample at all. Single-beam spectrometers have fewer optical components and therefore suffer less radiation loss. Double-beam spectrometers have more optical components, but they are also more stable over time because they can compensate for changes more readily.

Obtaining Measurements

Sample preparation

Sample preparation is extremely varied because of the range of samples that can be analyzed. Regardless of the type of sample, certain considerations should be made. These include the laboratory environment, the vessel holding the sample, storage of the sample, and pretreatment of the sample.

Sample preparation begins with having a clean environment to work in. AAS is often used to measure trace elements, in which case contamination can lead to severe error. Possible equipment includes laminar flow hoods, clean rooms, and closed, clean vessels for transportation of the sample. Not only must the sample be kept clean, it also needs to be conserved in terms of pH, constituents, and any other properties that could alter the contents.

When trace elements are stored, the material of the vessel walls can adsorb some of the analyte, leading to poor results. To correct for this, perfluoroalkoxy polymers (PFA), silica, glassy carbon, and other materials with inert surfaces are often used as the storage material. Acidifying the solution with hydrochloric or nitric acid can also help prevent ions from adhering to the walls of the vessel by competing for the space. The vessels should also have a minimal surface area in order to minimize possible adsorption sites.

Pretreatment of the sample is dependent upon the nature of the sample. See Table $1$ for sample pretreatment methods.
Table $1$: Sample pretreatment methods for AAS.

| Sample | Examples | Pretreatment method |
|---|---|---|
| Aqueous solutions | Water, beverages, urine, blood | Digestion if interference-causing constituents are present |
| Suspensions | Water, beverages, urine, blood | Solid matter must be removed by filtration, centrifugation, or digestion, after which the methods for aqueous solutions can be followed |
| Organic liquids | Fuels, oils | Either direct measurement with AAS or dilution with organic material followed by measurement with AAS; standards must contain the analyte in the same form as the sample |
| Solids | Foodstuffs, rocks | Digestion followed by electrothermal AAS |

Calibration curve

In order to determine the concentration of the analyte in the solution, calibration curves can be employed. Using standards, a plot of concentration versus absorbance can be created. Three common methods used to make calibration curves are the standard calibration technique, the bracketing technique, and the analyte addition technique.

Standard calibration technique

This technique is both the simplest and the most commonly used. The concentration of the sample is found by comparing its absorbance or integrated absorbance to a curve of the concentration of the standards versus the absorbances or integrated absorbances of the standards. In order for this method to be applied, the following conditions must be met:

• Both the standards and the sample must have the same behavior when atomized. If they do not, the matrix of the standards should be altered to match that of the sample.
• The error in measuring the absorbance must be smaller than that of the preparation of the standards.
• The samples must be homogeneous.

The curve is typically linear and involves at least five points from five standards that are at equidistant concentrations from each other (Figure $13$); this ensures that the fit is acceptable. A least-squares calculation is used to fit the line. In most cases, the curve is linear only up to absorbance values of 0.5 to 0.8. The absorbance values of the standards should have the absorbance value of a blank subtracted.

Bracketing Technique

The bracketing technique is a variation of the standard calibration technique. In this method only two standards are necessary, with concentrations $c_1$ and $c_2$ that bracket the approximate value of the sample concentration very closely. Applying Equation \ref{bracketing} determines the value for the sample, where $c_x$ and $A_x$ are the concentration and absorbance of the unknown, and $A_1$ and $A_2$ are the absorbances for $c_1$ and $c_2$, respectively.

$c _ { x } = \frac { \left( A _ { x } - A _ { 1 } \right) \left( c _ { 2 } - c _ { 1 } \right) } { A _ { 2 } - A _ { 1 } } + c _ { 1 } \label{bracketing}$

This method is very useful when the concentration of the analyte in the sample is outside of the linear portion of the calibration curve, because the bracket is so small that the portion of the curve being used can be treated as linear. Although this method can be used accurately for nonlinear curves, the further the curve is from linear, the greater the error will be; to help reduce this error, the standards should bracket the sample very closely.
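Both calibration approaches are straightforward to express in code. The sketch below fits a least-squares line through five hypothetical, blank-corrected standards and also implements Equation \ref{bracketing}; all concentration and absorbance values are invented for illustration.

```python
# Standard calibration technique: least-squares line through the standards.
conc = [1.0, 2.0, 3.0, 4.0, 5.0]        # standard concentrations (arbitrary units)
absb = [0.11, 0.20, 0.31, 0.40, 0.52]   # blank-corrected absorbances (hypothetical)

n = len(conc)
sx, sy = sum(conc), sum(absb)
sxx = sum(x * x for x in conc)
sxy = sum(x * y for x, y in zip(conc, absb))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print((0.27 - intercept) / slope)       # sample concentration from its absorbance

# Bracketing technique, Equation (bracketing): interpolate between two
# standards (c1, A1) and (c2, A2) that closely bracket the unknown.
def bracketing_concentration(A_x, c1, A1, c2, A2):
    return (A_x - A1) * (c2 - c1) / (A2 - A1) + c1

print(bracketing_concentration(A_x=0.42, c1=2.0, A1=0.40, c2=3.0, A2=0.55))  # 2.13
```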
Analyte Addition Technique

The analyte addition technique is often used when the concomitants in the sample are expected to create many interferences and the composition of the sample is unknown. The previous two techniques both require that the standards have a similar matrix to that of the sample, but that is not possible when the matrix is unknown. To compensate for this, the analyte addition technique uses an aliquot of the sample itself as the matrix. The aliquots are then spiked with various amounts of the analyte. This technique must be used only within the linear range of the absorbances.

Measurement Interference

Interference is caused by contaminants within the sample that absorb at the same wavelength as the analyte, and thus can cause inaccurate measurements. Corrections can be made through a variety of methods such as background correction, addition of chemical additives, or addition of analyte. Examples are summarized in Table $2$.

Table $2$: Examples of interference in AAS.

| Interference type | Cause of interference | Result | Example | Correction measures |
|---|---|---|---|---|
| Atomic line overlap | Spectral profiles of two elements are within 0.01 nm of each other | Higher experimental absorption value than the real value | Very rare; the only plausible problem is that of copper (324.754 nm) and europium (324.753 nm) | Typically doesn't occur in practical situations, so there is no established correction method |
| Molecular band and line overlap | Spectral profile of an element overlaps with a molecular band | Higher experimental absorption value than the real value | Calcium hydroxide and barium at 553.6 nm in an air-acetylene flame | Background correction |
| Ionization (vapor-phase or cation enhancement) | Atoms are ionized at the temperature of the flame/furnace, which decreases the number of free atoms | Lower experimental absorption value than the real value | Problems commonly occur with cesium, potassium, and sodium | Add an ionization suppressor (or buffer) to both the sample and the standards |
| Light scattering | Solid particles scatter the beam of light, lowering the intensity of the beam entering the monochromator | Higher experimental absorption value than the real value | High in samples with many refractory elements; highest at UV wavelengths | Matrix modification and/or background correction |
| Chemical | The species being analyzed is bound within a compound in the analyte that is not atomized | Lower experimental absorption value than the real value | Calcium and phosphate ions form calcium phosphate, which is then converted to calcium pyrophosphate, which is stable in high heat | Increase the temperature of the flame if flame AAS is being used, use a releasing chemical, or use standard addition for electrothermal AAS |
| Physical | If the physical properties of the sample and the standards are different, atomization can be affected, thus affecting the free-atom population | Can vary in either direction depending upon the conditions | Viscosity differences, surface tension differences, etc. | Alter the standards to have similar physical properties to the samples |
| Volatilization | In electrothermal atomization, interference will occur if the rate of volatilization is not the same for the sample as for the standard, which is often caused by a heavy matrix | Can vary in either direction depending upon the conditions | Chlorides are very volatile, so they need to be converted to a less volatile form, often by the addition of nitrate or sulfate; zinc and lead are also highly problematic | Change the matrix by standard addition, or selectively volatilize components of the matrix |

Bibliography

• L. Ebon, A. Fisher and S. J. Hill, An Introduction to Analytical Atomic Spectrometry, Ed. E. H. Evans, Wiley, New York (1998).
• B. Welz and M. Sperling, Atomic Absorption Spectrometry, 3rd Ed., Wiley-VCH, New York (1999).
• J. W. Robinson, Atomic Spectroscopy, 2nd Ed., Marcel Dekker, Inc., New York (1996).
• K. S. 
Subramanian, Water Res., 1995, 29, 1827. • M. Sakata and O. Shimoda, Water Res., 1982, 16, 231. • J. C. Van Loon, Analytical Atomic Absorption Spectroscopy Selected Methods, Academic Press, New York (1980).
What is ICP-AES?

Inductively coupled plasma atomic emission spectroscopy (ICP-AES) is a spectral method used to determine very precisely the elemental composition of samples; it can also be used to quantify the elemental concentrations within the sample. ICP-AES uses high-energy plasma from an inert gas like argon to atomize and excite analytes very rapidly. The light that is emitted from the analyte is indicative of the elements present, and the intensity of the spectral signal is indicative of the concentration of those elements. A schematic view of a typical experimental set-up is shown here.

How does ICP-AES work?

ICP-AES works by the emission of photons from analytes that are brought to an excited state by the use of high-energy plasma. The plasma source is induced by passing argon gas through an alternating electric field that is created by an inductively coupled coil. When the analyte is excited, the electrons try to dissipate the induced energy by moving to a ground state of lower energy; in doing this, they emit the excess energy in the form of light. The wavelength of light emitted depends on the energy gap between the excited energy level and the ground state. This is specific to the element, based on the number of electrons the element has and how its electron orbitals are filled. In this way the wavelength of light can be used to determine which elements are present by detection of the light at specific wavelengths.

As a simple example, consider placing a piece of copper wire into the flame of a candle. The flame turns green due to emission from excited electrons within the copper metal: as the electrons try to dissipate the energy incurred from the flame, they move to a more stable state, emitting the excess energy in the form of light. The energy gap between the excited state and the ground state ($ΔE$) dictates the color, or wavelength, of the light, Equation \ref{eq:DeltaE}, where $h$ is Planck's constant ($6.626 \times 10^{-34}~\mathrm{m^2\,kg/s}$) and $\nu$ is the frequency of the emitted light.

$\Delta E = h \nu \label{eq:DeltaE}$

The wavelength of light is indicative of the element present. If another metal, such as iron, is placed in the flame, a different color flame will be emitted, because the electronic structure of iron is different from that of copper. This is a very simple analogy for what is happening in ICP-AES and how it is used to determine which elements are present: by detecting the wavelength of light that is emitted from the analyte, one can deduce which elements are present.

Naturally, if there is a lot of the material present, then there will be an accumulative effect making the intensity of the signal large; if there is very little material present, the signal will be low. By this rationale one can create a calibration curve from analyte solutions of known concentrations, whereby the intensity of the signal changes as a function of the concentration of the material that is present. When measuring the intensity from a sample of unknown concentration, the intensity from this sample can be compared to the calibration curve, and this can be used to determine the concentration of the analytes within the sample.

ICP-AES of Nanoparticles to Determine Elemental Composition

As with any sample being studied by ICP-AES, nanoparticles need to be digested so that all the atoms can be vaporized in the plasma equally.
If a metal-containing nanoparticle were not digested using a strong acid to bring the metal atoms into solution, the form of the particle could hinder some of the material from being vaporized. The analyte would not be detected even though it is present in the sample, and this would give an erroneous result. Nanoparticles are often covered with a protective layer of organic ligands, and this must be removed also. Further to this, the solvent used for the nanoparticles may also be an organic solution, and this should be removed as it will not be miscible in the aqueous medium.

Several organic solvents have high vapor pressures, so it is relatively easy to remove the solvent by gently heating the samples, removing the solvent by evaporation. To remove the organic ligands that are present on the nanoparticle, chloric acid can be used. This is a very strong acid and can break down the organic ligands readily. To digest the particles and get the metal into solution, concentrated nitric acid is often used.

A typical protocol may use 0.5 mL of concentrated nanoparticle solution and digest this with 9.5 mL of concentrated nitric acid over the period of a few days. After this, 0.5 mL of the digested solution is placed in 9.5 mL of nanopure water. The reason why nanopure water is used is that DI water or regular water will have some amount of metal ions present, and these will be detected in the ICP-AES measurement and will lead to figures that are not truly representative of the analyte concentration alone. This is especially pertinent when there is a very low concentration of metal analyte to be detected, and is even more of a problem when the metal to be detected is commonly found in water, such as iron. Once the nanopure water and digested solution are prepared, the sample is ready for analysis.

Another point to consider when doing ICP-AES on nanoparticles to determine chemical composition is the potential for wavelength overlap. The energy that is released in the form of light is unique to each element, but elements that are very similar in atomic structure will have emission wavelengths that are very similar to one another. Consider the example of iron and cobalt: these are both transition metals and sit right beside each other on the periodic table. Iron has an emission wavelength at 238.204 nm, and cobalt has an emission wavelength at 238.892 nm. So if you were to try to determine the amount of each element in an alloy of the two, you would have to select other wavelengths that are unique to each element and do not overlap with those of other analytes in the solution. For this case of iron and cobalt, it would be wiser to use a detection wavelength of 259.940 nm for iron and 228.616 nm for cobalt. Bearing this in mind, a good rule of thumb is to use the wavelength of the analyte that affords the best detection primarily; but if this choice leads to a possible wavelength overlap (within 15 nm) with another analyte in the solution, then another detection wavelength should be chosen to prevent wavelength overlap from occurring.
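A small helper function of the kind sketched below can automate that line-selection rule. The Fe and Co wavelengths are the ones quoted above; the overlap tolerance is a parameter (set to 1 nm here purely for illustration), and the function is our own construction, not part of any instrument software.

```python
LINES_NM = {                     # emission lines quoted in the text above
    "Fe": [238.204, 259.940],
    "Co": [238.892, 228.616],
}

def pick_line(element, others, tol_nm=1.0):
    """Return the first line of `element` with no other-analyte line within tol_nm."""
    other_lines = [w for el in others for w in LINES_NM[el]]
    for w in LINES_NM[element]:
        if all(abs(w - w2) > tol_nm for w2 in other_lines):
            return w
    raise ValueError(f"no interference-free line found for {element}")

print(pick_line("Fe", ["Co"]))   # 259.940 (238.204 is rejected: too close to 238.892)
print(pick_line("Co", ["Fe"]))   # 228.616
```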
Some people have also used the ICP-AES technique to determine the size of nanoparticles. The signal that is detected is determined by the amount of the material that is present in solution. If very dilute solutions of nanoparticles are being analyzed, the particles are effectively analyzed one at a time, i.e., there will be one nanoparticle per droplet in the nebulizer, and the signal intensity will then differ according to the size of the particle. In this way the ICP-AES technique can be used to determine the concentration of the particles in the solution as well as the size of the particles.

Calculations for ICP Concentrations

In order to perform ICP-AES, stock solutions must be prepared in dilute nitric acid solutions. To do this, a concentrated solution should be diluted with nanopure water to prepare a 7 wt% nitric acid solution. If the concentrated solution is 69.8 wt% (check the assay amount that is written on the side of the bottle), then the dilution is calculated as follows:

• The density ($d$) of $\ce{HNO3}$ is 1.42 g/mL
• The molecular weight ($M_W$) of $\ce{HNO3}$ is 63.01 g/mol
• The concentrated percentage is 69.8 wt% from the assay

First you must determine the molarity of the concentrated solution:

$\text { Molarity } = \left[ ( \% ) ( \mathrm { d } ) / \left( \mathrm { M } _ { \mathrm { W } } \right) \right] \times 10 \label{eq:molarity}$

For the present assay amount, the figure is calculated as follows:

$\mathrm { M } = [ ( 69.8 ) ( 1.42 ) / ( 63.01 ) ] \times 10 \nonumber$

$\therefore \mathrm { M } = 15.73 \nonumber$

This is the initial concentration, $C_I$. To determine the molarity of the 7% solution, we again use Equation \ref{eq:molarity} to find the final concentration, $C_F$.

$\mathbf { M } = [ ( 7 ) ( 1.42 ) / ( 63.01 ) ] \times 10 \nonumber$

$\therefore M = 1.58 \nonumber$

We use these figures to determine the amount of dilution required to dilute the concentrated nitric acid to make it a 7% solution:

$\text { volume } _ { 1 } \times \text { concentration } _ { 1 } = \text { volume } _ { \mathrm { F } }\times \text { concentration } _ { \mathrm { F } } \nonumber$

Because we are working with solutions, the amounts are measured as volumes in mL, and the concentrations as molarities; $C_I$ and $C_F$ have been calculated above.

$\mathrm { mL } _ { 1 } \times \mathrm { C } _ { 1 } = \mathrm { mL } _ { \mathrm { F } } \times \mathrm { C } _ { \mathrm { F } } \label{eq:MV}$

$\therefore \mathrm { mL } _ { 1 } = \left[ \mathrm { mL } _ { \mathrm { F } } \times \mathrm { C } _ { \mathrm { F } } \right]/ \mathrm { C } _ { 1 } \nonumber$

The amount of dilute solution needed will depend on how much is required to complete the ICP analysis; for the sake of argument, let's say that we need 100 mL of dilute solution, which is $\mathrm{mL_F}$:

$\mathrm { mL } _ { 1 } = [ 100 \times 1.58 ] / 15.73 \nonumber$

$\therefore \mathrm { mL } _ { 1 } = 10.03~\mathrm { mL } \nonumber$

This means that 10.03 mL of the concentrated nitric acid (69.8%) should be diluted up to a total of 100 mL with nanopure water.
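A quick Python check of the acid-dilution arithmetic above, following this section's simplified convention of applying the concentrated acid's density to both solutions:

```python
def molarity(wt_pct, density_g_ml, mw_g_mol):
    """Equation (molarity): M = [(%)(d)/(MW)] x 10."""
    return wt_pct * density_g_ml / mw_g_mol * 10

c_i = molarity(69.8, 1.42, 63.01)   # 15.73 M concentrated HNO3
c_f = molarity(7.0, 1.42, 63.01)    # 1.58 M target (7 wt%)

final_ml = 100.0                    # mL_F, the volume of 7% solution we need
aliquot_ml = final_ml * c_f / c_i   # mL_1 from Equation (eq:MV)
print(f"dilute {aliquot_ml:.2f} mL of 69.8% acid to {final_ml:g} mL")  # 10.03 mL
```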
Now that you have your stock solution at the correct percentage, you can use it to prepare solutions of varying analyte concentration. Let's take the example where the stock solution that you purchase from a supplier has an analyte concentration of 100 ppm, which is equivalent to 100 μg/mL. In order to make your calibration curve more accurate, it is important to be aware of two issues. Firstly, as with all straight-line graphs, the more points that are used, the better the statistical confidence that the line is correct. Secondly, however, the more measurements that are made, the more room for error is introduced to the system; to avoid such errors, one should be very vigilant and skilled in pipetting and diluting solutions, especially when working with very low concentration solutions, where a small drop of material above or below the exactly required amount can alter the concentration and hence affect the calibration deleteriously.

The premise upon which the calculation is done is based on Equation \ref{eq:MV}, whereby $C$ refers to concentration in ppm and mL refers to volume in mL. The choice of concentrations to make will depend on the samples and the concentration of analyte within the samples being analyzed. For first-time users it is wise to make a calibration curve with a large range to encompass all the possible outcomes; when users are more aware of the kind of concentrations their syntheses produce, they can narrow down the range to fit the concentrations they are anticipating. In this example we will make concentrations ranging from 10 ppm to 0.1 ppm, with a total of five samples.

In a typical ICP-AES analysis about 3 mL of solution is used; however, in situations with substantial wavelength overlap you may choose to do two separate runs, and so you will need approximately 6 mL of solution. In general it is wise to have at least 10 mL of solution to prepare for any eventuality that may occur. There will also be some extra amount needed for the samples that are used for the quality control check. For this reason, 10 mL should be a sufficient amount to prepare of each concentration.

We can define the quantities in the equation as follows:

• $C_I$ = concentration of the concentrated solution (ppm)
• $C_F$ = desired concentration (ppm)
• $M_I$ = initial volume of solution taken (mL)
• $M_F$ = final volume required after dilution (mL)

The methodology adopted works as follows: make the highest-concentration solution, then take from that solution and dilute further to the desired concentrations. Let's say the concentration of the stock solution from the supplier is 100 ppm of analyte. First we dilute to a concentration of 10 ppm: to make 10 mL of 10 ppm solution, take 1 mL of the 100 ppm solution and dilute it up to 10 mL with nanopure water; the concentration of this solution is now 10 ppm. Then we can take from the 10 ppm solution and dilute it down to 5 ppm: take 5 mL of the 10 ppm solution and dilute it to 10 mL with nanopure water, and you will have 10 mL of a 5 ppm solution. You can continue in this fashion, successively taking aliquots from each solution and working your way down in incremental steps, until you have a series of solutions with concentrations ranging from 10 ppm all the way down to 0.1 ppm or lower, as required; a scripted version of this series is sketched below.
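A minimal sketch of that serial dilution, assuming each standard is made up to 10 mL from an aliquot of the previous (parent) solution via $C_1V_1 = C_FV_F$:

```python
stock_ppm = 100.0                         # supplier stock concentration
final_ml = 10.0                           # each standard is made up to 10 mL
targets_ppm = [10.0, 5.0, 1.0, 0.5, 0.1]  # desired calibration series

parent_ppm = stock_ppm
for target in targets_ppm:
    aliquot_ml = target * final_ml / parent_ppm   # C1*V1 = CF*VF
    print(f"take {aliquot_ml:.2f} mL of the {parent_ppm:g} ppm solution, "
          f"dilute to {final_ml:g} mL -> {target:g} ppm")
    parent_ppm = target                   # the next standard is made from this one
```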
ICP-AES at work

While ICP-AES is a useful method for quantifying the presence of a single metal in a given nanoparticle, another very important application comes from the ability to determine the ratio of metals within a sample of nanoparticles. In the following examples we consider bi-metallic nanoparticles of iron with copper. In a typical synthesis, 0.75 mmol of $\ce{Fe(acac)3}$ is used to prepare iron-oxide nanoparticles of the form $\ce{Fe3O4}$. It is possible to replace a quantity of the $\ce{Fe^{n+}}$ ions with another metal of similar charge; in this manner bi-metallic particles can be made with a precursor containing a suitable metal. In this example the additional metal precursor will be $\ce{Cu(acac)2}$, and the total amount of metal is kept at 0.75 mmol. So if we want to see the effect of having 10% of the metal in the reaction as copper, then we use 10% of 0.75 mmol, that is, 0.075 mmol of $\ce{Cu(acac)2}$, and the corresponding amount of iron, 0.675 mmol of $\ce{Fe(acac)3}$. We can do this for successive increments of the metals until we make 100% copper oxide particles. Subsequent $\ce{Fe}$ and $\ce{Cu}$ ICP-AES of the samples allows the determination of the $\ce{Fe:Cu}$ ratio that is present in the nanoparticle. This can be compared to the ratio of $\ce{Fe}$ and $\ce{Cu}$ that was applied as reactants. The graph shows how the percentage of $\ce{Fe}$ in the nanoparticle changes as a function of how much $\ce{Fe}$ is used as a reagent.

Determining Analyte Concentration

Once the nanoparticles are digested and the ICP-AES analysis has been completed, you must turn the figures from the ICP-AES analysis into working numbers to determine the concentration of metals in the solution that was synthesized initially.

Let's first consider nanoparticles of one metal alone. The figure given by the analysis in this case is in units of mg/L, which is the value in ppm. This figure was recorded for the solution that was analyzed, which is of a dilute concentration compared to the initially synthesized solution, because the particles had to be digested in acid first and then diluted further into nanopure water. As mentioned above in the experimental description, 0.5 mL of the synthesized nanoparticle solution was first digested in 9.5 mL of concentrated nitric acid; when the digestion was complete, 0.5 mL of this solution was dissolved in 9.5 mL of nanopure water. This was the final solution that was analyzed using ICP, and the concentration of metal in it is far lower than that of the original solution: each step is a 20-fold dilution (0.5 mL diluted to 10 mL), so the analyte concentration in the final solution being analyzed is 1/400th that of the solution that was originally synthesized.

Calculating Concentration in ppm

Let us take an example where, upon analysis by ICP-AES, the amount of $\ce{Fe}$ detected is 6.38 mg/L. First convert the figure to mg/mL:

$6.38~\mathrm { mg } / \mathrm { L } \times 1 / 1000~\mathrm { L } / \mathrm { mL } = 6.38 \times 10 ^ { - 3 }~\mathrm { mg } / \mathrm { mL } \nonumber$

The analyzed sample had a total volume of 10 mL; therefore we multiply this value by 10 mL to obtain the total mass of iron in the whole container:

$6.38 \times 10 ^ { - 3 }~\mathrm { mg } / \mathrm { mL } \times 10~\mathrm { mL } = 6.38 \times 10 ^ { - 2 }~\mathrm { mg } \nonumber$

This is the total mass of iron that was present in the solution analyzed by the ICP device. This iron came from the 0.5 mL aliquot of the digest that was diluted to 10 mL, so dividing the total mass of iron by 0.5 mL gives the concentration of the digest solution:

$6.38 \times 10 ^ { - 2 }~\mathrm { mg } / 0.5~\mathrm { mL } = 0.1276~\mathrm { mg } / \mathrm { mL } \nonumber$

To express this value in ppm, it should be multiplied by 1000 mL/L, giving 127.6 ppm of $\ce{Fe}$ in the digest.
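The reverse calculation, from the instrument reading back to the batch, reduces to multiplying by the dilution factors; the next subsection completes the same chain by hand. A sketch, assuming the two 20-fold dilutions described above:

```python
measured_mg_per_L = 6.38       # ICP-AES reading for Fe in the analyzed solution
dilution_factors = [20, 20]    # water dilution of the digest, then the digestion itself

conc = measured_mg_per_L
for factor in dilution_factors:
    conc *= factor             # undo each 0.5 mL -> 10 mL dilution

print(f"digest: {measured_mg_per_L * 20:g} mg/L; original batch: {conc:g} mg/L (ppm)")
# digest: 127.6 mg/L; original batch: 2552 mg/L
```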
Determining Concentration of Original Solution

We now need to factor in the dilutions of the original solution: first to digest the metals, and then to dissolve the digest in nanopure water. By diluting 0.5 mL to 10 mL, we are effectively diluting the solution by a factor of 20, and this was carried out twice. The 0.1276 mg/mL calculated above is the concentration of the digest, so multiplying by 20 recovers the concentration of the original batch:

$0.1276~\mathrm { mg } / \mathrm { mL } \times 20 = 2.552~\mathrm { mg } / \mathrm { mL } \nonumber$

To convert this to ppm, we multiply by 1000 mL/L:

$2.552~\mathrm { mg } / \mathrm { mL } \times 1000~\mathrm { mL } / \mathrm { L } = 2552~\mathrm { mg } / \mathrm { L } \nonumber$

This is essentially the answer: 2552 ppm is the $\ce{Fe}$ concentration of the original batch when it was synthesized and made soluble in hexanes. (Equivalently, the measured 6.38 mg/L multiplied by the two 20-fold dilution factors gives 6.38 × 400 = 2552 mg/L.)

Calculating Stoichiometric Ratio

Moving on from calculating the concentration of individual elements, we can now concentrate on the calculation of stoichiometric ratios in the bi-metallic nanoparticles. Consider the case where we have both iron and copper in the nanoparticle. The amounts determined by ICP are:

• Iron = 1.429 mg/L.
• Copper = 1.837 mg/L.

We must account for the molecular weights of each element by dividing the ICP-obtained value by the molecular weight for that particular element. For iron this is calculated by:

$\frac{1.429~\mathrm { mg }/ \mathrm { L }}{ 55.85~\mathrm{g/mol}} = 0.0256 \nonumber$

and this is proportional to the molar amount of iron. The ICP value for copper likewise gives:

$\frac{1.837~\mathrm { mg } / \mathrm { L } }{ 63.55~\mathrm{g/mol}} = 0.0289 \nonumber$

To determine the percentage of iron we use this equation, which gives a value of 47.0% Fe:

$\% \text { Fe } = [ \frac{ \text { molar ratio of iron } }{\text { sum of molar ratios } } ] \times 100 \nonumber$

We work out the copper percentage similarly, which leads to an answer of 53.0% Cu:

$\% \text { Cu} = [ \frac{ \text { molar ratio of copper} }{\text { sum of molar ratios } } ] \times 100 \nonumber$

In this way the percentage of iron in the nanoparticle can be determined as a function of the reagent concentration prior to the synthesis (Figure $2$).
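The same mole-percent arithmetic, scripted. The molecular weights and ICP readings are those of the Fe/Cu example above; dividing a mass concentration in mg/L by a molar mass in g/mol gives mmol/L, and only the ratio matters here.

```python
MW = {"Fe": 55.85, "Cu": 63.55}               # g/mol
reading_mg_per_L = {"Fe": 1.429, "Cu": 1.837}

mmol_per_L = {el: m / MW[el] for el, m in reading_mg_per_L.items()}
total = sum(mmol_per_L.values())
for el, n in mmol_per_L.items():
    print(f"{el}: {100 * n / total:.1f} mol %")   # Fe: 47.0, Cu: 53.0
```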
Determining Concentration of Nanoparticles in Solution

The previous examples have shown how to calculate both the concentration of one analyte and the relative concentrations of metals in the solution. These figures pertain to the concentration of elemental atoms present in solution. To use this to determine the concentration of nanoparticles, we must first consider how many of the detected atoms make up a nanoparticle. Let us consider $\ce{Fe3O4}$ nanoparticles of 7 nm diameter. In a 7 nm particle we expect to find about 20,000 atoms. However, in this analysis we have only detected Fe atoms, so we must still account for the oxygen atoms that also form the crystal lattice. For every 3 Fe atoms there are 4 O atoms, but as iron is slightly larger than oxygen, it makes up for the fact that there is one less Fe atom. This is an oversimplification, but at this time it serves the purpose of making the reader aware of the steps required when estimating nanoparticle concentrations.

Let us consider that half of the nanoparticle volume is attributed to iron atoms and the other half to oxygen atoms. As there are roughly 20,000 atoms in total in a 7 nm particle, and considering the effect of the oxide state, we will say that for every 10,000 atoms of Fe there is one 7 nm particle. So now we must find out how many Fe atoms are present in the sample, so that we can divide by 10,000 to determine how many nanoparticles are present. In the case from above, we found that the solution as synthesized had a concentration of 2552 ppm Fe atoms in solution. To determine how many atoms this equates to, we use the fact that 1 mole of material contains Avogadro's number of atoms:

$2552~\mathrm { ppm } = 2552~\mathrm { mg } / \mathrm { L } = 2.552~\mathrm { g } / \mathrm { L } \nonumber$

One mole of iron weighs 55.847 g. To determine how many moles we have, we divide the values:

$\frac{ 2.552~\mathrm{g / L} }{ 55.847~\mathrm{g/mol} } = 0.04570~\text { mol/L } \nonumber$

The number of atoms is found by multiplying by Avogadro's number ($6.022 \times 10^{23}$):

$( 0.04570~\text { mol/L} ) \times \left( 6.022 \times 10 ^ { 23 } \text { atoms/mol } \right) = 2.75 \times 10 ^ { 22 }~\text { atoms/L } \nonumber$

For every 10,000 atoms we have one nanoparticle (NP) of 7 nm diameter; assuming all the particles are equivalent in size, we can then divide the values. This is the concentration of nanoparticles per liter of solution as synthesized.

$\left( 2.75 \times 10 ^ { 22 } \text { atoms/L } \right) / ( 10,000 \text { atoms/NP} ) = 2.75 \times 10 ^ { 18 }~\mathrm { NP } / \mathrm { L } \nonumber$

Combined Surface Area

One very interesting property of nanoparticles is their very large ratio of surface area to volume: as the particles get smaller and smaller, the surface area becomes more prominent. As much chemistry is done on surfaces, nanoparticles are good contenders for future use where high surface areas are required.

In the example above we considered the particles to have 7 nm diameters. The surface area of such a particle is $1.539 \times 10^{-16}~\mathrm{m^2}$, so the combined surface area of all the particles is found by multiplying the number of particles by this individual surface area:

$\left( 1.539 \times 10 ^ { - 16 }~\mathrm { m } ^ { 2 } \right) \times \left( 2.75 \times 10 ^ { 18 }~\mathrm { NP } / \mathrm { L } \right) = 423~\mathrm { m } ^ { 2 } / \mathrm { L } \nonumber$

To put this into context, an American football field is approximately 5321 m2, so a liter of this nanoparticle solution would have a combined surface area of roughly 8% of a football field. That is a lot of area in one liter of solution when you consider how much material it would take to line that area with a thin layer of metallic iron — remember, there is only about 2.6 g/L of iron in this solution!

Bibliography

• http://www.ivstandards.com/extras/pertable/
• A. Scheffer, C. Engelhard, M. Sperling, and W. Buscher, Anal. Bioanal. Chem., 2008, 390, 249.
• H. Nakamuru, T. Shimizu, M. Uehara, Y. Yamaguchi, and H. Maeda, Mater. Res. Soc., Symp. Proc., 2007, 1056, 11.
• S. Sun and H. Zeng, J. Am. Chem. Soc., 2002, 124, 8204.
• C. A. Crouse and A. R. Barron, J. Mater. Chem., 2008, 18, 4146.
Inductively coupled plasma mass spectroscopy (ICP-MS) is an analytical technique for determining trace multi-elemental and isotopic concentrations in liquid, solid, or gaseous samples. It combines an ion-generating argon plasma source with the sensitive detection limits of mass spectrometry. Although ICP-MS is used for many different types of elemental analysis, including pharmaceutical testing and reagent manufacturing, this module will focus on its applications in mineral and water studies. Although akin to ICP-AES (inductively coupled plasma atomic emission spectroscopy), ICP-MS has significant differences, which will be mentioned as well.

Basic Instrumentation and Operation

As shown in Figure \(1\), an ICP-MS instrument consists of several basic components: a peristaltic pump leading to a nebulizer, a spray chamber, a plasma torch, a sampling interface, an ion-focusing system, a mass-separation device, an ion detector, and a vacuum chamber maintained by turbomolecular pumps. The basic operation works as follows: a liquid sample is pumped into the nebulizer to convert the sample into a spray. An internal standard, such as germanium, is pumped into a mixer along with the sample prior to nebulization to compensate for matrix effects. Large droplets are filtered out, and small droplets continue into the plasma torch, where they are turned into ions. The mass-separation device separates these ions based on their mass-to-charge ratio. An ion detector then converts these ions into an electrical signal, which is multiplied and read by computer software.

The main difference between ICP-MS and ICP-AES is the way in which the ions are generated and detected. In ICP-AES, the atoms are excited by vertical plasma, emitting photons that are separated on the basis of their emission wavelengths. As implied by the name, ICP-MS instead separates the ions, generated by horizontal plasma, on the basis of their mass-to-charge ratios (m/z). In fact, caution is taken to prevent photons from reaching the detector and creating background noise. The difference in ion formation and detection methods has a significant impact on the relative sensitivities of the two techniques. While both methods are capable of very fast, high-throughput multi-elemental analysis (~10 - 40 elements per minute per sample), ICP-MS has a detection limit of a few ppt to a few hundred ppm, compared to the ppb-ppm range (~1 ppb - 100 ppm) of ICP-AES. ICP-MS also works over eight orders of magnitude of detection level, compared to ICP-AES' six. As a result of its higher sensitivity and capability, ICP-MS is a more expensive system. One other important difference is that only ICP-MS can distinguish between different isotopes of an element, as it segregates ions based on mass. A comparison of the two techniques is summarized in Table \(1\).

Table \(1\): Comparison of ICP-MS and ICP-AES.

| | ICP-MS | ICP-AES |
|---|---|---|
| Plasma | Horizontal: generates cations | Vertical: excites atoms, which emit photons |
| Ion detection | Mass-to-charge ratio | Wavelength of emitted light |
| Detection limit | 1-10 ppt | 1-10 ppb |
| Working range | 8 orders of magnitude | 6 orders of magnitude |
| Throughput | 20-30 elements per minute | 10-40 elements per minute |
| Isotope detection | Yes | No |
| Cost | ~\$150,000 | ~\$50,000 |
| Multi-element detection | Yes | Yes |
| Spectral interferences | Predictable, less than 300 | Much greater in number and more complicated to correct |
| Routine accessories | Electrothermal vaporization, laser ablation, high-performance liquid chromatography, etc. | Rare |
Sample Preparation

With such small sample sizes, care must be taken to ensure that collected samples are representative of the bulk material. This is especially relevant for rocks and minerals, which can vary widely in elemental content from region to region. Random, composite, and integrated sampling are each different approaches for obtaining representative samples.

Because ICP-MS can detect elements at concentrations as minute as a few nanograms per liter (parts per trillion), contamination is a very serious issue associated with collecting and storing samples prior to measurement. In general, the use of glassware should be minimized, due to leaching of impurities from the glass or absorption of analyte by the glass. If glass is used, it should be washed periodically with a strong oxidizing agent, such as chromic acid (\(\ce{H2Cr2O7}\)), or a commercial glass detergent. In terms of sample containers, plastic is usually better than glass, with polytetrafluoroethylene (PTFE) and Teflon® being regarded as the cleanest plastics. However, even these materials can contain leachable contaminants, such as phosphorus or barium compounds. All containers, pipettes, pipette tips, and the like should be soaked in 1 - 2% \(\ce{HNO3}\). Nitric acid is preferred over \(\ce{HCl}\), which can ionize in the plasma to form \(\ce{^{35}Cl^{16}O+}\) and \(\ce{^{40}Ar^{35}Cl+}\), which have the same mass-to-charge ratios as \(\ce{^{51}V+}\) and \(\ce{^{75}As+}\), respectively. If possible, samples should be prepared as close as possible to the ICP-MS instrument without being in the same room.

With the exception of solid samples analyzed by laser ablation ICP-MS, samples must be in liquid or solution form. Solids are ground into a fine powder with a mortar and pestle and passed through a mesh sieve. Often the first sample is discarded to prevent contamination from the mortar or sieve. Powders are then digested with ultrapure concentrated acids or oxidizing agents, like chloric acid (\(\ce{HClO3}\)), and diluted to the correct order of magnitude with 1 - 2% trace-metal-grade nitric acid.

Once in liquid or solution form, the samples must be diluted with 1 - 2% ultrapure \(\ce{HNO3}\) to a concentration low enough to produce a signal intensity below about \(10^6\) counts. Not all elements have the same concentration-to-intensity correlation; therefore, it is safer to test unfamiliar samples on ICP-AES first. Once properly diluted, the sample should be filtered through a 0.25 - 0.45 μm membrane to remove particulates.

Gaseous samples can also be analyzed by direct injection into the instrument. Alternatively, gas chromatography equipment can be coupled to an ICP-MS machine for separation of multiple gases prior to sample introduction.

Standards

Multi- and single-element standards can be purchased commercially and must be diluted further with 1 - 2% nitric acid to prepare different concentrations for the instrument to create a calibration curve, which will be read by the computer software to determine the unknown concentration of the sample. There should be several standards, encompassing the expected concentration of the sample. Completely unknown samples should be tested on less sensitive instruments, such as ICP-AES or EDXRF (energy-dispersive X-ray fluorescence), before ICP-MS.

Limitations of ICP-MS

While ICP-MS is a powerful technique, users should be aware of its limitations. Firstly, the intensity of the signal varies with each isotope, and there is a large group of elements that cannot be detected by ICP-MS.
This consists of H, He and most gaseous elements, C, and elements without naturally occurring isotopes, including most actinides.

There are many different kinds of interferences that can occur with ICP-MS when plasma-formed species have the same mass as the ionized analyte species. These interferences are predictable and can be corrected with element correction equations or by evaluating isotopes with lower natural abundances. Using a mixed gas with the argon source can also alleviate the interference.

The accuracy of ICP-MS is highly dependent on the user's skill and technique. Standard and sample preparations require utmost care to prevent incorrect calibration curves and contamination. As exemplified below, a thorough understanding of chemistry is necessary to predict conflicting species that can be formed in the plasma and produce false positives. While an inexperienced user may be able to obtain results fairly easily, those results may not be trustworthy. Spectral interference and matrix effects are problems that the user must work diligently to correct.

Applications: Analysis of Mineral and Water Samples

In order to illustrate the capabilities of ICP-MS, various geochemical applications are described below. The chosen examples are representative of the types of studies that rely heavily on ICP-MS, highlighting its unique capabilities.

Trace Elemental Analysis of Minerals

With its high throughput, ICP-MS has made sensitive multi-element analysis of rock and mineral samples feasible. Studies of trace components in rock can reveal information about the chemical evolution of the mantle and crust. For example, spinel peridotite xenoliths (Figure \(2\)), which are igneous rock fragments derived from the mantle, were analyzed for 27 elements, including lithium, scandium, and titanium at the parts per million level, and yttrium, lutetium, tantalum, and hafnium at parts per billion. X-ray fluorescence was used to complement ICP-MS, detecting metals in bulk concentrations. Both liquid and solid samples were analyzed, the latter being performed using laser-ablation ICP-MS, which highlights the flexibility of the technique for being used in tandem with other methods. In order to prepare the solution samples, optically pure minerals were sonicated in 3 M \(\ce{HCl}\), then 5% \(\ce{HF}\), then 3 M \(\ce{HCl}\) again, and dissolved in distilled water. The solid samples were converted into plasma by laser ablation prior to injection into the nebulizer of the LA-ICP-MS instrument. The results showed good agreement between the laser ablation and solution methods. Furthermore, this comprehensive study shed light on the partitioning behavior of incompatible elements, which, due to their size and charge, have difficulty entering cation sites in minerals. In the upper mantle, incompatible trace elements, especially barium, niobium, and tantalum, were found to reside in glass pockets within the peridotite samples.

Trace Elemental Analysis of Water

Another important area of geology that requires knowledge of trace elemental compositions is water analysis. In order to demonstrate the full capability of ICP-MS as an analytical technique in this field, researchers aim to use the identification of trace metals present in groundwater to determine a fingerprint for a particular water source. In one study, the analysis of four different Nevada springs quantified trace metals at parts per billion and even parts per trillion (ng/L) levels.
Because the rare earth elements lutetium, thulium, and terbium were present at such low concentrations, those samples were preconcentrated on a cation exchange column to enable detection at 0.05 ppt. For some isotopes, special corrections were necessary to account for false positives, which are produced by plasma-formed molecules with the same mass-to-charge ratio as the isotopic ions. For instance, false positives for Sc (m/z = 45) or Ti (m/z = 47) could result from \(\ce{CO2H+}\) (m/z = 45) or \(\ce{PO+}\) (m/z = 47), and \(\ce{BaO+}\) (m/z = 151, 153) conflicts with Eu-151 and Eu-153. In the latter case, barium has many isotopes (134, 135, 136, 137, 138) in various abundances, with Ba-138 comprising 71.7% of natural barium, and ICP-MS detects peaks corresponding to \(\ce{BaO+}\) for all of them. The researchers were therefore able to approximate a more accurate europium concentration by monitoring a non-interfering barium peak, extrapolating back to the barium concentration in the system, and subtracting out the corresponding \(\ce{BaO+}\) contribution. By employing such strategies, false positives could be taken into account and corrected. Additionally, a 10 ppb internal standard was added to all samples to correct for changes in sample matrix, viscosity, and salt buildup throughout collection. In total, 54 elements were detected at levels spanning seven orders of magnitude. This study demonstrates the remarkable sensitivity and working range of ICP-MS.

Determination of Arsenic Content

Elemental analysis in water is also important for the health of aquatic species, which can ultimately affect the entire food chain, including people. With this in mind, arsenic content was determined in fresh water and aquatic organisms of the Hayakawa River in Kanagawa, Japan, which has very high arsenic concentrations due to its hot spring source in Owakudani Valley. While water samples were simply filtered prior to analysis, organisms required special preparation in order to be compatible with the sampler. Organisms collected for this study included water bugs, green macroalgae, fish, and crustaceans. For total As content determination, the samples were freeze-dried to remove all water, so that the exact final volume upon resuspension would be known. Next, the samples were ground into a powder, soaked in nitric acid, and heated at 110 °C. The samples then underwent heating with hydrogen peroxide, dilution, and filtering through a 0.45 μm membrane. This protocol served to oxidize the entire sample and remove large particles prior to introduction into the ICP-MS instrument. Samples that are not properly digested can build up on the plasma torch and cause expensive damage to the instrument. Since the plasma converts the sample into its ionic constituents, it is unnecessary to know the exact oxidation products prior to sample introduction.

In addition to total As content, the concentrations of different organic arsenic-containing compounds (arsenicals) produced in the organisms were measured by high performance liquid chromatography coupled to ICP-MS (HPLC/ICP-MS). The arsenicals were separated by HPLC before travelling into the ICP-MS instrument for As concentration determination. For this experiment, the organic compounds were extracted from the biological samples by dissolving the freeze-dried samples in methanol/water solutions, sonicating, and centrifuging. The extracts were dried under vacuum, redissolved in water, and filtered prior to loading.
This did not account for all compounds, however, because over 50% of the arsenicals were insoluble in aqueous solution. One important plasma side product to account for was \(\ce{ArCl+}\), which has the same mass-to-charge ratio (m/z = 75) as As. This was corrected by oxidizing the arsenic ions within the mass separation device in the ICP-MS vacuum chamber to generate \(\ce{AsO+}\), with m/z = 91. The total arsenic concentration of the samples ranged from 17 to 18 ppm.

Bibliography

• R. Thomas, Practical Guide to ICP-MS: A Tutorial for Beginners, CRC Press, Boca Raton, 2nd edn. (2008).
• K. J. Stetzenbach, M. Amano, D. K. Kreamer, and V. F. Hodge, Ground Water, 1994, 32, 976.
• S. M. Eggins, R. L. Rudnick, and W. F. McDonough, Earth Planet. Sci. Lett., 1998, 154, 53.
• S. Miyashita, M. Shimoya, Y. Kamidate, T. Kuroiwa, O. Shikino, S. Fujiwara, K. A. Francesconi, and T. Kaise, Chemosphere, 2009, 75, 1065.
Introduction

An ion selective electrode (ISE) is an analytical device used to determine the activity of ions in aqueous solution by measuring an electrical potential. The ISE has many advantages compared to other techniques, including:
1. It is relatively inexpensive and easy to operate.
2. It has a wide concentration measurement range.
3. Because it measures activity rather than concentration, it is particularly useful in biological and medical applications.
4. It provides real-time measurements, meaning it can monitor changes in ion activity over time.
5. It can determine both positively and negatively charged ions.

Given these advantages, the ISE has a wide variety of applications, which is reasonable considering the importance of measuring ion activity. For example, ISEs are used for pollution monitoring in natural waters (CN-, F-, S2-, Cl-, etc.), in food processing (NO3- and NO2- in meat preservatives), for Ca2+ in dairy products, and for K+ in fruit juices.

Measurement Setup

Before focusing on how an ISE works, it is helpful to see what an ISE setup looks like and what the components of the instrument are. Figure $1$ shows the basic components of an ISE setup. There is an ion selective electrode, which allows the measured ion to pass but excludes the passage of other ions. Within this ion selective electrode is an internal reference electrode, made of a silver wire coated with solid silver chloride and embedded in a concentrated potassium chloride solution (the filling solution) saturated with silver chloride. This solution also contains the same ion as the one to be measured. There is also a reference electrode similar to the ion selective electrode, but its internal electrolyte contains none of the ion to be measured, and the selective membrane is replaced by a porous frit, which allows the slow passage of the internal filling solution and forms a liquid junction with the external test solution. The ion selective electrode and the reference electrode are connected by a millivoltmeter. A measurement is made simply by immersing the two electrodes in the same test solution.

Theory of How an ISE Works

A solution commonly contains more than one type of ion. How, then, does an ISE measure the concentration of a particular ion without being affected by the others? This is achieved by the selective membrane of the ion selective electrode, which only allows the desired ion to pass in and out. At equilibrium, a potential difference exists between the two sides of the membrane, governed by the concentration of the test solution as described by the Nernst equation

$E = E ^ { 0 } + ( 2.303~RT / nF ) \log C \label{eq:nernst}$

where E is the measured potential, E0 is a constant characteristic of a particular ISE, R is the gas constant (8.314 J/K·mol), T is the temperature (in K), n is the charge of the ion, and F is the Faraday constant (96,485 C/mol). In practical terms, the measured potential difference is proportional to the logarithm of the ion concentration. Thus, the relationship between potential and ion concentration can be calibrated by measuring the potentials of solutions of known ion concentration and plotting the measured potential against the logarithm of concentration. The ion concentration of an unknown solution can then be found by measuring its potential and reading the concentration off this calibration plot, as the sketch below illustrates.
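As a concrete illustration of this calibration arithmetic, the short Python sketch below fits the linear E versus log C relationship for a set of standards and then inverts the fit for an unknown. It is only a sketch: the concentrations and potentials are invented for illustration (chosen to give a slope near the theoretical -59 mV per decade) and are not data from this section.

```python
import numpy as np

# Hypothetical calibration standards: concentrations (mg/L) and the
# potentials (mV) read from the millivoltmeter for each standard.
conc = np.array([1.0, 10.0, 100.0])
potential = np.array([89.0, 30.0, -29.0])   # invented, ~ -59 mV per decade

# Fit E = K + S*log10(C); np.polyfit returns [slope, intercept].
S, K = np.polyfit(np.log10(conc), potential, 1)

# Invert the calibration for an unknown solution's measured potential.
E_unknown = 50.0                            # mV, hypothetical reading
C_unknown = 10 ** ((E_unknown - K) / S)

print(f"slope = {S:.1f} mV/decade, unknown C = {C_unknown:.2f} mg/L")
```

In practice the commercial instrument's software performs exactly this fit-and-invert step, usually with many more standards bracketing the expected concentration.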
Example Application: Determination of Fluoride Ion

Fluoride is added to drinking water and toothpaste to prevent dental caries, so the determination of its concentration is of great importance to human health. Here we present some data and calculations to show how the concentration of fluoride ion is determined, and to glimpse how relevant the ISE is to our daily lives. According to the Nernst equation (Equation \ref{eq:nernst}), in this case n = 1, T = 25 °C, and E0, R, and F are constants, so the equation can be simplified to

$E= K+S\log C \nonumber$

where, for a singly charged anion such as fluoride, the slope S is negative (2.303RT/F = 59.2 mV at 298 K). The first step is to obtain a calibration curve for fluoride ion, which can be done by preparing several fluoride standard solutions of known concentration and plotting E versus log C.

Table $1$: Measurement results. Data from http://zimmer.csufresno.edu/~davidz/...uorideISE.html.

Concentration (mg/L) | log C | E (mV)
200.0 | 2.301 | -35.6
100.0 | 2.000 | -17.8
50.00 | 1.699 | 0.4
25.00 | 1.398 | 16.8
12.50 | 1.097 | 34.9
6.250 | 0.796 | 52.8
3.125 | 0.495 | 70.4
1.563 | 0.194 | 89.3
0.781 | -0.107 | 107.1
0.391 | -0.408 | 125.5
0.195 | -0.709 | 142.9

From the plot we can clearly identify a linear relationship between E and log C, with a measured slope of -59.4 mV per decade, very close to the theoretical value of -59.2 mV per decade at 25 °C. Using this plot, the concentration of any solution containing between 0.195 mg/L and 200 mg/L of fluoride ion can be determined by measuring the potential of the unknown solution.

Limitations of ISE

Though the ISE is a cost-effective and useful technique, it has some unavoidable drawbacks. Ideally, the ion selective membrane allows only the measured ion to pass, so that the potential is determined solely by that ion. In truth, no membrane permits the passage of only one ion, so there are cases where more than one ion can cross the membrane. As a result, the measured potential is affected by the passage of these "unwanted" ions. Also, because it depends on an ion selective membrane, one ISE is suitable for only one ion, which can be inconvenient. Another problem worth noting is that an ISE measures the concentration of ions in equilibrium at the surface of the membrane. This does not matter much in dilute solution, but at higher concentrations the inter-ionic interactions between the ions in solution tend to decrease their mobility, so the concentration near the membrane is lower than in the bulk. This is one source of inaccuracy of the ISE. To analyze ISE results properly, we have to be aware of these inherent limitations.

Bibliography

• D. S. Papastathopoulos and M. I. Karayannis, J. Chem. Educ., 1980, 57, 904.
• J. E. O'Reilly, J. Chem. Educ., 1979, 56, 279.
• F. Scholz, Electroanalytical Methods: Guide to Experiments and Application, 2nd edn., Springer, Berlin (2010).
• R. Greef, R. Peat, L. M. Peter, D. Pletcher, and J. Robinson, Instrumental Methods in Electrochemistry, Ellis Horwood, Chichester (1985).
X-ray absorption spectroscopy (XAS) is a technique that uses synchrotron radiation to provide information about the electronic, structural, and magnetic properties of certain elements in materials. This information is obtained when X-rays are absorbed by an atom at energies near and above the core level binding energies of that atom. Therefore, a brief description of X-rays, synchrotron radiation, and X-ray absorption is provided prior to a description of sample preparation for powdered materials.

X-rays and Synchrotron Radiation

X-rays were discovered by Wilhelm Röntgen in 1895 (figure $1$). They are a form of electromagnetic radiation, like visible light but with a very short wavelength, around 0.25 - 25 Å. As electromagnetic radiation, X-rays have a specific energy, and the characteristic range is divided into soft versus hard X-rays. Soft X-rays cover the range from hundreds of eV to a few keV, while hard X-rays range from a few keV up to around 100 keV. X-rays are commonly produced by X-ray tubes, in which high-speed electrons strike a metal target. The electrons are accelerated by a high voltage toward the metal target; X-rays are produced when the electrons collide with the atoms of the metal target and are abruptly decelerated.

Synchrotron radiation is generated when charged particles moving at very high velocities are deflected along a curved trajectory by a magnetic field. The charged particles are first accelerated by a linear accelerator (LINAC) (figure $2$); then they are accelerated in a booster ring that injects the particles, moving almost at the speed of light, into the storage ring. There, the particles are accelerated toward the center of the ring each time their trajectory is changed, so that they travel in a closed loop. X-rays with a broad spectrum of energies are generated and emitted tangential to the storage ring. Beamlines are placed tangential to the storage ring to use the intense X-ray beams, and the wavelength can be selected by varying the setup of each beamline. These sources are well suited for XAS measurements because the X-ray energies produced span 1000 eV or more, as needed for an XAS spectrum.

X-ray Absorption

Light is absorbed by matter through the photoelectric effect, which is observed when an X-ray photon is absorbed by an electron in a strongly bound core level (such as the 1s or 2p level) of an atom (figure $3$). For a particular electronic core level to participate in the absorption, the binding energy of this core level must be less than the energy of the incident X-ray. If the binding energy is greater than the energy of the X-ray, the bound electron will not be perturbed and will not absorb the X-ray. If the binding energy of the electron is less than that of the X-ray, the electron may be removed from its quantum level. In this case, the X-ray is absorbed and any energy in excess of the electronic binding energy is given as kinetic energy to a photo-electron that is ejected from the atom. When X-ray absorption is discussed, the primary concern is the absorption coefficient, µ, which gives the probability that X-rays will be absorbed according to Beer's Law:
$I = I _ { 0 } e ^ { - \mu t } \label{eq:BeerLambert}$

where I0 is the X-ray intensity incident on the sample, t is the sample thickness, and I is the intensity transmitted through the sample. The absorption coefficient, µE, is a smooth function of energy, with a value that depends on the sample density ρ, the atomic number Z, the atomic mass A, and the X-ray energy E, roughly as

$\mu _ { E } \approx \frac { \rho Z ^ { 4 } } { A E ^ { 3 } } \nonumber$

When the incident X-ray has an energy equal to the binding energy of a core-level electron, there is a sharp rise in absorption: an absorption edge corresponding to the promotion of the core electron to the continuum. For XAS, the main concern is the magnitude of µ as a function of energy, near and at energies just above these absorption edges. An XAS measurement is simply a measure of the energy dependence of µ at and above the binding energy of a known core level of a known atomic species. Since every atom has core-level electrons with well-defined binding energies, the element to probe can be selected by tuning the X-ray energy to an appropriate absorption edge. These absorption edge energies are well known. Because the element of interest is chosen in the experiment, XAS is element-specific.

X-ray Absorption Fine Structure

X-ray absorption fine structure (XAFS) spectroscopy, also called X-ray absorption spectroscopy, can be applied across a wide variety of disciplines because the measurements can be performed on solids, gases, or liquids, including moist or dry soils, glasses, films, membranes, suspensions or pastes, and aqueous solutions. Despite this broad adaptability, some samples limit the quality of an XAFS spectrum. For this reason, sample requirements and sample preparation are reviewed in this section, along with the experiment design; both are vital factors in the collection of good data for further analysis.

Experiment Design

The main information obtained from XAFS spectra consists of small changes in the absorption coefficient µ(E), which can be measured directly in transmission mode or indirectly in fluorescence mode. A good signal-to-noise ratio is therefore required (better than \(10^3\)). To obtain this, an intense beam is needed (on the order of \(10^{10}\) photons/second or better) with an energy bandwidth of 1 eV or less, together with the capability of scanning the energy of the incident beam over a range of about 1 keV above the edge within seconds to a few minutes. As a result, synchrotron radiation is preferred over the other kinds of X-ray sources mentioned previously.

Beamline Setup

Although the setup of a synchrotron beamline is mostly done with the assistance of specialist beamline scientists, it is useful to understand the system behind the measurement. The main components of a XAFS beamline, as shown in the figure below, are as follows:
• A harmonic rejection mirror to reduce the harmonic content of the X-ray beam.
• A monochromator to choose the X-ray energy.
• A series of slits that define the X-ray profile.
• A sample positioning stage.
• The detectors, which can be a single ionization detector or a group of detectors, to measure the X-ray intensity.

Slits are used to define the X-ray beam profile and to block unwanted X-rays. They can be used to increase the energy resolution of the X-rays incident on the sample, at the expense of some loss in X-ray intensity. Slits are either fixed or adjustable. Fixed slits have a pre-cut opening with a height between 0.2 and 1.0 mm and a width of a few centimeters.
Adjustable slits use metal plates that move independently to define each edge of the X-ray beam.

Monochromator

The monochromator is used to select the X-ray energy incident on the sample. There are two main kinds of X-ray monochromators:
1. The double-crystal monochromator, which consists of two parallel crystals.
2. The channel-cut monochromator, which is a single crystal with a slot cut nearly through it.

Most monochromator crystals are made of silicon or germanium and are cut and polished such that a particular atomic plane of the crystal is parallel to the surface of the crystal, as in Si(111), Si(311), or Ge(111). The energy of the X-rays diffracted by the crystal is controlled by rotating the crystals in the white beam.

Harmonic Rejection Mirrors

The harmonic X-ray intensity needs to be reduced, as these X-rays will adversely affect the XAS measurement. A common method for removing harmonic X-rays is a harmonic rejection mirror. This mirror is usually made of Si for low energies, Rh for X-ray energies below the Rh absorption edge at 23 keV, or Pt for higher X-ray energies. The mirror is placed at a grazing angle in the beam such that X-rays with the fundamental energy are reflected toward the sample, while the harmonic X-rays are not.

Detectors

Most X-ray absorption measurements use ionization detectors. These contain two parallel plates separated by a gas-filled space through which the X-rays travel. Some of the X-rays ionize the gas particles. A voltage bias applied to the parallel plates separates the gas ions, creating a current. The applied voltage should give a linear detector response for a given change in the incident X-ray intensity. There are also other kinds of detectors, such as fluorescence and electron yield detectors.

Transmission and Fluorescence Modes

X-ray absorption measurements can be performed in several modes: transmission, fluorescence, and electron yield, of which the first two are the most common. The choice of the most appropriate mode for an experiment is a crucial decision. Transmission mode is the most widely used because it only requires measuring the X-ray flux before and after the beam passes through the sample. The absorption coefficient is then obtained from

$\mu _ { E } t = \ln \left( \frac { I _ { 0 } } { I } \right) \nonumber$

Transmission experiments are standard for hard X-rays; soft X-rays would require samples thinner than 1 μm. This mode should also be used for concentrated samples. The sample should have the right thickness and be uniform and free of pinholes.

Fluorescence mode measures the incident flux I0 and the fluorescence X-rays If emitted following the X-ray absorption event. Usually the fluorescence detector is placed at 90° to the incident beam in the horizontal plane, with the sample at an angle, commonly 45°, with respect to the beam, because in that position no interference is generated by the initial X-ray flux (I0). Fluorescence mode is preferred for thicker samples or lower concentrations, even ppm concentrations or lower. For a highly concentrated sample, the fluorescence X-rays are reabsorbed by the absorber atoms in the sample, attenuating the fluorescence signal; this effect is called self-absorption and is one of the most important concerns in the use of this mode.

Sample Preparation for XAS

Sample Requirements

Uniformity

The samples should have a uniform distribution of the absorber atom and the correct absorption for the measurement.
The X-ray beam typically probes a millimeter-sized portion of the sample. This volume should be representative of the entire sample.

Thickness

For transmission-mode samples, the thickness of the sample is very important. The sample should have a thickness, t, such that the total absorption is less than 2.5 absorption lengths, µEt ≈ 2.5, and the partial absorption due to the absorber atoms is around one absorption length, ∆µEt ≈ 1, which corresponds to the edge step. The thickness giving ∆µEt = 1 is

$t = \frac { 1 } { \Delta \mu } = \frac { 1.66 \sum _ { i } n _ { i } M _ { i } } { \rho \sum _ { i } n _ { i } \left[ \sigma _ { i } \left( E _ { + } \right) - \sigma _ { i } \left( E _ { - } \right) \right] } \nonumber$

where ρ is the compound density, ni is the elemental stoichiometry, Mi is the atomic mass, σi(E) is the absorption cross-section in barns/atom (1 barn = \(10^{-24}\) cm2) tabulated in the McMaster tables, and E+ and E- are energies just above and just below the absorption edge. This calculation can be performed with the freely downloadable software HEPHAESTUS.

Total X-ray Absorption

For dilute samples, the total X-ray absorption of the sample is the most important quantity. It is expressed through the area concentration of the sample ($ρt$, in g/cm2). The area concentration multiplied by the difference in the mass absorption coefficient ($∆µE/ρ$) gives the edge step, and the desired value for a good measurement is an edge step equal to one, $(∆µE/ρ)ρt ≈ 1$. The difference in the mass absorption coefficient is given by

$\left( \frac { \Delta \mu _ { E } } { \rho } \right) = \sum f _ { i } \left[ \left( \frac { \mu _ { E } } { \rho } \right) _ { i , ( E_+ ) } - \left( \frac { \mu _ { E } } { \rho } \right) _ { i , \left( E _{ - } \right) } \right] \nonumber$

where $(µE/ρ)_i$ is the mass absorption coefficient of element i just above ($E_+$) and just below ($E_-$) the edge energy, and $f_i$ is the mass fraction of element i. Multiplying the area concentration, $ρt$, by the cross-sectional area of the sample holder gives the amount of sample needed.

Sample Preparation

As described in the last section, dilute solid samples can be prepared on large substrates, while concentrated solid samples have to be prepared as thin films. Both methods are described below. Liquid and gas samples can also be measured, but the preparation of those kinds of samples is not discussed here because it depends on the specific requirements of each sample; several designs can be used, as long as they prevent the escape of the sample and the container material does not absorb radiation at the energies used for the measurement.

Method 1

1. The materials needed are shown in this figure: Kapton tape and film, a thin spatula, tweezers, scissors, weigh paper, mortar and pestle, and a sample holder. The sample holder can be made of several materials, such as polypropylene, polycarbonate, or Teflon.
2. Two small squares of Kapton film are cut. One of them is placed over the hole of the sample holder, as shown in figure $6$a. A piece of Kapton tape is placed onto the sample holder, minimizing any air bubbles on the surface and keeping the film in place (figure $6$b). One side of the sample holder is now sealed so that the hole can be filled (figure $7$).
3. Before filling the sample holder, make sure the sample is a fine powder; use the mortar and pestle to grind it.
4. Fill the hole with the powder, making sure there is extra powder above the hole (figure $9$a).
Press the powder down with the spatula; the sample must be as compact as possible (figure $9$b).
5. Clean the surface of the holder and repeat step 2. The sample loaded in the sample holder should look like the picture below.

Method 2

1. The materials needed are shown in the photo: Kapton tape, tweezers, scissors, weigh paper, mortar and pestle, tape, and aluminum foil.
2. Aluminum foil is placed as the work-area base. Kapton tape is placed from one corner to the opposite one, as shown in figure $12$, and tape is put on the ends to fix it. Yellow tape was used here to show where the tape should be placed, but it is better to use Scotch invisible tape for the following steps.
3. The weigh paper is placed under one end of the Kapton tape, and sample is added onto that end of the tape. The weigh paper allows the extra sample to be recovered afterwards.
4. With one finger, the sample is spread along the Kapton tape, always in the same direction, taking care that the weigh paper stays under the area of tape being used (figure $14$a). Slide the finger several times, applying pressure, to obtain a homogeneous film with complete coverage (figure $14$b).
5. The final sample-covered Kapton tape should look like figure $15$. Cut the ends to allow further manipulation of the film.
6. Using the tweezers, fold the film, taking care that it is well aligned and that the fold is completely flat. Figure $16$a shows the first fold, generating a 2-layer film; figure $16$b and figure $16$c show the second and third folds, giving 4- and 8-layer films. Sometimes a 4-layer film is good enough, and you can always fold again to obtain a larger signal intensity.

Bibliography

• B. D. Cullity and S. R. Stock, Elements of X-ray Diffraction, Prentice Hall, Upper Saddle River (2001).
• F. Hippert, E. Geissler, J. L. Hodeau, E. Lelièvre-Berna, and J. R. Regnard, Neutron and X-ray Spectroscopy, Springer, Dordrecht (2006).
• G. Bunker, Introduction to XAFS: A Practical Guide to X-ray Absorption Fine Structure Spectroscopy, Cambridge University Press, Cambridge (2010).
• S. D. Kelly, D. Hesterberg, and B. Ravel, in Methods of Soil Analysis: Part 5, Mineralogical Methods, ed. A. L. Urely and R. Drees, Soil Science Society of America Book Series, Madison (2008).
Introduction

Neutron activation analysis (NAA) is a non-destructive analytical method commonly used to determine the identities and concentrations of elements within a variety of materials. Unlike many other analytical techniques, NAA is based on nuclear rather than electronic transitions. In NAA, samples are subjected to neutron radiation (i.e., bombarded with neutrons), which causes the elements in the sample to capture free neutrons and form radioactive isotopes, such as

$^{59}_{27}\ce{Co} + ^1_0 n \rightarrow ^{60}_{27}\ce{Co} \nonumber$

The excited isotope undergoes nuclear decay and loses energy by emitting a series of particles that can include neutrons, protons, alpha particles, beta particles, and high-energy gamma ray photons. Each element on the periodic table has a unique emission and decay path that allows the identity and concentration of the element to be determined.

History

Almost eighty years ago, in 1936, George de Hevesy and Hilde Levi published the first paper on the process of neutron activation analysis. They had discovered that rare earth elements such as dysprosium became radioactive after being activated by thermal neutrons from a radium-beryllium (226Ra + Be) source. Using a Geiger counter to count the beta particles emitted, Hevesy and Levi were able to identify the rare earth elements by half-life. This discovery led to the increasingly popular process of inducing radioactivity and observing the resulting nuclear decay in order to identify an element, a process we now know as NAA. In the years immediately following Hevesy and Levi's discovery, however, the advancement of this technique was restricted by the lack of stable neutron sources and adequate spectrometry equipment. Even with the development of charged-particle accelerators in the 1930s, analyzing multi-element samples remained time-consuming and tedious. The method was improved in the mid-1940s with the availability of the X-10 reactor at the Oak Ridge National Laboratory, the first research-type nuclear reactor. Compared with the earlier neutron sources, this reactor increased the sensitivity of NAA by a factor of a million. Yet the detection step of NAA still revolved around Geiger or proportional counters, so many technological advancements were still to come. As technology has progressed in recent decades, the NAA method has grown tremendously, and scientists now have a plethora of neutron sources and detectors to choose from when analyzing a sample with NAA.

Sample Preparation

In order to analyze a material with NAA, a small sample of at least 50 milligrams must be obtained from the material, usually by drilling. It is suggested that two different samples be obtained using two drill bits of different compositions; this reveals any contamination from the drill bits and thus minimizes error. Prior to irradiation, the small samples are encapsulated in vials of either quartz or high-purity linear polyethylene.

Instrument

How it Works

Neutron activation analysis works through the processes of neutron activation and radioactive decay. In neutron activation, radioactivity is induced by bombarding a sample with free neutrons from a neutron source. The target atomic nucleus captures a free neutron and, in turn, enters an excited state.
This excited, and therefore unstable, isotope undergoes nuclear decay, a process in which the unstable nucleus emits a series of particles that can include neutrons, protons, alpha particles, and beta particles in an effort to return to a low-energy, stable state. As suggested by the several different kinds of ionizing radiation listed above, there are many different types of nuclear decay possible; these are summarized in the figure below.

An additional type of nuclear decay is gamma radiation (denoted γ), a process in which the excited nucleus emits high-energy gamma ray photons. There is no change in either neutron number N or atomic number Z, yet the nucleus undergoes a nuclear transformation involving the loss of energy. In order to distinguish the higher-energy parent nucleus (prior to gamma decay) from the lower-energy daughter nucleus (after gamma decay), the mass number of the parent nucleus is labeled with the letter m, which means "metastable." An example of gamma radiation with the element technetium is shown here.

$^{99m}_{43}\ce{Tc} \rightarrow ^{99}_{43}\ce{Tc} + ^0_0\gamma \nonumber$

In NAA, the radioactive nuclei in the sample undergo both gamma and particle nuclear decay. The figure below presents a schematic example of nuclear decay. After capturing a free neutron, the excited 60mCo nucleus undergoes an internal transformation by emitting gamma rays. The lower-energy daughter nucleus 60Co, which is still radioactive, then emits a beta particle. This results in a high-energy 60Ni nucleus, which once again undergoes an internal transformation by emitting gamma rays. The nucleus then reaches the stable 60Ni state.

Although alpha and beta particle detectors do exist, most detectors used in NAA are designed to detect the gamma rays emitted from the excited nuclei following neutron capture. Each element has a unique, scientifically documented radioactive emission and decay path. Thus, based on the path and the spectrum produced by the instrument, NAA can determine the identity and concentration of the element.

Neutron Sources

As mentioned above, many different neutron sources can be used in modern-day NAA. A chart comparing three common sources is shown in the table below.

Table $1$: Different neutron sources.

Source type | Description | Example(s) | Typical output
Isotopic neutron sources | Certain isotopes undergo spontaneous fission and release neutrons as they decay. | 226Ra(Be), 124Sb(Be), 241Am(Be), 252Cf | \(10^5 - 10^7\ \mathrm{s^{-1}\ GBq^{-1}}\), or \(2.2 \times 10^{12}\ \mathrm{s^{-1}\ g^{-1}}\) for 252Cf
Particle accelerators or neutron generators | Particle accelerators produce neutrons by colliding hydrogen, deuterium, and tritium with target nuclei such as deuterium, tritium, lithium, and beryllium. | Acceleration of deuterium ions toward a target containing deuterium or tritium, giving the reactions 2H(2H,n)3He and 3H(2H,n)4He | \(10^8 - 10^{10}\ \mathrm{s^{-1}}\) for deuterium-on-deuterium reactions and \(10^9 - 10^{11}\ \mathrm{s^{-1}}\) for deuterium-on-tritium reactions
Nuclear research reactors | Within nuclear reactors, large atomic nuclei absorb neutrons and undergo nuclear fission. The nuclei split into lighter nuclei, releasing energy, radiation, and free neutrons. | 235U and 239Pu | \(10^{15} - 10^{18}\ \mathrm{m^{-2}\ s^{-1}}\)

Gamma and Particle Detectors

As mentioned earlier, most detectors used in NAA are designed to detect the gamma rays emitted from the decaying nucleus. Two widely used gamma detectors are the scintillation type and the semiconductor type.
The former uses a sensitive crystal, often sodium iodide doped with thallium, NaI(Tl), that emits light when gamma rays strike it. Semiconductor detectors, on the other hand, use germanium to form a diode that produces a signal in response to gamma radiation; the signal produced is proportional to the energy of the emitted gamma radiation. Both types of gamma detectors have excellent sensitivity, with detection limits ranging from 0.1 to \(10^6\) nanograms of element per gram of sample, but semiconductor detectors usually have superior resolution. Furthermore, particle detectors designed to detect the alpha and beta particles emitted in nuclear decay are also available; however, gamma detectors are preferable. Particle detectors require a high vacuum, since atmospheric gases can absorb and affect the emission of these particles; gamma rays are not affected in this way.

Variations/Parameters

INAA versus RNAA

Instrumental neutron activation analysis (INAA) is the simplest and most widely used form of NAA. It involves the direct irradiation of the sample, meaning that the sample does not undergo any chemical separation or treatment prior to detection. INAA can only be used if the activity of the other radioactive isotopes in the sample does not interfere with the measurement of the element(s) of interest. Interference often occurs when the element(s) of interest are present in trace or ultratrace amounts. If interference does occur, the activity of the other radioactive isotopes must be removed or eliminated, and radiochemical separation is one way to do this. NAA that involves sample decomposition and elemental separation is known as radiochemical neutron activation analysis (RNAA). In RNAA, the interfering elements are separated from the element(s) of interest through an appropriate separation method, such as extraction, precipitation, distillation, or ion exchange. Inactive elements and matrices are often added to ensure appropriate conditions and typical behavior for the element(s) of interest. A schematic comparison of INAA and RNAA is shown below.

ENAA versus FNAA

Another experimental parameter that must be considered is the kinetic energy of the neutrons used for irradiation. In epithermal neutron activation analysis (ENAA), the neutrons (known as epithermal neutrons) are partially moderated in the reactor and have kinetic energies between 0.5 eV and 0.5 MeV. These are lower-energy neutrons compared to fast neutrons, which are used in fast neutron activation analysis (FNAA). Fast neutrons are high-energy, unmoderated neutrons with kinetic energies above 0.5 MeV.

PGNAA versus DGNAA

The final parameter to be discussed is the time of measurement. The nuclear decay products can be measured either during or after neutron irradiation. If the gamma rays are measured during irradiation, the procedure is known as prompt gamma neutron activation analysis (PGNAA). This is a special type of NAA that requires additional equipment, including an adjacent gamma detector and a neutron beam guide. PGNAA is often used for elements with rapid decay rates, elements with weak gamma emission intensities, and elements that cannot easily be determined by delayed gamma neutron activation analysis (DGNAA), such as hydrogen, boron, and carbon. In DGNAA, the emitted gamma rays are measured after irradiation. DGNAA procedures include much longer irradiation and decay periods than PGNAA, often extending into days or weeks.
This means that DGNAA is ideal for long-lived radioactive isotopes. A schematic comparison of PGNAA and DGNAA is shown below.

Examples

Characterizing Archaeological Materials

Throughout recent decades, NAA has often been used to characterize many different types of samples, including archaeological materials. In 1961, the Demokritos nuclear reactor, a water-moderated and -cooled reactor, went critical at low power at the National Center for Scientific Research "Demokritos" (NCSR "Demokritos") in Athens, Greece. Since then, NCSR "Demokritos" has been a leading center for the analysis of archaeological materials. Ceramics, carbonates, silicates, and steatite are routinely analyzed there with NAA. A routine analysis begins by weighing 130 milligrams of the powdered sample into a polyethylene vial. Two batches of ten vials, eight samples and two standards, are then irradiated in the Demokritos nuclear reactor for 45 minutes at a thermal neutron flux of \(6 \times 10^{13}\) neutrons cm\(^{-2}\) s\(^{-1}\). The first measurement occurs seven days after irradiation: the gamma ray emissions of both the samples and the standards are counted with a germanium gamma detector (semiconductor type) for one hour. This measurement determines the concentrations of the following elements: As, Ca, K, La, Lu, Na, Sb, Sm, U, and Yb. A second measurement is performed three weeks after irradiation, in which the samples and standards are counted for two hours. In this measurement, the concentrations of the following elements are determined: Ba, Ce, Co, Cr, Cs, Eu, Fe, Hf, Nd, Ni, Rb, Sc, Ta, Tb, Th, Zn, and Zr.

Using the method described above, NCSR "Demokritos" analyzed 195 samples of black-on-red painted pottery from the late Neolithic age in what is now known as the Black-On-Red Pottery Project. An example of black-on-red painted pottery is shown here. This project aimed to identify production patterns in this ceramic group and to explore the degree of standardization, localization, and scale of production at 14 sites throughout the Strymonas Valley in northern Greece. A map of the area of interest is provided below in figure $6$. NCSR "Demokritos" also sought to analyze the variations in pottery traditions by differentiating so-called ceramic recipes. Using NAA, NCSR "Demokritos" was able to determine the unique chemical make-ups of the many pottery fragments. The chemical patterning revealed by the analyses suggested that the 195 samples of black-on-red Neolithic pottery came from four distinct production areas, with the primary production area located in the valley of the Strymon and Angitis rivers. Although distinct, the pottery from the four different geographical areas shared common technological and stylistic characteristics, which suggests that a level of standardization did exist throughout the area of interest during the late Neolithic age.

Determining Elemental Concentrations in Blood

Additionally, NAA has been used in hematology laboratories to determine specific elemental concentrations in blood and provide information to aid in the diagnosis and treatment of patients. Identifying abnormalities and unusual concentrations of certain elements in the bloodstream can also aid in predicting damage to the organ systems of the human body. In one study, NAA was used to determine the concentrations of sodium and chlorine in blood serum.
To investigate the accuracy of the technique in this setting, 26 blood samples from healthy male and female donors, aged between 25 and 60 years and weighing between 50 and 85 kilograms, were selected from the Paulista Blood Bank in São Paulo. The samples were initially irradiated for 2 minutes at a neutron flux ranging from approximately \(1 \times 10^{11}\) to \(6 \times 10^{11}\) neutrons cm\(^{-2}\) s\(^{-1}\) and counted for 10 minutes using a gold activation detector. The procedure was later repeated using a longer irradiation time of 10 minutes. The determined concentrations of sodium and chlorine were then compared to standard values. The NAA analyses yielded concentrations that agreed strongly with the adopted reference values. For example, the chlorine concentration was found to be 3.41 - 3.68 µg/µL of blood, which correlates closely with the reference value of 3.44 - 3.76 µg/µL of blood. This illustrates that NAA can accurately measure elemental concentrations in a variety of materials, including blood samples.

Limitations

Although NAA is an accurate (~5%) and precise (<0.1%) multi-element analytical technique, it has several limitations that should be addressed. Firstly, samples irradiated in NAA remain radioactive for a period of time (often years) following the analysis procedures, and these radioactive samples require special handling and disposal protocols. Secondly, the number of available nuclear reactors has declined in recent years. In the United States, only 31 nuclear research and test reactors are currently licensed and operating; a map of these reactors is shown here. As a result of the declining number of reactors and irradiation facilities in the nation, the cost of neutron activation analysis has increased. The popularity of NAA has declined in recent decades due to both the increasing cost and the development of other successful multi-element analytical methods such as inductively coupled plasma atomic emission spectroscopy (ICP-AES).

Bibliography

• Z. B. Alfassi, Activation Analysis, CRC Press, Boca Raton (1990).
• P. Bode, A. Byrne, Z. Chai, A. Chatt, V. Dimic, T. Z. Hossain, J. Kučera, G. C. Lalor, and R. Parthasarathy, Report of an Advisory Group Meeting Held in Vienna, 22-26 June 1998, IAEA, Vienna, 2001, 1.
• V. P. Guinn, Bio. Trace Elem. Res., 1990, 26-27, 1.
• L. Hamidatou, H. Slamene, T. Akhal, and B. Zouranen, in Imaging and Radioanalytical Techniques in Interdisciplinary Research – Fundamentals and Cutting Edge Applications, ed. F. Kharfi, InTech, Rijeka (2013).
• V. Kilikoglou, A. P. Grimanis, A. Tsolakidou, A. Hein, D. Malalmidou, and Z. Tsirtsoni, Archaeometry, 2007, 49, 301.
• S. S. Nargolwalla and E. P. Przybylowicz, Activation Analysis with Neutron Generators, Wiley, New York, 39th edn. (1973).
• M. Pollard and C. Heron, Archaeological Chemistry, Royal Society of Chemistry, Cambridge (1996).
• B. Zamboi, L. C. Oliveira, and L. Dalaqua Jr., Americas Nuclear Energy Symposium, Miami, 2004.
• Neutron Activation Analysis Online, www.naa-online.net/theory/types-of-naa/ (accessed February 2014).
• Map of Research and Test Reactor Sites, www.nrc.gov/reactors/operating/map-nonpower-reactors.html (accessed February 2014).
Introduction

Carbon is one of the more abundant elements on the planet; all living things, and many non-living things, contain some form of carbon. The ability to measure and characterize the carbon content of a sample is of extreme value in a variety of industries and research environments. Total carbon (TC) content is just one important piece of information needed by analysts concerned with the carbon content of a sample; knowledge of the origin of that carbon, whether it derives from organic or inorganic material, is also extremely important. For example, oil companies are interested in finding petroleum, a carbon-containing material derived from organic matter; knowing the carbon content and the type of carbon in a sample of interest can mean the difference between investing millions of dollars and not doing so. Regulatory agencies such as the U.S. Environmental Protection Agency (EPA) are another example, where regulation of the carbon content, and of the character of that carbon, is essential for environmental and human health.

Considering the importance of identifying and quantifying the carbon content of an analyte, it may be surprising to learn that there is no single method for measuring the carbon content of a sample. Unlike other techniques, no elaborate instrument is required (although some exist that can be useful). Methods to measure the different forms of carbon (organic or inorganic) differ from one another because they take advantage of properties characteristic of the type of carbon being measured; in practice, multiple techniques, rather than just one, are usually needed to fully characterize the carbon content of a sample. The measurements of carbon content are related: the total carbon content (TC), total inorganic carbon content (TIC), and total organic carbon content (TOC) obey

$\mathrm { TC } = \mathrm { TIC } + \mathrm { TOC }. \label{eq:TC}$

This means that measuring two of these quantities indirectly gives the third, as there are only two classes of carbon: organic carbon and inorganic carbon. Herein, several of the methods used to measure the TOC, TIC, and TC of samples are outlined. Not all samples require the same instruments and methods. The goal of this module is to get the reader to see the simplicity of some of these methods and to understand the need for such quantification and analysis.

Measurement of Total Organic Carbon (TOC)

Sample and Sample Preparation

The total organic carbon content can be determined for a wide variety of samples; very few samples cannot be measured for carbon content. Before treatment, a sample must be homogenized, that is, mixed or broken up such that a measurement made on part of the sample is representative of the entire sample. For example, if our sample were a rock, we would want to make sure that the inner core of the rock, which could have a different composition from the outer surface, is measured as well. Not homogenizing the sample would lead to inconsistent and perhaps irreproducible results. Techniques for homogenization vary widely; different techniques exist for different samples.

Dissolution of Total Inorganic Carbon

In order to measure the organic carbon content of a sample, the inorganic sources of carbon, which exist in the form of carbonate and bicarbonate salts and minerals, must first be removed from the sample.
This is typically done by treating the sample with non-oxidative acids such as H2SO4 and HCl, releasing CO2 and H2O, as shown:

$\ce{2HCl + CaCO3 -> CaCl2 + CO2 + H2O} \nonumber$

$\ce{HCl + NaHCO3 -> NaCl + H2O + CO2} \nonumber$

Non-oxidative acids are chosen so that minimal amounts of organic carbon are affected. The selection of the acid is nonetheless important, because depending on the measurement technique the acid may interfere with the measurement. For example, in the wet measurement technique discussed later, the counter ion Cl- adds systematic error to the measurement.

Treatment of a sample with acid is intended to dissolve all inorganic forms of carbon in the sample. By selectively digesting and dissolving the inorganic forms of carbon, be they aqueous carbonates, bicarbonates, or trapped CO2, one can separate the inorganic sources of carbon from the organic ones, thereby leaving behind, in theory, only organic carbon in the sample. The importance of sample homogenization becomes apparent in this treatment. Using the rock example again: if a rock is treated with acid without being homogenized, only the inorganic carbon at the surface of the sample may be dissolved. Only with homogenization can the acid reach the inorganic carbon inside the rock; otherwise that inorganic carbon may be interpreted as organic carbon, leading to gross errors in the total organic carbon determination.

Shortcomings in the Dissolution of Inorganic Carbon

A large problem, and a potential source of error in this measurement, is the assumption that must be made, particularly in the case of TOC measurement, that all of the inorganic carbon has been washed away and separated from the sample. There is no way to distinguish TOC from TIC spectroscopically; the experimenter is forced to assume that what remains is all organic carbon or all inorganic carbon, when in reality some of both may still be present in the sample.

Quantitative Measurement of TOC

Most TOC quantification methods are destructive in nature, meaning that none of the sample can be recovered. Two destructive techniques are discussed in this module: the first is the wet method for measuring the TOC of solid sediment samples, and the second is dry combustion.

Wet Methods

Sample Preparation

Following sample pre-treatment with inorganic acids to dissolve away any inorganic material, a known amount of potassium dichromate (K2Cr2O7) in concentrated sulfuric acid is added to the sample, as per the Walkley-Black procedure, a well-known wet technique. The amounts of dichromate and H2SO4 added can vary depending on the expected organic carbon content of the sample; typically enough H2SO4 is added that the solid potassium dichromate dissolves in solution. The mixing of potassium dichromate with H2SO4 is exothermic, meaning that heat is evolved from the solution. As the dichromate reacts according to

$\ce{2Cr2O7^2- + 3C^0 + 16 H+ -> 4Cr^3+ + 3CO2 + 8H2O} \label{eq:dichromate}$

the solution bubbles away CO2. Because the only source of carbon in the sample is, in theory, the organic forms of carbon (assuming adequate pre-treatment of the sample to remove the inorganic forms), the evolved CO2 comes from organic sources of carbon.
Elemental carbon presents a problem in this method: it resists oxidation to CO2, meaning that not all of the carbon will be converted, which leads to an underestimation of the total organic carbon content in the quantification steps. To facilitate the oxidation of elemental carbon, the digestion solution of dichromate and H2SO4 is heated at 150 °C for some time (~30 min, depending on the total carbon content of the sample and the amount of dichromate added). It is important that the solution not be heated above 150 °C, as the dichromate solution decomposes above this temperature.

Other shortcomings, in addition to incomplete digestion, exist with this method. Fe2+ and Cl- in the sample can interfere with the dichromate solution: Fe2+ can be oxidized to Fe3+, and Cl- can form CrO2Cl2, leading to a systematic error towards higher apparent organic carbon content. Conversely, MnO2, like dichromate, will oxidize organic carbon, leading to a negative bias and an underestimation of the TOC content of samples. To counteract these biases, several additives can be used in the pre-treatment process. Fe2+ can be oxidized with the mild oxidant phosphoric acid, which will not oxidize organic carbon. Treatment of the digestion solution with Ag2SO4 can precipitate the chloride as silver chloride. MnO2 interference can be dealt with using FeSO4, where the oxidizing power of the manganese is consumed by taking the iron(II) sulfate to the +3 oxidation state; any excess iron(II) can then be handled with phosphoric acid.

Quantification of TOC

Sample treatment, in which all of the organic carbon is digested, is followed by a titration of the excess dichromate remaining in the sample. By comparing the excess that is titrated to the amount originally added, one can carry out stoichiometric calculations according to Equation \ref{eq:dichromate} and calculate the amount of dichromate that oxidized the organic carbon, thereby determining the TOC of the sample (a worked sketch of this arithmetic follows the list below). How the titration is run is up to the user; manual and potentiometric titrations, among others, are available to the investigator performing the TOC measurement.

• Manual titrations are similar to any other manual titration method. An indicator must be used; in the case of this wet method, commercially available "ferroin" is employed. The titrant is typically ferrous ammonium sulfate, added until equivalence is reached, which is signaled by the color change produced by the indicator. Depending on the sample measured, the color change may be difficult to notice.
• Platinum electrodes inserted into the sample can be used to follow the titration potentiometrically. When the sample reaches the endpoint, the measured signal will essentially be zero, or whatever value the endpoint of the solution was set to. This method has several advantages over manual titration because the titration can be automated to respond to feedback from the platinum electrodes, so the equivalence point determination is not color-dependent.
• As an alternative to titration methods, capture of the evolved CO2 is another feasible quantification method, since the oxidized organic carbon is evolved as CO2. The CO2 can be captured on an absorbent material such as Ascarite or another tared absorbent, whose mass change due to absorbed CO2 can be measured; alternatively, the absorbed CO2 can be desorbed and quantified with a non-dispersive IR cell.
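To make the back-titration stoichiometry concrete, here is a minimal Python sketch of the calculation. It is an illustration only: the function name and all numerical inputs are invented for this example, and real work should apply the Walkley-Black procedure's own correction factors for incomplete oxidation.

```python
# Sketch of the Walkley-Black back-titration arithmetic (illustrative only).
# Stoichiometry from the dichromate equation above:
#   2 Cr2O7^2- oxidize 3 C    ->  1.5 mol C per mol Cr2O7^2- consumed
#   6 Fe2+ reduce 1 Cr2O7^2-  ->  excess dichromate = mol Fe2+ titrant / 6

def percent_toc(v_cr_mL, c_cr_M, v_fas_mL, c_fas_M, sample_g):
    n_added = c_cr_M * v_cr_mL / 1000.0             # mol dichromate added
    n_excess = (c_fas_M * v_fas_mL / 1000.0) / 6.0  # mol left after digestion
    n_used = n_added - n_excess                     # mol spent oxidizing carbon
    g_carbon = 1.5 * n_used * 12.011                # grams of organic carbon
    return 100.0 * g_carbon / sample_g

# Hypothetical run: 10.00 mL of 0.1667 M K2Cr2O7 added, back-titrated with
# 24.0 mL of 0.25 M ferrous ammonium sulfate, on a 0.50 g sample.
print(f"TOC = {percent_toc(10.00, 0.1667, 24.0, 0.25, 0.50):.2f} %")   # 2.40 %
```

The same electron-balance logic (12 electrons transferred per 2 dichromate / 3 carbon) underlies automated potentiometric titrators; only the endpoint detection differs.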
Disadvantages of the Wet Technique

Measurement of TOC via the wet techniques described above is a rather crude way to measure the organic carbon content of a sample. The technique relies on several assumptions that are not wholly accurate in reality, leading to TOC values that are, in truth, approximations.

• The treatment with acid to remove the inorganic forms of carbon assumes that all of the inorganic carbon is removed and washed away, but in reality this is probably not true, as some inorganic carbon will cling to the sample and be quantified incorrectly.
• The digestion process assumes that all of the carbon in the sample (already presumed to be entirely organic carbon) is completely converted to carbon dioxide, taking no account of the possible solubility of the carbon dioxide in the wet sample or of incomplete oxidation of carbon in the sample.
• The wet method relies on dichromate, which, while a very good oxidant, is a very toxic reagent to work with.

TOC Measurement of Water

As mentioned previously, measurement of TOC levels in water is extremely valuable to regulatory agencies concerned with water quality, since the presence of organic carbon in a substance that should contain no carbon is of concern. Measurement of TOC in water uses a variant of the wet method that avoids highly toxic oxidants: typically a persulfate salt is used as the oxidant instead of dichromate. The procedure is essentially the same as the typical wet oxidation technique. The water is first acidified to remove inorganic sources of carbon; because water itself is being measured, one cannot simply wash away the inorganic carbon, but it escapes from the solution as CO2. The carbon remaining in the solution is taken to be organic. Treatment of the solution with persulfate alone does nothing; however, irradiating the persulfate-treated solution with UV radiation, or heating it, activates a radical species that mediates oxidation of the organic carbon to CO2, which can then be quantified by methods similar to those of the traditional wet oxidation technique.

Dry Methods

As an alternative for TOC measurement, dry techniques present several advantages over wet techniques. Dry techniques frequently involve measuring the carbon evolved during combustion of a sample. In this section of the module, TOC measurement using dry techniques is discussed.

Sample Pre-treatment

As in the wet-oxidation case, measurement of TOC by dry techniques requires the removal of inorganic forms of carbon, so samples are treated with inorganic acids. The inorganic acids are washed away, and theoretically only organic forms of carbon remain. Before combustion, the treated sample must be completely dried to remove any moisture. Where non-volatile organics are present, or where there is little concern about the escape of organic material (e.g., rock samples or kerogen), the sample can be placed in a 100 °C oven overnight. Where evolution of organic matter at slightly elevated temperatures is a problem, drying can be done under vacuum in the presence of a desiccant (e.g., Drierite). Volatile organics are difficult to measure using dry techniques because the sample needs to be free of moisture, and removal of moisture by any technique will most likely also remove the volatile organics.
Sample Quantification
As mentioned before, quantification of TOC by the dry method proceeds via complete combustion of the sample in a carbon-free atmosphere (typically a pure oxygen atmosphere). Quantification is performed with a non-dispersive infrared (NDIR) detection cell: CO2 shows a characteristic asymmetric stretch at 2350 cm-1, and the intensity of this infrared signal is proportional to the quantity of CO2 present. Therefore, in order to translate signal intensity into an amount of carbon, a calibration curve is constructed from known amounts of pure calcium carbonate, looking specifically at the intensity of the CO2 peak. One might point out that calcium carbonate is an inorganic source of carbon, but the source of the carbon has no effect on its quantification. Preparation of the calibration standards follows a procedure similar to that for the analyte: no pre-treatment with acid is needed, but the standards must be thoroughly dried in an oven. When a sample is ready to be analyzed, it is first weighed on an analytical balance and then placed in a combustion analyzer, such as a LECO analyzer, in which the furnace and the non-dispersive IR cell are combined in a single instrument. Combustion proceeds at temperatures in excess of 1350 °C in a stream of pure oxygen. By comparing the intensity of the sample's characteristic IR peak to the intensities of the characteristic IR peaks of the known standards, the mass of carbon in the sample can be determined, and the percent organic carbon in the sample then follows from the mass of the sample according to $\% \text{TOC} = \frac{\text{mass carbon}}{\text{mass sample}} \times 100\% \nonumber$ Use of this dry technique is most common for rocks and other solid samples. In the oil and gas industry, it is extremely important to know the organic carbon content of rock samples in order to ascertain the production viability of a well; the sample can be loaded into the LECO combustion analyzer and pyrolyzed in order to quantify its TOC.

Measurement of Total Carbon (TC)
As shown in Equation \ref{eq:TC}, the total carbon in a sample (TC) is the sum of the inorganic and organic forms of carbon in the sample; by definition, no other sources of carbon contribute to the TC determination. So in theory, if one could quantify the TOC by a method described in the previous section, and follow that with a measurement of the TIC in the pre-treatment acid waste, one could find the TC of a sample by summing the TIC and TOC values. In practice, however, TC is rarely determined this way, partly to avoid the propagation of error associated with combining the other two measurements, and partly because of cost constraints. Instead, the TC of a sample is measured by the same dry combustion technique used for the quantification of TOC, and the same analyzer used to measure TOC can handle a TC measurement. No sample pre-treatment with acid is performed, so it is important to remember that the characteristic CO2 peak now represents the carbon of the entire sample. Using Equation \ref{eq:TC}, the TIC of the sample can then be found as well: subtraction of the TOC from the measured TC gives the value for TIC.
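The workflow just described (calibrating the NDIR response with CaCO3 standards, converting a sample's peak intensity to a mass of carbon, and obtaining TIC by difference) can be summarized in a short sketch. The intensities, masses, and the assumption of a linear detector response through the origin below are illustrative only and do not describe the behavior of any particular analyzer.

```python
# Sketch: NDIR calibration with CaCO3 standards, %TOC, and TIC by difference.
# All intensities and masses are hypothetical.
M_CACO3, M_C = 100.09, 12.011   # g/mol

# Calibration standards: (mass of CaCO3 in g, integrated CO2 peak intensity)
standards = [(0.010, 1210), (0.020, 2445), (0.040, 4880), (0.080, 9750)]

# Convert each standard to a mass of carbon and fit intensity = k * mass_C
# by least squares through the origin.
pairs = [(m * M_C / M_CACO3, i) for m, i in standards]
k = sum(c * i for c, i in pairs) / sum(c * c for c, _ in pairs)

def percent_carbon(intensity, sample_mass_g):
    """%C from a measured CO2 peak intensity and the sample mass."""
    return 100.0 * (intensity / k) / sample_mass_g

# Acid-treated aliquot gives TOC; untreated aliquot of the same mass gives TC.
toc = percent_carbon(3100, 0.500)
tc = percent_carbon(5400, 0.500)
print(f"TOC = {toc:.2f}%  TC = {tc:.2f}%  TIC = {tc - toc:.2f}%")
```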
Measurement of Total Inorganic Carbon (TIC)
Direct methods for measuring the TIC of a sample are also possible, in addition to the indirect measurement that takes advantage of Equation \ref{eq:TC}. Typical TIC measurements are done on water samples, where the alkalinity and hardness of the water are a result of inorganic carbonates, be it bicarbonate or carbonate. Treatment of these samples follows a procedure similar to the treatment of samples for organic carbon: a sample of water is acidified so that the equilibrium of Equation \ref{eq4}, following Le Chatelier's principle, favors the release of CO2. The released CO2 can then be measured in a variety of ways. $\ce{CO2 + H2O <=> H2CO3 <=> HCO3^{-} + H^{+}} \label{eq4}$ As with the combustion technique for measuring TC and TOC, the intensity of the characteristic IR band of CO2, compared against standards, can be used to quantify the TIC of a sample. In this case, however, it is the emission of IR radiation that is measured, not its absorption. An instrument that can perform such a measurement is the FIRE-TIC, where FIRE stands for flame infrared emission; the instrument consists of a purge-like device connected to a FIRE detector.

Summary
Measurement of carbon content is crucial for many industries. In this module you have seen a variety of ways to measure the total carbon (TC) of a sample, as well as to attribute that carbon to its source, whether organic (TOC) or inorganic (TIC). This information is extremely important for several industries: from oil exploration, where information on carbon content is needed to evaluate a formation's production viability, to regulatory agencies, where carbon content and its origin are needed to ensure quality control and public safety. TOC, TC, and TIC measurements do have significant limitations. Almost all of the techniques are destructive, meaning that the sample cannot be recovered. Further limitations include the assumptions that have to be made in the measurement: in TOC measurement, for example, it must be assumed that all TIC has been removed in the pre-treatment with acid and that all organic carbon is completely oxidized to CO2, while in TIC measurement it is assumed that all inorganic carbon is released from the sample and detected. Several things can be done to promote these conditions so as to make such assumptions valid. Finally, all measurements cost money; because TOC, TIC, and TC are related by Equation \ref{eq:TC}, more often than not only two measurements are made and the third value is found from their relation to one another.

Bibliography
• Z. A. Wang, S. N. Chu, and K. A. Hoering, Environ. Sci. Technol., 2013, 47, 7840.
• B. A. Schumacher, Methods for the Determination of Total Organic Carbon (TOC) in Soils and Sediments, U.S. Environmental Protection Agency, Washington, DC, EPA/600/R-02/069 (NTIS PB2003-100822), 2002.
• B. B. Bernard, H. Bernard, and J. M. Brooks, Determination of Total Carbon, Total Organic Carbon and Inorganic Carbon in Sediments, TDI-Brooks International and B&B Laboratories, Inc., College Station, Texas, USA, www.tdi-bi.com/analytical_ser...environmental/NOAA_methods/TOC.pdf (accessed October 21, 2011).
• Julie, The Blogsicle, www.theblogsicle.com/?p=345
• Schlumberger Ltd., Oilfield Review, Autumn 2011, Schlumberger Ltd. (2011), 43.
• S. W. Kubala, D. C. Tilotta, M. A. Busch, and K. W. Busch, Anal. Chem., 1989, 61, 1841.
• University of Georgia CAES Publications, University of Georgia Cooperative Extension Circular 922, http://www.caes.uga.edu/publications...cfm?pk_id=7895.
Introduction
Atomic fluorescence spectroscopy (AFS) is a method invented by Winefordner and Vickers in 1964 as a means of analyzing the chemical concentration of a sample. The idea is to excite a sample vapor with the appropriate UV radiation and, by measuring the emitted radiation, quantify the amount of the specific element being measured. In its most basic form, AFS consists of a UV light source to excite the sample, a monochromator, a detector, and a readout device (Figure $1$). Cold vapor atomic fluorescence spectroscopy (CVAFS) uses the same technique as AFS, but the preparation of the sample is adapted specifically to quantify the presence of volatile heavy metals, such as mercury, and allows these elements to be measured at room temperature.

Theory
The theory behind CVAFS is that as the sample absorbs photons from the radiation source, it enters an excited state. As the atom falls back to the ground state from its excited state(s), it emits a photon, which can then be measured to determine the concentration. In its most basic sense, this process is represented by \ref{1}, where PF is the power given off as photons by the sample, Pabs is the power of the radiation absorbed by the sample, and φ is the proportionality factor accounting for the energy lost through collisions and interactions between the atoms present rather than through photon emission. $\text{P}_{F}\ =\ \phi \text{P}_{\text{abs}} \label{1}$

Sample Preparation
For CVAFS, the sample must be digested, usually with an acid, to break down the compound being tested so that all metal atoms in the sample are accessible for vaporization. The sample is put into a bubbler, usually with an agent that will convert the element to its gaseous species. An inert carrier gas such as argon is then passed through the bubbler to carry the metal vapors to the fluorescence cell. It is important that the carrier gas be inert, so that light is only absorbed and emitted by the sample in question and not by the carrier gas.

Atomic Fluorescence Spectroscopy
Once the sample is loaded into the cell, a collimated (almost parallel) beam from the UV light source passes through the sample so that it will fluoresce. A monochromator is often used, either between the light source and the sample, or between the sample and the detector; these two setups are used to record excitation and emission spectra, respectively. In an excitation spectrum, the sample is exposed to a range of excitation wavelengths selected by the monochromator while the emitted light is collected, whereas in an emission spectrum, the excitation wavelength is held constant and the specific wavelengths of light emitted from the sample are measured. The fluorescence is detected by a photomultiplier tube, which is extremely light sensitive, and a photodiode is used to convert the light into a voltage or current, which can in turn be interpreted to give the amount of the chemical present.

Detecting Mercury Using Gold Amalgamation and Cold Vapor Atomic Fluorescence Spectroscopy
Introduction
Mercury poisoning can damage the nervous system and kidneys, and can also harm fetal development in pregnant women, so it is important to evaluate the levels of mercury present in our environment. Some of the more common sources of mercury are the air (from industrial manufacturing, mining, and the burning of coal), the soil (deposits, waste), water (bacterial byproducts, waste), and food (especially seafood).
Although regulations for mercury content in food, water, and air differ, the EPA regulation for mercury content in water is the lowest: it cannot exceed 2 ppb (2 µg/L). In 1972, J. F. Kopp et al. first published a method to detect minute concentrations of mercury in soil, water, and air using gold amalgamation and cold vapor atomic fluorescence spectroscopy. While atomic absorption can also measure mercury concentrations, it is not as sensitive or selective as cold vapor atomic fluorescence spectroscopy (CVAFS).

Sample Preparation
As is common with all forms of atomic fluorescence spectroscopy (AFS) and atomic absorption spectroscopy (AAS), the sample must be digested, usually with an acid, to break down the compounds so that all the mercury present can be measured. The sample is put in the bubbler with a reducing agent such as stannous chloride (SnCl2) so that Hg0 is the only mercury species present in the sample.

Gold Amalgam and CVAFS
Once the mercury is in its elemental form, the argon enters the bubbler through a gold trap and carries the mercury vapors out of the bubbler to the first gold trap, after first passing through a soda lime trap (a mixture of Ca(OH)2, NaOH, and KOH) where any remaining acid or water vapors are caught. After all the mercury from the sample has been adsorbed by the first gold trap, the trap is heated to 450 °C, which causes the mercury adsorbed on it to be carried by the argon gas to the second gold trap. Once the mercury from the sample has been adsorbed by the second trap, it too is heated to 450 °C, releasing the mercury to be carried by the argon gas into the fluorescence cell, where light at a wavelength of 253.7 nm is used for mercury samples. The detection limit for mercury using gold amalgamation and CVAFS is around 0.05 ng/L, but the detection limit will vary with the equipment being used, as well as with human error.

Calculating CVAFS Concentrations
A standard solution of mercury should be made, and from this, dilutions are used to make at least five different standard solutions. Depending on the detection limit and on what is being analyzed, the concentrations of the standard solutions will vary. Note that the other chemicals the standard solutions contain will depend upon how the sample is digested.

Example 1
A 1.00 µg/mL Hg (1 ppm) working solution is made, and by dilution, five standards are made from the working solution at 5.0, 10.0, 25.0, 50.0, and 100.0 ng/L (ppt). If these five standards give peak heights of 10 units, 23 units, 52 units, 110 units, and 207 units, respectively, then \ref{2} is used to calculate the calibration factor, where CFx is the calibration factor, Ax is the area or height of the peak, and Cx is the concentration of the standard in ng/L, as in \ref{3}. $\text{CF}_{x}\ =\ \text{A}_{x}/\text{C}_{x} \label{2}$ $10\text{ units}/5.0\ \text{ng/L}\ =\ 2.00\text{ units L/ng} \label{3}$ The calibration factors for the other four standards are calculated in the same fashion: 2.30, 2.08, 2.20, and 2.07, respectively. The average of the five calibration factors is then taken, \ref{4}. $\text{CF}_{m}\ =\ (2.00\ +\ 2.30\ +\ 2.08\ +\ 2.20\ +\ 2.07)/5\ =\ 2.13\text{ units L/ng} \label{4}$ To calculate the concentration of mercury in the sample, \ref{5} is used, where As is the peak area (or height) for the sample, CFm is the mean calibration factor, Vstd is the volume of the standard solution minus the reagents added, and Vsmp is the volume of the initial sample (total volume minus the volume of reagents added). $[\text{Hg}]\ (\text{ng/L})\ =\ (\text{A}_{s}/\text{CF}_{m})\cdot (\text{V}_{std}/\text{V}_{smp}) \label{5}$ If As is measured as 49 units, Vstd = 0.47 L, and Vsmp = 0.26 L, then the concentration can be calculated as in \ref{6}. $(49\text{ units}/2.13\text{ units L/ng})\cdot (0.47\text{ L}/0.26\text{ L})\ =\ 41.6\ \text{ng/L of Hg present} \label{6}$
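The arithmetic of Example 1 is easily scripted; the sketch below simply reproduces the calibration-factor calculation and Equation \ref{5} in Python, using the numbers from the worked example.

```python
# Calibration-factor method from Example 1 (values from the text).
concs = [5.0, 10.0, 25.0, 50.0, 100.0]   # standard concentrations, ng/L
peaks = [10, 23, 52, 110, 207]           # measured peak heights, units

cfs = [a / c for a, c in zip(peaks, concs)]   # CF_x = A_x / C_x
cf_mean = sum(cfs) / len(cfs)                 # mean calibration factor CF_m

def hg_conc(a_sample, v_std, v_smp, cf=cf_mean):
    """[Hg] in ng/L via Equation (5)."""
    return (a_sample / cf) * (v_std / v_smp)

print(f"CF_m = {cf_mean:.2f} units L/ng")            # ~2.13
print(f"[Hg] = {hg_conc(49, 0.47, 0.26):.1f} ng/L")  # ~41.6
```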
Sources of Error
Contamination during sample collection is one of the biggest sources of error: if the sample is not properly collected, or if hands and gloves are not clean, the measured concentration can be compromised. It is likewise important to ensure that the glassware and equipment are free from any sources of contamination. Furthermore, sample vials used to store mercury-containing samples should be made of borosilicate glass or fluoropolymer, because mercury can adsorb onto, or leach from, other materials, which could cause an inaccurate concentration reading.

The Application of Fluorescence Spectroscopy in Mercury Ion Detection
Mercury in the Environment
Mercury pollution has become a global problem and seriously endangers human health. Inorganic mercury is easily released into the environment through a variety of anthropogenic sources, such as coal mining, solid waste incineration, fossil fuel combustion, and chemical manufacturing. It can also be released through non-anthropogenic sources in the form of forest fires, volcanic emissions, and oceanic emission. Mercury is easily transported into the atmosphere in the form of mercury vapor, and the atmospheric deposition of mercury ions leads to accumulation on plants, in topsoil, in water, and in underwater sediments. Some prokaryotes living in the sediments can convert inorganic mercury into methylmercury, which can enter the food chain and ultimately be ingested by humans. Mercury seriously endangers people's health; one example is the deaths of many people in Minamata, Japan, from exposure to methylmercury through seafood consumption. Exposure to organic mercury causes a series of neurological problems, such as prenatal brain damage, cognitive and motion disorders, vision and hearing loss, and even death. Moreover, inorganic mercury also targets the renal epithelial cells of the kidney, resulting in tubular necrosis and proteinuria. The crisis of mercury in the environment and in biological systems compels work to confront the challenge, and the design and implementation of new mercury detection tools will ultimately aid these endeavors. Therefore, this module mainly introduces the fluorescence molecular sensor, which is becoming more and more important in mercury detection due to its ease of use, low cost, and high efficiency.

Introduction to Fluorescence Molecular Sensors
A fluorescence molecular sensor, one type of fluorescence molecular probe, can respond quickly and reversibly in the recognition process. Four factors (selectivity, sensitivity, in-situ detection, and real-time response) are generally used to evaluate the performance of a sensor. Here, four fundamental principles for designing fluorescence molecular sensors are introduced.

Photoinduced Electron Transfer (PET)
Photoinduced electron transfer is the most popular principle in the design of fluorescence molecular sensors. The characteristic structure of PET sensors includes three parts, as shown in Figure $2$:
• The fluorophore absorbs the light and emits the fluorescence signal.
• The receptor selectively interacts with the guest.
• A spacer connects the fluorophore and the receptor into an integral system and effectively transfers the recognition information from the receptor to the fluorophore.
In PET sensors, photoinduced electron transfer is what converts the recognition event at the receptor into a change in the fluorescence signal of the fluorophore. Figure $2$ shows in detail how PET works in a fluorescence molecular sensor. The receptor can donate an electron into the orbital vacated in the excited fluorophore; the excited electron in the fluorophore then cannot return to its original orbital, resulting in quenching of the fluorescence emission. Coordination of the receptor with a guest decreases the electron-donating ability of the receptor, reducing or even disrupting the PET process and thereby enhancing the fluorescence emission. Therefore, such sensors show weak or no fluorescence emission before coordination, while the intensity of the fluorescence emission increases rapidly after coordination of the receptor with the guest.

Intramolecular Charge Transfer (ICT)
Intramolecular charge transfer (ICT) is also called photoinduced charge transfer. The characteristic structure of ICT sensors includes only the fluorophore and the recognition group, with no spacer; the recognition group binds directly to the fluorophore. Electron-withdrawing or electron-donating substituents on the recognition group play an important role in the recognition. When recognition happens, the coordination between the recognition group and the guest changes the electron density in the fluorophore, resulting in a blue shift or red shift of the fluorescence emission.

Excimer
When two fluorophores are at a suitable distance from each other, an intermolecular excimer can be formed between one in the excited state and one in the ground state. The fluorescence emission of the excimer differs from that of the monomer, appearing mainly as a new, broad, strong, long-wavelength emission without fine structure. Since the formation of the excimer is determined by the distance between the fluorophores, modulation of that distance becomes crucial in the design of sensors based on this mechanism. Fluorophores with long singlet-state lifetimes form excimers easily and are therefore often used in such sensors.

Fluorescence Resonance Energy Transfer (FRET)
FRET is a popular principle in the design of fluorescence molecular sensors. In such a system there are two different fluorophores, one of which acts as a donor of excited-state energy to the other, the acceptor. As shown in Figure $2$, the acceptor accepts the energy from the excited state of the donor and gives fluorescence emission, while the donor returns to the electronic ground state. Three factors affect the performance of FRET: the distance between the donor and the acceptor, the relative orientation of the donor emission dipole moment and the acceptor absorption dipole moment, and the extent of spectral overlap between the donor emission and acceptor absorption spectra (Figure $3$).

Introduction to Fluorescence Spectroscopy
Fluorescence
Fluorescence is a process involving the emission of light from a substance in an excited state. Generally speaking, fluorescence is the emission of electromagnetic radiation (light) by a substance that has absorbed radiation of a different wavelength.
This absorption and emission is illustrated in the Jablonski diagram (Figure $4$): a fluorophore is excited from the ground state to higher electronic and vibrational states upon excitation. The excited molecule can relax to a lower vibrational state by vibrational relaxation and then return to the ground state with the emission of fluorescence.

Instrumentation
Most spectrofluorometers can record both excitation and emission spectra. They consist mainly of four parts: a light source, monochromators, optical filters, and a detector (Figure $5$).

Light Sources
Light sources that emit over the ultraviolet and visible range provide the excitation energy. There are different light sources, including arc and incandescent xenon lamps, high-pressure mercury (Hg) lamps, Xe-Hg arc lamps, low-pressure Hg and Hg-Ar lamps, pulsed xenon lamps, quartz-tungsten halogen (QTH) lamps, and LED light sources. The proper light source is chosen based on the application.

Monochromators
Prisms and diffraction gratings are the two main types of monochromators, which isolate the experimentally needed chromatic light within a wavelength range of about 10 nm. Monochromators are typically evaluated on dispersion, efficiency, stray-light level, and resolution.

Optical Filters
Optical filters are used in addition to monochromators in order to further purify the light. There are two kinds of optical filters. The first is the colored filter, the most traditional type, which is divided into two categories: monochromatic filters and long-pass filters. The second is the thin-film filter, which supplements the colored filter in many applications and is gradually replacing it.

Detector
An InGaAs array is the standard detector used in many spectrofluorometers; it can provide rapid and robust spectral characterization in the near-IR.

Applications
PET Fluorescence Sensor
As a PET sensor, 2-{5-[(2-{[bis-(2-ethylsulfanyl-ethyl)-amino]-methyl}-phenylamino)-methyl]-2-chloro-6-hydroxy-3-oxo-3H-xanthen-9-yl}-benzoic acid (MS1) (Figure $6$) shows good selectivity for mercury ions in buffer solution (pH = 7, 50 mM PIPES, 100 mM KCl). From Figure $7$ it is clear that, upon increasing the concentration of Hg2+ ions, the coordination between the sensor and Hg2+ disrupts the PET process, leading to an increase in the intensity of the fluorescence emission with a slight red shift to 528 nm. Sensor MS1 also shows good selectivity for Hg2+ ions over other cations of interest, as shown by the right-hand bars in Figure $8$; moreover, it has good resistance to interference from other cations when detecting Hg2+ ions in mixed solutions, with the exception of Cu2+ ions, as shown by the dark bars in Figure $8$.

ICT Fluorescence Sensor
2,2',2'',2'''-(3-(Benzo[d]thiazol-2-yl)-2-oxo-2H-chromene-6,7-diyl)bis(azanetriyl)tetrakis(N-(2-hydroxyethyl)acetamide) (RMS) (Figure $9$) has been shown to be an ICT fluorescence sensor. From Figure $10$ it is clear that, with a gradual increase in the concentration of Hg2+ ions, the fluorescence emission spectra show a significant blue shift of about 100 nm, from 567 to 475 nm, in the presence of 40 equivalents of Hg2+ ions. The fluorescence change arises from the coexistence of two electron-rich aniline nitrogen atoms in the electron-donating receptor moiety, which prevents the Hg2+ ion from being ejected from both of them simultaneously in the excited ICT fluorophore.
Sensor RMS also shows good selectivity over other cations of interest. As shown in Figure $11$, only Hg2+ ions modulate the fluorescence of RMS in a neutral buffered aqueous solution.

Excimer Fluorescence Sensor
(NE,N'E)-2,2'-(Ethane-1,2-diylbis(oxy))bis(N-(pyren-4-ylmethylene)aniline) (BA) (Figure $12$) is an excimer fluorescence sensor. As shown in Figure $13$, in the absence of mercury ions BA shows only weak monomer fluorescence emission in a mixture of HEPES-CH3CN (80:20, v/v, pH 7.2). Upon increasing the concentration of mercury ions in the BA solution, a strong excimer fluorescence emission appears at 462 nm and grows with the mercury ion concentration. From Figure $14$ it is clear that BA shows good selectivity for mercury ions; moreover, it has good resistance to interference when detecting mercury ions in mixed solutions.

FRET Fluorescence Sensor
The calix[4]arene derivative bearing two pyrene and rhodamine fluorophores (CPR) (Figure $15$) is a characteristic FRET fluorescence sensor. A fluorescence titration experiment of CPR (10.0 μM) with Hg2+ ions was carried out in CHCl3/CH3CN (50:50, v/v) with excitation at 343 nm. As shown in Figure $16$, upon gradually increasing the concentration of Hg2+ ions in the CPR solution, an increased fluorescence emission of the ring-opened rhodamine at 576 nm was observed, with a concomitantly declining excimer emission of pyrene at 470 nm; moreover, an isosbestic point centered at 550 nm appeared. This change in the fluorescence emission demonstrates that energy is transferred from the pyrene excimer to the rhodamine, triggered by the Hg2+ ions. Figure $17$ shows that CPR has good resistance to interference from other cations of interest when detecting Hg2+ ions, though Pb2+ ions interfere slightly in this process.
Introduction
Energy-dispersive X-ray spectroscopy (EDX or EDS) is an analytical technique used to probe the composition of solid materials. Several variants exist, but they all rely on exciting electrons near the nucleus, causing more distant electrons to drop energy levels to fill the resulting "holes." Each element emits a different set of X-ray frequencies as its vacated lower-energy states are refilled, so measuring these emissions can provide both qualitative and quantitative information about the near-surface makeup of the sample. However, accurate interpretation of this data depends on the availability of high-quality standards, and technical limitations can compromise the resolution.

Physical Underpinnings
In the quantum mechanical model of the atom, an electron's energy state is defined by a set of quantum numbers. The principal quantum number, n, provides the coarsest description of the electron's energy level, and all the sublevels that share the same principal quantum number are sometimes said to comprise an energy "shell." Instead of describing the lowest-energy shell as the "n = 1 shell," it is more common in spectroscopy to use alphabetical labels: the K shell has n = 1, the L shell has n = 2, the M shell has n = 3, and so on. Subsequent quantum numbers divide the shells into subshells: one for K, three for L, and five for M. Increasing principal quantum numbers correspond to increasing average distance from the nucleus and increasing energy (Figure $1$). An atom's core shells are those with lower principal quantum numbers than the highest occupied shell, or valence shell. Transitions between energy levels follow the law of conservation of energy: excitation of an electron to a higher energy state requires an input of energy from the surroundings, and relaxation to a lower energy state releases energy to the surroundings. One of the most common and useful ways energy can be transferred into and out of an atom is by electromagnetic radiation. Core-shell transitions correspond to radiation in the X-ray portion of the spectrum; however, because the core shells are normally full by definition, these transitions are not usually observed. X-ray spectroscopy uses a beam of electrons or high-energy radiation (see instrument variations, below) to excite core electrons to high energy states, creating a low-energy vacancy in the atom's electronic structure. This leads to a cascade of electrons from higher energy levels until the atom regains a minimum-energy state. Due to conservation of energy, the electrons emit X-rays as they transition to lower energy states, and it is these X-rays that are measured in X-ray spectroscopy. The energy transitions are named using the letter of the shell where ionization first occurred, a Greek letter denoting the group of lines the transition belongs to in order of decreasing importance, and a numeric subscript ranking the peak's intensity within that group. Thus, the most intense peak resulting from ionization in the K shell would be Kα1 (Figure $2$). Since each element has a different nuclear charge, the energies of the core shells and, more importantly, the spacings between them vary from one element to the next. While not every peak in an element's spectrum is exclusive to that element, there are enough characteristic peaks to be able to determine the composition of the sample, given sufficient resolving power.
Instrumentation and Sample Preparation
Instrument Variations
There are two common methods for exciting the core electrons off the surface atoms. The first is to use a high-energy electron beam like the one in a scanning electron microscope (SEM). The beam is produced by an electron gun, in which electrons emitted thermionically from a hot cathode are guided down the column by an electric field and focused by a series of negatively charged "lenses." X-rays emitted by the sample strike a lithium-drifted silicon p-i-n junction plate; this promotes electrons in the plate into the conduction band, inducing a voltage proportional to the energy of the impacting X-ray, which generally falls between about 1 and 10 keV. The detector is cooled to liquid nitrogen temperatures to reduce electronic noise from thermal excitations. It is also possible to use X-rays to excite the core electrons to the point of ionization. In this variation, known as energy-dispersive X-ray fluorescence analysis (EDXRFA or XRF), the electron column is replaced by an X-ray tube, and the X-rays emitted by the sample in response to the bombardment are called secondary X-rays; the two variants are otherwise identical. Regardless of the excitation method, subsequent interactions between the emitted X-rays and the sample can lead to poor resolution in the X-ray spectrum, producing a Gaussian-like curve instead of a sharp peak. Indeed, this spreading of energy within the sample, combined with the penetration of the electron or X-ray beam, leads to the analysis of a roughly 1 µm3 volume instead of only the surface features. Peak broadening can lead to overlapping peaks and a generally misleading spectrum. In cases where a normal EDS spectrum is inadequately resolved, a technique called wavelength-dispersive X-ray spectroscopy (WDS) can be used. The required instrument is very similar to the ones discussed above and can use either excitation method. The major difference is that instead of having the X-rays emitted by the sample hit the detector directly, they first encounter an analytical crystal of known lattice dimensions. Bragg's law predicts that the strongest reflections off the crystal will occur for wavelengths such that the path difference between rays reflecting from consecutive layers in the lattice is equal to an integral number of wavelengths. This is represented mathematically as \ref{1}, where n is an integer, λ is the wavelength of the impinging light, d is the distance between layers in the lattice, and θ is the angle of incidence; the relevant variables are labeled in Figure $3$. $n\lambda \ =\ 2d\ \sin\ \theta \label{1}$ By moving the crystal and the detector around the Rowland circle, the spectrometer can be tuned to examine specific wavelengths (\ref{1}); a short worked example of this calculation is given at the end of this section. Generally, an initial scan across all wavelengths is taken first, and then the instrument is programmed to examine more closely the wavelengths that produced strong peaks. The resolution available with WDS is about an order of magnitude better than with EDS because the analytical crystal helps filter out the noise of subsequent, non-characteristic interactions. For clarity, "X-ray spectroscopy" will be used below to refer to all of the technical variants just discussed, and points made about EDS will hold true for XRF unless otherwise noted.
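As a quick illustration of how Equation \ref{1} is used to tune a WDS spectrometer, the sketch below computes the first-order (n = 1) Bragg angle for a given analyzing crystal. The LiF (200) interplanar spacing (d ≈ 0.2013 nm) and the Cu Kα1 wavelength (λ ≈ 0.15406 nm) are standard literature values chosen purely for illustration; a real spectrometer scans θ and converts each strong reflection back to a wavelength in the same way.

```python
import math

def bragg_angle_deg(wavelength_nm, d_nm, n=1):
    """Angle theta (degrees) satisfying n*lambda = 2*d*sin(theta)."""
    s = n * wavelength_nm / (2 * d_nm)
    if s > 1:
        raise ValueError("No diffraction: n*lambda exceeds 2d.")
    return math.degrees(math.asin(s))

# LiF (200) analyzing crystal, d = 0.2013 nm; Cu K-alpha1, lambda = 0.15406 nm.
theta = bragg_angle_deg(0.15406, 0.2013)
print(f"theta = {theta:.2f} degrees")   # ~22.5: where the crystal/detector sit
```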
Sample Preparation
Compared with some analytical techniques, the sample preparation required for X-ray spectroscopy or any of the related methods just discussed is trivial. The sample must be stable under vacuum, since the sample chamber is evacuated to prevent the atmosphere from interfering with the electron beam or X-rays. It is also advisable to have the surface as clean as possible: X-ray spectroscopy is a near-surface technique, so while it should analyze the desired material for the most part regardless, any grime on the surface will throw off the composition calculations. Simple qualitative readings can be obtained from a solid of any thickness, as long as it fits in the machine, but for reliable quantitative measurements the sample should be shaved as thin as possible.

Data Interpretation
Qualitative analysis, the determination of which elements are present in the sample but not necessarily the stoichiometry, relies on empirical standards. The energies of the commonly used core-shell transitions have been tabulated for all the natural elements. Since combinations of elements can behave differently than a single element alone, standards with compositions as similar as possible to the suspected makeup of the sample are also employed. To determine the sample's composition, the peaks in the spectrum are matched with peaks from the literature or from standards. Quantitative analysis, the determination of the sample's stoichiometry, needs resolution high enough that the ratio of the number of counts at each characteristic frequency gives the ratio of those elements in the sample. It takes about 40,000 counts for the spectrum to attain a 2σ precision of ±1% (see the counting-statistics note at the end of this section). It is important to note, however, that the result is not necessarily the same as the empirical formula, since not all elements are visible. Spectrometers with a beryllium window between the sample and the detector typically cannot detect anything lighter than sodium; spectrometers equipped with polymer-based windows can quantify elements heavier than beryllium. Either way, hydrogen cannot be observed by X-ray spectroscopy. X-ray spectra are presented with energy in keV on the x-axis and the number of counts on the y-axis. The EDX spectra of biotite and NIST glass K309 are shown as examples (Figure $5$ and Figure $6$, respectively). Biotite is a mineral similar to mica with the approximate chemical formula K(Mg,Fe)3AlSi3O10(F,OH)2. Strong peaks for magnesium, aluminum, silicon, potassium, and iron can be seen in the spectrum. The lack of visible hydrogen is expected, and the absence of oxygen and fluorine peaks suggests the instrument had a beryllium window. The titanium peak is small and unexpected, so it may be present only in trace amounts. K309 is a glass developed by the National Institute of Standards and Technology. The spectrum shows that it contains significant amounts of silicon, aluminum, calcium, oxygen, iron, and barium; the large peak at the far left is the carbon signal from the carbon substrate on which the glass was placed.
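The 40,000-count figure quoted above follows from Poisson counting statistics, under the standard assumption that the uncertainty in N accumulated counts is $\sqrt{N}$: $\frac{2\sigma}{N} = \frac{2\sqrt{N}}{N} = \frac{2}{\sqrt{N}} = \frac{2}{\sqrt{40,000}} = 1\% \nonumber$ In other words, a peak must accumulate roughly 40,000 counts before its intensity, and hence any elemental ratio derived from it, can be trusted to about ±1% at the 2σ level.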
Limitations
As has just been discussed, X-ray spectroscopy is incapable of seeing elements lighter than boron, which is a problem given the abundance of hydrogen in natural and man-made materials. The related techniques X-ray photoelectron spectroscopy (XPS) and Auger spectroscopy are able to detect Li and Be, but are likewise unable to measure hydrogen. X-ray spectroscopy also relies heavily on standards for peak identification. Because a combination of elements can have noticeably different properties from the individual constituent elements in terms of X-ray fluorescence or absorption, it is important to use a standard as compositionally similar to the sample as possible. Naturally, this is more difficult to accomplish when examining new materials, and there is always a risk of the structure of the sample being appreciably different than expected. The energy-dispersive variants of X-ray spectroscopy sometimes have a hard time distinguishing between emissions that are very near each other in energy, or distinguishing peaks from trace elements from background noise; fortunately, the wavelength-dispersive variants are much better at both. The rough, stepwise curve in Figure $7$ represents the EDS spectrum of molybdenite, a mineral with the chemical formula MoS2. Broadened peaks make it difficult to distinguish the molybdenum signals from the sulfur ones. Because WDS can select specific wavelengths, it has much better resolution and can pinpoint the separate peaks more accurately. Similarly, the trace silicon signal in the EDS spectrum of the nickel-aluminum-manganese alloy in Figure $8$a is barely distinguishable as a bump in the baseline, but the WDS spectrum in Figure $8$b clearly picks it up.
XPS Analysis of Modified Substances
Introduction
X-ray photoelectron spectroscopy (XPS), also known as electron spectroscopy for chemical analysis (ESCA), is one of the most widely used surface techniques in materials science and chemistry. It allows the determination of the atomic composition of a sample in a non-destructive manner, as well as other chemical information, such as binding energies, oxidation states, and speciation. The sample under study is subjected to irradiation by a high-energy X-ray source, and the photoelectrons that can escape originate from only the top 5 - 20 Å of the sample, allowing for surface-specific, rather than bulk, chemical analysis. As an atom absorbs the X-rays, the energy of the X-ray causes a K-shell electron to be ejected, as illustrated in Figure $1$; the K shell is the lowest-energy shell of the atom. The ejected electron has a kinetic energy (KE) that is related to the energy of the incident beam (hν), the electron binding energy (BE), and the work function of the spectrometer (φs) (\ref{1}). Thus, the binding energy of the electron can be calculated. $BE\ =\ h\nu \ -\ KE\ -\ \phi _{s} \label{1}$ Table $1$ shows the binding energy of the ejected electron, and the orbital from which the electron is ejected, which is characteristic of each element. The number of electrons detected with a specific binding energy is proportional to the number of corresponding atoms in the sample, which then provides the percentage of each element in the sample.

Element (orbital): Binding Energy (eV)
Carbon (C) (1s): 284.5 - 285.1
Nitrogen (N) (1s): 396.1 - 400.5
Oxygen (O) (1s): 526.2 - 533.5
Silicon (Si) (2p): 98.8 - 99.5
Sulfur (S) (2p3/2): 164.0 - 164.3
Iron (Fe) (2p3/2): 706.8 - 707.2
Gold (Au) (4f7/2): 83.8 - 84.2
Table $1$ Binding energies for select elements in their elemental forms.

The chemical environment and oxidation state of the atom can be determined through the shifts of the peaks within the expected range (Table $2$). If the electrons are shielded, then it is easier (i.e., it requires less energy) to remove them from the atom, and the binding energy is low; the corresponding peaks shift to lower energy within the expected range. If the core electrons are not shielded as much, as when the atom is in a high oxidation state, then just the opposite occurs. Similar effects occur with electronegative or electropositive elements in the chemical environment of the atom in question. By synthesizing compounds with known structures, patterns can be established using XPS, and the structures of unknown compounds can then be determined.

Compound: Binding Energy (eV)
COH (C 1s): 286.01 - 286.8
CHF (C 1s): 287.5 - 290.2
Nitride (N 1s): 396.2 - 398.3
Fe2O3 (from O 1s): 529.5 - 530.2
Fe2O3 (from Fe 2p3/2): 710.7 - 710.9
FeO (from Fe 2p3/2): 709.1 - 709.5
SiO2 (from O 1s): 532.5 - 533.3
SiO2 (from Si 2p): 103.2 - 103.9
Table $2$ Binding energies of electrons in various compounds.
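Since Equation \ref{1} converts each measured kinetic energy into a binding energy that can be matched against tabulated ranges such as those in Table $1$, a survey-spectrum peak can be assigned with a simple lookup. The sketch below assumes a monochromatic Al Kα source (hν = 1486.6 eV) and an illustrative spectrometer work function of 4.5 eV; the work function is instrument-specific, so treat that value (and the lookup itself) as a toy model rather than as instrument software.

```python
# Sketch: assign XPS peaks by binding energy (elemental ranges from Table 1).
H_NU = 1486.6   # eV, Al K-alpha excitation energy
PHI_S = 4.5     # eV, spectrometer work function (instrument-specific; assumed)

TABLE_1 = [            # (assignment, binding-energy range in eV)
    ("C 1s",     (284.5, 285.1)),
    ("N 1s",     (396.1, 400.5)),
    ("O 1s",     (526.2, 533.5)),
    ("Si 2p",    (98.8,   99.5)),
    ("S 2p3/2",  (164.0, 164.3)),
    ("Fe 2p3/2", (706.8, 707.2)),
    ("Au 4f7/2", (83.8,   84.2)),
]

def assign(kinetic_energy_eV):
    be = H_NU - kinetic_energy_eV - PHI_S   # Equation (1): BE = hv - KE - phi_s
    hits = [name for name, (lo, hi) in TABLE_1 if lo <= be <= hi]
    return be, (hits or ["unassigned"])

# A peak at KE = 1197.3 eV gives BE = 284.8 eV, i.e., C 1s.
be, hits = assign(1197.3)
print(f"BE = {be:.1f} eV, candidates: {hits}")
```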
Sample preparation is important for XPS. Although the technique was originally developed for use with thin, flat films, XPS can also be used with powders, which require a different method of sample preparation. One of the more common methods is to press the powder into a high-purity indium foil. A different approach is to dissolve the powder in a quickly evaporating solvent, if possible, and drop-cast the solution onto a substrate. Using sticky carbon tape to adhere the powder to a disc, or pressing the sample into a tablet, are options as well. Each of these sample preparations is designed to make the powder compact, as powder not attached to the substrate will contaminate the vacuum chamber. The sample also needs to be completely dry; if it is not, solvent present in the sample can destroy the necessary high vacuum and contaminate the instrument, affecting the data of the current and future samples.

Analyzing Functionalized Surfaces
Depth Profiling
When analyzing a sample by XPS (Figure $2$a), questions often arise that deal with the layering of the sample. For example, is the sample homogeneous, with a consistent composition throughout, or layered, with certain elements or components residing in specific places (Figure $2$b,c)? A simple way to answer this question is to perform a depth analysis: by sputtering away the sample, data can be collected at different depths within the sample. It should be noted that sputtering is a destructive process. Within the XPS instrument, the sample is subjected to an Ar+ ion beam that etches the surface. This creates a hole in the surface, allowing the X-rays to reach layers that would not otherwise have been analyzed. However, it should be realized that different surfaces and layers may be etched at different rates, meaning the same depth of etching does not occur in the same amount of time, depending on the element or compound currently being sputtered. It is important to note that hydrocarbons sputter very easily and can contaminate the high vacuum of the XPS instrument, and thus later samples. They can also migrate to a recently sputtered (and hence unfunctionalized) surface after a short amount of time, so it is imperative to sputter and take a measurement quickly; otherwise the sputtering may appear to have had no effect.

Functionalized Films
When running XPS, it is important that the sample is prepared correctly; if it is not, there is a high chance of ruining not only the data acquisition, but the instrument as well. With organic functionalization, it is very important to ensure that the surface functional group (or, as is the case with many functionalized nanoparticles, the surfactant) is immobile on the surface of the substrate. If it is removed easily in the vacuum chamber, it will not only give erroneous data, but will also contaminate the machine, which may then contaminate future samples. This is particularly important when studying the thiol functionalization of gold samples, as thiol groups bond strongly with gold: any loose thiol contaminating the machine will attach itself to any gold sample subsequently placed in the instrument, providing erroneous data. Fortunately, with the above exception, preparing samples that have been functionalized is not much different from standard preparation procedures. However, methods of analysis may have to be modified in order to obtain good, consistent data. A common method for the analysis of surface-modified material is angle-resolved X-ray photoelectron spectroscopy (ARXPS), a non-destructive alternative to sputtering that relies on a series of small take-off angles to analyze the top layers of the sample, giving a better picture of the surface than standard XPS. ARXPS allows the topmost layer of atoms to be analyzed, as opposed to standard XPS, which analyzes a few layers of atoms into the sample, as illustrated in Figure $3$. ARXPS is often used to analyze surface contamination, such as oxidation, and surface modification or passivation.
Though the methodology and limitations are beyond the scope of this module, it is important to remember that, like normal XPS, ARXPS assumes that homogeneous layers are present in the sample, which can give erroneous data should the layers be heterogeneous.

Limitations of XPS
There are many limitations of XPS that arise not from the samples or their preparation, but from the machine itself. One such limitation is that XPS cannot detect hydrogen or helium. This, of course, leads to a ratio of elements in the sample that is not entirely accurate, as there is always some amount of hydrogen present; it is a common fallacy to assume that the atomic percentages obtained from XPS data are completely accurate, given this presence of undetected hydrogen (Table $1$). It is possible to measure the amount of hydrogen in a sample indirectly using XPS, but it is not very accurate and has to be done in a roundabout, often time-consuming manner. If the sample contains hydrogen with a partial positive charge (i.e., OH), the sample can be washed in sodium naphthalenide (C10H8Na), which replaces this hydrogen with sodium, which can then be measured. The sodium-to-oxygen ratio obtained implies the hydrogen-to-oxygen ratio, assuming that all the hydrogen atoms have reacted. XPS can only give an average measurement, as the electrons originating lower down in the sample lose more energy passing other atoms, while the electrons at the surface retain their original kinetic energy. The electrons from lower layers can also undergo inelastic or elastic scattering, as seen in Figure $4$; this scattering may have a significant impact on data at higher angles of emission. The beam itself is also relatively wide, with the smallest widths ranging from 10 - 200 μm, so the observed composition is an average over the beam area; because of this, XPS cannot differentiate regions of different elements if those regions are smaller than the size of the beam. Sample reaction or degradation are also important considerations. Caution should be exercised when analyzing polymers: they are often chemically active, and the X-rays will provide energy that begins to degrade the polymer, altering the properties of the sample. One method found to help overcome this particular limitation is to use angle-resolved X-ray photoelectron spectroscopy (ARXPS). XPS can also reduce certain metal salts, such as Cu2+; this reduction will give peaks that indicate a certain set of properties or chemical environments when the reality could be completely different. It must also be understood that charge can build up on the surface of the sample for a number of reasons, specifically the loss of electrons during the XPS experiment. The charge on the surface will interact with the electrons escaping from the sample, affecting the data obtained. If the accumulated charge is positive, the electrons that have been knocked off will be attracted to it and slowed; the detector will register a lower kinetic energy for the electrons and thus calculate a different binding energy than the one expected, giving peaks that could be labeled with an incorrect oxidation state or chemical environment. To overcome this, the spectra must be charge referenced by one of the following methods: using the naturally occurring graphite peak as a reference, sputtering with gold and using the gold peak as a reference, or flooding the sample with the ion gun and waiting until the desired peak stops shifting.
Limitations with Surfactants and Sputtering
While it is known that sputtering is destructive, there are a few other limitations that are not often considered. As mentioned above, the beam of X-rays is relatively large, giving an average composition in the analysis, and sputtering has the same limitation. If the surfactant or the layers are not homogeneous, then when the sputtering is finished and detection begins, the analysis will show an apparently homogeneous section, due to the size of both the beam and the sputtered area, when in fact there are separate regions of different elements. The chemistry of the compounds can also be changed by sputtering, as it removes atoms that were bonded, changing the oxidation state of a metal or the hybridization of a non-metal. It can also introduce charges if the sample is non-conducting or is supported on a non-conducting surface.

Using XPS to Analyze Metal Nanoparticles
Introduction
X-ray photoelectron spectroscopy (XPS) is a surface technique developed for use with thin films. More recently, however, it has been used to analyze the chemical and elemental composition of nanoparticles. The complication with nanoparticles is that they are neither flat nor larger than the diameter of the beam, creating issues when the data obtained are taken at face value. Samples of nanoparticles will often be large aggregates of particles, which creates problems for data acquisition, as there can be a variety of cross-sections (Figure $5$). This acquisition problem is compounded by the fact that the surfactant may not completely cover the particle, as the curvature of the particle creates defects and divots. Even if it is possible to create a monolayer of particles on a support, other issues are still present: the background support will be analyzed along with the particles, due to their small size relative to the size of the beam and the depth to which it penetrates. Many other factors can introduce changes in nanoparticles and their properties; there can be probe, environmental, proximity, and sample-preparation effects. The dynamics of the particles can vary wildly depending on the reactivity of the particle itself. Sputtering can also be a problem: the beam used to sputter will be roughly the same size as, or larger than, the particles, which means that what appears in the data is not a section of one particle, but an average composition of several particles. Each of these issues needs to be taken into account, and preventative measures need to be used, so that the data are the best representation possible.

Sample Preparation
Sample preparation of nanoparticles is very important in XPS. Certain particles, such as iron oxides without surfactants, will react readily with oxygen in the air, causing the particles to gain a layer of oxygen contamination; when the particles are then analyzed, oxygen appears where it should not, and the apparent oxidation state of the metal may be changed. As shown by these particles, which call for handling, mounting, and analysis without exposure to air, knowing the reactivity of the nanoparticles in the sample is very important even before starting the analysis. If the reactivity of the nanoparticle is known, such as the reactivity of oxygen and iron, then preventative steps can be taken in sample preparation in order to obtain the best analysis possible. When preparing a sample for XPS, a powder form is often used; this preparation, however, will lead to aggregation of the nanoparticles.
If analysis is performed on such a sample, the data obtained will be an average of the composition of each nanoparticle; if the composition of a single particle is what is desired, this average composition will not be sufficient. Fortunately, there are other methods of sample preparation: samples can be supported on a substrate, which allows for the analysis of single particles. A pictorial representation of the different types of samples that can occur with nanoparticles is shown in Figure $6$.

Analysis Limitations
Nanoparticles are dynamic; their properties can change when exposed to new chemical environments, leading to new sets of applications. It is this dynamic character that makes nanoparticles so useful, and it is one of the reasons why scientists strive to understand their properties. However, it is also this dynamic ability that makes analysis difficult to do properly. Nanoparticles are easily damaged and can change properties over time or with exposure to air, light, or any other environment, chemical or otherwise. Surface analysis is often difficult because of the high rate of contamination. Once the particles are inserted into the XPS instrument, even more limitations appear.

Probe Effects
Artifacts are often introduced by the simple mechanism of conducting the analysis. When XPS is used to analyze the relatively large surface of a thin film, there is a small change in temperature as energy is transferred; the thin film, however, is large enough that this small change in energy causes no significant change to its properties. A nanoparticle is much smaller, and even a small amount of energy can drastically change the shape of the particles, in turn changing their properties and giving a much different set of data than expected. The electron beam itself can also affect how the particles are supported on a substrate. Theoretically, nanoparticles would be considered separate from each other and from any other chemical environments, such as solvents or substrates. This, however, is not possible, as the particles must be suspended in a solution or placed on a substrate when attempting analysis. The chemical environment around the particle will have some amount of interaction with the particle; this interaction will change the characteristics of the nanoparticles, such as oxidation states or partial charges, which will then shift the observed peaks. If the particles can be separated and supported on a substrate, the supporting material will also be analyzed, due to the fact that the X-ray beam is larger than the size of each individual particle; if the substrate is made of porous material, it can adsorb gases, and these will be detected along with the substrate and the particle, giving erroneous data.

Environmental Effects
Nanoparticles will often react, or at least interact, with their environments. If the particles are highly reactive, there will often be induced charges in the near environment of the particle. Gold nanoparticles have a well-documented ability to undergo plasmon interactions with each other. When XPS is performed on these particles, the charges change the kinetic energy of the electrons, shifting the apparent binding energy. When working with nanoparticles that are well known for creating charges, it is often best to use an ion gun or a coating of gold; the purpose of the ion gun or the gold coating is to try to move the peaks back to their appropriate energies. If the peaks do not move, then the chance that there is no induced charge is high, and the data obtained are thus fairly reliable.
Proximity Effects
The proximity of the particles to each other will cause interactions between them. If there is a charge accumulation near one particle, and that particle is in close proximity to other particles, the charge will become enhanced as it spreads, affecting the signal strength and the binding energies of the electrons. While knowledge of charge enhancement could be useful for potential applications, it is not beneficial if knowledge of the properties of individual particles is sought. Less isolated (i.e., more crowded) particles will have different properties compared to more isolated particles. A good example of this is the plasmon effect in gold nanoparticles: the closer gold nanoparticles are to each other, the more likely they are to exhibit the plasmon effect, which can change the properties of the particles, such as oxidation states and partial charges; these changes will then shift the peaks seen in XPS spectra. These proximity effects are often introduced during sample preparation, which again shows why it is important to prepare samples correctly to get the desired results.

Conclusions
Unfortunately, there is no good general procedure for all nanoparticle samples; there are too many variables within each sample to create a basic procedure. A scientist wanting to use XPS to analyze nanoparticles must first understand the drawbacks and limitations of their sample, as well as how to counteract the artifacts that will be introduced, in order to use XPS properly. One must never make the assumption that nanoparticles are flat, as this assumption will only lead to a misrepresentation of the particles. Once the curvature and stacking of the particles, as well as their interactions with each other, are taken into account, XPS can be run.
textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.13%3A_X-ray_Photoelectron_Spectroscopy.txt
Basic Principles Auger electron spectroscopy (AES) is one of the most commonly employed surface analysis techniques. It uses the energy of emitted electrons to identify the elements present in a sample, similar to X-ray photoelectron spectroscopy (XPS). The main difference is that XPS uses an X-ray beam to eject an electron while AES uses an electron beam to eject an electron. In AES, the sampling depth depends on the escape energy of the electrons; it is not a function of the excitation source as in XPS. In AES, the collection depth is limited to 1 - 5 nm due to the small escape depth of electrons, which permits analysis of the first 2 - 10 atomic layers. In addition, a typical analysis spot size is roughly 10 nm. A representative AES spectrum, illustrating the number of emitted electrons, N, as a function of kinetic energy, E, in direct form (red) and in differentiated form (black), is shown in Figure $1$. Like XPS, AES measures the kinetic energy (Ek) of an electron to determine its binding energy (Eb). The binding energy is inversely related to the kinetic energy and can be found from \ref{1}, where hν is the energy of the incident photon and ΔΦ is the difference in work function between the sample and the detector material. $E_{b}\ =\ h\nu \ -\ E_{k}\ +\ \Delta \Phi \label{1}$ Since Eb depends on the element and the electronic environment of the nucleus, AES can be used to distinguish elements and their oxidation states. For instance, the energy required to remove an electron from Fe3+ is greater than that for Fe0. Therefore, the Fe3+ peak will have a lower Ek than the Fe0 peak, effectively distinguishing the oxidation states. Auger Process An Auger electron comes from a cascade of events. First, an electron beam arrives with sufficient energy to eject a core electron, creating a vacancy (see Figure $2$ a). Typical energies of the primary electrons range from 3 - 30 keV. A secondary electron (imaging electron) of higher energy drops down to fill the vacancy (see Figure $2$ b) and releases sufficient energy to eject a tertiary electron (the Auger electron) from a higher shell (see Figure $2$ c). The shells from which the electrons move, from lowest to highest energy, are described as the K shell, L shell, and M shell. This nomenclature is related to quantum numbers: explicitly, the K shell represents the 1s orbital, the L shell represents the 2s and 2p orbitals, and the M shell represents the 3s, 3p, and 3d orbitals. The cascade of events typically begins with the ionization of a K shell electron, followed by the movement of an L shell electron into the K shell vacancy. Then, either an L shell electron or an M shell electron is ejected. Which peak is prevalent depends on the element, but often both peaks will be present. The peak seen in the spectrum is labeled according to the shells involved in the movement of the electrons. For example, an electron ejected from a gold atom could be labeled as Au KLL or Au KLM. The intensity of the peak depends on the amount of material present, while the peak position is element dependent. Auger transitions characteristic of each element can be found in the literature. Auger transitions of the first forty detectable elements are listed in Table $1$.
Atomic Number Element AES transition Kinetic Energy of Transition (eV) 3 Li KLL 43 4 Be KLL 104 5 B KLL 179 6 C KLL 272 7 N KLL 379 8 O KLL 508 9 F KLL 647 11 Na KLL 990 12 Mg KLL 1186 13 Al LMM 68 14 Si LMM 92 15 P LMM 120 16 S LMM 152 17 Cl LMM 181 19 K KLL 252 20 Ca LMM 291 21 Sc LMM 340 22 Ti LMM 418 23 V LMM 473 24 Cr LMM 529 25 Mn LMM 589 26 Fe LMM 703 27 Co LMM 775 28 Ni LMM 848 29 Cu LMM 920 30 Zn LMM 994 31 Ga LMM 1070 32 Ge LMM 1147 33 As LMM 1228 34 Se LMM 1315 35 Br LMM 1376 39 Y MNN 127 40 Zr MNN 147 41 Nb MNN 167 42 Mo MNN 186 Table $1$ Selected AES transitions and their corresponding kinetic energies. Adapted from H. J. Mathieu in Surface Analysis: The Principal Techniques, Second Edition, Ed. J. C. Vickerman, Wiley-VCH, Weinheim (2011). Instrumentation Important elements of an Auger spectrometer include a vacuum system, an electron source, and a detector. AES must be performed at pressures less than 10⁻³ pascal (Pa) to keep residual gases from adsorbing to the sample surface. This can be achieved using an ultra-high-vacuum system with pressures from 10⁻⁸ to 10⁻⁹ Pa. Typical electron sources include tungsten filaments with an electron beam diameter of 3 - 5 μm, LaB6 electron sources with a beam diameter of less than 40 nm, and Schottky barrier filaments with a 20 nm beam diameter and high beam current density. Two common detectors are the cylindrical mirror analyzer and the concentric hemispherical analyzer discussed below. Notably, concentric hemispherical analyzers typically have better energy resolution. Cylindrical Mirror Analyzer (CMA) A CMA is composed of an electron gun, two cylinders, and an electron detector (Figure $3$). The operation of a CMA involves an electron gun being directed at the sample. An ejected electron then enters the space between the inner and outer cylinders (IC and OC). The inner cylinder is at ground potential, while the outer cylinder’s potential is proportional to the kinetic energy of the electron. Due to its negative potential, the outer cylinder deflects the electron towards the electron detector. Only electrons within the solid angle cone are detected. The resulting signal is proportional to the number of electrons detected as a function of kinetic energy. Concentric Hemispherical Analyzer (CHA) A CHA contains three parts (Figure $4$): 1. A retarding and focusing input lens assembly 2. An inner and outer hemisphere (IH and OH) 3. An electron detector Electrons ejected from the surface enter the input lens, which focuses the electrons and retards their energy for better resolution. Electrons then enter the hemispheres through an entrance slit. A potential difference is applied on the hemispheres so that only electrons with a small range of energy differences reach the exit. Finally, an electron detector analyzes the electrons. Applications AES has widespread use owing to its ability to analyze small spot sizes with diameters from 5 μm down to 10 nm depending on the electron gun. For instance, AES is commonly employed to study film growth and surface-chemical composition, as well as grain boundaries in metals and ceramics. It is also used for quality control surface analyses in integrated circuit production lines due to short acquisition times. Moreover, AES is used for areas that require high spatial resolution, which XPS cannot achieve.
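As a simple illustration of how Table $1$ is used in practice, the sketch below matches a measured Auger kinetic energy against a hand-copied subset of the table; the tolerance value is an arbitrary assumption, and a real analysis would consult the full tabulated transitions.

```python
# Minimal sketch: identify an element from a measured Auger kinetic energy by
# nearest-match lookup against a few entries from Table 1 (values in eV).
AES_TRANSITIONS = {
    "C (KLL)": 272, "N (KLL)": 379, "O (KLL)": 508,
    "Al (LMM)": 68, "Si (LMM)": 92, "Fe (LMM)": 703,
    "Ni (LMM)": 848, "Cu (LMM)": 920,
}

def identify(kinetic_energy_ev, tolerance_ev=10.0):
    """Return table entries within tolerance of the measured energy."""
    return [label for label, e in AES_TRANSITIONS.items()
            if abs(e - kinetic_energy_ev) <= tolerance_ev]

print(identify(270.0))  # ['C (KLL)'] -- carbon contamination is common
```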
AES can also be used in conjunction with transmission electron microscopy (TEM) and scanning electron microscopy (SEM) to obtain a comprehensive understanding of microscale materials, both chemically and structurally. As an example of combining techniques to investigate microscale materials, Figure $5$ shows the characterization of a single wire from a Sn-Nb multi-wire alloy. Figure $5$ a is an SEM image of the single wire and Figure $5$ b is a schematic depicting the distribution of Nb and Sn within the wire. Point analysis was performed along the length of the wire to determine the percent concentrations of Nb and Sn. AES is widely used for depth profiling. Depth profiling allows the elemental distributions of layered samples 0.2 - 1 μm thick to be characterized beyond the escape depth limit of an electron. Varying the incident and collection angles and the primary beam energy controls the analysis depth. In general, the depth resolution decreases with the square root of the sample thickness. Notably, in AES it is possible to simultaneously sputter and collect Auger data for depth profiling. The sputtering time indicates the depth and the intensity indicates elemental concentrations. Since the sputtering process does not affect the ejection of the Auger electron, helium or argon ions can be used to sputter the surface and create the trench while collecting Auger data at the same time. The depth profile does not have the problem of diffusion of hydrocarbons into the trenches. Thus, AES is better for depth profiles of reactive metals (e.g., gold or any metal or semiconductor). Yet, care should be taken because sputtering can mix up different elements, changing the sample composition. Limitations While AES is a very valuable surface analysis technique, there are limitations. Because AES is a three-electron process, elements with fewer than three electrons cannot be analyzed. Therefore, hydrogen and helium cannot be detected. Nonetheless, detection is better for lighter elements with fewer transitions. The numerous transition peaks in heavier elements can cause peak overlap, as can the increased peak width of higher energy transitions. Detection limits of AES include 0.1 - 1% of a monolayer, 10⁻¹⁶ - 10⁻¹⁵ g of material, and 10¹² - 10¹³ atoms/cm². Another limitation is sample destruction. Although focusing of the electron beam can improve resolution, the high-energy electrons can destroy the sample. To limit destruction, beam current densities greater than 1 mA/cm² should be avoided. Furthermore, charging of the electron beam on insulating samples can deteriorate the sample and result in high-energy peak shifts or the appearance of large peaks.
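As a footnote to the depth-profiling discussion above, the time-to-depth conversion it implies can be sketched as follows; the constant sputter rate and its value are assumptions for illustration, and in practice the rate is calibrated against a film of known thickness.

```python
# Minimal sketch of converting an AES sputter profile's time axis to depth,
# assuming a constant, separately calibrated sputter rate (value hypothetical).
SPUTTER_RATE_NM_PER_MIN = 2.5  # assumed calibration from a known film

def depth_nm(sputter_time_min):
    """Depth reached after a given sputtering time, in nm."""
    return SPUTTER_RATE_NM_PER_MIN * sputter_time_min

times = [0, 2, 4, 8]  # minutes of sputtering at which Auger data were collected
print([depth_nm(t) for t in times])  # [0.0, 5.0, 10.0, 20.0] nm
```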
textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.14%3A_Auger_Electron_Spectroscopy.txt
Introduction One of the main research interests of the semiconductor industry is to improve the performance of semiconducting devices and to construct new materials with reduced size or thickness that have potential applications in transistors and microelectronic devices. However, the most significant challenge regarding thin film semiconductor materials is measurement. Properties such as the thickness, composition at the surface, and contamination are all critical parameters of the thin films. To address these issues, we need an analytical technique which can measure accurately through the depth of the semiconductor surface without destruction of the material. Rutherford backscattering spectroscopy (RBS) is a unique analysis method for this purpose: it can give us in-depth profiling information in a non-destructive manner. However, X-ray photoelectron spectroscopy (XPS), energy dispersive X-ray analysis (EDX), and Auger electron spectroscopy (AES) are also able to study the depth profile of semiconductor films. Table $1$ compares these techniques with RBS. Method Destructive Incident Particle Outgoing Particle Detection Limit Depth Resolution RBS No Ion Ion ~1 10 nm XPS Yes X-ray photon Electron ~0.1-1 ~1 µm EDX Yes Electron X-ray photon ~0.1 1.5 µm Auger Yes Electron Electron ~0.1-1 1.5 nm Table $1$ Comparison between different thin film analysis techniques. Basic Concept of Rutherford Backscattering Spectroscopy At a basic level, RBS exploits the electrostatic repulsion between high energy incident ions and target nuclei. The specimen under study is bombarded with a monoenergetic beam of 4He+ particles and the backscattered particles are detected by the detector-analysis system, which measures the energies of the particles. During the collision, energy is transferred from the incident particle to the target specimen atoms; the change in energy of the scattered particle depends on the masses of the incoming and target atoms. For an incident particle of mass M1 with energy E0 and a target atom of mass M2, the residual energy E of the particle scattered at angle θ can be expressed as: $E\ =\ k^{2}E_{0} \label{1}$ $k\ =\ \frac{M_{1}\cos \theta \ +\ \sqrt{M_{2}^{2}\ -\ M_{1}^{2}\sin ^{2}\theta }}{M_{1}\ +\ M_{2}} \label{2}$ where k is the kinematic scattering factor, which determines the energy ratio of the particle before and after the collision. Since k depends on the masses of the incident particle and target atom and the scattering angle, the energy of the scattered particle is also determined by these three parameters. A simplified layout of a backscattering experiment is shown in Figure $1$. The probability of a scattering event can be described by the differential scattering cross section of a target atom for scattering an incoming particle through the angle θ into a differential solid angle as follows, $\frac{d\sigma _{R}}{d\Omega }\ =\ \left( \frac{zZe^{2}}{2E_{0}\sin ^{2}\theta } \right) ^{2}\ \frac{\left[ \cos \theta \ +\ \sqrt{1\ -\ \left( \frac{M_{1}}{M_{2}}\sin \theta \right) ^{2}} \right] ^{2}}{\sqrt{1\ -\ \left( \frac{M_{1}}{M_{2}}\sin \theta \right) ^{2}}} \label{3}$ where dσR is the effective differential cross section for the scattering of a particle. The above equation may look complicated, but it conveys the message that the probability of a scattering event can be expressed as a function of the scattering cross section, which scales with the product zZ when a particle with charge ze approaches a target atom with charge Ze.
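Equations \ref{1} and \ref{2} are straightforward to evaluate numerically. The following minimal sketch computes the kinematic factor for 2 MeV He ions scattering from Si through 170°, reproducing the K ≈ 0.56 used in the worked example later in this chapter; the masses are approximate values in atomic mass units.

```python
import math

def kinematic_factor(m1, m2, theta_deg):
    """k from Equation 2; the energy ratio E/E0 is k**2 (Equation 1)."""
    t = math.radians(theta_deg)
    return (m1 * math.cos(t) + math.sqrt(m2**2 - m1**2 * math.sin(t)**2)) / (m1 + m2)

# 2 MeV 4He ions scattered through 170 degrees by Si (M2 ~ 28 u):
k2 = kinematic_factor(4.0, 28.0, 170.0) ** 2
print(round(k2, 2), round(2.0 * k2, 2))  # 0.56 and 1.13 MeV out (text quotes ~1.12)
```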
Helium ions not scattered at the surface lose energy as they traverse the solid. They lose energy due to interaction with electrons in the target. After the collision, the He particles lose further energy on their way out to the detector. We need to know two quantities to measure the energy loss: the distance Δt that the particles penetrate into the target and the energy loss ΔE over this distance (Figure $2$). The rate of energy loss, or stopping power, is a critical component in backscattering experiments as it determines the depth profile in a given experiment. In thin film analysis, it is convenient to assume that the total energy loss ΔE at depth t is simply proportional to t for a given target. This assumption allows a simple derivation of the energy loss in backscattering, as a more complete analysis requires many numerical techniques. In the constant dE/dx approximation, the total energy loss becomes linearly related to depth t (Figure $3$). Experimental Set-up The apparatus for Rutherford backscattering analysis of a thin solid surface typically consists of three components: 1. A source of helium ions. 2. An accelerator to energize the helium ions. 3. A detector to measure the energy of scattered ions. There are two types of accelerator/ion source available. In a single-stage accelerator, the He+ source is placed within an insulating gas-filled tank (Figure $4$). It is difficult to install a new ion source when it is exhausted in this type of accelerator. Moreover, it is also difficult to achieve particles with energies much above 1 MeV, since it is difficult to apply very high voltages in this type of system. Another variation is the “tandem accelerator.” Here the ion source is at ground potential and produces negative ions. The positive terminal is located at the center of the acceleration tube (Figure $5$). Initially the negative ion is accelerated from ground to the terminal. At the terminal, a two-electron stripping process converts the He- to He++. The positive ions are then further accelerated toward ground due to coulombic repulsion from the positive terminal. This arrangement can achieve highly accelerated He++ ions (~2.25 MeV) with a moderate voltage of 750 kV. Particles that are backscattered by surface atoms of the bombarded specimen are detected by a surface barrier detector. The surface barrier detector is a thin layer of p-type silicon on an n-type substrate, resulting in a p-n junction. When the scattered ions reach the detector and exchange energy with the electrons on its surface, electrons are promoted from the valence band to the conduction band. Thus, each exchange of energy creates electron-hole pairs. The energy of the scattered ions is detected by simply counting the number of electron-hole pairs. The energy resolution of the surface barrier detector in a standard RBS experiment is 12 - 20 keV. The surface barrier detector is generally set between 90° and 170° to the incident beam. Films are usually set normal to the incident beam. A simple layout is shown in Figure $6$. Depth Profile Analysis As stated earlier, it is a good approximation in thin film analysis that the total energy loss ΔE is proportional to depth t. With this approximation, we can derive the relation between the energy width ΔE of the signal from a film of thickness Δt as follows, $\Delta E\ =\ \Delta t \left( k\ \frac{dE}{dx_{in}}\ +\ \frac{1}{\cos \theta }\ \frac{dE}{dx_{out}} \right) \label{4}$ where θ is the lab scattering angle.
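A minimal sketch of how Equation \ref{4} can be inverted for a film thickness is given below; it uses the magnitude of 1/cos θ for the outgoing path length and the same stopping powers as the worked Si-on-Nb example that follows.

```python
import math

def film_thickness_angstrom(delta_e_ev, k, dedx_in, dedx_out, theta_deg):
    """Invert Equation 4 for the film thickness (stopping powers in eV/Angstrom).

    The magnitude of 1/cos(theta) is used for the outgoing path length,
    following the worked example in the text.
    """
    denom = k * dedx_in + abs(1.0 / math.cos(math.radians(theta_deg))) * dedx_out
    return delta_e_ev / denom

# Si-on-Nb numbers used below: K = 0.56, 24.6 and 26 eV/A, Delta-E = 133.3 keV
print(round(film_thickness_angstrom(133.3e3, 0.56, 24.6, 26.0, 170.0)))  # ~3318
```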
It is worth noting that k is the kinematic factor defined in the equation above, and the subscripts “in” and “out” indicate the energies at which the rate of energy loss, dE/dx, is evaluated. As an example, we consider the backscattering spectrum, at a scattering angle of 170°, for 2 MeV He++ ions incident on a silicon layer deposited onto a 2 mm thick niobium substrate (Figure $7$). The energy loss rate of the incoming He++, dE/dx, along the inward path in elemental Si is ≈24.6 eV/Å at 2 MeV and is ≈26 eV/Å for the outgoing particle at 1.12 MeV (since K of Si is 0.56 when the scattering angle is 170°, the energy of the outgoing particle is equal to 2 × 0.56 or 1.12 MeV). The value of ΔESi is ≈133.3 keV. Putting these values into the above equation we get $\Delta t \approx \frac{133.3\ keV}{(0.56\ \times \ 24.6\ \frac{eV}{Å})\ +\ (\frac{1}{\left| \cos 170^{\circ } \right| }\ \times \ 26\ \frac{eV}{Å})} \nonumber$ $=\ \frac{133.3\ keV}{13.77\ eV/Å\ +\ 26.40\ eV/Å} \nonumber$ $=\ \frac{133.3\ keV}{40.17\ eV/Å} \nonumber$ $=\ 3318\ Å \nonumber$ Hence a Si layer of ca. 3300 Å thickness has been deposited on the niobium substrate. However, we need to remember that the value of dE/dx is approximated in this calculation. Quantitative Analysis In addition to depth profile analysis, we can study the composition of an element quantitatively by backscattering spectroscopy. The basic equation for quantitative analysis is $Y\ =\ \sigma \Omega Q N \Delta t \nonumber$ where Y is the yield of scattered ions from a thin layer of thickness Δt, Q is the number of incident ions, Ω is the detector solid angle, and NΔt is the number of specimen atoms (atoms/cm2). Figure $8$ shows the RBS spectrum for a sample of silicon deposited on a niobium substrate and subjected to laser mixing. The Nb has reacted with the silicon to form a NbSi2 interphase layer. The Nb signal has broadened after the reaction, as shown in Figure $8$. We can use the ratio of the heights HSi/HNb of the backscattering spectrum after formation of NbSi2 to determine the composition of the silicide layer. Since the yield is proportional to the scattering cross section, the stoichiometric ratio of Si and Nb can be approximated as, $\frac{N_{Si}}{N_{Nb}}\ \approx \ \frac{H_{Si}\ \sigma _{Nb}}{H_{Nb}\ \sigma _{Si}} \nonumber$ Hence the concentrations of Si and Nb can be determined if we know the appropriate cross sections σSi and σNb. However, the yield in the backscattering spectrum is better represented as the product of the signal height and the energy width ΔE. Thus the stoichiometric ratio is better approximated as $\frac{N_{Si}}{N_{Nb}}\ \approx \ \frac{H_{Si}\ \Delta E_{Si}\ \sigma _{Nb}}{H_{Nb}\ \Delta E_{Nb}\ \sigma _{Si}} \nonumber$ Limitations It is of interest to understand the limitations of the backscattering technique in comparison with other thin film analysis techniques such as AES, XPS, and SIMS (Table $1$). AES has better mass resolution, lateral resolution, and depth resolution than RBS, but AES suffers from sputtering artifacts. Compared to RBS, SIMS has better sensitivity. RBS does not provide any chemical bonding information, which we can get from XPS. Again, sputtering artifact problems are also associated with XPS. The strength of RBS lies in quantitative analysis. However, conventional RBS systems cannot analyze ultrathin films, since the depth resolution is only about 10 nm using a surface barrier detector. Summary Rutherford backscattering analysis is a straightforward technique to determine the thickness and composition of thin films (< 4000 Å).
Areas that have recently been explored include the use of the backscattering technique for composition determination of new superconductor oxides, for the analysis of lattice-mismatched epitaxial layers, and as a probe of thin film morphology and surface clustering.
textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.15%3A_Rutherford_Backscattering_of_Thin_Films.txt
Introduction Crystallographic positional disorder is evident when a position in the lattice is occupied by two or more atoms, the average of which constitutes the bulk composition of the crystal. If a particular atom occupies a certain position in one unit cell and another atom occupies the same position in other unit cells, the resulting electron density will be a weighted average of the situation in all the unit cells throughout the crystal. Since the diffraction experiment involves the average of a very large number of unit cells (ca. 10¹⁸ in a crystal used for single crystal X-ray diffraction analysis), minor static displacements of atoms closely simulate the effects of vibrations on the scattering power of the “average” atom. Unfortunately, the determination of the “average” atom in a crystal may be complicated if positional disorder is encountered. Crystal disorder involving groups such as CO, CN, and Cl has been documented to create problems in assigning the correct structure through refinement procedures. While attempts have been made to correlate crystallographic lattice parameters with the bulk chemical composition of the solution from which the single crystal was grown, there has been little effort to correlate crystallographic site occupancy with the chemical composition of the crystal from which single crystal diffraction data was obtained. These are two very different issues that must be considered when solving a crystal structure with site occupancy disorder. What is the relationship of a single crystal to the bulk material? Does the refinement of a site occupancy factor actually give a realistic value for the % occupancy when compared to the "actual" % composition of that particular single crystal? The following represents a description of a series of methods for the refinement of a site occupancy disorder between two atoms (e.g., two metal atoms within a mixture of isostructural compounds). Methods for X-ray Diffraction Determination of Positional Disorder in Molecular Solid Solutions An atom in a structure is defined by several parameters: the type of atom, the positional coordinates (x, y, z), the occupancy factor (how many “atoms” are at that position), and atomic displacement parameters (often called temperature or thermal parameters). The latter can be thought of as a “picture” of the volume occupied by the atom over all the unit cells, and can be isotropic (1 parameter defining a spherical volume) or anisotropic (6 parameters defining an ellipsoidal volume). For a “normal” atom, the occupancy factor is fixed as being equal to one, and the positions and displacement parameters are “refined” using least-squares methods to values in which the best agreement with the observed data is obtained. In crystals with site disorder, one position is occupied by different atoms in different unit cells. This refinement requires a more complicated approach. Two broad methods may be used: either a new atom type that is the appropriate combination of the different atoms is defined, or the same positional parameters are used for different atoms in the model, each of which has an occupancy value less than one, and for which the sum is constrained to total one. In both approaches, the relative occupancies of the two atoms are required. For the first approach, these occupancies have to be defined; for the second, the value can be refined. However, there is a relationship between the thermal parameters and the occupancy values, so care must be taken when doing this.
These issues can be addressed in several ways. Method 1 The simplest assumption is that the crystal from which the X-ray structure is determined is representative of the bulk sample from which it was crystallized. With this value, either a new atom type can be generated that is the appropriate combination of the measured atom type 1 (M) and atom type 2 (M’) percent composition, or two different atoms can be input with the occupancy factors set to reflect the percent composition of the bulk material. In either case the thermal parameters can be allowed to refine as usual. Method 2 The occupancy values for the two atoms (M and M’) are refined (such that their sum is equal to 1), while the two atoms are constrained to have the same displacement parameters. Method 3 The occupancy values (such that their sum is equal to 1) and the displacement parameters are refined independently for the two atoms. Method 4 Once the best values for the occupancies are obtained using either Method 2 or 3, these values are fixed and the displacement parameters are allowed to refine freely. A Model System Metal β-diketonate complexes (Figure \(1\)) for metals in the same oxidation state are isostructural and often isomorphous. Thus, crystals obtained from co-crystallization of two or more metal β-diketonate complexes [e.g., Al(acac)3 and Cr(acac)3] may be thought of as a hybrid of the precursors; that is, the metal position in the crystal lattice may be defined as having the average metal composition. A series of solid solutions of Al(acac)3 and Cr(acac)3 can be prepared for study by X-ray diffraction, by crystallization from acetone solutions of specific mixtures of Al(acac)3 and Cr(acac)3 (Table \(1\), Column 1). The pure derivatives and the solid solution, Al1-xCrx(acac)3, crystallize in the monoclinic space group P21/c with Z = 4. Solution Composition (% Cr) WDS Composition of Single Crystal (% Cr) Composition as Refined from X-ray Diffraction (% Cr) 13 1.9 ± 0.2 0a 2 2.1 ± 0.3 0a 20 17.8 ± 1.6 17.3 ± 1.8 26 26.7 ± 1.7 28.3 ± 1.9 18 48.5 ± 4.9 46.7 ± 2.1 60 75.1 ± 4.1 72.9 ± 2.4 80 91.3 ± 1.2 82.3 ± 3.1 Table \(1\) Variance in chromium concentrations (%) for samples of Al1-xCrx(acac)3 crystallized from solutions of Al(acac)3 and Cr(acac)3. aConcentration too low to successfully refine the Cr occupancy. Substitution of Cr for Al in the M(acac)3 structure could possibly occur in a random manner, i.e., a metal site has an equal probability of containing an aluminum or a chromium atom. Alternatively, if the chromium had a preference for specific sites, a superlattice structure of lower symmetry would be present. Such ordering is not observed, since all the samples show no additional reflections other than those that may be indexed to the monoclinic cell. Therefore, it may be concluded that Al(acac)3 and Cr(acac)3 do indeed form solid solutions: Al1-xCrx(acac)3. Electron microprobe analysis, using wavelength-dispersive spectrometry (WDS), on the individual crystal from which X-ray crystallographic data was collected provides the “actual” composition of each crystal. Analysis was performed on at least 6 sites on each crystal using a 10 μm analysis spot, providing a measure of the homogeneity within the individual crystal for which X-ray crystallographic data was collected. An example of an SEM image of one of the crystals and the point analyses is given in Figure \(2\).
The data in Table \(1\) and Figure \(2\) demonstrate that while a batch of crystals may contain individual crystals with different compositions, each individual crystal is actually reasonably homogeneous. There is, for most samples, a significant variance between the molar Al:Cr ratio in the bulk material and that of an individual crystal chosen for X-ray diffraction. The variation in Al:Cr ratio within each individual crystal (±10%) is much less than that between crystals. Comparison of the Methods Method 1 Method 1 does not refine the %Cr; it relies on an input of the Al and Cr percent composition of the "bulk" material, i.e., the %Cr in the total mass of the material (Table \(1\), Column 1), as opposed to the analysis of the single crystal on which X-ray diffraction was performed (Table \(1\), Column 2). The closer these input values are to the "actual" value determined by WDS for the crystal on which X-ray diffraction was performed (Table \(1\), Column 1 vs 2), the closer the overall refinement of the structure is to those of Methods 2 - 4. While this assumption is obviously invalid for many of the samples, it is one often used when bulk data (for example, from NMR) is available. However, as there is no reason to assume that one crystal is completely representative of the bulk sample, it is unwise to rely only on such data. Method 2 This method always produced final, refined occupancy values that were close to those obtained from WDS (Table \(1\)). This approach assumes that the motion of the central metal atoms is identical. While this is obviously not strictly true, as they are of different sizes, the results obtained herein imply that this is a reasonable approximation where simple connectivity data is required. For samples where the amount of one of the elements (i.e., Cr) is very low, a good refinement cannot often be obtained. In these cases, when refining the occupancy values, that for Al would exceed 1 while that of Cr would be less than 0! Method 3 In some cases, despite the interrelationship between the occupancy and the displacement parameters, convergence was obtained successfully. In these cases the refined occupancies were both slightly closer to those observed from WDS than the occupancy values obtained using Method 2. However, for some samples with higher Cr content the refinement was unstable and would not converge. Whether this observation was due to the increased percentage of Cr or simply lower data quality is not certain. While this method does allow refinement of any differences in atomic motion between the two metals, it requires extremely high quality data for this difference to be determined reliably. Method 4 This approach adds little to the final results. Correlation between Analyzed Composition and Refined Composition Figure \(3\) shows the relationship between the chromium concentration (%Cr) determined from WDS and from the refinement of X-ray diffraction data using Methods 2 or 3 (labeled in Figure \(3\)). Clearly there exists a good correlation, with only a slight divergence at high Cr concentration. This is undoubtedly a consequence of trying to refine a low fraction of a light atom (Al) in the presence of a large fraction of a heavier atom (Cr). X-ray diffraction is, therefore, an accurate method of determining the M:M' ratios in crystalline solid solutions.
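As a quick numerical check of the correlation described above, the sketch below fits a least-squares line to the WDS and refined compositions from Table \(1\), omitting the two rows where the Cr occupancy could not be refined; the near-unit slope and small offset mirror the behavior plotted in Figure \(3\).

```python
# WDS vs. X-ray refined %Cr from Table 1 (rows with unrefinable Cr omitted).
wds     = [17.8, 26.7, 48.5, 75.1, 91.3]  # % Cr by WDS
refined = [17.3, 28.3, 46.7, 72.9, 82.3]  # % Cr from X-ray refinement

n = len(wds)
mx, my = sum(wds) / n, sum(refined) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(wds, refined))
         / sum((x - mx) ** 2 for x in wds))
print(round(slope, 2), round(my - slope * mx, 1))  # 0.89 3.2 -> near 1:1 line
```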
textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.16%3A_An_Accuracy_Assessment_of_the_Refinement_of_Crystallographic_Positional_Metal_Disorder_in_Molecular_Solid_Soluti.txt
Introduction Gamma-ray (γ-ray) spectroscopy is a quick and nondestructive analytical technique that can be used to identify various radioactive isotopes in a sample. In gamma-ray spectroscopy, the energy of incident gamma-rays is measured by a detector. By comparing the measured energy to the known energies of gamma-rays produced by radioisotopes, the identity of the emitter can be determined. This technique has many applications, particularly in situations where rapid nondestructive analysis is required. Background Principles Radioactive Decay The field of chemistry typically concerns itself with the behavior and interactions of stable isotopes of the elements. However, elements can exist in numerous states which are not stable. For example, a nucleus can have too many neutrons for the number of protons it has or, conversely, it can have too few neutrons for the number of protons it has. Alternatively, a nucleus can exist in an excited state, wherein a nucleon is present in an energy state that is higher than the ground state. In all of these cases, the unstable state is at a higher energy and the nucleus must undergo some kind of decay process to reduce that energy. There are many types of radioactive decay, but the type most relevant to gamma-ray spectroscopy is gamma decay. When a nucleus undergoes radioactive decay by α or β decay, the resultant nucleus produced by this process, often called the daughter nucleus, is frequently in an excited state. Similar to how electrons are found in discrete energy levels around a nucleus, nucleons are found in discrete energy levels within the nucleus. In γ decay, the excited nucleon decays to a lower energy state and the energy difference is emitted as a quantized photon. Because nuclear energy levels are discrete, the transitions between energy levels are fixed for a given transition. The photon emitted from a nuclear transition is known as a γ-ray. Radioactive Decay Kinetics and Equilibria Radioactive decay, with few exceptions, is independent of the physical conditions surrounding the radioisotope. As a result, the probability of decay at any given instant is constant for any given nucleus of that particular radioisotope. We can use calculus to see how the number of parent nuclei present varies with time. The decay constant, λ, is a representation of the rate of decay for a given nuclide, \ref{1}. $\frac{dN}{N}\ =\ -\lambda dt \label{1}$ If the symbol N0 is used to represent the number of radioactive nuclei present at t = 0, then \ref{2} describes the number of nuclei present at some given time. $N\ =\ N_{0}e^{-\lambda t} \label{2}$ The same equation can be applied to the measurement of radiation with some sort of detector. The count rate will decrease from some initial count rate in the same manner that the number of nuclei will decrease from some initial number of nuclei. The decay rate can also be represented in a way that is more easily understood. The equation describing half-life (t1/2) is shown in \ref{3}. $t_{1/2}\ =\ \frac{ln\ 2}{\lambda } \label{3}$ The half-life has units of time and is a measure of how long it takes for the number of radioactive nuclei in a given sample to decrease to half of the initial quantity. It provides a conceptually easy way to compare the decay rates of two radioisotopes.
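The decay law above is easy to put into code. The following minimal sketch, with hypothetical numbers, compares how quickly two radioisotopes with different half-lives decay from the same starting population; it illustrates the count-rate comparison made in the next paragraph.

```python
import math

def decay_constant(half_life):
    """lambda = ln(2) / t_half (Equation 3), in inverse time units."""
    return math.log(2) / half_life

def nuclei_remaining(n0, half_life, t):
    """N = N0 * exp(-lambda * t) (Equation 2)."""
    return n0 * math.exp(-decay_constant(half_life) * t)

# Hypothetical comparison: 1e6 starting nuclei, half-lives of 1 day and 10 days.
for t_half_days in (1.0, 10.0):
    print(t_half_days, round(nuclei_remaining(1e6, t_half_days, 5.0)))
# After 5 days: ~31250 nuclei remain (t1/2 = 1 d) vs ~707107 (t1/2 = 10 d),
# so the short-lived isotope has produced far more counts in the same period.
```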
If one has the same number of starting nuclei for two radioisotopes, one with a short half-life and one with a long half-life, then the count rate will be higher for the radioisotope with the short half-life, as many more decay events must happen per unit time in order for the half-life to be shorter. When a radioisotope decays, the daughter product can also be radioactive. Depending upon the relative half-lives of the parent and daughter, several situations can arise: no equilibrium, a transient equilibrium, or a secular equilibrium. This module will not discuss the former two possibilities, as they are of less relevance to this particular discussion. Secular equilibrium takes place when the half-life of the parent is much longer than the half-life of the daughter. In any arbitrary equilibrium, the ratio of atoms of each can be described as in \ref{4}. $\frac{N_{P}}{N_{D}}\ =\ \frac{\lambda _{D}\ -\ \lambda _{P}}{\lambda _{P}} \label{4}$ Because the half-life of the parent is much, much greater than that of the daughter, as the parent decays, the observed amount of activity changes very little. $\frac{N_{P}}{N_{D}}\ =\ \frac{\lambda _{D}}{\lambda _{P}} \label{5}$ This can be rearranged to show that the activity of the daughter should equal the activity of the parent. $A_{P}\ =\ A_{D} \label{6}$ Once this point is reached, the parent and the daughter are in secular equilibrium with one another and the ratio of their activities should be fixed. One particularly useful application of this concept, to be discussed in more detail later, is in the analysis of the enrichment level of long-lived radioisotopes that are relevant to trafficking. Detectors Scintillation Detector A scintillation detector is one of several possible methods for detecting ionizing radiation. Scintillation is the process by which some material, be it a solid, liquid, or gas, emits light in response to incident ionizing radiation. In practice, this is used in the form of a single crystal of sodium iodide that is doped with a small amount of thallium, referred to as NaI(Tl). This crystal is coupled to a photomultiplier tube which converts the small flash of light into an electrical signal through the photoelectric effect. This electrical signal can then be detected by a computer. Semiconductor Detector A semiconductor detector accomplishes the same effect as a scintillation detector, conversion of gamma radiation into electrical pulses, except through a different route. In a semiconductor, there is a small energy gap between the valence band of electrons and the conduction band. When a semiconductor is hit with gamma-rays, the energy imparted by the gamma-ray is enough to promote electrons to the conduction band. This change in conductivity can be detected and a signal can be generated correspondingly. Germanium crystals doped with lithium, Ge(Li), and high-purity germanium (HPGe) detectors are among the most common types. Advantages and Disadvantages Each detector type has its own advantages and disadvantages. The NaI(Tl) detectors are generally inferior to Ge(Li) or HPGe detectors in many respects, but are superior in cost, ease of use, and durability. Germanium-based detectors generally have much higher resolution than NaI(Tl) detectors: many small photopeaks that are completely undetectable on NaI(Tl) detectors are plainly visible on germanium detectors.
However, Ge(Li) detectors must be kept at cryogenic temperatures for the entirety of their lifetime, or else they rapidly become incapable of functioning as gamma-ray detectors. Sodium iodide detectors are much more portable and can even potentially be used in the field because they do not require cryogenic temperatures, so long as the photopeak being investigated can be resolved from the surrounding peaks. Gamma Spectrum Features There are several dominant features that can be observed in a gamma spectrum. The dominant feature is the photopeak, the peak that is generated when a gamma-ray is totally absorbed by the detector. Higher density detectors and larger detector sizes increase the probability of the gamma-ray being absorbed. The second major feature is the Compton edge and distribution. The Compton edge arises from the Compton effect, wherein a portion of the energy of the gamma-ray is transferred to the semiconductor detector or the scintillator. This occurs when the relatively high energy gamma-ray strikes a relatively low energy electron. There is a relatively sharp edge to the Compton edge that corresponds to the maximum amount of energy that can be transferred to the electron via this type of scattering. The broad peak lower in energy than the Compton edge is the Compton distribution and corresponds to the energies that result from a variety of scattering angles. One feature within the Compton distribution is the backscatter peak. This peak is a result of the same effect but corresponds to the minimum amount of energy transferred. The sum of the energies of the Compton edge and the backscatter peak should yield the energy of the photopeak. Another group of features in a gamma spectrum are the peaks associated with pair production. Pair production is the process by which a gamma-ray of sufficiently high energy (>1.022 MeV) can produce an electron-positron pair. The electron and positron can annihilate and produce two 0.511 MeV gamma photons. If all three gamma-rays, the original with its energy reduced by 1.022 MeV and the two annihilation gamma-rays, are detected simultaneously, then a full energy peak is observed. If one of the annihilation gamma-rays is not absorbed by the detector, then a peak equal to the full energy less 0.511 MeV is observed. This is known as an escape peak. If both annihilation gamma-rays escape, then a peak at the full energy less 1.022 MeV is observed. This is known as a double escape peak. Example of Experiments Determination of Depleted Uranium Natural uranium is composed mostly of 238U with low levels of 235U and 234U. In the process of making enriched uranium (uranium with a higher level of 235U), depleted uranium is produced. Depleted uranium is used in many applications, particularly for its high density. Unfortunately, uranium is toxic and a potential health hazard, and it is sometimes found in trafficked radioactive materials, so it is important to have a methodology for its detection and analysis. One easy method for this determination is achieved by examining the spectrum of the sample and comparing it qualitatively to the spectrum of a sample that is known to be natural uranium. This type of qualitative approach is not suitable for issues that are of concern to national security. Fortunately, the same approach can be used in a quantitative fashion by examining the ratios of various gamma-ray photopeaks.
The concept of a radioactive decay chain is important in this determination. In the case of 238U, it decays over many steps to 206Pb. In the process, it goes through 234mPa, 234Pa, and 234Th. These three isotopes have detectable gamma emissions that are capable of being used quantitatively. As can be seen in Table $1$, the half-lives of these three emitters are much less than the half-life of 238U. As a result, they should exist in secular equilibrium with 238U. Given this, the ratio of the activity of 238U to each daughter product should be 1:1. The daughters can thus be used as a surrogate for measuring 238U decay directly via gamma spectroscopy. The total activity of the 238U can be determined from the equation below, where A is the total activity of 238U, R is the count rate of the given daughter isotope, and B is the probability of decay via that mode. The count rate may need to be corrected for self-absorption if the sample is particularly thick. It may also need to be corrected for detector efficiency if the instrument does not have some sort of internal calibration. $A= R/B \nonumber$ Isotope Half-life 238U 4.5 × 10⁹ years 234Th 24.1 days 234mPa 1.17 minutes Table $1$ Half-lives of pertinent radioisotopes in the 238U decay chain Example 1 Question A gamma spectrum of a sample is obtained. The 63.29 keV photopeak associated with 234Th was found to have a count rate of 5.980 kBq. What is the total activity of 238U present in the sample? Answer 234Th exists in secular equilibrium with 238U, so the total activity of 234Th must be equal to the activity of the 238U. First, the observed activity must be converted to the total activity using the equation A = R/B. It is known that the emission probability for the 63.29 keV gamma-ray of 234Th is 4.84%. Therefore, the total activity of 238U in the sample is 123.6 kBq. The count rate of 235U can be observed directly with gamma spectroscopy. This can be converted, as was done in the case of 238U above, to the total activity of 235U present in the sample. Given that the natural abundances of 238U and 235U are known, the ratio of the expected activity of 238U to 235U can be calculated to be 21.72 : 1. If the calculated ratio of disintegration rates varies significantly from this expected value, then the sample can be determined to be depleted or enriched. Example 2 Question As shown above, the activity of 238U in a sample was calculated to be 123.6 kBq. If the gamma spectrum of this sample shows a count rate of 23.73 kBq at the 185.72 keV photopeak for 235U, can this sample be considered enriched uranium? The emission probability for this photopeak is 57.2%. Answer As shown in the example above, the count rate can be converted to a total activity for 235U. This yields a total activity of 41.49 kBq for 235U. The ratio of activities of 238U and 235U can be calculated to be 2.979. This is lower than the expected ratio of 21.72, indicating that the 235U content of the sample is greater than the natural abundance of 235U. This type of calculation is not unique to 238U. It can be used in any circumstance where the ratio of two isotopes needs to be compared, so long as the isotope itself, or a daughter product it is in secular equilibrium with, has a usable gamma-ray photopeak. Determination of the Age of Highly-enriched Uranium Particularly in the investigation of trafficked fissile materials, it is of interest to determine how long it has been since the sample was enriched.
This can help provide an idea of the source of the fissile material: whether it was enriched for the purpose of trade or was from cold-war-era enrichment, etc. When uranium is enriched, 235U is concentrated in the enriched sample by removing it from natural uranium. This process separates the uranium from the daughter products with which it was in secular equilibrium. In addition, when 235U is concentrated in the sample, 234U is also concentrated due to the particulars of the enrichment process. The 234U that ends up in the enriched sample will decay through several intermediates to 214Bi. By comparing the activities of 234U and 214Bi or 226Ra, the age of the sample can be determined. $A_{Bi}\ =\ A_{Ra}\ =\ \frac{A_{U}}{2} \lambda _{Th}\lambda _{Ra} T^{2} \label{7}$ In \ref{7}, ABi is the activity of 214Bi, ARa is the activity of 226Ra, AU is the activity of 234U, λTh is the decay constant for 230Th, λRa is the decay constant for 226Ra, and T is the age of the sample. This is a simplified form of a more complicated equation that holds true over all practical sample ages (on the order of years) due to the very long half-lives of the isotopes in question. The results can be plotted graphically, as they are in Figure $1$. Example 3 Question The gamma spectrum for a sample is obtained. The count rate of the 121 keV 234U photopeak is 4500 counts per second and the associated emission probability is 0.0342%. The count rate of the 609.3 keV 214Bi photopeak is 5.83 counts per second and the emission probability is 46.1%. How old is the sample? Answer The observed count rates can be converted to the total activities for each radionuclide. Doing so yields a total activity for 234U of 1.316 × 10⁴ kBq and a total activity for 214Bi of 12.65 Bq. This gives a ratio of 9.614 × 10⁻⁷. Using Figure $1$ as graphed, this indicates that the sample must have been enriched 22.0 years prior to analysis.
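A minimal sketch reproducing the A = R/B arithmetic of the worked examples above is given below; small differences in the last digit relative to the text come from rounding of intermediate values.

```python
def total_activity(count_rate, emission_probability):
    """A = R / B (activity in the same units as the count rate)."""
    return count_rate / emission_probability

# Example 1: 234Th 63.29 keV peak, R = 5.980 kBq, B = 4.84% -> 238U activity
print(round(total_activity(5.980, 0.0484), 1))    # 123.6 kBq
# Example 2: 235U 185.72 keV peak, R = 23.73 kBq, B = 57.2%
print(round(total_activity(23.73, 0.572), 2))     # 41.49 kBq; 123.6/41.49 ~ 2.98
# Example 3: ratio of 214Bi to 234U activities, read against Figure 1
a_u234 = total_activity(4500.0, 0.000342)         # counts/s -> Bq
a_bi214 = total_activity(5.83, 0.461)
print(f"{a_bi214 / a_u234:.2e}")                  # ~9.61e-07
```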
textbooks/chem/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.17%3A_Principles_of_Gamma-ray_Spectroscopy_and_Applications_in_Nuclear_Forensics.txt
• 2.1: Melting Point Analysis Melting point (Mp) is a quick and easy analysis that may be used to qualitatively identify relatively pure samples (approximately <10% impurities). It is also possible to use this analysis to quantitatively determine purity. Melting point analysis, as the name suggests, characterizes the melting point, a stable physical property, of a sample in a straightforward manner, which can then be used to identify the sample. • 2.2: Molecular Weight Determination The cryoscopic method was formally introduced in the 1880’s when François-Marie Raoult published how solutes depressed the freezing points of various solvents such as benzene, water, and formic acid. He concluded from his experimentation “if one molecule of a substance can be dissolved in one-hundred molecules of any given solvent then the solvent temperature is lowered by a specific temperature increment”. Based on Raoult’s research, Ernst Otto Beckmann invented the Beckmann thermometer and the • 2.3: BET Surface Area Analysis of Nanoparticles In the past few years, nanotechnology research has expanded out of the chemistry department and into the fields of medicine, energy, aerospace and even computing and information technology. With bulk materials, the surface area to volume ratio is insignificant in relation to the number of atoms in the bulk; however, when the particles are only 1 to 100 nm across, different properties begin to arise. • 2.4: Dynamic Light Scattering Dynamic light scattering (DLS), which is also known as photon correlation spectroscopy (PCS) or quasi-elastic light scattering (QLS), is a spectroscopy method used in the fields of chemistry, biochemistry, and physics to determine the size distribution of particles (polymers, proteins, colloids, etc.) in solution or suspension. • 2.5: Zeta Potential Analysis Zeta potential is a parameter that measures the electrochemical equilibrium at the particle-liquid interface. It measures the magnitude of electrostatic repulsion/attraction between particles, and thus it has become one of the fundamental parameters known to affect the stability of colloidal particles. • 2.6: Viscosity All liquids have a natural internal resistance to flow termed viscosity. Viscosity is the result of frictional interactions within a given liquid and is commonly expressed in two different ways. • 2.7: Electrochemistry Cyclic voltammetry is a very important analytical characterization technique in the field of electrochemistry. Any process that includes electron transfer can be investigated with this technique. In this module, we will focus on the application of CV measurements to the characterization of solar cell materials. • 2.8: Thermal Analysis Thermogravimetric analysis (TGA) and the associated differential thermal analysis (DTA) are widely used for the characterization of both as-synthesized and side-wall functionalized single walled carbon nanotubes (SWNTs). Under oxygen, SWNTs will pyrolyze leaving any inorganic residue behind. Differential scanning calorimetry (DSC) is a technique used to measure the difference in the heat flow rate of a sample and a reference over a controlled temperature range. • 2.9: Electrical Permittivity Characterization of Aqueous Solutions Permittivity (in the framework of electromagnetics) is a fundamental material property that describes how a material will affect, and be affected by, a time-varying electromagnetic field.
The parameters of permittivity are often treated as a complex function of the applied electromagnetic field, as complex numbers allow for the expression of magnitude and phase. • 2.10: Dynamic Mechanical Analysis Dynamic mechanical analysis (DMA), also known as forced oscillatory measurements and dynamic rheology, is a basic tool used to measure the viscoelastic properties of materials (particularly polymers). To do so, a DMA instrument applies an oscillating force to a material and measures its response; from such experiments, the viscosity (the tendency to flow) and stiffness of the sample can be calculated. These viscoelastic properties can be related to temperature, time, or frequency. • 2.11: Finding a Representative Lithology Sample sediments are typically sent in a large plastic bag inside a brown paper bag labeled with the company or organization name, drill site name and number, and the depth at which the sediment was taken (in meters). 02: Physical and Thermal Analysis Melting point (Mp) is a quick and easy analysis that may be used to qualitatively identify relatively pure samples (approximately <10% impurities). It is also possible to use this analysis to quantitatively determine purity. Melting point analysis, as the name suggests, characterizes the melting point, a stable physical property, of a sample in a straightforward manner, which can then be used to identify the sample. Equipment Although different designs of apparatus exist, they all have some sort of heating or heat transfer medium with a control, a thermometer, and often a backlight and magnifying lens to assist in observing melting (Figure \(1\)). Most models today utilize capillary tubes containing the sample submerged in a heated oil bath. The sample is viewed with a simple magnifying lens. Some new models have digital thermometers and controls and even allow for programming. Programming allows more precise control over the starting temperature, ending temperature, and the rate of change of the temperature. Sample Preparation For melting point analysis, preparation is straightforward. The sample must be thoroughly dried and relatively pure (<10% impurities). The dry sample should then be packed into a melting point analysis capillary tube, which is simply a glass capillary tube with only one open end. Only 1 to 3 mm of sample is needed for sufficient analysis. The sample needs to be packed down into the closed end of the tube. This may be done by gently tapping the tube or dropping it upright onto a hard surface (Figure \(2\)). Some apparatuses have a vibrator to assist in packing the sample. Finally the tube should be placed into the machine. Some models can accommodate multiple samples. Recording Data Performing the analysis differs from machine to machine, but the overall process is the same (Figure \(3\)). If possible, choose a starting temperature, ending temperature, and rate of change of temperature. If the identity of the sample is known, base the starting and ending temperatures on the known melting point of the chemical, providing margins on both sides of the range. If using a model without programming, simply turn on the machine and monitor the rate of temperature change manually. Figure \(3\) A video discussing sample preparation, recording data and melting point analysis in general. Made by the Indiana University-Purdue University Indianapolis chemistry department. Visually inspect the sample as it heats. Once melting begins, note the temperature. When the sample is completely melted, note the temperature again.
That is the melting point range for the sample. Pure samples typically have a 1 - 2 °C melting point range; however, this may be broadened due to colligative properties. Interpreting Data There are two primary uses of melting point analysis data. The first is for qualitative identification of the sample, and the second is for quantitative purity characterization of the sample. For identification, compare the experimental melting point range of the unknown to literature values. There are several vast databases of these values. Obtain a pure sample of the suspected chemical, mix a small amount of the unknown with it, and conduct melting point analysis again. If a sharp melting point range is observed at temperatures similar to the literature values, then the unknown has likely been identified correctly. Conversely, if the melting point range is depressed or broadened, which would be due to colligative properties, then the unknown was not successfully identified. To characterize purity, first the identity of the solvent (the main constituent of the sample) and the identity of the primary solute need to be known. This may be done using other forms of analysis, such as gas chromatography-mass spectrometry coupled with a database. Because melting point depression is unique between chemicals, a mixed melting curve comparing molar fractions of the two constituents with melting point needs to either be obtained or prepared (Figure \(4\)). Simply prepare standards with known molar fraction ratios, then perform melting point analysis on each standard and plot the results. Compare the melting point range of the experimental sample to the curve to identify the approximate molar fractions of the constituents. This sort of purity characterization cannot be performed if there are more than two primary components in the sample. Specificity and Accuracy Melting point analysis is fairly specific and accurate given its simplicity. Because the melting point is a unique physical characteristic of a substance, melting point analysis does have high specificity. However, many substances have similar melting points, so having an idea of possible chemicals in mind can greatly narrow down the choices. The thermometers used are also accurate. However, the melting point is dependent on pressure as well, so experimental results can vary from literature values, especially at extreme locations, i.e., places of high altitude. The biggest source of error stems from the visual detection of melting by the experimenter. Controlling the rate of temperature change and running multiple trials can lessen the degree of error introduced at this step. Advantages of Melting Point Analysis Melting point analysis is a quick, relatively easy, and inexpensive preliminary analysis if the sample is already mostly pure and has a suspected identity. Additionally, the analysis requires only a small sample. Limitations of Melting Point Analysis As with any analysis, there are certain drawbacks to melting point analysis. If the sample is not solid, melting point analysis cannot be done. Also, the analysis is destructive of the sample. For qualitative identification analysis, there are now more specific and accurate analyses available, although they are typically much more expensive. Also, samples with more than one solute cannot be analyzed quantitatively for purity.
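To make the mixed-melting-curve comparison concrete, the following is a minimal sketch that linearly interpolates a solute mole fraction from standards along one branch of the curve; all numbers are invented for illustration, and a real analysis would read the value off the plotted curve in Figure \(4\).

```python
# Hypothetical standards along one branch of a mixed melting curve:
x_std  = [0.00, 0.05, 0.10, 0.15, 0.20]          # molar fraction of solute
mp_std = [122.4, 118.9, 115.1, 111.0, 106.6]     # observed melting point, deg C

def estimate_mole_fraction(mp_obs):
    """Linearly interpolate the solute fraction from an observed melting point."""
    pairs = sorted(zip(mp_std, x_std))           # ascending in temperature
    for (t1, x1), (t2, x2) in zip(pairs, pairs[1:]):
        if t1 <= mp_obs <= t2:
            return x1 + (x2 - x1) * (mp_obs - t1) / (t2 - t1)
    return None                                  # outside the calibrated range

print(round(estimate_mole_fraction(113.0), 3))   # ~0.126
```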
Solution Molecular Weight of Small Molecules The cryoscopic method was formally introduced in the 1880s when François-Marie Raoult published how solutes depressed the freezing points of various solvents such as benzene, water, and formic acid. He concluded from his experimentation that "if one molecule of a substance can be dissolved in one-hundred molecules of any given solvent then the solvent temperature is lowered by a specific temperature increment". Based on Raoult's research, Ernst Otto Beckmann invented the Beckmann thermometer and the associated freezing-point apparatus, which was a significant improvement in measuring freezing-point depression values for a pure solvent. The simplicity, ease, and accuracy of this apparatus have allowed it to remain, with few modifications, a current standard for the molecular weight determination of unknown compounds. The research of Raoult and Beckmann, among many other investigators, established a physical chemistry technique that is currently applied in a vast range of disciplines, from food science to petroleum fluids. For example, measured cryoscopic molecular weights of crude oil are used to predict the viscosity and surface tension for the fluid flow calculations needed in pipelines. Freezing Point Depression Freezing point depression is a colligative property in which the freezing temperature of a pure solvent decreases in proportion to the number of solute molecules dissolved in the solvent. The known mass of the added solute and the measured freezing point depression of the solvent permit an accurate calculation of the molecular weight of the solute. Equation \ref{1} describes the freezing point depression of a non-ionic solution, where ΔTf is the change between the initial and final temperature of the pure solvent, Kf is the freezing point depression constant for the pure solvent, and m (moles solute/kg solvent) is the molality of the solution. $\Delta T_{f} = K_{f} m \label{1}$ For an ionic solution, shown in Figure $2$, the dissociated particles must be accounted for with the number of solute particles per formula unit, $i$ (the van't Hoff factor), \ref{2}. $\Delta T_{f} = K_{f} m i \label{2}$ Cryoscopic Procedures Cryoscopic Apparatus For cryoscopy, the apparatus used to measure the freezing point depression of a pure solvent may be representative of the Beckmann apparatus previously shown in Figure $3$. The apparatus consists of a test tube containing the solute dissolved in a pure solvent, a stir bar or magnetic wire, and a rubber stopper encasing a mercury thermometer. The test tube is immersed in an ice-water bath in a beaker. An example of the apparatus is shown in Figure $4$; the rubber stopper and stir bar/wire stirrer are not shown in the figure. Sample and Solvent Selection The cryoscopic method may be used for a wide range of samples with various degrees of polarity. The solute and solvent selection should follow the premise of "like dissolves like" or, in terms of Raoult's principle, the dissolution of one molecule of solute in one-hundred molecules of a solvent. Common solvents such as benzene are generally selected because they are unreactive, volatile, and miscible with many compounds. Table $1$ shows the cryoscopic constants (Kf) for the common solvents used for cryoscopy. A complete list of Kf values is available in Knovel Critical Tables. Table $1$: Cryoscopic constants (Kf) for common solvents used for cryoscopy.
Compound | Kf (°C·kg/mol)
Acetic acid | 3.90
Benzene | 5.12
Camphor | 39.7
Carbon disulfide | 3.8
Carbon tetrachloride | 30
Chloroform | 4.68
Cyclohexane | 20.2
Ethanol | 1.99
Naphthalene | 6.8
Phenol | 7.27
Water | 1.86
Cryoscopic Method The detailed procedure used for cryoscopy is given below; allow the solution to stir continuously throughout to avoid supercooling. 1. Weigh 15 to 20 grams of the pure solvent in a test tube and record the measured weight value of the pure solvent. 2. Place a stir bar or wire stirrer in the test tube and close it with a rubber stopper that has a hole to encase a mercury thermometer. 3. Place a mercury thermometer in the rubber stopper hole. 4. Immerse the test tube apparatus in an ice-water bath. 5. Allow the solvent to stir continuously and equilibrate to a few degrees below the freezing point of the solvent. 6. Record the temperature at which the solvent reaches its freezing point, where the temperature reading remains constant. 7. Repeat the freezing point measurement at least two more times; successive measurements should differ by less than 0.5 °C. 8. Weigh a quantity of the solute under investigation and record the measured value. 9. Add the weighed solute to the test tube containing the pure solvent. 10. Re-close the test tube with the rubber stopper encasing the mercury thermometer. 11. Re-immerse the test tube in the ice-water bath and allow the mixture to stir until the solute fully dissolves in the pure solvent. 12. Measure the freezing point and record the temperature value. The observed freezing point of the solution is reached when the temperature reading remains constant. Sample Calculation to Determine Molecular Weight Sample Data Set Table $2$ represents an example data set for cryoscopy. Table $2$: Example data set for cryoscopy.
Parameter | Trial 1 | Trial 2 | Trial 3 | Avg.
Mass of cyclohexane (g) | 9.05 | 9.00 | 9.04 | 9.03
Mass of unknown solute (g) | 0.4000 | 0.4101 | 0.4050 | 0.4050
Freezing point of cyclohexane (°C) | 6.5 | 6.5 | 6.5 | 6.5
Freezing point of cyclohexane mixed with unknown solute (°C) | 4.2 | 4.3 | 4.2 | 4.2
Calculation of Molecular Weight Using the Freezing Point Depression Equation Calculate the freezing point (Fpt) depression of the solution, ΔTf, from \ref{3}: $\Delta T_{f} = (Fpt\ of\ pure\ solvent) - (Fpt\ of\ solution) \label{3}$ $\Delta T_{f} = 6.5^{\circ}C - 4.2^{\circ}C = 2.3^{\circ}C \nonumber$ Calculate the molal concentration, m, of the solution using the freezing point depression and Kf, \ref{4}: $\Delta T_{f} = K_{f} m \label{4}$ $m = (2.3^{\circ}C)/(20.2^{\circ}C \cdot kg/mol) = 0.114\ molal \nonumber$ where $m = (moles\ solute)/(kg\ solvent) \nonumber$ Calculate the MW of the unknown sample, taking i = 1 for covalent compounds in \ref{2}: $M_{W} = \frac{K_{f} \times (g\ solute)}{\Delta T_{f} \times (kg\ solvent)} \nonumber$ $M_{W} = \frac{20.2^{\circ}C \cdot kg/mol \times 0.405\ g}{2.3^{\circ}C \times 0.00903\ kg} = 394\ g/mol \nonumber$
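The arithmetic above is simple enough to script. The following is a minimal sketch, in Python, of the same calculation using the averaged values from Table $2$; no new data are introduced, the numbers are copied from the sample data set.

```python
# A minimal sketch of the cryoscopic molecular weight calculation above.
# Values are taken from the sample data set (cyclohexane, Kf = 20.2 °C·kg/mol).
K_f = 20.2            # °C·kg/mol, cryoscopic constant of cyclohexane
fp_pure = 6.5         # °C, freezing point of pure cyclohexane
fp_solution = 4.2     # °C, freezing point of the solution
g_solute = 0.405      # g, average mass of unknown solute
kg_solvent = 0.00903  # kg, average mass of cyclohexane

dT_f = fp_pure - fp_solution          # freezing point depression, °C
molality = dT_f / K_f                 # mol solute per kg solvent (i = 1)
moles_solute = molality * kg_solvent  # mol of solute present
M_W = g_solute / moles_solute         # g/mol
print(f"ΔTf = {dT_f:.1f} °C, m = {molality:.3f} molal, MW ≈ {M_W:.0f} g/mol")
# Expected output: ΔTf = 2.3 °C, m = 0.114 molal, MW ≈ 394 g/mol
```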
Problems 1. Nicotine (Figure $5$) is a pale yellow oil extracted from tobacco leaves that dissolves in water at temperatures below 60 °C. What is the molality of nicotine in an aqueous solution that begins to freeze at -0.445 °C? See Table $1$ for Kf values. 2. If the solution used in Problem 1 is obtained by dissolving 1.200 g of nicotine in 30.56 g of water, what is the molar mass of nicotine? 3. What would be the freezing point depression when 0.500 molal Ca(NO3)2 is dissolved in 60 g of water? 4. Calculate the mass, in grams, of Ca(NO3)2 that must be added to the 60 g of water to achieve the freezing point depression from Problem 3. Molecular Weight of Polymers Knowledge of the molecular weight of polymers is very important because the physical properties of macromolecules are affected by their molecular weight. For example, Figure $6$ shows the interrelation between molecular weight and strength for a typical polymer. Dependence of mechanical strength on polymer molecular weight. Adapted from G. Odian, Principles of Polymerization, 4th edition, Wiley-Interscience, New York (2004). The melting points of polymers also depend slightly on their molecular weight. Figure $7$ shows the relationship between molecular weight and melting temperature for polyethylene (Figure $8$). Most linear polyethylenes have melting temperatures near 140 °C. The approach to the theoretical asymptote, that is, a line whose distance to a given curve tends to zero, indicates that a theoretical polyethylene of infinite molecular weight (i.e., M = ∞) would have a melting point of 145 °C. The molecular weight-melting temperature relationship for the alkane series. Adapted from L. H. Sperling, Introduction to Physical Polymer Science, 4th edition, Wiley-Interscience, New York (2005). There are several ways to express the molecular weight of polymers: the number average molecular weight, the weight average molecular weight, the Z-average molecular weight, the viscosity average molecular weight, and the distribution of molecular weight. Molecular Weight Calculations Number average molecular weight (Mn) The number average molecular weight reflects the number of particles present. It is the total weight of polymer, \ref{5}, divided by the total number of polymer molecules, \ref{6}. The number average molecular weight (Mn) is given by \ref{7}, where Mi is the molecular weight of a molecule of oligomer i, and Ni is the number of molecules of that molecular weight. $Total\ weight = \Sigma_{i=1}^{∞} M_{i} N_{i} \label{5}$ $Total\ number = \Sigma_{i=1}^{∞} N_{i} \label{6}$ $M_{n} = \frac{\Sigma_{i=1}^{∞} M_{i} N_{i}}{\Sigma_{i=1}^{∞} N_{i}} \label{7}$ Example: Calculate Mn for a polymer sample comprising 5 moles of polymer molecules having a molecular weight of 40,000 g/mol and 15 moles of polymer molecules having a molecular weight of 30,000 g/mol. Weight average molecular weight (MW) The weight average molecular weight (MW) reflects the mass of the particles present. MW is defined by \ref{8}, where Mi is the molecular weight of oligomer i, and Ni is the number of molecules of that molecular weight. $M_{W} = \frac{\Sigma_{i=1}^{∞} N_{i} (M_{i})^{2}}{\Sigma_{i=1}^{∞} N_{i} M_{i}} \label{8}$ Example: Calculate the MW for a polymer sample comprising 9 moles of polymer molecules having a molecular weight of 30,000 g/mol and 5 moles of polymer molecules having a molecular weight of 50,000 g/mol. Z-average molecular weight (MZ) The Z-average molecular weight (Mz) is measured in some sedimentation equilibrium experiments. Mz is not a commonly used measure of polymer molecular weight; the molar mass depends on the size and mass of the molecules, and ultracentrifugation techniques are employed to determine Mz. Mz emphasizes large particles and is defined by the expression below, where Mi is molecular weight and Ni is the number of molecules. $M_{Z} = \frac{\Sigma N_{i} M_{i}^{3}}{\Sigma N_{i} M_{i}^{2}} \nonumber$ Example: Calculate Mz for the polymer samples described in the previous examples (the short script below computes all three averages).
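The averages in the examples above are easy to verify numerically. Below is a minimal sketch, assuming only NumPy, that computes Mn, Mw, and Mz for the tabulated mole counts; the function name mw_averages is an illustrative choice, not from the text.

```python
import numpy as np

def mw_averages(moles, M):
    """Return (Mn, Mw, Mz) for molecules counted by `moles` with weights `M`."""
    N, M = np.asarray(moles, float), np.asarray(M, float)
    Mn = (N * M).sum() / N.sum()              # eq. 7
    Mw = (N * M**2).sum() / (N * M).sum()     # eq. 8
    Mz = (N * M**3).sum() / (N * M**2).sum()  # Z-average expression above
    return Mn, Mw, Mz

# First example: 5 mol of 40,000 g/mol and 15 mol of 30,000 g/mol
Mn, Mw, Mz = mw_averages([5, 15], [40_000, 30_000])
print(f"Mn = {Mn:,.0f}, Mw = {Mw:,.0f}, Mz = {Mz:,.0f} g/mol")
# Mn = 32,500, Mw = 33,077, Mz = 33,721 g/mol

# Second example: 9 mol of 30,000 g/mol and 5 mol of 50,000 g/mol
_, Mw2, _ = mw_averages([9, 5], [30_000, 50_000])
print(f"Mw = {Mw2:,.0f} g/mol")  # Mw = 39,615 g/mol
```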
Viscosity average molecular weight (MV) One way to measure the average molecular weight of polymers is through the viscosity of their solutions. The viscosity of a polymer solution depends on the concentration and the molecular weight of the polymer. Viscosity techniques are common because they are experimentally simple. The viscosity average molecular weight is defined by \ref{9}, where Mi is molecular weight, Ni is the number of molecules of that weight, and a is a constant that depends on the polymer-solvent pair used in the viscosity experiments. When a equals 1, Mv is equal to the weight average molecular weight; otherwise it lies between the weight average and the number average molecular weight. $M_{V} = \left( \frac{\Sigma N_{i} M_{i}^{1+a}}{\Sigma N_{i} M_{i}} \right)^{1/a} \label{9}$ Distribution of molecular weight The molecular weight distribution is one of the important characteristics of a polymer because it affects polymer properties. A typical molecular weight distribution of a polymer is shown in Figure $6$; there is a range of molecular weights under the curve. The distribution of sizes in a polymer sample is not totally defined by its central tendency; the width and shape of the distribution must also be known. The various molecular weight averages are always ordered as in \ref{10}, with equality occurring only when all polymer molecules in the sample have the same molecular weight. $M_{n} \leq M_{V} \leq M_{W} \leq M_{Z} \leq M_{Z+1} \label{10}$ Molecular weight analysis of polymers Gel permeation chromatography (GPC) Gel permeation chromatography, also called size exclusion chromatography, is a widely used method to determine molecular weight distributions. In this technique, substances are separated according to their molecular size: large molecules elute first, followed by the smaller molecules (Figure $7$). The sample is injected into the mobile phase, which then enters the columns. The retention time is the length of time that a particular fraction remains in the column. As shown in Figure $7$, as the mobile phase passes through the porous particles, the separation between large molecules and small molecules increases. GPC gives a full molecular weight distribution, but its cost is high. In the basic theory of GPC, the basic quantity measured is the retention volume, \ref{11}, where V0 is the mobile phase volume, Vp is the volume of the stationary phase, and K is a distribution coefficient related to the size and type of the molecules. $V_{e} = V_{0} + V_{p} K \label{11}$ The essential features of gel permeation chromatography are shown in Figure $8$. Solvent leaves the solvent supply and is pumped through a filter. The desired flow through the sample column is adjusted by sample control valves, and the reference flow is adjusted so that the flows through the reference column and the sample column reach the detector in a common front. The reference column is used to remove any slight impurities in the solvent. In order to determine the amount of sample, a detector is located at the end of the column. Detectors may also be used to continuously verify the molecular weight of the species eluting from the column. The solvent volume flow is likewise monitored to provide a means of characterizing the molecular size of the eluting species. As an example, consider the block copolymer of ethylene glycol (PEG, Figure $9$) and poly(lactide) (PLA, Figure $10$), i.e., Figure $11$. The first step starts with a sample of PEG with a Mn of 5,700 g/mol.
After polymerization, the molecular weight increases because of the progress of the lactide polymerization initiated from the end of the PEG chain. The varying compositions of PEG-PLA shown in Table $3$ can be detected by GPC (Figure $12$).
Polymer | Mn of PEG | Mw/Mn of PEG | Mn of PLA | Mw/Mn of block copolymer | Weight ratio of PLA to PEG
PEG-PLA (41-12) | 4100 | 1.05 | 1200 | 1.05 | 0.29
PEG-PLA (60-30) | 6000 | 1.03 | 3000 | 1.08 | 0.50
PEG-PLA (57-54) | 5700 | 1.03 | 5400 | 1.08 | 0.95
PEG-PLA (61-78) | 6100 | 1.03 | 7800 | 1.11 | 1.28
Table $3$: Characteristics of PEG-PLA block copolymers with varying composition. Adapted from K. Yasugi, Y. Nagasaki, M. Kato, and K. Kataoka, J. Control. Release, 1999, 62, 89. Light-scattering One of the most widely used methods to characterize molecular weight is light scattering. When polarizable particles are placed in the oscillating electric field of a beam of light, light scattering occurs. As light passes through a polymer solution, it loses energy by absorption, conversion to heat, and scattering. The intensity of the scattered light depends on the concentration, size, and polarizability of the solute, with a proportionality constant that depends on the molecular weight. Figure $13$ shows light scattering off a particle in solution. A schematic laser light-scattering apparatus is shown in Figure $14$. A major problem in light scattering is the preparation of perfectly clear solutions; this is usually accomplished by ultracentrifugation. A solution should be as clear and dust-free as possible for determining the absolute molecular weight of a polymer. The advantages of this method are that it does not need calibration to obtain absolute molecular weights, it can give information about shape as well as Mw, and it can be performed rapidly on small amounts of sample. The weaknesses of the method are its high price and the often difficult clarification of the solutions. The weight average molecular weight of scattering polymers in solution is related to their light scattering properties by \ref{12}, where K is an optical constant defined by \ref{13}, c is the solution concentration, R(θ) is the reduced Rayleigh ratio, P(θ) is the particle scattering function, θ is the scattering angle, A2 and A3 are the osmotic virial coefficients, n0 is the solvent refractive index, λ is the light wavelength, and Na is Avogadro's number. The particle scattering function is given by \ref{14}, where Rz is the z-average radius of gyration. $\frac{Kc}{R(\theta)} = \frac{1}{M_{W}P(\theta)} + 2A_{2}c + 3A_{3}c^{2} + ... \label{12}$ $K = \frac{2\pi^{2}n_{0}^{2}(dn/dc)^{2}}{N_{a}\lambda^{4}} \label{13}$ $\frac{1}{P(\theta)} = 1 + \frac{16\pi^{2}n_{0}^{2}R_{z}^{2}}{3\lambda^{2}}\sin^{2}(\theta/2) \label{14}$ The weight average molecular weight of a polymer is found from extrapolation of the data in the form of a Zimm plot (Figure $15$). Experiments are performed at several angles and at least four different concentrations. The straight-line extrapolations to zero angle and zero concentration provide Mw, as sketched numerically below.
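The following is a minimal sketch of the Zimm double extrapolation, using synthetic data generated from assumed values (Mw = 3×10⁵ g/mol, A2 = 4×10⁻⁴ mol·mL/g², and a constant B standing in for the angular coefficient 16π²n0²Rz²/3λ² in \ref{14}); all numbers are illustrative, not experimental.

```python
import numpy as np

# Assumed "true" values used to generate synthetic data (illustrative only)
Mw_true, A2, B = 3.0e5, 4.0e-4, 0.35            # g/mol; mol·mL/g²; dimensionless
angles = np.radians([30, 60, 90, 120])          # scattering angles
concs = np.array([1e-3, 2e-3, 3e-3, 4e-3])      # concentrations, g/mL
s2 = np.sin(angles / 2) ** 2

# Kc/R(θ) = (1/Mw)·(1 + B·sin²(θ/2)) + 2·A2·c, truncated at the 2nd virial term
KcR = (1 / Mw_true) * (1 + B * s2)[None, :] + 2 * A2 * concs[:, None]

# Step 1: at each concentration, extrapolate Kc/R vs. sin²(θ/2) to θ = 0
zero_angle = np.array([np.polyfit(s2, KcR[i], 1)[1] for i in range(len(concs))])

# Step 2: extrapolate the θ = 0 intercepts vs. c to c = 0; the intercept is 1/Mw
one_over_Mw = np.polyfit(concs, zero_angle, 1)[1]
print(f"Recovered Mw ≈ {1 / one_over_Mw:,.0f} g/mol")  # ≈ 300,000 g/mol
```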
X-ray Scattering X-rays are a form of electromagnetic wave with wavelengths between 0.001 nm and 0.2 nm. X-ray scattering is particularly used for semicrystalline polymers, which include thermoplastics, thermoplastic elastomers, and liquid crystalline polymers. Two types of X-ray scattering are used for polymer studies: 1. Wide-angle X-ray scattering (WAXS), which is used to study the orientation of the crystals and the packing of the chains. 2. Small-angle X-ray scattering (SAXS), which is used to study the electron density fluctuations that occur over larger distances as a result of structural inhomogeneities. A schematic representation of X-ray scattering is shown in Figure $16$. At least two SAXS curves are required to determine the molecular weight of a polymer. The SAXS procedure for determining the molecular weight of a polymer sample in the monomeric or multimeric state in solution requires the following conditions: (a) the system should be monodispersed; (b) the solution should be dilute enough to avoid spatial correlation effects; (c) the solution should be isotropic; and (d) the polymer should be homogeneous. Osmometry Osmometry is applied to determine the number average molecular weight (Mn). There are two types of osmometry: 1. Vapor pressure osmometry (VPO). 2. Membrane osmometry. Vapor pressure osmometry measures vapor pressure indirectly by measuring the change in temperature of a polymer solution on dilution by solvent vapor; it is generally useful for polymers with Mn below 10,000-40,000 g/mol. When the molecular weight is above that limit, the quantity being measured becomes too small to detect. A typical vapor pressure osmometer is shown in Figure $17$. Because the vapor pressure change is very small, it is measured indirectly by using thermistors to measure the voltage changes caused by changes in temperature. Membrane osmometry is an absolute technique for determining Mn (Figure $18$). The solvent is separated from the polymer solution by a semipermeable membrane that is held strongly between the two chambers. One chamber is sealed by a valve with a transducer attached to a thin stainless steel diaphragm, which permits continuous measurement of the pressure in the chamber. Membrane osmometry is useful for determining Mn from about 20,000-30,000 g/mol up to about 500,000 g/mol. When the Mn of a polymer sample is more than 500,000 g/mol, the osmotic pressure of the polymer solution becomes too small to measure an absolute number average molecular weight. In this technique, there are problems with membrane leakage and asymmetry. The advantages of this technique are that it does not require calibration and that it gives an absolute value of Mn for polymer samples; a numerical sketch of the underlying extrapolation follows.
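The working relation behind membrane osmometry is the van't Hoff limit π/c → RT/Mn as c → 0 (the same relation appears as \ref{19} later in this section). Below is a minimal sketch, with illustrative synthetic data rather than values from the text, of extrapolating π/c to zero concentration to recover Mn.

```python
import numpy as np

# Illustrative membrane-osmometry data (synthetic, not from the text):
# osmotic pressure π (Pa) at several concentrations c (kg/m³ ≡ g/L), 298 K.
R, T = 8.314, 298.0                            # J/(mol·K), K
c = np.array([1.0, 2.0, 4.0, 6.0])             # kg/m³
pi = np.array([30.15, 62.30, 132.60, 210.90])  # Pa

# van't Hoff with one virial correction: π/c = RT/Mn + slope·c,
# so the intercept of π/c vs. c at c = 0 equals RT/Mn.
intercept = np.polyfit(c, pi / c, 1)[1]        # Pa·m³/kg
Mn = R * T / intercept                         # kg/mol
print(f"Mn ≈ {Mn * 1000:,.0f} g/mol")          # ≈ 85,000 g/mol
```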
Summary The properties of polymers depend on their molecular weight. There are different kinds of molecular weight averages, and each can be measured by different techniques. A summary of these techniques and the molecular weights they determine is given in Table $4$.
Method | Type of Molecular Weight | Range of Application (g/mol)
Light scattering | Mw | -
Membrane osmometry | Mn | 10⁴-10⁶
Vapor phase osmometry | Mn | up to ~40,000
X-ray scattering | Mw, Mn, Mz | 10² and above
Table $4$: Summary of techniques for determining the molecular weight of polymers. Size Exclusion Chromatography and its Application in Polymer Science Size exclusion chromatography (SEC) is a useful technique that is specifically applicable to high-molecular-weight species, such as polymers. It is a method to sort molecules according to their size in solution. The sample solution is injected into the column, which is filled with rigid, porous particles, and is carried by the solvent through the packed column. The sizes of molecules are determined by the pore size of the packing particles in the column within which the separation occurs. For polymeric materials, the molecular weight (Mw) or molecular size plays a key role in determining the mechanical, bulk, and solution properties of materials. It is known that the sizes of polymeric molecules depend on their molecular weights, side chain configurations, molecular interactions, and so on. For example, the exclusion volume of polymers with rigid side groups is larger than that of polymers with soft, long side chains. Therefore, in order to determine the molecular weight and molecular weight distribution of a polymer, one of the most widely applied methods is gel permeation chromatography. Gel permeation chromatography (GPC) is the term used when the separation technique of size exclusion chromatography (SEC) is applied to polymers. The primary purpose and use of the SEC technique is to provide molecular weight distribution information about a particular polymeric material. Typically, in about 30 minutes using standard SEC, the complete molecular weight distribution of a polymer, as well as all the statistical information of the distribution, can be determined. Thus, SEC has been considered a technique that essentially supplants classical molecular weight techniques. To apply this powerful technique, some basic work needs to be done before its use. The selection of an appropriate solvent and column, as well as of the experimental conditions, is important for the proper separation of a sample. It is also necessary to have calibration curves in order to determine the relative molecular weight from a given retention volume/time. It is well known that the majority of both natural and synthetic polymers are polydispersed with respect to molar mass. For synthetic polymers, the more monodisperse a polymer can be made, the better the understanding of its inherent properties that can be obtained. Polymer Properties A polymer is a large molecule (macromolecule) composed of repeating structural units typically connected by covalent chemical bonds. Polymers are common materials that are widely used in our lives. One of the most important features that distinguishes most synthetic polymers from simple molecular compounds is the inability to assign an exact molar mass to a polymer. This is a consequence of the fact that during the polymerization reaction the length of the chain formed is determined by several different events, each of which has a different reaction rate. Hence, the product is a mixture of chains of different lengths due to the random nature of growth. In addition, some polymers are also branched (rather than linear) as a consequence of alternative reaction steps. The molecular weight (Mw) and molecular weight distribution influence many of the properties of polymers: • Processability - the suitability of the polymer for physical processing. • Glass-transition temperature - refers to the transformation of a glass-forming liquid into a glass. • Solution viscosity - a measure of the resistance of a fluid to deformation by either shear stress or tensile stress. • Hardness - a measure of how resistant a polymer is to various kinds of permanent shape change when a force is applied. • Melt viscosity - the rate of extrusion of thermoplastics through an orifice at a prescribed temperature and load. • Tear strength - a measure of the polymer's resistance to tearing. • Tensile strength - as indicated by the maximum of a stress-strain curve; in general, the point at which necking occurs upon stretching a sample. • Stress-crack resistance - resistance to the formation of cracks in a polymer caused by relatively low tensile stress and environmental conditions. • Brittleness - the liability of a polymer to fracture when subjected to stress.
• Impact resistance - the relative susceptibility of polymers to fracture under stresses applied at high speeds. • Flex life - the number of cycles required to produce a specified failure in a specimen flexed in a prescribed manner. • Stress relaxation - describes how polymers relieve stress under constant strain. • Toughness - the resistance to fracture of a polymer when stressed. • Creep strain - the tendency of a polymer to slowly move or deform permanently under the influence of stresses. • Drawability - the ability of fiber-forming polymers to undergo several hundred percent permanent deformation, under load, at ambient or elevated temperatures. • Compression - the result of subjecting a polymer to compressive stress. • Fatigue - failure by repeated stress. • Tackiness - the property of a polymer being adhesive or gummy to the touch. • Wear - the erosion of material from the polymer by the action of another surface. • Gas permeability - the permeability of gas through the polymer. Consequently, it is important to understand how to determine the molecular weight and molecular weight distribution. Molecular Weight Simple pure compounds have the same molecular composition for the same species. For example, the molecular weight of any sample of styrene will be the same (104.16 g/mol). In contrast, most polymers are not composed of identical molecules. The molecular weight of a polymer is determined by the chemical structure of the monomer units, the lengths of the chains, and the extent to which the chains are interconnected to form branched molecules. Because virtually all polymers are mixtures of many large molecules, we have to resort to averages to describe polymer molecular weight. The polymers produced in polymerization reactions have lengths that are distributed according to a probability function governed by the polymerization reaction. To define a particular polymer weight average, the average molecular weight Mavg is defined by \ref{15}, where Ni is the number of molecules with molecular weight Mi. $M_{avg} = \frac{\Sigma N_{i} M_{i}^{a}}{\Sigma N_{i} M_{i}^{a-1}} \label{15}$ There are several possible ways of reporting polymer molecular weight. Three commonly used molecular weight descriptions are the number average (Mn), weight average (Mw), and z-average molecular weight (Mz). All three correspond to different values of the constant a in \ref{15} and are shown in Figure $19$. When a = 1, we obtain the number average molecular weight, \ref{16}. $M_{n,\ avg} = \frac{\Sigma N_{i}M_{i}}{\Sigma N_{i}} = \frac{w}{N} \label{16}$ When a = 2, we obtain the weight average molecular weight, \ref{17}. $M_{w,\ avg} = \frac{\Sigma N_{i}M_{i}^{2}}{\Sigma N_{i}M_{i}} = \frac{\Sigma N_{i}M_{i}^{2}}{w} \label{17}$ When a = 3, we obtain the z-average molecular weight, \ref{18}. $M_{z,\ avg} = \frac{\Sigma N_{i}M_{i}^{3}}{\Sigma N_{i}M_{i}^{2}} \label{18}$ Of these, the weight average molecular weight Mw is the most useful for bulk properties, because it fairly accounts for the contributions of different sized chains to the overall behavior of the polymer and correlates best with most of the physical properties of interest. There are various published methods to determine these three primary average molecular weights. For instance, a colligative method, such as osmotic pressure, effectively counts the number of molecules present and provides a number average molecular weight regardless of the shape or size of the polymer molecules.
The classical van't Hoff equation for the osmotic pressure of an ideal, dilute solution is shown in \ref{19}. $\frac{\pi}{c} = \frac{RT}{M_{n}} \label{19}$ The weight average molecular weight of a polymer in solution can be determined by either measuring the intensity of light scattered by the solution or studying the sedimentation of the solute in an ultracentrifuge. The light scattering method, which depends on the size rather than the number of molecules, yields the weight average molecular weight. Concentration fluctuations are the main source of the light scattered by a polymer solution. The intensity of the light scattering of a polymer solution is often expressed through its turbidity τ, which is given by Rayleigh's law in \ref{20}, where iθ is the scattered intensity at a single angle θ, r is the distance from the scattering particle to the detection point, and I0 is the incident intensity. $\tau = \frac{16\pi i_{\theta} r^{2}}{3I_{0}(1+\cos^{2}\theta)} \label{20}$ The intensity scattered by Ni molecules of molecular weight Mi is proportional to NiMi². Thus, the total light scattered by all molecules is described by \ref{21}, where c is the total weight of the sample, ΣNiMi. $\frac{\tau}{c} \propto \frac{\Sigma N_{i}M_{i}^{2}}{\Sigma N_{i}M_{i}} = M_{W,\ avg} \label{21}$ Polydispersity index (PDI) The polydispersity index (PDI) is a measure of the distribution of molecular mass in a given polymer sample. As shown in Figure $19$, it follows from the definitions that Mw ≥ Mn. The equality of Mw and Mn would correspond to a perfectly uniform (monodisperse) sample. The ratio of these average molecular weights, Mw/Mn, is often used as a guide to the dispersity of the chain lengths in a polymer sample: the greater Mw/Mn is, the greater the dispersity. The properties of a polymer sample are strongly dependent on the way in which the weights of the individual molecules are distributed about the average. The ratio Mw/Mn gives sufficient information to characterize the distribution when the mathematical form of the distribution curve is known. Generally, narrow molecular weight distribution materials are the models for much of the work aimed at understanding materials' behavior. For example, polystyrene and its block copolymer polystyrene-b-polyisoprene have quite narrow distributions. As a result, narrow molecular weight distribution materials are a necessary requirement when studying behavior such as the self-assembly of block copolymers. Nonetheless, there are still many open questions about the influence of polydispersity; for example, research on self-assembly, one of the most interesting fields in polymer science, shows that polydispersity cannot simply be ignored. Setup of SEC Equipment In SEC, sample components migrate through the column at different velocities and elute separately from the column at different times. In liquid chromatography and gas chromatography, as a solute moves along with the carrier fluid, it is at times held back either by the surface of the column packing, by the stationary phase, or by both. Unlike gas chromatography (GC) and liquid chromatography (LC), in SEC the separation process is governed by molecular size or, more precisely, molecular hydrodynamic volume, and is not varied by the type of mobile phase. The smallest molecules are able to penetrate deeply into the pores, whereas the largest molecules are excluded by the smaller pore sizes. Figure $20$ shows the typical instrumental setup of SEC.
The properties of the mobile phase are still important, in that it should have a strong affinity for the stationary phase and dissolve the samples well. Good solubility matters because the polymer should form a well-solvated coil suspended in solution. Thus, a mixture of solutes of different sizes separates as it passes through a column packed with porous particles; Figure $21$ clearly depicts the general idea of size separation by SEC. The setup of SEC emphasizes three components: the stationary phase (column), the mobile phase (solvent), and sample preparation. Solvent Selection Solvent selection for SEC involves a number of considerations, such as convenience, sample type, column packing, operating variables, safety, and purity. As far as samples are concerned, the solvents used for the SEC mobile phase are limited to those that meet the following criteria: • The solvent must dissolve the sample completely. • The solvent must have properties different from those of the solute in the eluent: typically, the solvent refractive index (RI) should differ from the sample RI by ±0.05 units or more, or the sample should absorb more than 10% of the incident energy for a UV detector. • The solvent must not degrade the sample during use; otherwise, the viscosity of the eluent will gradually increase over time. • The solvent must not be corrosive to any components of the equipment. Therefore, several solvents qualify, such as THF, chlorinated hydrocarbons (chloroform, methylene chloride, dichloroethane, etc.), and aromatic hydrocarbons (benzene, toluene, trichlorobenzene, etc.). Normally, a high-purity (HPLC-grade) solvent is recommended. The reasons are to avoid suspended particulates that may abrade the solvent pumping system or cause plugging of small-particle columns, to avoid impurities that may generate baseline noise, and to avoid impurities that become concentrated as solvent evaporates. Column Selection Column selection for SEC depends mainly on the desired molecular weight range of the separation and the nature of the solvents and samples. Solute molecules should be separated solely by the size of the gels, without interaction with the packing materials. Better column efficiencies and separations can be obtained with small-particle packings in columns and high diffusion rates for the sample solutes. Furthermore, optimal performance of an SEC packing material involves high resolution and low column backpressure. Compatible solvents and columns must be chosen because, for example, an organic solvent is used to swell an organic column packing as well as to dissolve and separate the samples. Conveniently, columns are now usually available from manufacturers for the various types of samples. Manufacturers provide information such as maximum tolerable flow rates, backpressure tolerances, recommended sample concentrations, and injection volumes. Nonetheless, users should keep a few things in mind when using columns: • Vibration and extreme temperatures should be avoided, because these cause irreversible damage to columns. • For aqueous mobile phases, it is unwise to allow solutions of extreme pH to remain in the columns for long periods of time. • When not in use, the columns should be stored with some neat organic mobile phase, or with aqueous mobile phase in the pH range 2-8, to prevent degradation of the packing. Sample Preparation Sample solutions should be prepared at dilute concentrations (less than 2 mg/mL) for several reasons. Polymer samples must be dissolved in the same solvent as is used for the mobile phase, except in some special cases.
A good solvent can dissolve a sample in any proportion over a range of temperatures. Dissolution is a slow process because the rate-determining step is solvent diffusion into the polymer to produce swollen gels; only the gradual disintegration of these gels turns the sample-solvent mixture into a true solution. Agitation and warming the mixture are useful ways to speed up sample preparation. It is recommended to filter the sample solutions before injecting them into columns or storing them in sample vials, in order to avoid clogging and excessively high pressure problems. If excessively high pressure or clogging does occur because the sample solution is too concentrated, raising the column temperature will reduce the viscosity of the mobile phase and may help to redissolve the precipitated or adsorbed solutes in the column. Back-flushing of the columns should be used only as a last resort. Analysis of SEC Data The size exclusion separation mechanism is based on the effective hydrodynamic volume of the molecule, not its molecular weight, and therefore the system must be calibrated using standards of known molecular weight and homogeneous chemical composition. The sample curve is then compared with the calibration curve to obtain information relative to the standards. A further step is required to convert this relative molecular weight into the absolute molecular weight of the polymer. Calibration The purpose of calibration in SEC is to define the relationship between molecular weight and retention volume/time in the chosen permeation range of the column set, and to calculate the molecular weight relative to the standard molecules. Several calibration methods are commonly employed in modern SEC: direct standard calibration, polydisperse standard calibration, and universal calibration. The most commonly used is direct standard calibration, in which narrowly distributed standards of the same polymer being analyzed are used. The narrow molecular weight standards normally available commercially are polystyrene (PS). The molecular weights of the standards were originally measured by membrane osmometry for the number-average molecular weight and by light scattering for the weight-average molecular weight, as described above. The retention volume at the peak maximum of each standard is equated with its stated molecular weight; a minimal numerical sketch of this direct calibration is given at the end of this section. Relative Mw versus absolute Mw The molecular weight and molecular weight distribution can be determined from the calibration curves as described above. But because the relationship between molecular weight and size depends on the type of polymer, the calibration curve depends on the polymer used, with the result that the true molecular weight can only be obtained when the sample is of the same type as the calibration standards. As Figure $23$ depicts, large deviations from the true molecular weight occur for branched samples, because their molecular density is higher than that of linear chains. A light-scattering (LS) detector is now often used to overcome the limitations of conventional SEC. Whereas a concentration detector's signal depends only on concentration, not on molecular weight or polymer size, the LS signal depends on both; for an LS detector, \ref{22} applies: $LS\ Signal = K_{LS} \cdot (dn/dc)^{2} \cdot M_{W} \cdot c \label{22}$ where KLS is an apparatus-specific sensitivity constant, dn/dc is the refractive index increment, and c is the concentration. Therefore, an accurate molecular weight can be determined, without a calibration curve, as long as the concentration of the sample is known.
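To make the direct standard calibration concrete, here is a minimal sketch: fit log10(M) as a linear function of peak retention volume for a set of narrow PS standards, then convert a sample's retention volume into a PS-relative molecular weight. The standard molecular weights and retention volumes below are illustrative placeholders, not data from the text.

```python
import numpy as np

# Illustrative narrow polystyrene standards: molecular weight vs. peak
# retention volume (a roughly linear log10(M)-vs-Ve relation is assumed).
M_std = np.array([2.0e6, 2.0e5, 2.0e4, 2.0e3])  # g/mol
V_std = np.array([12.0, 15.0, 18.0, 21.0])      # mL

# Direct standard calibration: log10(M) = slope·Ve + intercept
slope, intercept = np.polyfit(V_std, np.log10(M_std), 1)

def mw_relative(V_e):
    """PS-relative molecular weight for a sample eluting at V_e (mL)."""
    return 10 ** (slope * V_e + intercept)

print(f"{mw_relative(16.5):,.0f} g/mol")  # ≈ 63,000 g/mol for Ve = 16.5 mL
```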
A Practical Example The synthesis of poly(3-hexylthiophene) (P3HT) has been well developed during the last decade. It is an attractive polymer due to its potential as an electronic material. Because of its excellent charge transport performance and high solubility, several studies discuss its further improvement, such as making block and even triblock copolymers; the details are not discussed here. However, the importance of molecular weight and molecular weight distribution remains critical. As shown in Figure $24$, the authors studied the mechanism of chain-growth polymerization and successfully produced P3HT of low polydispersity. The figure also demonstrates that molecules of larger molecular size or weight elute from the column earlier than those of smaller molecular weight. The real molecular weight of P3HT is smaller than its molecular weight relative to polystyrene. In this case, the backbone of P3HT is stiffer than that of polystyrene because of the position of its aromatic groups, which results in less flexibility. We can thus roughly judge the authentic molecular weight of a synthetic polymer from its molecular structure.