Two common analytical problems are matrix components that interfere with an analyte’s analysis and an analyte with a concentration that is too small to analyze accurately. As we have learned in this chapter, we can use a separation to solve the first problem. Interestingly, we often can use a separation to solve the second problem as well. For a separation in which we recover the analyte in a new phase, it may be possible to increase the analyte’s concentration if we can extract the analyte from a larger volume into a smaller volume. This step in an analytical procedure is known as a preconcentration. An example from the analysis of water samples illustrates how we can simultaneously accomplish a separation and a preconcentration. In the gas chromatographic analysis for organophosphorus pesticides in environmental waters, the analytes in a 1000-mL sample are separated from their aqueous matrix by a solid-phase extraction that uses 15 mL of ethyl acetate [Aguilar, C.; Borrul, F.; Marcé, R. M. LC•GC 1996, 14, 1048–1054]. After the extraction, the analytes in the ethyl acetate have a concentration that is 67 times greater than that in the original sample (assuming the extraction is 100% efficient).

7.09: Problems

1. Because of the risk of lead poisoning, the exposure of children to lead-based paint is a significant public health concern. The first step in the quantitative analysis of lead in dried paint chips is to dissolve the sample. Corl evaluated several dissolution techniques [Corl, W. E. Spectroscopy 1991, 6(8), 40–43]. Samples of paint were collected and then pulverized using a Pyrex mortar and pestle. Replicate portions of the powdered paint were taken for analysis. The following table shows results for a paint sample and for a standard reference material. Both samples and standards were digested with HNO3 on a hot plate.

replicate   % w/w Pb in Sample   % w/w Pb in Standard
1           5.09                 11.48
2           6.29                 11.62
3           6.64                 11.47
4           4.63                 11.86

(a) Determine the overall variance, the variance due to the method and the variance due to sampling. (b) What percentage of the overall variance is due to sampling? (c) How might you decrease the variance due to sampling?

2. To analyze a shipment of 100 barrels of an organic solvent, you plan to collect a single sample from each of 10 barrels selected at random. From which barrels should you collect samples if the first barrel is given by the twelfth entry in the random number table in Appendix 14, with subsequent barrels given by every third entry? Assume that entries in the random number table are arranged by rows.

3. The concentration of dissolved O2 in a lake shows a daily cycle from the effect of photosynthesis, and a yearly cycle due to seasonal changes in temperature. Suggest an appropriate systematic sampling plan to monitor the daily change in dissolved O2. Suggest an appropriate systematic sampling plan for monitoring the yearly change in dissolved O2.

4. The data in the following table were collected during a preliminary study of the pH of an industrial wastewater stream.

time (hr)   pH     time (hr)   pH
0.5         4.4    9.0         5.7
1.0         4.8    9.5         5.5
1.5         5.2    10.0        6.5
2.0         5.2    10.5        6.0
2.5         5.6    11.0        5.8
3.0         5.4    11.5        6.0
3.5         5.4    12.0        5.6
4.0         4.4    12.5        5.6
4.5         4.8    13.0        5.4
5.0         4.8    13.5        4.9
5.5         4.2    14.0        5.2
6.0         4.2    14.5        4.4
6.5         3.8    15.0        4.0
7.0         4.0    15.5        4.5
7.5         4.0    16.0        4.0
8.0         3.9    16.5        5.0
8.5         4.7    17.0        5.0

Prepare a figure showing how the pH changes as a function of time and suggest an appropriate sampling frequency for a long-term monitoring program.
5. You have been asked to monitor the daily fluctuations in atmospheric ozone in the downtown area of a city to determine if there is a relationship between daily traffic patterns and ozone levels. (a) Which of the following sampling plans will you use and why: random, systematic, judgmental, systematic–judgmental, or stratified? (b) Do you plan to collect and analyze a series of grab samples, or will you form a single composite sample? (c) Will your answers to these questions change if your goal is to determine if the average daily ozone level exceeds a threshold value? If yes, then what is your new sampling strategy?

6. The distinction between a homogeneous population and a heterogeneous population is important when we develop a sampling plan. (a) Define homogeneous and heterogeneous. (b) If you collect and analyze a single sample, can you determine if the population is homogeneous or is heterogeneous?

7. Beginning with equation 7.2.2, derive equation 7.2.3. Assume that the particles are spherical with a radius of r and a density of d.

8. The sampling constant for the radioisotope 24Na in homogenized human liver is approximately 35 g [Kratochvil, B.; Taylor, J. K. Anal. Chem. 1981, 53, 924A–938A]. (a) What is the expected relative standard deviation for sampling if we analyze 1.0-g samples? (b) How many 1.0-g samples must we analyze to obtain a maximum sampling error of ±5% at the 95% confidence level?

9. Engels and Ingamells reported the following results for the % w/w K2O in a mixture of amphibolite and orthoclase [Engels, J. C.; Ingamells, C. O. Geochim. Cosmochim. Acta 1970, 34, 1007–1017].

0.247   0.300   0.236   0.247   0.275   0.212   0.258   0.311   0.304   0.258   0.330   0.187

Each of the 12 samples had a nominal mass of 0.1 g. Using this data, calculate the approximate value for Ks, and then, using this value for Ks, determine the nominal mass of sample needed to achieve a percent relative standard deviation of 2%.

10. The following data was reported for the determination of KH2PO4 in a mixture of KH2PO4 and NaCl [Guy, R. D.; Ramaley, L.; Wentzell, P. D. J. Chem. Educ. 1998, 75, 1028–1033].

nominal mass (g)   actual mass (g)   % w/w KH2PO4
0.10               0.1039            0.085
                   0.1015            1.078
                   0.1012            0.413
                   0.1010            1.248
                   0.1060            0.654
                   0.0997            0.507
0.25               0.2515            0.847
                   0.2465            0.598
                   0.2770            0.431
                   0.2460            0.842
                   0.2485            0.964
                   0.2590            1.178
0.50               0.5084            1.009
                   0.4954            0.947
                   0.5286            0.618
                   0.5232            0.744
                   0.4965            0.572
                   0.4995            0.709
1.00               1.027             0.696
                   0.987             0.843
                   0.991             0.535
                   0.998             0.750
                   0.997             0.711
                   1.001             0.639
2.50               2.496             0.766
                   2.504             0.769
                   2.496             0.682
                   2.496             0.609
                   2.557             0.589
                   2.509             0.617

(a) Prepare a graph of % w/w KH2PO4 vs. the actual sample mass. Is this graph consistent with your understanding of the factors that affect sampling variance? (b) For each nominal mass, calculate the percent relative standard deviation, Rexp, based on the data. The value of Ks for this analysis is estimated as 350. Use this value of Ks to determine the theoretical percent relative standard deviation, Rtheo, due to sampling. Considering these calculations, what is your conclusion about the importance of indeterminate sampling errors for this analysis? (c) For each nominal mass, convert Rtheo to an absolute standard deviation. Plot points on your graph that correspond to ±1 absolute standard deviations about the overall average % w/w KH2PO4 for all samples. Draw smooth curves through these two sets of points. Does the sample appear homogeneous on the scale at which it is sampled?
11. In this problem you will collect and analyze data to simulate the sampling process. Obtain a pack of M&M’s (or other similar candy). Collect a sample of five candies and count the number that are red (or any other color of your choice). Report the result of your analysis as % red. Return the candies to the bag, mix thoroughly, and repeat the analysis for a total of 20 determinations. Calculate the mean and the standard deviation for your data. Remove all candies from the bag and determine the true % red for the population. Sampling in this exercise should follow binomial statistics. Calculate the expected mean value and the expected standard deviation, and compare to your experimental results.

12. Determine the error ($\alpha = 0.05$) for the following situations. In each case assume that the variance for a single determination is 0.0025 and that the variance for collecting a single sample is 0.050. (a) Nine samples are collected, each analyzed once. (b) One sample is collected and analyzed nine times. (c) Five samples are collected, each analyzed twice.

13. Which of the sampling schemes in problem 12 is best if you wish to limit the overall error to less than ±0.30 and the cost to collect a single sample is $1 and the cost to analyze a single sample is $10? Which is the best sampling scheme if the cost to collect a single sample is $7 and the cost to analyze a single sample is $3?

14. Maw, Witry, and Emond evaluated a microwave digestion method for Hg against the standard open-vessel digestion method [Maw, R.; Witry, L.; Emond, T. Spectroscopy 1994, 9, 39–41]. The standard method requires a 2-hr digestion and is operator-intensive. The microwave digestion is complete in approximately 0.5 hr and requires little monitoring by the operator. Samples of baghouse dust from air-pollution-control equipment were collected from a hazardous waste incinerator and digested in triplicate before determining the concentration of Hg in ppm. Results are summarized in the following two tables.

ppm Hg Following Microwave Digestion
sample   replicate 1   replicate 2   replicate 3
1        7.12          7.66          7.17
2        16.1          15.7          15.6
3        4.89          4.62          4.28
4        9.64          9.03          8.44
5        6.76          7.22          7.50
6        6.19          6.61          7.61
7        9.44          9.56          10.7
8        30.8          29.0          26.3

ppm Hg Following Standard Digestion
sample   replicate 1   replicate 2   replicate 3
1        5.60          5.54          5.40
2        13.1          13.8          13.0
3        5.39          5.12          5.36
4        6.50          6.52          7.20
5        6.20          6.03          5.77
6        6.25          5.65          5.61
7        15.0          13.9          14.0
8        20.4          16.1          20.0

Does the microwave digestion method yield acceptable results when compared to the standard digestion method?

15. Simpson, Apte, and Batley investigated methods for preserving water samples collected from anoxic (O2-poor) environments that have high concentrations of dissolved sulfide [Simpson, S. L.; Apte, S. C.; Batley, G. E. Anal. Chem. 1998, 70, 4202–4205]. They found that preserving water samples with HNO3 (a common method for preserving aerobic samples) gave significant negative determinate errors when analyzing for Cu2+. Preserving samples by first adding H2O2 and then adding HNO3 eliminated the determinate error. Explain their observations.

16. In a particular analysis the selectivity coefficient, KA,I, is 0.816. When a standard sample with an analyte-to-interferent ratio of 5:1 is carried through the analysis, the error when determining the analyte is +6.3%. (a) Determine the apparent recovery for the analyte if RI = 0. (b) Determine the apparent recovery for the interferent if RA = 0.
17. The amount of Co in an ore is determined using a procedure for which Fe is an interferent. To evaluate the procedure’s accuracy, a standard sample of ore known to have a Co/Fe ratio of 10.2 is analyzed. When pure samples of Co and Fe are taken through the procedure the following calibration relationships are obtained

$S_{\mathrm{Co}}=0.786 \times m_{\mathrm{Co}} \text { and } S_{\mathrm{Fe}}=0.699 \times m_{\mathrm{Fe}} \nonumber$

where S is the signal and m is the mass of Co or Fe. When 278.3 mg of Co are taken through the separation step, 275.9 mg are recovered. Only 3.6 mg of Fe are recovered when a 184.9 mg sample of Fe is carried through the separation step. Calculate (a) the recoveries for Co and Fe; (b) the separation factor; (c) the selectivity ratio; (d) the error if no attempt is made to separate the Co and Fe; (e) the error if the separation step is carried out; and (f) the maximum possible recovery for Fe if the recovery for Co is 1.00 and the maximum allowed error is 0.05%.

18. The amount of calcium in a sample of urine is determined by a method for which magnesium is an interferent. The selectivity coefficient, KCa,Mg, for the method is 0.843. When a sample with a Mg/Ca ratio of 0.50 is carried through the procedure, an error of $-3.7 \%$ is obtained. The error is +5.5% when using a sample with a Mg/Ca ratio of 2.0. (a) Determine the recoveries for Ca and Mg. (b) What is the expected error for a urine sample in which the Mg/Ca ratio is 10.0?

19. Using the formation constants in Appendix 12, show that F– is an effective masking agent for preventing a reaction between Al3+ and EDTA. Assume that the only significant forms of fluoride and EDTA are F– and Y4–.

20. Cyanide is frequently used as a masking agent for metal ions. Its effectiveness as a masking agent is better in more basic solutions. Explain the reason for this dependence on pH.

21. Explain how we can separate an aqueous sample that contains Cu2+, Sn4+, Pb2+, and Zn2+ into its component parts by adjusting the pH of the solution.

22. A solute, S, has a distribution ratio between water and ether of 7.5. Calculate the extraction efficiency if we extract a 50.0-mL aqueous sample of S using 50.0 mL of ether as (a) a single portion of 50.0 mL; (b) two portions, each of 25.0 mL; (c) four portions, each of 12.5 mL; and (d) five portions, each of 10.0 mL. Assume the solute is not involved in any secondary equilibria.

23. What volume of ether is needed to extract 99.9% of the solute in problem 22 when using (a) 1 extraction; (b) 2 extractions; (c) four extractions; and (d) five extractions.

24. What is the minimum distribution ratio if 99% of the solute in a 50.0-mL sample is extracted using a single 50.0-mL portion of an organic solvent? Repeat for the case where two 25.0-mL portions of the organic solvent are used.

25. A weak acid, HA, with a Ka of $1.0 \times 10^{-5}$ has a partition coefficient, KD, of $1.2 \times 10^3$ between water and an organic solvent. What restriction on the sample’s pH is necessary to ensure that 99.9% of the weak acid in a 50.0-mL sample is extracted using a single 50.0-mL portion of the organic solvent?

26. For problem 25, how many extractions are needed if the sample’s pH cannot be decreased below 7.0?

27. A weak base, B, with a Kb of $1.0 \times 10^{-3}$ has a partition coefficient, KD, of $5.0 \times 10^2$ between water and an organic solvent. What restriction on the sample’s pH is necessary to ensure that 99.9% of the weak base in a 50.0-mL sample is extracted when using two 25.0-mL portions of the organic solvent?
28. A sample contains a weak acid analyte, HA, and a weak acid interferent, HB. The acid dissociation constants and the partition coefficients for the weak acids are Ka,HA = $1.0 \times 10^{-3}$, Ka,HB = $1.0 \times 10^{-7}$, KD,HA = KD,HB = $5.0 \times 10^2$. (a) Calculate the extraction efficiency for HA and HB when a 50.0-mL sample, buffered to a pH of 7.0, is extracted using 50.0 mL of the organic solvent. (b) Which phase is enriched in the analyte? (c) What are the recoveries for the analyte and the interferent in this phase? (d) What is the separation factor? (e) A quantitative analysis is conducted on the phase enriched in analyte. What is the expected relative error if the selectivity coefficient, KHA,HB, is 0.500 and the initial ratio of HB/HA is 10.0?

29. The relevant equilibria for the extraction of I2 from an aqueous solution of KI into an organic phase are shown below. (a) Is the extraction efficiency for I2 better at higher or at lower concentrations of I–? (b) Derive an expression for the distribution ratio for this extraction.

30. The relevant equilibria for the extraction of the metal-ligand complex ML2 from an aqueous solution into an organic phase are shown below. (a) Derive an expression for the distribution ratio for this extraction. (b) Calculate the extraction efficiency when a 50.0-mL aqueous sample that is 0.15 mM in M2+ and 0.12 M in L is extracted using 25.0 mL of the organic phase. Assume that KD is 10.3 and that $\beta_2$ is 560.

31. Derive equation 7.7.12 for the extraction scheme outlined in figure 7.7.5.

32. The following information is available for the extraction of Cu2+ by CCl4 and dithizone: KD,c = $7 \times 10^4$; $\beta_2 = 5 \times 10^{22}$; Ka,HL = $3 \times 10^{-5}$; KD,HL = $1.1 \times 10^4$; and n = 2. What is the extraction efficiency if a 100.0-mL sample of an aqueous solution that is $1.0 \times 10^{-7}$ M Cu2+ and 1 M in HCl is extracted using 10.0 mL of CCl4 containing $4.0 \times 10^{-4}$ M dithizone (HL)?

33. Cupferron is a ligand whose strong affinity for metal ions makes it useful as a chelating agent in liquid–liquid extractions. The following table provides pH-dependent distribution ratios for the extraction of Hg2+, Pb2+, and Zn2+ from an aqueous solution to an organic solvent.

Distribution Ratios for $\text{Hg}^{2+}$, $\text{Pb}^{2+}$, and $\text{Zn}^{2+}$ as a Function of pH

pH   Hg2+   Pb2+   Zn2+
1    3.3    0.0    0.0
2    10.0   0.43   0.0
3    32.3   999    0.0
4    32.3   9999   0.0
5    19.0   9999   0.18
6    4.0    9999   0.33
7    1.0    9999   0.82
8    0.54   9999   1.50
9    0.15   9999   2.57
10   0.05   9999   2.57

(a) Suppose you have a 50.0-mL sample of an aqueous solution that contains Hg2+, Pb2+, and Zn2+. Describe how you can separate these metal ions. (b) Under the conditions for your extraction of Hg2+, what percent of the Hg2+ remains in the aqueous phase after three 50.0-mL extractions with the organic solvent? (c) Under the conditions for your extraction of Pb2+, what is the minimum volume of organic solvent needed to extract 99.5% of the Pb2+ in a single extraction? (d) Under the conditions for your extraction of Zn2+, how many extractions are needed to remove 99.5% of the Zn2+ if each extraction uses 25.0 mL of organic solvent?
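Several of the extraction problems above (22–24 and 33, for example) rest on the same relationship between a solute’s distribution ratio and the fraction of solute that remains in the aqueous phase after each extraction. The short Python sketch below is not part of the original problem set; it uses arbitrary illustrative values and a function name of our own choosing, assuming the standard result that one extraction leaves a fraction Vaq/(D·Vorg + Vaq) of the solute in the aqueous phase and that successive extractions with fresh solvent multiply in another such factor.

```python
# A minimal sketch (not part of the problem set) of the bookkeeping behind a
# liquid-liquid extraction: for a solute with distribution ratio D, the fraction
# left in the aqueous phase after one extraction is V_aq / (D*V_org + V_aq), and
# each additional extraction with fresh solvent multiplies in another such factor.
# The numbers below are arbitrary illustrative values, not data from the problems.

def fraction_remaining(D, V_aq, V_org, n=1):
    """Fraction of solute still in the aqueous phase after n extractions,
    each with a fresh portion of organic solvent of volume V_org."""
    q = V_aq / (D * V_org + V_aq)
    return q ** n

D, V_aq, V_total = 5.0, 50.0, 50.0   # hypothetical distribution ratio and volumes (mL)

one_portion = 1 - fraction_remaining(D, V_aq, V_total, n=1)
five_portions = 1 - fraction_remaining(D, V_aq, V_total / 5, n=5)

print(f"one 50.0-mL portion:   {100 * one_portion:.1f}% extracted")
print(f"five 10.0-mL portions: {100 * five_portions:.1f}% extracted")
```

Splitting the same total volume of solvent into several smaller portions extracts more of the solute than a single large portion, which is the pattern problems 22 and 23 ask you to explore.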
7.10: Additional Resources

The following set of experiments and class exercises introduce students to the importance of sampling on the quality of analytical results.

• Bauer, C. F. “Sampling Error Lecture Demonstration,” J. Chem. Educ. 1985, 62, 253.
• Canaes, L. S.; Brancalion, M. L.; Rossi, A. V.; Rath, S. “Using Candy Samples to Learn About Sampling Techniques and Statistical Evaluation of Data,” J. Chem. Educ. 2008, 85, 1083–1088.
• Clement, R. E. “Environmental Sampling for Trace Analysis,” Anal. Chem. 1992, 64, 1076A–1081A.
• Dunn, J. G.; Phillips, D. N.; van Bronswijk, W. “An Exercise to Illustrate the Importance of Sample Preparation in Chemical Analysis,” J. Chem. Educ. 1997, 74, 1188–1191.
• Fillman, K. L.; Palkendo, J. A. “Collection, Extraction, and Analysis of Lead in Atmospheric Particles,” J. Chem. Educ. 2014, 91, 590–592.
• Fritz, M. D. “A Demonstration of Sample Segregation,” J. Chem. Educ. 2005, 82, 255–256.
• Guy, R. D.; Ramaley, L.; Wentzell, P. D. “An Experiment in the Sampling of Solids for Chemical Analysis,” J. Chem. Educ. 1998, 75, 1028–1033.
• Hartman, J. R. “An In-Class Experiment to Illustrate the Importance of Sampling Techniques and Statistical Analysis of Data to Quantitative Analysis Students,” J. Chem. Educ. 2000, 77, 1017–1018.
• Harvey, D. T. “Two Experiments Illustrating the Importance of Sampling in a Quantitative Chemical Analysis,” J. Chem. Educ. 2002, 79, 360–363.
• Herrington, B. L. “A Demonstration of the Necessity for Care in Sampling,” J. Chem. Educ. 1937, 14, 544.
• Kratochvil, B.; Reid, R. S.; Harris, W. E. “Sampling Error in a Particulate Mixture,” J. Chem. Educ. 1980, 57, 518–520.
• Ross, M. R. “A Classroom Exercise in Sampling Technique,” J. Chem. Educ. 2000, 77, 1015–1016.
• Settle, F. A.; Pleva, M. “The Weakest Link Exercise,” Anal. Chem. 1999, 71, 538A–540A.
• Vitt, J. E.; Engstrom, R. C. “Effect of Sample Size on Sampling Error,” J. Chem. Educ. 1999, 76, 99–100.

The following experiments describe homemade sampling devices for collecting samples in the field.

• Delumyea, R. D.; McCleary, D. L. “A Device to Collect Sediment Cores,” J. Chem. Educ. 1993, 70, 172–173.
• Rockwell, D. M.; Hansen, T. “Sampling and Analyzing Air Pollution,” J. Chem. Educ. 1994, 71, 318–322.
• Saxena, S.; Upadhyay, R.; Upadhyay, P. “A Simple and Low-Cost Air Sampler,” J. Chem. Educ. 1996, 73, 787–788.
• Shooter, D. “Nitrogen Dioxide and Its Determination in the Atmosphere,” J. Chem. Educ. 1993, 70, A133–A140.

The following experiments introduce students to methods for extracting analytes from their matrix.

• “Extract-Clean™ SPE Sample Preparation Guide Volume 1,” Bulletin No. 83, Alltech Associates, Inc., Deerfield, IL.
• Freeman, R. G.; McCurdy, D. L. “Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry,” J. Chem. Educ. 1998, 75, 1033–1032.
• Snow, N. H.; Dunn, M.; Patel, S. “Determination of Crude Fat in Food Products by Supercritical Fluid Extraction and Gravimetric Analysis,” J. Chem. Educ. 1997, 74, 1108–1111.
• Yang, M. J.; Orton, M. L.; Pawliszyn, J. “Quantitative Determination of Caffeine in Beverages Using a Combined SPME-GC/MS Method,” J. Chem. Educ. 1997, 74, 1130–1132.

The following papers provide a general introduction to the terminology used in describing sampling.

• “Terminology—The key to understanding analytical science. Part 2: Sampling and sample preparation,” AMCTB 19, 2005.
• Majors, R. E. “Nomenclature for Sampling in Analytical Chemistry,” LC•GC 1992, 10, 500–506.
Further information on the statistics of sampling is covered in the following papers and textbooks. • Analytical Methods Committee “What is uncertainty from sampling, and why is it important?” AMCTB 16A, 2004. • Analytical Methods Committee “Analytical and sampling strategy, fitness for purpose, and computer games,” AMCTB 20, 2005. • Analytical Methods Committee “Measurement uncertainty arising from sampling: the new Eurachem Guide,” AMCTB No. 31, 2008. • Analytical Methods Committee “The importance, for regulation, of uncertainty from sampling,” AMCTB 42, 2009. • Analytical Methods Committee “Estimating sampling uncertainty—how many duplicate samples are needed?” AMCTB 58, 2014. • Analytical Methods Committee “Random samples,” AMCTB 60, 2014. • Analytical Methods Committee “Sampling theory and sampling uncertainty,” AMCTB 71, 2015. • Sampling for Analytical Purpose, Gy, P. ed., Wiley: NY, 1998. • Baiulescu, G. E.; Dumitrescu, P.; Zuaravescu, P. G. Sampling, Ellis Horwood: NY, 1991. • Cohen, R. D. “How the Size of a Random Sample Affects How Accurately It Represents a Population,” J. Chem. Educ. 1992, 74, 1130–1132. • Efstathiou, C. E. “On the sampling variance of ultra-dilute solutions,” Talanta 2000, 52, 711–715. • Esbensen, K. H.; Wagner, C. “Theory of sampling (TOS) versus measurement uncertainty (MU)–A call for integration,” TRAC-Trend. Anal. Chem. 2014, 57, 93–106. • Gerlach, R. W.; Dobb, D. E.; Raab, G. A.; Nocerino, J. M. J. Chemom. 2002, 16, 321–328. • Gy, P. M. Sampling of Particulate Materials: Theory and Practice; Elsevier: Amsterdam, 1979. • Gy, P. M. Sampling of Heterogeneous and Dynamic Materials: Theories of Heterogeneity, Sampling and Homogenizing; Elsevier: Amsterdam, 1992. • Harrington, B.; Nickerson, B.; Guo, M. X.; Barber, M.; Giamalva, D.; Lee, C.; Scrivens, G. “Sample Preparation Composite and Replicate Strategy for Assay of Solid Oral Drug Products,” Anal. Chem. 2014, 86, 11930–11936. • Kratochvil, B.; Taylor, J. K. “Sampling for Chemical Analysis,” Anal. Chem. 1981, 53, 924A–938A. • Kratochvil, B.; Goewie, C. E.; Taylor, J. K. “Sampling Theory for Environmental Analysis,” Trends Anal. Chem. 1986, 5, 253–256. • Meyer, V. R. LC•GC 2002, 20, 106–112. • Rohlf, F. J.; AkÇakaya, H. R.; Ferraro, S. P. “Optimizing Composite Sampling Protocols,” Environ. Sci. Technol. 1996, 30, 2899–2905. • Smith, R.; James, G. V. The Sampling of Bulk Materials; Royal Society of Chemistry: London, 1981. The process of collecting a sample presents a variety of difficulties, particularly with respect to the analyte’s integrity. The following papers provide representative examples of sampling problems. • Barceló, D.; Hennion, M. C. “Sampling of Polar Pesticides from Water Matrices,” Anal. Chim. Acta 1997, 338, 3–18. • Batley, G. E.; Gardner, D. “Sampling and Storage of Natural Waters for Trace Metal Analysis,” Wat. Res. 1977, 11, 745–756. • Benoit, G.; Hunter, K. S.; Rozan, T. F. “Sources of Trace Metal Contamination Artifacts during Collection, Handling, and Analysis of Freshwaters,” Anal. Chem. 1997, 69, 1006–1011 • Brittain, H. G. “Particle-Size Distribution II: The Problem of Sampling Powdered Solids,” Pharm. Technol. July 2002, 67–73. • Ramsey, M. H. “Measurement Uncertainty Arising from Sampling: Implications for the Objectives of Geoanalysis,” Analyst, 1997, 122, 1255–1260. • Seiler, T-B; Schulze, T.; Hollert, H. “The risk of altering soil and sediment samples upon extract prepa- ration for analytical and bio-analytical investigations—a review,” Anal. Bioanal. Chem. 
2008, 390, 1975–1985.

The following texts and articles provide additional information on methods for separating analytes and interferents.

• “Guide to Solid Phase Extraction,” Bulletin 910, Sigma-Aldrich, 1998.
• “Solid Phase Microextraction: Theory and Optimization of Conditions,” Bulletin 923, Sigma-Aldrich, 1998.
• Microwave-Enhanced Chemistry: Fundamentals, Sample Preparation, and Applications, Kingston, H. M.; Haswell, S. J., eds.; American Chemical Society: Washington, D.C., 1997.
• Anderson, R. Sample Pretreatment and Separation, Wiley: Chichester, 1987.
• Bettiol, C.; Stievano, L.; Bertelle, M.; Delfino, F.; Argese, E. “Evaluation of microwave-assisted acid extraction procedures for the determination of metal content and potential bioavailability in sediments,” Appl. Geochem. 2008, 23, 1140–1151.
• Compton, T. R. Direct Preconcentration Techniques, Oxford Science Publications: Oxford, 1993.
• Compton, T. R. Complex-Formation Preconcentration Techniques, Oxford Science Publications: Oxford, 1993.
• Hinshaw, J. V. “Solid-Phase Microextraction,” LC•GC Europe 2003, December, 2–5.
• Karger, B. L.; Snyder, L. R.; Horváth, C. An Introduction to Separation Science, Wiley-Interscience: N. Y.; 1973.
• Majors, R. E.; Raynie, D. E. “Sample Preparation and Solid-Phase Extraction,” LC•GC 1997, 15, 1106–1117.
• Luque de Castro, M. D.; Priego-Capote, F.; Sánchez-Ávila, N. “Is dialysis alive as a membrane-based separation technique?” Trends Anal. Chem. 2008, 27, 315–326.
• Mary, P.; Studer, V.; Tabeling, P. “Microfluidic Droplet-Based Liquid–Liquid Extraction,” Anal. Chem. 2008, 80, 2680–2687.
• Miller, J. M. Separation Methods in Chemical Analysis, Wiley-Interscience: N. Y.; 1975.
• Morrison, G. H.; Freiser, H. Solvent Extraction in Analytical Chemistry, John Wiley and Sons: N. Y.; 1957.
• Pawliszyn, J. Solid-Phase Microextraction: Theory and Practice, Wiley: NY, 1997.
• Pawliszyn, J. “Sample Preparation: Quo Vadis?” Anal. Chem. 2003, 75, 2543–2558.
• Sulcek, Z.; Povondra, P. Methods of Decomposition in Inorganic Analysis; CRC Press: Boca Raton, FL, 1989.
• Theis, A. L.; Waldack, A. J.; Hansen, S. M.; Jeannot, M. A. “Headspace Solvent Microextraction,” Anal. Chem. 2001, 73, 5651–5654.
• Thurman, E. M.; Mills, M. S. Solid-Phase Extraction: Principles and Practice, Wiley: NY, 1998.
• Zhang, Z.; Yang, M.; Pawliszyn, J. “Solid-Phase Microextraction,” Anal. Chem. 1994, 66, 844A–853A.

7.11: Chapter Summary and Key Terms

Chapter Summary

An analysis requires a sample and how we acquire that sample is critical. The samples we collect must accurately represent their target population, and our sampling plan must provide a sufficient number of samples of appropriate size so that uncertainty in sampling does not limit the precision of our analysis. A complete sampling plan requires several considerations, including the type of sample to collect (random, judgmental, systematic, systematic–judgmental, stratified, or convenience); whether to collect grab samples, composite samples, or in situ samples; whether the population is homogeneous or heterogeneous; the appropriate size for each sample; and the number of samples to collect. Removing a sample from its population may induce a change in its composition due to a chemical or physical process. For this reason, we collect samples in inert containers and we often preserve them at the time of collection. When an analytical method’s selectivity is insufficient, we may need to separate the analyte from potential interferents.
Such separations take advantage of physical properties—such as size, mass or density—or chemical properties. Important examples of chemical separations include masking, distillation, and extractions.

Key Terms

centrifugation, composite sample, coning and quartering, convenience sampling, density gradient centrifugation, dialysis, distillation, distribution ratio, extraction, extraction efficiency, filtrate, filtration, grab sample, gross sample, heterogeneous, homogeneous, in situ sampling, judgmental sampling, laboratory sample, masking, masking agents, Nyquist theorem, partition coefficient, preconcentration, purge-and-trap, random sampling, recovery, recrystallization, retentate, sampling plan, secondary equilibrium reaction, selectivity coefficient, separation factor, size exclusion chromatography, Soxhlet extractor, stratified sampling, sublimation, subsamples, supercritical fluid, systematic–judgmental sampling, systematic sampling, target population
Gravimetry includes all analytical methods in which the analytical signal is a measurement of mass or a change in mass. When you step on a scale after exercising you are, in a sense, making a gravimetric determination of your mass. Mass is the most fundamental of all analytical measurements and gravimetry unquestionably is the oldest quantitative analytical technique. Vannoccio Biringuccio’s Pirotechnia, first published in 1540, is an early example of applying gravimetry—although not yet known by this name—to the analysis of metals and ores; the first chapter of Book Three, for example, is entitled “The Method of Assaying the Ores of all Metals in General and in Particular Those That Contain Silver and Gold.” Although gravimetry no longer is the most important analytical method, it continues to find use in specialized applications. • 8.1: Overview of Gravimetric Methods Before we consider specific gravimetric methods, let’s take a moment to develop a broad survey of gravimetry. Later, as you read through the descriptions of specific gravimetric methods, this survey will help you focus on their similarities instead of their differences. It is easier to understand a new analytical method when you can see its relationship to other similar methods. • 8.2: Precipitation Gravimetry In precipitation gravimetry an insoluble compound forms when we add a precipitating reagent, or precipitant, to a solution that contains our analyte. In most cases the precipitate is the product of a simple metathesis reaction between the analyte and the precipitant; however, any reaction that generates a precipitate potentially can serve as a gravimetric method. • 8.3: Volatilization Gravimetry A second approach to gravimetry is to thermally or chemically decompose the sample and measure the resulting change in its mass. Alternatively, we can trap and weigh a volatile decomposition product. Because the release of a volatile species is an essential part of these methods, we classify them collectively as volatilization gravimetric methods of analysis. • 8.4: Particulate Gravimetry Precipitation and volatilization gravimetric methods require that the analyte, or some other species in the sample, participates in a chemical reaction. In some situations, however, the analyte already is present in a particulate form that is easy to separate from its liquid, gas, or solid matrix. When such a separation is possible, we can determine the analyte’s mass without relying on a chemical reaction. • 8.5: Problems End-of-chapter problems to test your understanding of topics in this chapter. • 8.6: Additional Resources A compendium of resources to accompany topics in this chapter. • 8.7: Chapter Summary and Key Terms Summary of chapter's main topics and list of key terms introduced in this chapter. 08: Gravimetric Methods Before we consider specific gravimetric methods, let’s take a moment to develop a broad survey of gravimetry. Later, as you read through the descriptions of specific gravimetric methods, this survey will help you focus on their similarities instead of their differences. It is easier to understand a new analytical method when you can see its relationship to other similar methods. Using Mass as an Analytical Signal Suppose we are to determine the total suspended solids in the water released by a sewage-treatment facility. Suspended solids are just that: solid matter that has yet to settle out of its solution matrix. The analysis is easy. 
After collecting a sample, we pass it through a preweighed filter that retains the suspended solids, and then dry the filter and solids to remove any residual moisture. The mass of suspended solids is the difference between the filter’s final mass and its original mass. We call this a direct analysis because the analyte—the suspended solids in this example—is the species that is weighed.

Method 2540D in Standard Methods for the Examination of Waters and Wastewaters, 20th Edition (American Public Health Association, 1998) provides an approved method for determining total suspended solids. The method uses a glass-fiber filter to retain the suspended solids. After filtering the sample, the filter is dried to a constant weight at 103–105oC.

What if our analyte is an aqueous ion, such as Pb2+? Because the analyte is not a solid, we cannot isolate it by filtration. We can still measure the analyte’s mass directly if we first convert it into a solid form. If we suspend a pair of Pt electrodes in the sample and apply a sufficiently positive potential between them for a long enough time, we can convert the Pb2+ to PbO2, which deposits on the Pt anode.

$\mathrm{Pb}^{2+}(a q)+4 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{PbO}_{2}(s)+\mathrm{H}_{2}(g)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q) \nonumber$

If we weigh the anode before and after we apply the potential, its change in mass gives the mass of PbO2 and, from the reaction’s stoichiometry, the amount of Pb2+ in the sample. This is a direct analysis because PbO2 contains the analyte.

Sometimes it is easier to remove the analyte and let a change in mass serve as the analytical signal. Suppose we need to determine a food’s moisture content. One approach is to heat a sample of the food to a temperature that will vaporize water and capture the water vapor using a preweighed absorbent trap. The change in the absorbent’s mass provides a direct determination of the amount of water in the sample. An easier approach is to weigh the sample of food before and after we heat it and use the change in its mass to determine the amount of water originally present. We call this an indirect analysis because we determine the analyte, H2O in this case, using a signal that is proportional to its disappearance.

Method 925.10 in Official Methods of Analysis, 18th Edition (AOAC International, 2007) provides an approved method for determining the moisture content of flour. A preweighed sample is heated for one hour in a 130oC oven and transferred to a desiccator while it cools to room temperature. The loss in mass gives the amount of water in the sample.

The indirect determination of a sample’s moisture content is made by measuring a change in mass. The sample’s initial mass includes the water, but its final mass does not.

We can also determine an analyte indirectly without its being weighed. For example, phosphite, $\text{PO}_3^{3-}$, reduces Hg2+ to $\text{Hg}_2^{2+}$, which in the presence of Cl– precipitates as Hg2Cl2.

$2 \mathrm{HgCl}_{2}(a q)+\mathrm{PO}_{3}^{3-}(a q) +3 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{Hg}_{2} \mathrm{Cl}_{2}(s)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q)+2 \mathrm{Cl}^{-}(a q)+\mathrm{PO}_{4}^{3-}(a q) \nonumber$

If we add HgCl2 in excess to a sample that contains phosphite, each mole of $\text{PO}_3^{3-}$ will produce one mole of Hg2Cl2. The precipitate’s mass, therefore, provides an indirect measurement of the amount of $\text{PO}_3^{3-}$ in the original sample.
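Because each mole of Hg2Cl2 corresponds to one mole of phosphite, converting the precipitate’s mass into the mass of analyte is a short exercise in stoichiometric bookkeeping. The Python sketch below is our own illustration of that conversion; the precipitate mass is an invented example value and the molar masses are rounded, so treat it as a sketch rather than part of the approved method.

```python
# A minimal sketch of the indirect phosphite determination described above:
# mass of Hg2Cl2 -> moles of Hg2Cl2 -> moles of PO3(3-) (1:1) -> mass of PO3(3-).
# The precipitate mass is a made-up example; the molar masses are rounded values.

M_HG2CL2 = 472.09   # g/mol, approximate molar mass of Hg2Cl2
M_PO3 = 78.97       # g/mol, approximate molar mass of the phosphite ion

def mass_phosphite(mass_precipitate_g):
    """Mass of phosphite (g) implied by a measured mass of Hg2Cl2 (g)."""
    mol_hg2cl2 = mass_precipitate_g / M_HG2CL2
    mol_po3 = mol_hg2cl2            # one mole of Hg2Cl2 per mole of PO3(3-)
    return mol_po3 * M_PO3

print(f"{mass_phosphite(0.4320):.4f} g of phosphite")   # hypothetical 0.4320 g of Hg2Cl2
```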
Types of Gravimetric Methods

The examples in the previous section illustrate four different ways in which a measurement of mass may serve as an analytical signal. When the signal is the mass of a precipitate, we call the method precipitation gravimetry. The indirect determination of $\text{PO}_3^{3-}$ by precipitating Hg2Cl2 is an example, as is the direct determination of Cl– by precipitating AgCl.

In electrogravimetry, we deposit the analyte as a solid film on an electrode in an electrochemical cell. The deposition of Pb2+ as PbO2 at a Pt anode is one example of electrogravimetry. The reduction of Cu2+ to Cu at a Pt cathode is another example of electrogravimetry. We will not consider electrogravimetry in this chapter. See Chapter 11 on electrochemical methods of analysis for a further discussion of electrogravimetry.

When we use thermal or chemical energy to remove a volatile species, we call the method volatilization gravimetry. In determining the moisture content of bread, for example, we use thermal energy to vaporize the water in the sample. To determine the amount of carbon in an organic compound, we use the chemical energy of combustion to convert it to CO2.

Finally, in particulate gravimetry we determine the analyte by separating it from the sample’s matrix using a filtration or an extraction. The determination of total suspended solids is one example of particulate gravimetry.

Conservation of Mass

An accurate gravimetric analysis requires that the analytical signal—whether it is a mass or a change in mass—is proportional to the amount of analyte in our sample. For all gravimetric methods this proportionality involves a conservation of mass. If the method relies on one or more chemical reactions, then we must know the stoichiometry of the reactions. In the analysis of $\text{PO}_3^{3-}$ described earlier, for example, we know that each mole of Hg2Cl2 corresponds to a mole of $\text{PO}_3^{3-}$. If we remove the analyte from its matrix, then the separation must be selective for the analyte. When determining the moisture content in bread, for example, we know that the mass of H2O in the bread is the difference between the sample’s final mass and its initial mass.

We will return to this concept of applying a conservation of mass later in the chapter when we consider specific examples of gravimetric methods.

Why Gravimetry is Important

Except for particulate gravimetry, which is the most trivial form of gravimetry, you probably will not use gravimetry after you complete this course. Why, then, is familiarity with gravimetry still important? The answer is that gravimetry is one of only a small number of definitive techniques whose measurements require only base SI units, such as mass or the mole, and defined constants, such as Avogadro’s number and the mass of 12C. Ultimately, we must be able to trace the result of any analysis to a definitive technique, such as gravimetry, that we can relate to fundamental physical properties [Valcárcel, M.; Ríos, A. Analyst 1995, 120, 2291–2297]. Although most analysts never use gravimetry to validate their results, they often verify an analytical method by analyzing a standard reference material whose composition is traceable to a definitive technique [(a) Moody, J. R.; Epstein, M. S. Spectrochim. Acta 1991, 46B, 1571–1575; (b) Epstein, M. S. Spectrochim. Acta 1991, 46B, 1583–1591]. Other examples of definitive techniques are coulometry and isotope-dilution mass spectrometry. Coulometry is discussed in Chapter 11.
Isotope-dilution mass spectrometry is beyond the scope of this textbook; however, you will find some suggested readings in this chapter’s Additional Resources.
In precipitation gravimetry an insoluble compound forms when we add a precipitating reagent, or precipitant, to a solution that contains our analyte. In most cases the precipitate is the product of a simple metathesis reaction between the analyte and the precipitant; however, any reaction that generates a precipitate potentially can serve as a gravimetric method. Most precipitation gravimetric methods were developed in the nineteenth century, or earlier, often for the analysis of ores. Figure 1.1.1 in Chapter 1, for example, illustrates a precipitation gravimetric method for the analysis of nickel in ores.

Theory and Practice

All precipitation gravimetric analyses share two important attributes. First, the precipitate must be of low solubility, of high purity, and of known composition if its mass is to reflect accurately the analyte’s mass. Second, it must be easy to separate the precipitate from the reaction mixture.

Solubility Considerations

To provide an accurate result, a precipitate’s solubility must be minimal. The accuracy of a total analysis technique typically is better than ±0.1%, which means the precipitate must account for at least 99.9% of the analyte. Extending this requirement to 99.99% ensures the precipitate’s solubility will not limit the accuracy of a gravimetric analysis.

A total analysis technique is one in which the analytical signal—mass in this case—is proportional to the absolute amount of analyte in the sample. See Chapter 3 for a discussion of the difference between total analysis techniques and concentration techniques.

We can minimize solubility losses by controlling the conditions under which the precipitate forms. This, in turn, requires that we account for every equilibrium reaction that might affect the precipitate’s solubility. For example, we can determine Ag+ gravimetrically by adding NaCl as a precipitant, forming a precipitate of AgCl.

$\mathrm{Ag}^{+}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons\mathrm{AgCl}(s) \label{8.1}$

If this is the only reaction we consider, then we predict that the precipitate’s solubility, SAgCl, is given by the following equation.

$S_{\mathrm{AgCl}}=\left[\mathrm{Ag}^{+}\right]=\frac{K_{\mathrm{sp}}}{\left[\mathrm{Cl}^{-}\right]} \label{8.2}$

Equation \ref{8.2} suggests that we can minimize solubility losses by adding a large excess of Cl–. In fact, as shown in Figure 8.2.1, adding a large excess of Cl– increases the precipitate’s solubility. To understand why the solubility of AgCl is more complicated than the relationship suggested by Equation \ref{8.2}, we must recall that Ag+ also forms a series of soluble silver-chloro metal–ligand complexes.

$\operatorname{Ag}^{+}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons\operatorname{AgCl}(a q) \quad \log K_{1}=3.70 \label{8.3}$

$\operatorname{AgCl}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons\operatorname{AgCl}_{2}^{-}(a q) \quad \log K_{2}=1.92 \label{8.4}$

$\mathrm{AgCl}_{2}^{-}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons\mathrm{AgCl}_{3}^{2-}(a q) \quad \log K_{3}=0.78 \label{8.5}$

Note the difference between reaction \ref{8.3}, in which we form AgCl(aq) as a product, and reaction \ref{8.1}, in which we form AgCl(s) as a product. The formation of AgCl(aq) from AgCl(s)

$\operatorname{AgCl}(s)\rightleftharpoons\operatorname{AgCl}(a q) \nonumber$

is called AgCl’s intrinsic solubility.

The actual solubility of AgCl is the sum of the equilibrium concentrations for all soluble forms of Ag+.
$S_{\mathrm{AgCl}}=\left[\mathrm{Ag}^{+}\right]+[\mathrm{AgCl}(a q)]+\left[\mathrm{AgCl}_{2}^-\right]+\left[\mathrm{AgCl}_{3}^{2-}\right] \label{8.6}$

By substituting into Equation \ref{8.6} the equilibrium constant expressions for reaction \ref{8.1} and reactions \ref{8.3}–\ref{8.5}, we can define the solubility of AgCl as

$S_\text{AgCl} = \frac {K_\text{sp}} {[\text{Cl}^-]} + K_1K_\text{sp} + K_1K_2K_\text{sp}[\text{Cl}^-]+K_1K_2K_3K_\text{sp}[\text{Cl}^-]^2 \label{8.7}$

Equation \ref{8.7} explains the solubility curve for AgCl shown in Figure 8.2.1. As we add NaCl to a solution of Ag+, the solubility of AgCl initially decreases because of reaction \ref{8.1}. Under these conditions, the final three terms in Equation \ref{8.7} are small and Equation \ref{8.2} is sufficient to describe AgCl’s solubility. For higher concentrations of Cl–, reaction \ref{8.4} and reaction \ref{8.5} increase the solubility of AgCl. Clearly the equilibrium concentration of chloride is important if we wish to determine the concentration of silver by precipitating AgCl. In particular, we must avoid a large excess of chloride.

The predominate silver-chloro complexes for different values of pCl are shown by the ladder diagram along the x-axis in Figure 8.2.1. Note that the increase in solubility begins when the higher-order soluble complexes of $\text{AgCl}_2^-$ and $\text{AgCl}_3^{2-}$ are the predominate species.

Another important parameter that may affect a precipitate’s solubility is pH. For example, a hydroxide precipitate, such as Fe(OH)3, is more soluble at lower pH levels where the concentration of OH– is small. Because fluoride is a weak base, the solubility of calcium fluoride, $S_{\text{CaF}_2}$, also is pH-dependent. We can derive an equation for $S_{\text{CaF}_2}$ by considering the following equilibrium reactions

$\mathrm{CaF}_{2}(s)\rightleftharpoons \mathrm{Ca}^{2+}(a q)+2 \mathrm{F}^{-}(a q) \quad K_{\mathrm{sp}}=3.9 \times 10^{-11} \label{8.8}$

$\mathrm{HF}(a q)+\mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{F}^{-}(a q) \quad K_{\mathrm{a}}=6.8 \times 10^{-4} \label{8.9}$

and the following equation for the solubility of CaF2.

$S_{\mathrm{Ca} \mathrm{F}_{2}}=\left[\mathrm{Ca}^{2+}\right]=\frac{1}{2}\left\{\left[\mathrm{F}^{-}\right]+[\mathrm{HF}]\right\} \label{8.10}$

Be sure that Equation \ref{8.10} makes sense to you. Reaction \ref{8.8} tells us that the dissolution of CaF2 produces one mole of Ca2+ for every two moles of F–, which explains the term of 1/2 in Equation \ref{8.10}. Because F– is a weak base, we must account for both chemical forms in solution, which explains why we include HF.

Substituting the equilibrium constant expressions for reaction \ref{8.8} and reaction \ref{8.9} into Equation \ref{8.10} allows us to define the solubility of CaF2 in terms of the equilibrium concentration of H3O+.

$S_{\mathrm{CaF}_{2}}=\left[\mathrm{Ca}^{2+}\right]=\left\{\frac{K_{\mathrm{sp}}}{4}\left(1+\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{K_{\mathrm{a}}}\right)^{2}\right\}^{1 / 3} \label{8.11}$

Figure 8.2.2 shows how pH affects the solubility of CaF2. Depending on the solution’s pH, the predominate form of fluoride is either HF or F–. When the pH is greater than 4.17, the predominate species is F– and the solubility of CaF2 is independent of pH because only reaction \ref{8.8} occurs to an appreciable extent. At more acidic pH levels, the solubility of CaF2 increases because of the contribution of reaction \ref{8.9}.
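To see how Equation \ref{8.11} behaves, it helps to evaluate it over a range of pH values. The short Python sketch below does this using the equilibrium constants quoted above; it is an illustration we have added, not part of the text, and it simply reproduces the qualitative trend described for Figure 8.2.2.

```python
# A numerical sketch (added for illustration) of Equation 8.11: the molar
# solubility of CaF2 as a function of pH, using Ksp = 3.9e-11 and Ka(HF) = 6.8e-4
# from Reactions 8.8 and 8.9.

KSP = 3.9e-11
KA = 6.8e-4

def solubility_CaF2(pH):
    """Molar solubility of CaF2 at the given pH, per Equation 8.11."""
    h3o = 10.0 ** (-pH)
    return ((KSP / 4.0) * (1.0 + h3o / KA) ** 2) ** (1.0 / 3.0)

for pH in (1, 2, 3, 4, 5, 6, 7):
    print(f"pH {pH}: S(CaF2) = {solubility_CaF2(pH):.2e} M")
```

The printed values rise sharply at acidic pH and level off near 2 × 10⁻⁴ M once the pH is well above the pKa of HF, matching the flat, pH-independent region of the solubility curve described above.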
Exercise 8.2.1 You can use a ladder diagram to predict the conditions that will minimize a precipitate’s solubility. Draw a ladder diagram for oxalic acid, H2C2O4, and use it to predict the range of pH values that will minimize the solubility of CaC2O4. Relevant equilibrium constants are in the appendices. Answer The solubility reaction for CaC2O4 is $\mathrm{CaC}_{2} \mathrm{O}_{4}(s)\rightleftharpoons \mathrm{Ca}^{2+}(a q)+\mathrm{C}_{2} \mathrm{O}_{4}^{2-}(a q) \nonumber$ To minimize solubility, the pH must be sufficiently basic that oxalate, $\text{C}_2\text{O}_4^{2-}$, does not react to form $\text{HC}_2\text{O}_4^{-}$ or H2C2O4. The ladder diagram for oxalic acid, including approximate buffer ranges, is shown below. Maintaining a pH greater than 5.3 ensures that $\text{C}_2\text{O}_4^{2-}$ is the only important form of oxalic acid in solution, minimizing the solubility of CaC2O4. When solubility is a concern, it may be possible to decrease solubility by using a non-aqueous solvent. A precipitate’s solubility generally is greater in an aqueous solution because of water’s ability to stabilize ions through solvation. The poorer solvating ability of a non-aqueous solvent, even those that are polar, leads to a smaller solubility product. For example, the Ksp of PbSO4 is $2 \times 10^{-8}$ in H2O and $2.6 \times 10^{-12}$ in a 50:50 mixture of H2O and ethanol. Avoiding Impurities In addition to having a low solubility, a precipitate must be free from impurities. Because precipitation usually occurs in a solution that is rich in dissolved solids, the initial precipitate often is impure. To avoid a determinate error, we must remove these impurities before we determine the precipitate’s mass. The greatest source of impurities are chemical and physical interactions that take place at the precipitate’s surface. A precipitate generally is crystalline—even if only on a microscopic scale—with a well-defined lattice of cations and anions. Those cations and anions at the precipitate’s surface carry, respectively, a positive or a negative charge because they have incomplete coordination spheres. In a precipitate of AgCl, for example, each silver ion in the precipitate’s interior is bound to six chloride ions. A silver ion at the surface, however, is bound to no more than five chloride ions and carries a partial positive charge (Figure 8.2.3 ). The presence of these partial charges makes the precipitate’s surface an active site for the chemical and physical interactions that produce impurities. One common impurity is an inclusion, in which a potential interferent, whose size and charge is similar to a lattice ion, can substitute into the lattice structure if the interferent precipitates with the same crystal structure (Figure 8.2.4 a). The probability of forming an inclusion is greatest when the interfering ion’s concentration is substantially greater than the lattice ion’s concentration. An inclusion does not decrease the amount of analyte that precipitates, provided that the precipitant is present in sufficient excess. Thus, the precipitate’s mass always is larger than expected. An inclusion is difficult to remove since it is chemically part of the precipitate’s lattice. The only way to remove an inclusion is through reprecipitation in which we isolate the precipitate from its supernatant solution, dissolve the precipitate by heating in a small portion of a suitable solvent, and then reform the precipitate by allowing the solution to cool. 
Because the interferent’s concentration after dissolving the precipitate is less than that in the original solution, the amount of included material decreases upon reprecipitation. We can repeat the process of reprecipitation until the inclusion’s mass is insignificant. The loss of analyte during reprecipitation, however, is a potential source of determinate error.

Suppose that 10% of an interferent forms an inclusion during each precipitation. When we initially form the precipitate, 10% of the original interferent is present as an inclusion. After the first reprecipitation, 10% of the included interferent remains, which is 1% of the original interferent. A second reprecipitation decreases the interferent to 0.1% of the original amount.

An occlusion forms when an interfering ion is trapped within the growing precipitate. Unlike an inclusion, which is randomly dispersed within the precipitate, an occlusion is localized, either along flaws within the precipitate’s lattice structure or within aggregates of individual precipitate particles (Figure 8.2.4b). An occlusion usually increases a precipitate’s mass; however, the precipitate’s mass is smaller if the occlusion includes the analyte in a lower molecular weight form than that of the precipitate.

We can minimize an occlusion by maintaining the precipitate in equilibrium with its supernatant solution for an extended time, a process called digestion. During a digestion, the dynamic nature of the solubility–precipitation equilibria, in which the precipitate dissolves and reforms, ensures that the occlusion eventually is reexposed to the supernatant solution. Because the rates of dissolution and reprecipitation are slow, there is less opportunity for forming new occlusions.

After precipitation is complete the surface continues to attract ions from solution (Figure 8.2.4c). These surface adsorbates comprise a third type of impurity. We can minimize surface adsorption by decreasing the precipitate’s available surface area. One benefit of digestion is that it increases a precipitate’s average particle size. Because the probability that a particle will dissolve completely is inversely proportional to its size, during digestion larger particles increase in size at the expense of smaller particles. One consequence of forming a smaller number of larger particles is an overall decrease in the precipitate’s surface area. We also can remove surface adsorbates by washing the precipitate, although we cannot ignore the potential loss of analyte.

Inclusions, occlusions, and surface adsorbates are examples of coprecipitates—otherwise soluble species that form along with the precipitate that contains the analyte. Another type of impurity is an interferent that forms an independent precipitate under the conditions of the analysis. For example, the precipitation of nickel dimethylglyoxime requires a slightly basic pH. Under these conditions any Fe3+ in the sample will precipitate as Fe(OH)3. In addition, because most precipitants rarely are selective toward a single analyte, there is a risk that the precipitant will react with both the analyte and an interferent. In addition to forming a precipitate with Ni2+, dimethylglyoxime also forms precipitates with Pd2+ and Pt2+. These cations are potential interferents in an analysis for nickel. We can minimize the formation of additional precipitates by controlling solution conditions.
If an interferent forms a precipitate that is less soluble than the analyte’s precipitate, we can precipitate the interferent and remove it by filtration, leaving the analyte behind in solution. Alternatively, we can mask the analyte or the interferent to prevent its precipitation. Both of the approaches outline above are illustrated in Fresenius’ analytical method for the determination of Ni in ores that contain Pb2+, Cu2+, and Fe3+ (see Figure 1.1.1 in Chapter 1). Dissolving the ore in the presence of H2SO4 selectively precipitates Pb2+ as PbSO4. Treating the resulting supernatant with H2S precipitates Cu2+ as CuS. After removing the CuS by filtration, ammonia is added to precipitate Fe3+ as Fe(OH)3. Nickel, which forms a soluble amine complex, remains in solution. Masking was introduced in Chapter 7. Controlling Particle Size Size matters when it comes to forming a precipitate. Larger particles are easier to filter and, as noted earlier, a smaller surface area means there is less opportunity for surface adsorbates to form. By controlling the reaction conditions we can significantly increase a precipitate’s average particle size. The formation of a precipitate consists of two distinct events: nucleation, the initial formation of smaller, stable particles of the precipitate, and particle growth. Larger particles form when the rate of particle growth exceeds the rate of nucleation. Understanding the conditions that favor particle growth is important when we design a gravimetric method of analysis. We define a solute’s relative supersaturation, RSS, as $R S S=\frac{Q-S}{S} \label{8.12}$ where Q is the solute’s actual concentration and S is the solute’s concentration at equilibrium [Von Weimarn, P. P. Chem. Revs. 1925, 2, 217–242]. The numerator of Equation \ref{8.12}, Q S, is a measure of the solute’s supersaturation. A solution with a large, positive value of RSS has a high rate of nucleation and produces a precipitate with many small particles. When the RSS is small, precipitation is more likely to occur by particle growth than by nucleation. A supersaturated solution is one that contains more dissolved solute than that predicted by equilibrium chemistry. A supersaturated solution is inherently unstable and precipitates solute to reach its equilibrium position. How quickly precipitation occurs depends, in part, on the value of RSS. Equation \ref{8.12} suggests that we can minimize RSS if we decrease the solute’s concentration, Q, or if we increase the precipitate’s solubility, S. A precipitate’s solubility usually increases at higher temperatures and adjusting pH may affect a precipitate’s solubility if it contains an acidic or a basic ion. Temperature and pH, therefore, are useful ways to increase the value of S. Forming the precipitate in a dilute solution of analyte or adding the precipitant slowly and with vigorous stirring are ways to decrease the value of Q. There are practical limits to minimizing RSS. Some precipitates, such as Fe(OH)3 and PbS, are so insoluble that S is very small and a large RSS is unavoidable. Such solutes inevitably form small particles. In addition, conditions that favor a small RSS may lead to a relatively stable supersaturated solution that requires a long time to precipitate fully. For example, almost a month is required to form a visible precipitate of BaSO4 under conditions in which the initial RSS is 5 [Bassett, J.; Denney, R. C.; Jeffery, G. H. Mendham. J. Vogel’s Textbook of Quantitative Inorganic Analysis, Longman: London, 4th Ed., 1981, p. 408]. 
A visible precipitate takes longer to form when RSS is small both because there is a slow rate of nucleation and because there is a steady decrease in RSS as the precipitate forms. One solution to the latter problem is to generate the precipitant in situ as the product of a slow chemical reaction, which effectively maintains a constant RSS. Because the precipitate forms under conditions of low RSS, initial nucleation produces a small number of particles. As additional precipitant forms, particle growth supersedes nucleation, which results in larger particles of precipitate. This process is called a homogeneous precipitation [Gordon, L.; Salutsky, M. L.; Willard, H. H. Precipitation from Homogeneous Solution, Wiley: NY, 1959].

Two general methods are used for homogeneous precipitation. If the precipitate's solubility is pH-dependent, then we can mix the analyte and the precipitant under conditions where precipitation does not occur, and then increase or decrease the pH by chemically generating OH– or H3O+. For example, the hydrolysis of urea, CO(NH2)2, is a source of OH– because of the following two reactions.

$\mathrm{CO}\left(\mathrm{NH}_{2}\right)_{2}(a q)+\mathrm{H}_{2} \mathrm{O}( l)\rightleftharpoons2 \mathrm{NH}_{3}(a q)+\mathrm{CO}_{2}(g) \nonumber$

$\mathrm{NH}_{3}(a q)+\mathrm{H}_{2} \mathrm{O}( l)\rightleftharpoons\mathrm{OH}^{-}(a q)+\mathrm{NH}_{4}^{+}(a q) \nonumber$

Because the hydrolysis of urea is temperature-dependent—the rate is negligible at room temperature—we can use temperature to control the rate of hydrolysis and the rate of precipitate formation. Precipitates of CaC2O4, for example, have been produced by this method. After dissolving a sample that contains Ca2+, the solution is made acidic with HCl before adding a solution of 5% w/v (NH4)2C2O4. Because the solution is acidic, a precipitate of CaC2O4 does not form. The solution is heated to approximately 50oC and urea is added. After several minutes, a precipitate of CaC2O4 begins to form, with precipitation reaching completion in about 30 min.

In the second method of homogeneous precipitation, the precipitant is generated by a chemical reaction. For example, Pb2+ is precipitated homogeneously as PbCrO4 by using bromate, $\text{BrO}_3^-$, to oxidize Cr3+ to $\text{CrO}_4^{2-}$.

$6 \mathrm{BrO}_{3}^{-}(a q)+10 \mathrm{Cr}^{3+}(a q)+22 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons 3 \mathrm{Br}_{2}(a q)+10 \mathrm{CrO}_{4}^{2-}(a q)+44 \mathrm{H}^{+}(a q) \nonumber$

Figure 8.2.5 shows the result of preparing PbCrO4 by direct addition of K2CrO4 (Beaker A) and by homogeneous precipitation (Beaker B). Both beakers contain the same amount of PbCrO4. Because the direct addition of K2CrO4 leads to rapid precipitation and the formation of smaller particles, the precipitate remains less settled than the precipitate prepared homogeneously. Note, as well, the difference in the color of the two precipitates.

The effect of particle size on color is well-known to geologists, who use a streak test to help identify minerals. The color of a bulk mineral and its color when powdered often are different. Rubbing a mineral across an unglazed porcelain plate leaves behind a small streak of the powdered mineral. Bulk samples of hematite, Fe2O3, are black in color, but its streak is a familiar rust-red. Crocoite, the mineral PbCrO4, is red-orange in color; its streak is orange-yellow. A homogeneous precipitation produces large particles of precipitate that are relatively free from impurities.
These advantages, however, are offset by the increased time needed to produce the precipitate and by a tendency for the precipitate to deposit as a thin film on the container's walls. The latter problem is particularly severe for hydroxide precipitates generated using urea.

An additional method for increasing particle size deserves mention. When a precipitate's particles are electrically neutral they tend to coagulate into larger particles that are easier to filter. Surface adsorption of excess lattice ions, however, provides the precipitate's particles with a net positive or a net negative surface charge. Electrostatic repulsion between particles of similar charge prevents them from coagulating into larger particles.

Let's use the precipitation of AgCl from a solution of AgNO3 using NaCl as a precipitant to illustrate this effect. Early in the precipitation, when NaCl is the limiting reagent, excess Ag+ ions chemically adsorb to the AgCl particles, forming a positively charged primary adsorption layer (Figure 8.2.6 a). The solution in contact with this layer contains more inert anions, $\text{NO}_3^-$ in this case, than inert cations, Na+, giving a secondary adsorption layer with a negative charge that balances the primary adsorption layer's positive charge. The solution outside the secondary adsorption layer remains electrically neutral. Coagulation cannot occur if the secondary adsorption layer is too thick because the individual particles of AgCl are unable to approach each other closely enough.

We can induce coagulation in three ways: by decreasing the number of chemically adsorbed Ag+ ions, by increasing the concentration of inert ions, or by heating the solution. As we add additional NaCl, precipitating more of the excess Ag+, the number of chemically adsorbed silver ions decreases and coagulation occurs (Figure 8.2.6 b). Adding too much NaCl, however, creates a primary adsorption layer of excess Cl– with a loss of coagulation. The coagulation and decoagulation of AgCl as we add NaCl to a solution of AgNO3 can serve as an endpoint for a titration. See Chapter 9 for additional details.

A second way to induce coagulation is to add an inert electrolyte, which increases the concentration of ions in the secondary adsorption layer (Figure 8.2.6 c). With more ions available, the thickness of the secondary adsorption layer decreases. Particles of precipitate may now approach each other more closely, which allows the precipitate to coagulate. The amount of electrolyte needed to cause spontaneous coagulation is called the critical coagulation concentration.

Heating the solution and the precipitate provides a third way to induce coagulation. As the temperature increases, the number of ions in the primary adsorption layer decreases, which lowers the precipitate's surface charge. In addition, heating increases the particles' kinetic energy, allowing them to overcome the electrostatic repulsion that prevents coagulation at lower temperatures.

Filtering the Precipitate

After precipitating and digesting a precipitate, we separate it from solution by filtering. The most common filtration method uses filter paper, which is classified according to its speed, its size, and its ash content on ignition. Speed, or how quickly the supernatant passes through the filter paper, is a function of the paper's pore size. A larger pore size allows the supernatant to pass more quickly through the filter paper, but does not retain small particles of precipitate.
Filter paper is rated as fast (retains particles larger than 20–25 μm), medium–fast (retains particles larger than 16 μm), medium (retains particles larger than 8 μm), and slow (retains particles larger than 2–3 μm). The proper choice of filtering speed is important. If the filtering speed is too fast, we may fail to retain some of the precipitate, which causes a negative determinate error. On the other hand, the precipitate may clog the pores if we use a filter paper that is too slow.

A filter paper's size is just its diameter. Filter paper comes in many sizes, including 4.25 cm, 7.0 cm, 11.0 cm, 12.5 cm, 15.0 cm, and 27.0 cm. Choose a size that fits comfortably into your funnel. For a typical 65-mm long-stem funnel, 11.0 cm and 12.5 cm filter paper are good choices.

Because filter paper is hygroscopic, it is not easy to dry it to a constant weight. When accuracy is important, the filter paper is removed before we determine the precipitate's mass. After transferring the precipitate and filter paper to a covered crucible, we heat the crucible to a temperature that converts the paper to CO2(g) and H2O(g), a process called ignition. Igniting a poor quality filter paper leaves behind a residue of inorganic ash. For quantitative work, use a low-ash filter paper. This grade of filter paper is pretreated with a mixture of HCl and HF to remove inorganic materials. Quantitative filter paper typically has an ash content of less than 0.010% w/w.

Gravity filtration is accomplished by folding the filter paper into a cone and placing it in a long-stem funnel (Figure 8.2.7 ). To form a tight seal between the filter cone and the funnel, we dampen the paper with water or supernatant and press the paper to the wall of the funnel. When prepared properly, the funnel's stem fills with the supernatant, increasing the rate of filtration.

The precipitate is transferred to the filter in several steps. The first step is to decant the majority of the supernatant through the filter paper without transferring the precipitate (Figure 8.2.8 ). This prevents the filter paper from clogging at the beginning of the filtration process. The precipitate is rinsed while it remains in its beaker, with the rinsings decanted through the filter paper. Finally, the precipitate is transferred onto the filter paper using a stream of rinse solution. Any precipitate that clings to the walls of the beaker is transferred using a rubber policeman (a flexible rubber spatula attached to the end of a glass stirring rod).

An alternative method for filtering a precipitate is to use a filtering crucible. The most common option is a fritted-glass crucible that contains a porous glass disk filter. Fritted-glass crucibles are classified by their porosity: coarse (retaining particles larger than 40–60 μm), medium (retaining particles greater than 10–15 μm), and fine (retaining particles greater than 4–5.5 μm). Another type of filtering crucible is the Gooch crucible, which is a porcelain crucible with a perforated bottom. A glass fiber mat is placed in the crucible to retain the precipitate. For both types of crucibles, the precipitate is transferred in the same manner described earlier for filter paper. Instead of using gravity, the supernatant is drawn through the crucible with the assistance of suction from a vacuum aspirator or pump (Figure 8.2.9 ).

Rinsing the Precipitate

Because the supernatant is rich with dissolved inert ions, we must remove residual traces of supernatant without incurring loss of analyte due to solubility.
In many cases this simply involves the use of cold solvents or rinse solutions that contain organic solvents such as ethanol. The pH of the rinse solution is critical if the precipitate contains an acidic or a basic ion. When coagulation plays an important role in determining particle size, adding a volatile inert electrolyte to the rinse solution prevents the precipitate from reverting into smaller particles that might pass through the filter. This process of reverting to smaller particles is called peptization. The volatile electrolyte is removed when drying the precipitate.

In general, we can minimize the loss of analyte if we use several small portions of rinse solution instead of a single large volume. Testing the used rinse solution for the presence of an impurity is another way to guard against over-rinsing the precipitate. For example, if Cl– is a residual ion in the supernatant, we can test for its presence using AgNO3. After we collect a small portion of the rinse solution, we add a few drops of AgNO3 and look for the presence or absence of a precipitate of AgCl. If a precipitate forms, then we know Cl– is present and continue to rinse the precipitate. Additional rinsing is not needed if the AgNO3 does not produce a precipitate.

Drying the Precipitate

After separating the precipitate from its supernatant solution, we dry the precipitate to remove residual traces of rinse solution and to remove any volatile impurities. The temperature and method of drying depend on the method of filtration and the precipitate's desired chemical form. Placing the precipitate in a laboratory oven and heating to a temperature of 110oC is sufficient to remove water and other easily volatilized impurities. Higher temperatures require a muffle furnace, a Bunsen burner, or a Meker burner, and are necessary if we need to decompose the precipitate before its weight is determined.

Because filter paper absorbs moisture, we must remove it before we weigh the precipitate. This is accomplished by folding the filter paper over the precipitate and transferring both the filter paper and the precipitate to a porcelain or platinum crucible. Gentle heating first dries and then chars the filter paper. Once the paper begins to char, we slowly increase the temperature until there is no trace of the filter paper and any remaining carbon is oxidized to CO2. Fritted-glass crucibles cannot withstand high temperatures and are dried in an oven at a temperature below 200oC. The glass fiber mats used in Gooch crucibles can be heated to a maximum temperature of approximately 500oC.

Composition of the Final Precipitate

For a quantitative application, the final precipitate must have a well-defined composition. A precipitate that contains volatile ions or substantial amounts of hydrated water usually is dried at a temperature that completely removes these volatile species. For example, one standard gravimetric method for the determination of magnesium involves its precipitation as MgNH4PO4•6H2O. Unfortunately, this precipitate is difficult to dry at lower temperatures without losing an inconsistent amount of hydrated water and ammonia. Instead, the precipitate is dried at a temperature greater than 1000oC where it decomposes to magnesium pyrophosphate, Mg2P2O7.

An additional problem is encountered if the isolated solid is nonstoichiometric. For example, precipitating Mn2+ as Mn(OH)2 and heating frequently produces a nonstoichiometric manganese oxide, MnOx, where x varies between one and two.
In this case the nonstoichiometric product is the result of forming a mixture of oxides with different oxidation states of manganese. Other nonstoichiometric compounds form as a result of lattice defects in the crystal structure [Ward, R., ed., Non-Stoichiometric Compounds (Adv. Chem. Ser. 39), American Chemical Society: Washington, D. C., 1963].

Representative Method 8.2.1: Determination of Mg in Water and Wastewater

The best way to appreciate the theoretical and practical details discussed in this section is to carefully examine a typical precipitation gravimetric method. Although each method is unique, the determination of Mg2+ in water and wastewater by precipitating MgNH4PO4•6H2O and isolating Mg2P2O7 provides an instructive example of a typical procedure. The description here is based on Method 3500-Mg D in Standard Methods for the Examination of Water and Wastewater, 19th Ed., American Public Health Association: Washington, D. C., 1995. With the publication of the 20th Edition in 1998, this method is no longer listed as an approved method.

Description of Method

Magnesium is precipitated as MgNH4PO4•6H2O using (NH4)2HPO4 as the precipitant. The precipitate's solubility in a neutral solution is relatively high (0.0065 g/100 mL in pure water at 10oC), but it is much less soluble in the presence of dilute ammonia (0.0003 g/100 mL in 0.6 M NH3). Because the precipitant is not selective, a preliminary separation of Mg2+ from potential interferents is necessary. Calcium, which is the most significant interferent, is removed by precipitating it as CaC2O4. The presence of excess ammonium salts from the precipitant, or from the addition of too much ammonia, leads to the formation of Mg(NH4)4(PO4)2, which forms Mg(PO3)2 after drying. The precipitate is isolated by gravity filtration, using a rinse solution of dilute ammonia. After filtering, the precipitate is converted to Mg2P2O7 and weighed.

Procedure

Transfer a sample that contains no more than 60 mg of Mg2+ into a 600-mL beaker. Add 2–3 drops of methyl red indicator, and, if necessary, adjust the volume to 150 mL. Acidify the solution with 6 M HCl and add 10 mL of 30% w/v (NH4)2HPO4. After cooling and with constant stirring, add concentrated NH3 dropwise until the methyl red indicator turns yellow (pH > 6.3). After stirring for 5 min, add 5 mL of concentrated NH3 and continue to stir for an additional 10 min. Allow the resulting solution and precipitate to stand overnight. Isolate the precipitate by filtering through filter paper, rinsing with 5% v/v NH3. Dissolve the precipitate in 50 mL of 10% v/v HCl and precipitate a second time following the same procedure. After filtering, carefully remove the filter paper by charring. Heat the precipitate at 500oC until the residue is white, and then bring the precipitate to constant weight at 1100oC.

Questions

1. Why does the procedure call for a sample that contains no more than 60 mg of Mg2+?

A 60-mg portion of Mg2+ generates approximately 600 mg of MgNH4PO4•6H2O, which is a substantial amount of precipitate. A larger quantity of precipitate is difficult to filter and difficult to rinse free of impurities.

2. Why is the solution acidified with HCl before we add the precipitant?

The HCl ensures that MgNH4PO4•6H2O does not precipitate immediately upon adding the precipitant. Because $\text{PO}_4^{3-}$ is a weak base, the precipitate is soluble in a strongly acidic solution.
If we add the precipitant under neutral or basic conditions (that is, a high RSS), then the resulting precipitate will consist of smaller, less pure particles. Increasing the pH by adding base allows the precipitate to form under more favorable (that is, a low RSS) conditions.

3. Why is the acid–base indicator methyl red added to the solution?

The indicator changes color at a pH of approximately 6.3, which indicates that there is sufficient NH3 to neutralize the HCl added at the beginning of the procedure. The amount of NH3 is crucial to this procedure. If we add insufficient NH3, then the solution is too acidic, which increases the precipitate's solubility and leads to a negative determinate error. If we add too much NH3, the precipitate may contain traces of Mg(NH4)4(PO4)2, which, on drying, forms Mg(PO3)2 instead of Mg2P2O7. This increases the mass of the ignited precipitate, and gives a positive determinate error. After adding enough NH3 to neutralize the HCl, we add an additional 5 mL of NH3 to complete the quantitative precipitation of MgNH4PO4•6H2O.

4. Explain why forming Mg(PO3)2 instead of Mg2P2O7 increases the precipitate's mass.

Each mole of Mg2P2O7 contains two moles of magnesium and each mole of Mg(PO3)2 contains only one mole of magnesium. A conservation of mass, therefore, requires that two moles of Mg(PO3)2 form in place of each mole of Mg2P2O7. One mole of Mg2P2O7 weighs 222.6 g. Two moles of Mg(PO3)2 weigh 364.5 g. Any replacement of Mg2P2O7 with Mg(PO3)2 must increase the precipitate's mass.

5. What additional steps, beyond those discussed in questions 2 and 3, help improve the precipitate's purity?

Two additional steps in the procedure help to form a precipitate that is free of impurities: digestion and reprecipitation.

6. Why is the precipitate rinsed with a solution of 5% v/v NH3?

This is done for the same reason that the precipitation is carried out in an ammoniacal solution; using dilute ammonia minimizes solubility losses when we rinse the precipitate.

Quantitative Applications

Although no longer a common analytical technique, precipitation gravimetry still provides a reliable approach for assessing the accuracy of other methods of analysis, or for verifying the composition of standard reference materials. In this section we review the general application of precipitation gravimetry to the analysis of inorganic and organic compounds.

Inorganic Analysis

Table 8.2.1 provides a summary of precipitation gravimetric methods for inorganic cations and anions. Several methods for the homogeneous generation of precipitants are shown in Table 8.2.2 . The majority of inorganic precipitants show poor selectivity for the analyte. Many organic precipitants, however, are selective for one or two inorganic ions. Table 8.2.3 lists examples of several common organic precipitants.
Table 8.2.1. Selected Precipitation Gravimetric Methods for Inorganic Cations and Anions (Arranged by Precipitant)

analyte | precipitant | precipitate formed | precipitate weighed
Ba2+ | (NH4)2CrO4 | BaCrO4 | BaCrO4
Pb2+ | K2CrO4 | PbCrO4 | PbCrO4
Ag+ | HCl | AgCl | AgCl
$\text{Hg}_2^{2+}$ | HCl | Hg2Cl2 | Hg2Cl2
Al3+ | NH3 | Al(OH)3 | Al2O3
Be2+ | NH3 | Be(OH)2 | BeO
Fe3+ | NH3 | Fe(OH)3 | Fe2O3
Ca2+ | (NH4)2C2O4 | CaC2O4 | CaCO3 or CaO
Sb3+ | H2S | Sb2S3 | Sb2S3
As3+ | H2S | As2S3 | As2S3
Hg2+ | H2S | HgS | HgS
Ba2+ | H2SO4 | BaSO4 | BaSO4
Pb2+ | H2SO4 | PbSO4 | PbSO4
Sr2+ | H2SO4 | SrSO4 | SrSO4
Be2+ | (NH4)2HPO4 | NH4BePO4 | Be2P2O7
Mg2+ | (NH4)2HPO4 | NH4MgPO4 | Mg2P2O7
Zn2+ | (NH4)2HPO4 | NH4ZnPO4 | Zn2P2O7
Sr2+ | KH2PO4 | SrHPO4 | Sr2P2O7
CN– | AgNO3 | AgCN | AgCN
I– | AgNO3 | AgI | AgI
Br– | AgNO3 | AgBr | AgBr
Cl– | AgNO3 | AgCl | AgCl
$\text{ClO}_3^-$ | FeSO4/AgNO3 | AgCl | AgCl
SCN– | SO2/CuSO4 | CuSCN | CuSCN
$\text{SO}_4^{2-}$ | BaCl2 | BaSO4 | BaSO4

Table 8.2.2. Reactions for the Homogeneous Preparation of Selected Inorganic Precipitants

precipitant | reaction
OH– | $\left(\mathrm{NH}_{2}\right)_{2} \mathrm{CO}(a q)+3 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons2 \mathrm{NH}_{4}^{+}(a q)+\mathrm{CO}_{2}(g)+2 \mathrm{OH}^{-}(a q)$
$\text{SO}_4^{2-}$ | $\mathrm{NH}_{2} \mathrm{HSO}_{3}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons\mathrm{NH}_{4}^{+}(a q)+\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{SO}_{4}^{2-}(a q)$
S2– | $\mathrm{CH}_{3} \mathrm{CSNH}_{2}(a q)+\mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons\mathrm{CH}_{3} \mathrm{CONH}_{2}(a q)+\mathrm{H}_{2} \mathrm{S}(a q)$
$\text{IO}_3^-$ | $\mathrm{HOCH}_{2} \mathrm{CH}_{2} \mathrm{OH}(a q)+\mathrm{IO}_{4}^{-}(a q)\rightleftharpoons2 \mathrm{HCHO}(a q)+\mathrm{H}_{2} \mathrm{O}(l)+\mathrm{IO}_{3}^{-}(a q)$
$\text{PO}_4^{3-}$ | $\left(\mathrm{CH}_{3} \mathrm{O}\right)_{3} \mathrm{PO}(a q)+3 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons3 \mathrm{CH}_{3} \mathrm{OH}(a q)+\mathrm{H}_{3} \mathrm{PO}_{4}(a q)$
$\text{C}_2\text{O}_4^{2-}$ | $\left(\mathrm{C}_{2} \mathrm{H}_{5}\right)_{2} \mathrm{C}_{2} \mathrm{O}_{4}(a q)+2 \mathrm{H}_{2} \mathrm{O}( l)\rightleftharpoons2 \mathrm{C}_{2} \mathrm{H}_{5} \mathrm{OH}(a q)+\mathrm{H}_{2} \mathrm{C}_{2} \mathrm{O}_{4}(a q)$
$\text{CO}_3^{2-}$ | $\mathrm{Cl}_{3} \mathrm{CCOOH}(a q)+2 \mathrm{OH}^{-}(a q)\rightleftharpoons\mathrm{CHCl}_{3}(a q)+\mathrm{CO}_{3}^{2-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)$

Table 8.2.3. Selected Precipitation Gravimetric Methods for Inorganic Ions Using an Organic Precipitant

analyte | precipitant | precipitate formed | precipitate weighed
Ni2+ | dimethylglyoxime | Ni(C4H7O2N2)2 | Ni(C4H7O2N2)2
Fe3+ | cupferron | Fe(C6H5N2O2)3 | Fe2O3
Cu2+ | cupron | CuC14H11O2N | CuC14H11O2N
Co2+ | 1-nitroso-2-naphthol | Co(C10H6O2N)3 | Co or CoSO4
K+ | sodium tetraphenylborate, Na[B(C6H5)4] | K[B(C6H5)4] | K[B(C6H5)4]
$\text{NO}_3^-$ | nitron | C20H16N4•HNO3 | C20H16N4•HNO3

Precipitation gravimetry continues to be listed as a standard method for the determination of $\text{SO}_4^{2-}$ in water and wastewater analysis [Method 4500-SO42– C and Method 4500-SO42– D as published in Standard Methods for the Examination of Waters and Wastewaters, 20th Ed., American Public Health Association: Washington, D. C., 1998]. Precipitation is carried out using BaCl2 in an acidic solution (adjusted with HCl to a pH of 4.5–5.0) to prevent the precipitation of BaCO3 or Ba3(PO4)2, and at a temperature near the solution's boiling point. The precipitate is digested at 80–90oC for at least two hours. Ashless filter paper pulp is added to the precipitate to aid in its filtration. After filtering, the precipitate is ignited to constant weight at 800oC.
Alternatively, the precipitate is filtered through a fine porosity fritted glass crucible (without adding filter paper pulp), and dried to constant weight at 105oC. This procedure is subject to a variety of errors, including occlusions of Ba(NO3)2, BaCl2, and alkali sulfates. Other standard methods for the determination of sulfate in water and wastewater include ion chromatography (see Chapter 12), capillary ion electrophoresis (see Chapter 12), turbidimetry (see Chapter 10), and flow injection analysis (see Chapter 13).

Organic Analysis

Several organic functional groups or heteroatoms can be determined using precipitation gravimetric methods. Table 8.2.4 provides a summary of several representative examples. Note that the determination of alkoxy functional groups is an indirect analysis in which the functional group reacts with an excess of HI and the resulting alkyl iodide, RI, is determined by precipitating it as AgI.

Table 8.2.4. Selected Precipitation Gravimetric Methods for the Analysis of Organic Functional Groups and Heteroatoms

analyte | treatment | precipitant | precipitate
organic halides (R-X) where X is Cl, Br, or I | oxidation with HNO3 in the presence of Ag+ | AgNO3 | AgX
organic halides (R-X) where X is Cl, Br, or I | combustion in O2 (with a Pt catalyst) in the presence of Ag+ | AgNO3 | AgX
organic sulfur | oxidation with HNO3 in the presence of Ba2+ | BaCl2 | BaSO4
organic sulfur | combustion in O2 (with Pt catalyst) with SO2 and SO3 collected in dilute H2O2 | BaCl2 | BaSO4
alkoxy groups (–O-R or –COO-R) where R is –CH3 or –C2H5 | reaction with HI to produce RI | AgNO3 | AgI

Quantitative Calculations

The stoichiometry of a precipitation reaction provides a mathematical relationship between the analyte and the precipitate. Because a precipitation gravimetric method may involve additional chemical reactions to bring the analyte into a different chemical form, knowing the stoichiometry of the precipitation reaction is not always sufficient. Even if you do not have a complete set of balanced chemical reactions, you can use a conservation of mass to deduce the mathematical relationship between the analyte and the precipitate. The following example demonstrates this approach for the direct analysis of a single analyte.

Example 8.2.1

To determine the amount of magnetite, Fe3O4, in an impure ore, a 1.5419-g sample is dissolved in concentrated HCl, resulting in a mixture of Fe2+ and Fe3+. After adding HNO3 to oxidize Fe2+ to Fe3+ and diluting with water, Fe3+ is precipitated as Fe(OH)3 using NH3. Filtering, rinsing, and igniting the precipitate provides 0.8525 g of pure Fe2O3. Calculate the %w/w Fe3O4 in the sample.

Solution

A conservation of mass requires that the precipitate of Fe2O3 contain all iron originally in the sample of ore. We know there are 2 moles of Fe per mole of Fe2O3 (FW = 159.69 g/mol) and 3 moles of Fe per mole of Fe3O4 (FW = 231.54 g/mol); thus

$0.8525 \ \mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3} \times \frac{2 \ \mathrm{mol} \ \mathrm{Fe}}{159.69 \ \mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3}} \times \frac{231.54 \ \mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4}}{3 \ \mathrm{mol} \ \mathrm{Fe}}=0.82405 \ \mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4} \nonumber$

The % w/w Fe3O4 in the sample, therefore, is

$\frac{0.82405 \ \mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4}}{1.5419 \ \mathrm{g} \ \text { sample }} \times 100=53.44 \% \nonumber$

Exercise 8.2.2

A 0.7336-g sample of an alloy that contains copper and zinc is dissolved in 8 M HCl and diluted to 100 mL in a volumetric flask.
In one analysis, the zinc in a 25.00-mL portion of the solution is precipitated as ZnNH4PO4, and isolated as Zn2P2O7, yielding 0.1163 g. The copper in a separate 25.00-mL portion of the solution is treated to precipitate CuSCN, yielding 0.2383 g. Calculate the %w/w Zn and the %w/w Cu in the sample.

Answer

A conservation of mass requires that all zinc in the alloy is found in the final product, Zn2P2O7. We know there are 2 moles of Zn per mole of Zn2P2O7; thus

$0.1163 \ \mathrm{g} \ \mathrm{Zn}_{2} \mathrm{P}_{2} \mathrm{O}_{7} \times \frac{2 \ \mathrm{mol} \ \mathrm{Zn}}{304.70 \ \mathrm{g}\ \mathrm{Zn}_{2} \mathrm{P}_{2} \mathrm{O}_{7}} \times \frac{65.38 \ \mathrm{g} \ \mathrm{Zn}}{\mathrm{mol} \ \mathrm{Zn}}=0.04991 \ \mathrm{g} \ \mathrm{Zn} \nonumber$

This is the mass of Zn in 25% of the sample (a 25.00 mL portion of the 100.0 mL total volume). The %w/w Zn, therefore, is

$\frac{0.04991 \ \mathrm{g} \ \mathrm{Zn} \times 4}{0.7336 \ \mathrm{g} \text { sample }} \times 100=27.21 \% \ \mathrm{w/w} \ \mathrm{Zn} \nonumber$

For copper, we find that

$0.2383 \ \mathrm{g} \ \mathrm{CuSCN} \times \frac{1 \ \mathrm{mol} \ \mathrm{Cu}}{121.63 \ \mathrm{g} \ \mathrm{CuSCN}} \times \frac{63.55 \ \mathrm{g} \ \mathrm{Cu}}{\mathrm{mol} \ \mathrm{Cu}}=0.1245 \ \mathrm{g} \ \mathrm{Cu} \nonumber$

$\frac{0.1245 \ \mathrm{g} \ \mathrm{Cu} \times 4}{0.7336 \ \mathrm{g} \text { sample }} \times 100=67.88 \% \ \mathrm{w/w} \ \mathrm{Cu} \nonumber$

In Practice Exercise 8.2.2 the sample contains two analytes. Because we can precipitate each analyte selectively, finding their respective concentrations is a straightforward stoichiometric calculation. But what if we cannot separately precipitate the two analytes? To find the concentrations of both analytes, we still need to generate two precipitates, at least one of which must contain both analytes. Although this complicates the calculations, we can still use a conservation of mass to solve the problem.

Example 8.2.2

A 0.611-g sample of an alloy that contains Al and Mg is dissolved and treated to prevent interferences by the alloy's other constituents. Aluminum and magnesium are precipitated using 8-hydroxyquinoline, which yields a mixed precipitate of Al(C9H6NO)3 and Mg(C9H6NO)2 that weighs 7.815 g. Igniting the precipitate converts it to a mixture of Al2O3 and MgO that weighs 1.002 g. Calculate the %w/w Al and %w/w Mg in the alloy.

Solution

The masses of the solids provide us with the following two equations.

$\mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}+ \ \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=7.815 \ \mathrm{g} \nonumber$

$\mathrm{g} \ \mathrm{Al}_{2} \mathrm{O}_{3}+\mathrm{g} \ \mathrm{MgO}=1.002 \ \mathrm{g} \nonumber$

With two equations and four unknowns, we need two additional equations to solve the problem.
A conservation of mass requires that all the aluminum in Al(C9H6NO)3 also is in Al2O3; thus

$\mathrm{g} \ \mathrm{Al}_{2} \mathrm{O}_{3}=\mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{Al}}{459.43 \ \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}} \times \frac{101.96 \ \mathrm{g} \ \mathrm{Al}_{2} \mathrm{O}_{3}}{2 \ \mathrm{mol} \ \mathrm{Al}} \nonumber$

$\mathrm{g} \ \mathrm{Al}_{2} \mathrm{O}_{3}=0.11096 \times \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3} \nonumber$

Using the same approach, a conservation of mass for magnesium gives

$\mathrm{g} \ \mathrm{MgO}=\mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2} \times \frac{1 \ \mathrm{mol} \ \mathrm{Mg}}{312.61 \ \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}} \times \frac{40.304 \ \mathrm{g} \ \mathrm{MgO}}{\mathrm{mol} \ \mathrm{Mg}} \nonumber$

$\mathrm{g} \ \mathrm{MgO}=0.12893 \times \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2} \nonumber$

Substituting the equations for g MgO and g Al2O3 into the equation for the combined weights of MgO and Al2O3 leaves us with two equations and two unknowns.

$\mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}+\mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=7.815 \ \mathrm{g} \nonumber$

$0.11096 \times \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}+ 0.12893 \times \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=1.002 \ \mathrm{g} \nonumber$

Multiplying the first equation by 0.11096 and subtracting the second equation gives

$-0.01797 \times \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=-0.1348 \ \mathrm{g} \nonumber$

$\mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=7.504 \ \mathrm{g} \nonumber$

$\mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}=7.815 \ \mathrm{g}-7.504 \ \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=0.311 \ \mathrm{g} \nonumber$

Now we can finish the problem using the approach from Example 8.2.1 . A conservation of mass requires that all the aluminum and magnesium in the original sample of Dow metal is in the precipitates of Al(C9H6NO)3 and the Mg(C9H6NO)2.
For aluminum, we find that

$0.311 \ \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{Al}}{459.45 \ \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}} \times \frac{26.982 \ \mathrm{g} \ \mathrm{Al}}{\mathrm{mol} \ \mathrm{Al}}=0.01826 \ \mathrm{g} \ \mathrm{Al} \nonumber$

$\frac{0.01826 \ \mathrm{g} \ \mathrm{Al}}{0.611 \ \mathrm{g} \text { sample }} \times 100=2.99 \% \ \mathrm{w/w} \ \mathrm{Al} \nonumber$

and for magnesium we have

$7.504 \ \text{g Mg}\left(\mathrm{C}_9 \mathrm{H}_{6} \mathrm{NO}\right)_{2} \times \frac{1 \ \mathrm{mol} \ \mathrm{Mg}}{312.61 \ \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_9 \mathrm{H}_{6} \mathrm{NO}\right)_{2}} \times \frac{24.305 \ \mathrm{g} \ \mathrm{Mg}}{\mathrm{mol} \ \mathrm{Mg}}=0.5834 \ \mathrm{g} \ \mathrm{Mg} \nonumber$

$\frac{0.5834 \ \mathrm{g} \ \mathrm{Mg}}{0.611 \ \mathrm{g} \text { sample }} \times 100=95.5 \% \ \mathrm{w/w} \ \mathrm{Mg} \nonumber$

Exercise 8.2.3

A sample of a silicate rock that weighs 0.8143 g is brought into solution and treated to yield a 0.2692-g mixture of NaCl and KCl. The mixture of chloride salts is dissolved in a mixture of ethanol and water, and treated with HClO4, precipitating 0.3314 g of KClO4. What is the %w/w Na2O in the silicate rock?

Answer

The masses of the solids provide us with the following equations

$\mathrm{g} \ \mathrm{NaCl}+\mathrm{g} \ \mathrm{KCl}=0.2692 \ \mathrm{g} \nonumber$

$\mathrm{g} \ \mathrm{KClO}_{4} = 0.3314 \ \mathrm{g} \nonumber$

With two equations and three unknowns—g NaCl, g KCl, and g KClO4—we need one additional equation to solve the problem. A conservation of mass requires that all the potassium originally in the KCl ends up in the KClO4; thus

$\text{g KClO}_4 = \text{g KCl} \times \frac{1 \text{ mol K}}{74.55 \text{ g KCl}} \times \frac {138.55 \text{ g KClO}_4}{\text{mol K}} = 1.8585 \times \text{ g KCl} \nonumber$

Given the mass of KClO4, we use the third equation to solve for the mass of KCl in the mixture of chloride salts

$\text{ g KCl} = \frac{\text{g KClO}_4}{1.8585} = \frac{0.3314 \text{ g}}{1.8585} = 0.1783 \text{ g KCl} \nonumber$

The mass of NaCl in the mixture of chloride salts, therefore, is

$\text{ g NaCl} = 0.2692 \text{ g} - \text{g KCl} = 0.2692 \text{ g} - 0.1783 \text{ g KCl} = 0.0909 \text{ g NaCl} \nonumber$

Finally, to report the %w/w Na2O in the sample, we use a conservation of mass on sodium to determine the mass of Na2O

$0.0909 \text{ g NaCl} \times \frac{1 \text{ mol Na}}{58.44 \text{ g NaCl}} \times \frac{61.98 \text{ g Na}_2\text{O}}{2 \text{ mol Na}} = 0.0482 \text{ g Na}_2\text{O} \nonumber$

giving the %w/w Na2O as

$\frac{0.0482 \text{ g Na}_2\text{O}}{0.8143 \text{ g sample}} \times 100 = 5.92\% \text{ w/w Na}_2\text{O} \nonumber$

The previous problems are examples of direct methods of analysis because the precipitate contains the analyte. In an indirect analysis the precipitate forms as a result of a reaction with the analyte, but the analyte is not part of the precipitate. As shown by the following example, despite the additional complexity, we still can use conservation principles to organize our calculations.

Example 8.2.3

An impure sample of Na3PO3 that weighs 0.1392 g is dissolved in 25 mL of water. A second solution that contains 50 mL of 3% w/v HgCl2, 20 mL of 10% w/v sodium acetate, and 5 mL of glacial acetic acid is prepared.
Adding the solution that contains the sample to the second solution oxidizes $\text{PO}_3^{3-}$ to $\text{PO}_4^{3-}$ and precipitates Hg2Cl2. After digesting, filtering, and rinsing the precipitate, 0.4320 g of Hg2Cl2 is obtained. Report the purity of the original sample as % w/w Na3PO3.

Solution

This is an example of an indirect analysis because the precipitate, Hg2Cl2, does not contain the analyte, Na3PO3. Although the stoichiometry of the reaction between Na3PO3 and HgCl2 is given earlier in the chapter, let's see how we can solve the problem using conservation principles. (Although you can write the balanced reactions for any analysis, applying conservation principles can save you a significant amount of time!)

The reaction between Na3PO3 and HgCl2 is an oxidation-reduction reaction in which phosphorous increases its oxidation state from +3 in Na3PO3 to +5 in Na3PO4, and in which mercury decreases its oxidation state from +2 in HgCl2 to +1 in Hg2Cl2. A redox reaction must obey a conservation of electrons because all the electrons released by the reducing agent, Na3PO3, must be accepted by the oxidizing agent, HgCl2. Knowing this, we write the following stoichiometric conversion factors:

$\frac{2 \ \mathrm{mol} \ e^{-}}{\mathrm{mol} \ \mathrm{Na}_{3} \mathrm{PO}_{3}} \text { and } \frac{1 \ \mathrm{mol} \ e^{-}}{\mathrm{mol} \ \mathrm{HgCl}_{2}} \nonumber$

Now we are ready to solve the problem. First, we use a conservation of mass for mercury to convert the precipitate's mass to the moles of HgCl2.

$0.4320 \ \mathrm{g} \ \mathrm{Hg}_{2} \mathrm{Cl}_{2} \times \frac{2 \ \mathrm{mol} \ \mathrm{Hg}}{472.09 \ \mathrm{g} \ \mathrm{Hg}_{2} \mathrm{Cl}_{2}} \times \frac{1 \ \mathrm{mol} \ \mathrm{HgCl}_{2}}{\mathrm{mol} \ \mathrm{Hg}}=1.8302 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HgCl}_{2} \nonumber$

Next, we use the conservation of electrons to find the mass of Na3PO3.

$1.8302 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HgCl}_{2} \times \frac{1 \ \mathrm{mol} \ e^{-}}{\mathrm{mol} \ \mathrm{HgCl}_{2}} \times \frac{1 \ \mathrm{mol} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}{2 \ \mathrm{mol} \ e^{-}} \times \frac{147.94 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}{\mathrm{mol} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}=0.13538 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{3} \nonumber$

Finally, we calculate the %w/w Na3PO3 in the sample.

$\frac{0.13538 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}{0.1392 \ \mathrm{g} \text { sample }} \times 100=97.26 \% \ \mathrm{w/w} \ \mathrm{Na}_{3} \mathrm{PO}_{3} \nonumber$

As you become comfortable using conservation principles, you will see ways to further simplify problems. For example, a conservation of electrons requires that the electrons released by Na3PO3 end up in the product, Hg2Cl2, yielding the following stoichiometric conversion factor:

$\frac{1 \ \operatorname{mol} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}{\mathrm{mol} \ \mathrm{Hg}_{2} \mathrm{Cl}_{2}} \nonumber$

This conversion factor provides a direct link between the mass of Hg2Cl2 and the mass of Na3PO3.

Exercise 8.2.4

One approach for determining phosphate, $\text{PO}_4^{3-}$, is to precipitate it as ammonium phosphomolybdate, (NH4)3PO4•12MoO3. After we isolate the precipitate by filtration, we dissolve it in acid and precipitate and weigh the molybdate as PbMoO3. Suppose we know that our sample is at least 12.5% Na3PO4 and that we need to recover a minimum of 0.600 g of PbMoO3. What is the minimum amount of sample that we need for each analysis?
Answer

To find the mass of (NH4)3PO4•12MoO3 that will produce 0.600 g of PbMoO3, we first use a conservation of mass for molybdenum; thus

$0.600 \ \mathrm{g} \ \mathrm{PbMoO}_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{Mo}}{351.2 \ \mathrm{g} \ \mathrm{PbMoO}_{3}} \times \frac{1876.59 \ \mathrm{g} \ \left(\mathrm{NH}_{4}\right)_{3} \mathrm{PO}_{4} \cdot 12 \mathrm{MoO}_{3}}{12 \ \mathrm{mol} \ \mathrm{Mo}}= 0.2672 \ \mathrm{g} \ \left(\mathrm{NH}_{4}\right)_{3} \mathrm{PO}_{4} \cdot 12 \mathrm{MoO}_{3} \nonumber$

Next, to convert this mass of (NH4)3PO4•12MoO3 to a mass of Na3PO4, we use a conservation of mass on $\text{PO}_4^{3-}$.

$0.2672 \ \mathrm{g} \ \left(\mathrm{NH}_{4}\right)_{3} \mathrm{PO}_{4} \cdot 12 \mathrm{MoO}_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{PO}_{4}^{3-}}{1876.59 \ \mathrm{g \ }\left(\mathrm{NH}_{4}\right)_{3} \mathrm{PO}_{4} \cdot 12 \mathrm{MoO}_{3}} \times \frac{163.94 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{4}}{\mathrm{mol} \ \mathrm{PO}_{4}^{3-}}=0.02334 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{4} \nonumber$

Finally, we convert this mass of Na3PO4 to the corresponding mass of sample.

$0.02334 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{4} \times \frac{100 \ \mathrm{g} \text { sample }}{12.5 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{4}}=0.187 \ \mathrm{g} \text { sample } \nonumber$

A sample of 0.187 g is sufficient to guarantee that we recover a minimum of 0.600 g PbMoO3. If a sample contains more than 12.5% Na3PO4, then a 0.187-g sample will produce more than 0.600 g of PbMoO3.

Qualitative Applications

A precipitation reaction is a useful method for identifying inorganic and organic analytes. Because a qualitative analysis does not require quantitative measurements, the analytical signal is simply the observation that a precipitate forms. Although qualitative applications of precipitation gravimetry have been replaced by spectroscopic methods of analysis, they continue to find application in spot testing for the presence of specific analytes [Jungreis, E. Spot Test Analysis; 2nd Ed., Wiley: New York, 1997]. Any of the precipitants listed in Table 8.2.1 , Table 8.2.3 , and Table 8.2.4 can be used for a qualitative analysis.

Evaluating Precipitation Gravimetry

Scale of Operation

The scale of operation for precipitation gravimetry is limited by the sensitivity of the balance and the availability of sample. To achieve an accuracy of ±0.1% using an analytical balance with a sensitivity of ±0.1 mg, we must isolate at least 100 mg of precipitate. As a consequence, precipitation gravimetry usually is limited to major or minor analytes, in macro or meso samples. The analysis of a trace level analyte or a micro sample requires a microanalytical balance.

Accuracy

For a macro sample that contains a major analyte, a relative error of 0.1–0.2% is achieved routinely. The principal limitations are solubility losses, impurities in the precipitate, and the loss of precipitate during handling. When it is difficult to obtain a precipitate that is free from impurities, it often is possible to determine an empirical relationship between the precipitate's mass and the mass of the analyte by an appropriate calibration.

Precision

The relative precision of precipitation gravimetry depends on the sample's size and the precipitate's mass. For a smaller amount of sample or precipitate, a relative precision of 1–2 ppt is obtained routinely. When working with larger amounts of sample or precipitate, the relative precision extends to several ppm.
Few quantitative techniques can achieve this level of precision.

Sensitivity

For any precipitation gravimetric method we can write the following general equation to relate the signal (grams of precipitate) to the absolute amount of analyte in the sample

$\text { g precipitate }=k \times \mathrm{g} \text { analyte } \label{8.13}$

where k, the method's sensitivity, is determined by the stoichiometry between the precipitate and the analyte. Equation \ref{8.13} assumes we used a suitable blank to correct the signal for any contributions of the reagent to the precipitate's mass.

Consider, for example, the determination of Fe as Fe2O3. Using a conservation of mass for iron, the precipitate's mass is

$\mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3}=\mathrm{g} \ \mathrm{Fe} \times \frac{1 \ \mathrm{mol} \ \mathrm{Fe}}{\text{AW Fe}} \times \frac{\text{FW Fe}_{2} \mathrm{O}_{3}}{2 \ \mathrm{mol} \ \mathrm{Fe}} \nonumber$

and the value of k is

$k=\frac{1}{2} \times \frac{\mathrm{FW} \ \mathrm{Fe}_{2} \mathrm{O}_{3}}{\mathrm{AW} \ \mathrm{Fe}} \label{8.14}$

As we can see from Equation \ref{8.14}, there are two ways to improve a method's sensitivity. The most obvious way to improve sensitivity is to increase the ratio of the precipitate's molar mass to that of the analyte. In other words, it helps to form a precipitate with the largest possible formula weight. A less obvious way to improve a method's sensitivity is indicated by the term 1/2 in Equation \ref{8.14}, which accounts for the stoichiometry between the analyte and precipitate. We can also improve sensitivity by forming a precipitate that contains fewer units of the analyte.

Exercise 8.2.5

Suppose you wish to determine the amount of iron in a sample. Which of the following compounds—FeO, Fe2O3, or Fe3O4—provides the greatest sensitivity?

Answer

To determine which form has the greatest sensitivity, we use a conservation of mass for iron to find the relationship between the precipitate's mass and the mass of iron.

$\mathrm{g} \ \mathrm{FeO} =\mathrm{g} \ \mathrm{Fe} \times \frac{1 \ \mathrm{mol} \ \mathrm{Fe}}{55.85 \ \mathrm{g} \ \mathrm{Fe}} \times \frac{71.84 \ \mathrm{g} \ \mathrm{FeO}}{\mathrm{mol} \ \mathrm{Fe}}=1.286 \times \mathrm{g} \ \mathrm{Fe} \nonumber$

$\mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3} =\mathrm{g} \ \mathrm{Fe} \times \frac{1 \ \mathrm{mol} \ \mathrm{Fe}}{55.85 \ \mathrm{g} \ \mathrm{Fe}} \times \frac{159.69 \ \mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3}}{2 \ \mathrm{mol} \ \mathrm{Fe}}=1.430 \times \mathrm{g} \ \mathrm{Fe} \nonumber$

$\mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4} =\mathrm{g} \ \mathrm{Fe} \times \frac{1 \ \mathrm{mol} \ \mathrm{Fe}}{55.85 \ \mathrm{g} \ \mathrm{Fe}} \times \frac{231.53 \ \mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4}}{3 \ \mathrm{mol} \ \mathrm{Fe}}=1.382 \times \mathrm{g} \ \mathrm{Fe} \nonumber$

Of the three choices, the greatest sensitivity is obtained with Fe2O3 because it provides the largest value for k.

Selectivity

Due to the chemical nature of the precipitation process, precipitants usually are not selective for a single analyte. For example, silver is not a selective precipitant for chloride because it also forms precipitates with bromide and with iodide. Interferents often are a serious problem and must be considered if accurate results are to be obtained.
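The comparison in Exercise 8.2.5 amounts to evaluating Equation \ref{8.14} for each candidate weighing form. The sketch below is a minimal calculation using the atomic and formula weights quoted in the exercise's answer.

```python
# Sensitivity k = (1/n) * (FW precipitate / AW analyte), Equation 8.14,
# where n is the number of moles of analyte per mole of precipitate.
AW_Fe = 55.85

weighing_forms = {             # formula weight (g/mol), mol Fe per formula unit
    "FeO":   (71.84, 1),
    "Fe2O3": (159.69, 2),
    "Fe3O4": (231.53, 3),
}

for form, (fw, n_fe) in weighing_forms.items():
    k = fw / (n_fe * AW_Fe)
    print(f"{form}: k = {k:.3f} g precipitate per g Fe")
# FeO: k = 1.286 g precipitate per g Fe
# Fe2O3: k = 1.430 g precipitate per g Fe
# Fe3O4: k = 1.382 g precipitate per g Fe
```

The same comparison applies to any analyte: the weighing form with the largest k gives the largest change in mass per gram of analyte.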
Time, Cost, and Equipment

Precipitation gravimetry is time intensive and rarely practical if you have a large number of samples to analyze; however, because much of the time invested in precipitation gravimetry does not require an analyst's immediate supervision, it is a practical alternative when working with only a few samples. Equipment needs are few—beakers, filtering devices, ovens or burners, and balances—inexpensive, routinely available in most laboratories, and easy to maintain.
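Before leaving precipitation gravimetry, the scale-of-operation estimate quoted earlier (a balance sensitivity of ±0.1 mg and a target accuracy of ±0.1% require at least 100 mg of precipitate) generalizes to a one-line calculation. The sketch below assumes the balance's uncertainty is the only significant source of error, which is the same assumption made in the text.

```python
def minimum_precipitate_mass(balance_uncertainty_mg, target_relative_error):
    """Smallest precipitate mass (in mg) for which the balance's absolute
    uncertainty stays within the target relative error; assumes the balance
    is the only significant source of uncertainty."""
    return balance_uncertainty_mg / target_relative_error

# An analytical balance readable to +/-0.1 mg and a target accuracy of +/-0.1%
print(f"{minimum_precipitate_mass(0.1, 0.001):.1f} mg")   # prints 100.0 mg, as noted in the text
```

For a trace-level analyte this minimum mass quickly exceeds what a macro sample can supply, which is why a micro sample requires a microanalytical balance.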
8.3: Volatilization Gravimetry

A second approach to gravimetry is to thermally or chemically decompose the sample and measure the resulting change in its mass. Alternatively, we can trap and weigh a volatile decomposition product. Because the release of a volatile species is an essential part of these methods, we classify them collectively as volatilization gravimetric methods of analysis.

Theory and Practice

Whether an analysis is direct or indirect, volatilization gravimetry usually requires that we know the products of the decomposition reaction. This rarely is a problem for organic compounds, which typically decompose to form simple gases such as CO2, H2O, and N2. For an inorganic compound, however, the products often depend on the decomposition temperature.

Thermogravimetry

One method for determining the products of a thermal decomposition is to monitor the sample's mass as a function of temperature, a process called thermogravimetry. Figure 8.3.1 shows a typical thermogram in which each change in mass—each "step" in the thermogram—represents the loss of a volatile product. As the following example illustrates, we can use a thermogram to identify a compound's decomposition reactions.

Example 8.3.1

The thermogram in Figure 8.3.1 shows the mass of a sample of calcium oxalate monohydrate, CaC2O4•H2O, as a function of temperature. The original sample of 17.61 mg was heated from room temperature to 1000oC at a rate of 20oC per minute. For each step in the thermogram, identify the volatilization product and the solid residue that remains.

Solution

From 100–250oC the sample loses 17.61 mg – 15.44 mg, or 2.17 mg, which is

$\frac{2.17 \ \mathrm{mg}}{17.61 \ \mathrm{mg}} \times 100=12.3 \% \nonumber$

of the sample's original mass. In terms of CaC2O4•H2O, this corresponds to a decrease in the molar mass of

$0.123 \times 146.11 \ \mathrm{g} / \mathrm{mol}=18.0 \ \mathrm{g} / \mathrm{mol} \nonumber$

The product's molar mass and the temperature range for the decomposition suggest that this is a loss of H2O(g), leaving a residue of CaC2O4.

The loss of 3.38 mg from 350–550oC is a 19.2% decrease in the sample's original mass, or a decrease in the molar mass of

$0.192 \times 146.11 \ \mathrm{g} / \mathrm{mol}=28.1 \ \mathrm{g} / \mathrm{mol} \nonumber$

which is consistent with the loss of CO(g) and a residue of CaCO3.

Finally, the loss of 5.30 mg from 600–800oC is a 30.1% decrease in the sample's original mass, or a decrease in molar mass of

$0.301 \times 146.11 \ \mathrm{g} / \mathrm{mol}=44.0 \ \mathrm{g} / \mathrm{mol} \nonumber$

This loss in molar mass is consistent with the release of CO2(g), leaving a final residue of CaO. The three decomposition reactions are

$\mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}(s) \rightarrow \ \mathrm{CaC}_{2} \mathrm{O}_{4}(s)+\mathrm{H}_{2} \mathrm{O}(g) \nonumber$

$\mathrm{CaC}_{2} \mathrm{O}_{4}(s) \rightarrow \ \mathrm{CaCO}_{3}(s)+\mathrm{CO}(g) \nonumber$

$\mathrm{CaCO}_{3}(s) \rightarrow \ \mathrm{CaO}(s)+\mathrm{CO}_{2}(g) \nonumber$

Identifying the products of a thermal decomposition provides information that we can use to develop an analytical procedure. For example, the thermogram in Figure 8.3.1 shows that we must heat a precipitate of CaC2O4•H2O to a temperature between 250 and 400oC if we wish to isolate and weigh CaC2O4. Alternatively, heating the sample to 1000oC allows us to isolate and weigh CaO.
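The reasoning in Example 8.3.1, converting each step's percent mass loss into an equivalent loss of molar mass, translates directly into a few lines of code. The sketch below reuses the masses from the example; the candidate-gas lookup handles only a single volatile product per step, so a simultaneous loss such as the CO + CO2 step in Exercise 8.3.1 that follows would need combined candidates.

```python
initial_mass_mg = 17.61    # CaC2O4.H2O sample from Example 8.3.1
fw_sample = 146.11         # g/mol for CaC2O4.H2O

# (temperature range, mass loss in mg) for each step in the thermogram
steps = [("100-250 oC", 2.17), ("350-550 oC", 3.38), ("600-800 oC", 5.30)]

# candidate volatile products and their molar masses in g/mol
candidates = {"H2O": 18.02, "CO": 28.01, "CO2": 44.01}

for label, loss_mg in steps:
    fraction = loss_mg / initial_mass_mg
    molar_loss = fraction * fw_sample
    best = min(candidates, key=lambda gas: abs(candidates[gas] - molar_loss))
    print(f"{label}: {fraction:.1%} of the sample, {molar_loss:.1f} g/mol lost -> {best}")
# 100-250 oC: 12.3% of the sample, 18.0 g/mol lost -> H2O
# 350-550 oC: 19.2% of the sample, 28.0 g/mol lost -> CO
# 600-800 oC: 30.1% of the sample, 44.0 g/mol lost -> CO2
```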
Exercise 8.3.1

Under the same conditions as Figure 8.3.1 , the thermogram for a 22.16 mg sample of MgC2O4•H2O shows two steps: a loss of 3.06 mg from 100–250oC and a loss of 12.24 mg from 350–550oC. For each step, identify the volatilization product and the solid residue that remains. Using your results from this exercise and the results from Example 8.3.1 , explain how you can use thermogravimetry to analyze a mixture that contains CaC2O4•H2O and MgC2O4•H2O. You may assume that other components in the sample are inert and thermally stable below 1000oC.

Answer

From 100–250oC the sample loses 13.8% of its mass, or a loss of

$0.138 \times 130.34 \ \mathrm{g} / \mathrm{mol}=18.0 \ \mathrm{g} / \mathrm{mol} \nonumber$

which is consistent with the loss of H2O(g) and a residue of MgC2O4.

From 350–550oC the sample loses 55.23% of its original mass, or a loss of

$0.5523 \times 130.34 \ \mathrm{g} / \mathrm{mol}=71.99 \ \mathrm{g} / \mathrm{mol} \nonumber$

This weight loss is consistent with the simultaneous loss of CO(g) and CO2(g), leaving a residue of MgO.

We can analyze the mixture by heating a portion of the sample to 300oC, 600oC, and 1000oC, recording the mass at each temperature. The loss of mass between 600oC and 1000oC, $\Delta m_2$, is due to the loss of CO2(g) from the decomposition of CaCO3 to CaO, and is proportional to the mass of CaC2O4•H2O in the sample.

$\mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}=\Delta m_{2} \times \frac{1 \ \mathrm{mol} \ \mathrm{CO}_{2}}{44.01 \ \mathrm{g} \ \mathrm{CO}_{2}} \times \frac{146.11 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{\mathrm{mol} \ \mathrm{CO}_{2}} \nonumber$

The change in mass between 300oC and 600oC, $\Delta m_1$, is due to the loss of CO(g) from CaC2O4•H2O and the loss of CO(g) and CO2(g) from MgC2O4•H2O. Because we already know the amount of CaC2O4•H2O in the sample, we can calculate its contribution to $\Delta m_1$.

$\left(\Delta m_{1}\right)_{\mathrm{Ca}}=\mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O} \times \frac{1 \ \mathrm{mol} \ \mathrm{CO}}{146.11 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}} \times \frac{28.01 \ \mathrm{g} \ \mathrm{CO}}{\mathrm{mol} \ \mathrm{CO}} \nonumber$

The change in mass between 300oC and 600oC due to the decomposition of MgC2O4•H2O

$\left(\Delta m_{1}\right)_{\mathrm{Mg}}=\Delta m_{1}-\left(\Delta m_{1}\right)_{\mathrm{Ca}} \nonumber$

provides the mass of MgC2O4•H2O in the sample.

$\mathrm{g} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}=\left(\Delta m_{1}\right)_{\mathrm{Mg}} \times \frac{1 \ \mathrm{mol}\left(\mathrm{CO} \ + \ \mathrm{CO}_{2}\right)}{72.02 \ \mathrm{g} \ \left(\mathrm{CO} \ + \ \mathrm{CO}_{2}\right)} \times \frac{130.35 \ \mathrm{g} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{\mathrm{mol}\left(\mathrm{CO} \ + \ \mathrm{CO}_{2}\right)} \nonumber$

Equipment

Depending on the method of analysis, the equipment for volatilization gravimetry may be simple or complex. In the simplest experimental design, we place the sample in a crucible and decompose it at a fixed temperature using a Bunsen burner, a Meker burner, a laboratory oven, or a muffle furnace. The sample's mass and the mass of the residue are measured using an analytical balance.

Trapping and weighing the volatile products of a thermal decomposition requires specialized equipment. The sample is placed in a closed container and heated.
As decomposition occurs, a stream of an inert purge-gas sweeps the volatile products through one or more selective absorbent traps.

In a thermogravimetric analysis, the sample is placed on a small balance pan attached to one arm of an electromagnetic balance (Figure 8.3.2 ). The sample is lowered into an electric furnace and the furnace's temperature is increased at a fixed rate of a few degrees per minute while monitoring continuously the sample's weight. The instrument usually includes a gas line for purging the volatile decomposition products out of the furnace, and a heat exchanger to dissipate the heat emitted by the furnace.

Figure 8.3.2 . (a) Instrumentation for conducting a thermogravimetric analysis. The balance sits on the top of the instrument with the sample suspended below. A gas line supplies an inert gas that sweeps the volatile decomposition products out of the furnace. The heat exchanger dissipates the heat from the furnace to a reservoir of water. (b) Close-up showing the balance pan, which sits on a moving platform, the thermocouple for monitoring temperature, a hook for lowering the sample pan into the furnace, and the opening to the furnace. After placing a small portion of the sample on the balance pan, the platform rotates over the furnace and transfers the balance pan to a hook that is suspended from the balance. Once the balance pan is in place, the platform rotates back to its initial position. The balance pan and the thermocouple are then lowered into the furnace.

Representative Method 8.3.1: Determination of Si in Ores and Alloys

The best way to appreciate the theoretical and practical details discussed in this section is to carefully examine a typical volatilization gravimetric method. Although each method is unique, the determination of Si in ores and alloys by forming volatile SiF4 provides an instructive example of a typical procedure. The description here is based on a procedure from Young, R. S. Chemical Analysis in Extractive Metallurgy, Griffin: London, 1971, pp. 302–304.

Description of Method

Silicon is determined by dissolving the sample in acid and dehydrating to precipitate SiO2. Because a variety of other insoluble oxides also form, the precipitate's mass is not a direct measure of the amount of silicon in the sample. Treating the solid residue with HF forms volatile SiF4. The decrease in mass following the loss of SiF4 provides an indirect measure of the amount of silicon in the original sample.

Procedure

Transfer a sample of between 0.5 g and 5.0 g to a platinum crucible along with an excess of Na2CO3, and heat until a melt forms. After cooling, dissolve the residue in dilute HCl. Evaporate the solution to dryness on a steam bath and heat the residue, which contains SiO2 and other solids, for one hour at 110oC. Moisten the residue with HCl and repeat the dehydration. Remove any acid soluble materials from the residue by adding 50 mL of water and 5 mL of concentrated HCl. Bring the solution to a boil and filter through #40 filter paper (note: #40 filter paper is a medium speed, ashless filter paper for filtering crystalline solids). Wash the residue with hot 2% v/v HCl followed by hot water. Evaporate the filtrate to dryness twice and, following the same procedure, treat to remove any acid-soluble materials. Combine the two precipitates and dry and ignite to a constant weight at 1200oC. After cooling, add 2 drops of 50% v/v H2SO4 and 10 mL of HF. Remove the volatile SiF4 by evaporating to dryness on a hot plate.
Finally, bring the residue to constant weight by igniting at 1200oC. Questions 1. According to the procedure the sample should weigh between 0.5 g and 5.0 g. How should you decide upon the amount of sample to use? In this procedure the critical measurement is the decrease in mass following the volatilization of SiF4. The reaction responsible for the loss of mass is $\mathrm{SiO}_{2}(s)+4 \mathrm{HF}(a q) \rightarrow \mathrm{SiF}_{4}(g)+2 \mathrm{H}_{2} \mathrm{O}(l ) \nonumber$ Water and excess HF are removed during the final ignition, and do not contribute to the change in mass. The loss in mass, therefore, is equivalent to the mass of SiO2 present after the dehydration step. Every 0.1 g of Si in the original sample results in the loss of 0.21 g of SiO2. How much sample we use depends on what is an acceptable uncertainty when we measure its mass. A 0.5-g sample that is 50% w/w in Si, for example, will lose 0.53 g. If we are using a balance that measures mass to the nearest ±0.1 mg, then the relative uncertainty in mass is approximately ±0.02%; this is a reasonable level of uncertainty for a gravimetric analysis. A 0.5-g sample that is only 5% w/w Si experiences a weight loss of only 0.053 g and has a relative uncertainty of ±0.2%. In this case a larger sample is needed. 2. Why are acid-soluble materials removed before we treat the dehydrated residue with HF? Any acid-soluble materials in the sample will react with HF or H2SO4. If the products of these reactions are volatile, or if they decompose at 1200oC, then the change in mass is not due solely to the volatilization of SiF4. As a result, we will overestimate the amount of Si in our sample. 3. Why is H2SO4 added with the HF? Many samples that contain silicon also contain aluminum and iron, which form Al2O3 and Fe2O3 when we dehydrate the sample. These oxides are potential interferents because they also form volatile fluorides. In the presence of H2SO4, however, aluminum and iron preferentially form non-volatile sulfates, which eventually decompose back to their respective oxides when we heat the residue to 1200oC. As a result, the change in weight after treating with HF and H2SO4 is due only to the loss of SiF4. Quantitative Applications Unlike precipitation gravimetry, which rarely is used as a standard method of analysis, volatilization gravimetric methods continue to play an important role in chemical analysis. Several important examples are discussed below. Inorganic Analysis Determining the inorganic ash content of an organic material, such as a polymer, is an example of a direct volatilization gravimetric analysis. After weighing the sample, it is placed in an appropriate crucible and the organic material carefully removed by combustion, leaving behind the inorganic ash. The crucible that contains the residue is heated to a constant weight using either a burner or an oven before the mass of the inorganic ash is determined. Another example of volatilization gravimetry is the determination of dissolved solids in natural waters and wastewaters. In this method, a sample of water is transferred to a weighing dish and dried to a constant weight at either 103–105oC or at 180oC. Samples dried at the lower temperature retain some occluded water and lose some carbonate as CO2; the loss of organic material, however, is minimal at this temperature. At the higher temperature, the residue is free from occluded water, but the loss of carbonate is greater. 
In addition, some chloride, nitrate, and organic material is lost through thermal decomposition. In either case, the residue that remains after drying to a constant weight at 500oC is the amount of fixed solids in the sample, and the loss in mass provides an indirect measure of the sample’s volatile solids. Indirect analyses based on the weight of a residue that remains after volatilization are used to determine moisture in a variety of products and to determine silica in waters, wastewaters, and rocks. Moisture is determined by drying a preweighed sample with an infrared lamp or a low-temperature oven. The difference between the original weight and the weight after drying equals the mass of water lost. Organic Analysis The most important application of volatilization gravimetry is for the elemental analysis of organic materials. During combustion with pure O2, many elements, such as carbon and hydrogen, are released as gaseous combustion products, such as CO2(g) and H2O(g). Passing the combustion products through preweighed tubes that contain selective absorbents and measuring the increase in each tube’s mass provides a direct analysis for the mass of carbon and hydrogen in the sample. Instead of measuring mass, modern instruments for completing an elemental analysis use gas chromatography (Chapter 12) or infrared spectroscopy (Chapter 10) to monitor the gaseous decomposition products. Alkali metals and alkaline earths in organic materials are determined by adding H2SO4 to the sample before combustion. After combustion is complete, the metal remains behind as a solid residue of metal sulfate. Silver, gold, and platinum are determined by burning the organic sample, leaving a metallic residue of Ag, Au, or Pt. Other metals are determined by adding HNO3 before combustion, which leaves a residue of the metal oxide. Volatilization gravimetry also is used to determine biomass in waters and wastewaters. Biomass is a water quality index that provides an indication of the total mass of organisms contained within a sample of water. A known volume of the sample is passed through a preweighed 0.45-μm membrane filter or a glass-fiber filter and dried at 105oC for 24 h. The residue’s mass provides a direct measure of biomass. If samples are known to contain a substantial amount of dissolved inorganic solids, the residue is ignited at 500oC for one hour, which volatilizes the biomass. The resulting inorganic residue is wetted with distilled water to rehydrate any clay minerals and dried to a constant weight at 105oC. The difference in mass before and after ignition provides an indirect measure of biomass. Quantitative Calculations For some applications, such as determining the amount of inorganic ash in a polymer, a quantitative calculation is straightforward and does not require a balanced chemical reaction. For other applications, however, the relationship between the analyte and the analytical signal depends upon the stoichiometry of any relevant reactions. Once again, a conservation of mass is useful when solving problems. Example 8.3.2 A 101.3-mg sample of an organic compound that contains chlorine is combusted in pure O2. The volatile gases are collected in absorbent traps with the trap for CO2 increasing in mass by 167.6 mg and the trap for H2O increasing in mass by 13.7 mg. A second sample of 121.8 mg is treated with concentrated HNO3, producing Cl2 that reacts with Ag+ to form 262.7 mg of AgCl. Determine the compound’s composition, as well as its empirical formula. 
Solution A conservation of mass requires that all the carbon in the organic compound is in the CO2 produced during combustion; thus $0.1676 \ \mathrm{g} \ \mathrm{CO}_{2} \times \frac{1 \ \mathrm{mol} \ \mathrm{C}}{44.010 \ \mathrm{g} \ \mathrm{CO}_{2}} \times \frac{12.011 \ \mathrm{g} \ \mathrm{C}}{\mathrm{mol} \ \mathrm{C}}=0.04574 \ \text{g C} \nonumber$ $\frac{0.04574 \ \mathrm{g} \ \mathrm{C}}{0.1013 \ \mathrm{g} \text { sample }} \times 100=45.15 \% \mathrm{w} / \mathrm{w} \ \mathrm{C} \nonumber$ Using the same approach for hydrogen and chlorine, we find that $0.0137 \ \mathrm{g} \ \mathrm{H}_{2} \mathrm{O} \times \frac{2 \ \mathrm{mol} \ \mathrm{H}}{18.015 \ \mathrm{g} \ \mathrm{H}_{2} \mathrm{O}} \times \frac{1.008 \ \mathrm{g} \ \mathrm{H}}{\mathrm{mol} \ \mathrm{H}}=1.533 \times 10^{-3} \mathrm{g} \ \mathrm{H} \nonumber$ $\frac{1.533 \ \times 10^{-3} \mathrm{g} \ \mathrm{H}}{0.1013 \ \mathrm{g} \ \text { sample }} \times 100=1.51 \% \mathrm{w} / \mathrm{w} \ \mathrm{H} \nonumber$ $0.2627 \ \mathrm{g} \ \mathrm{AgCl} \times \frac{1 \ \mathrm{mol} \ \mathrm{Cl}}{143.32 \ \mathrm{g} \ \mathrm{AgCl}} \times \frac{35.455 \ \text{g Cl}}{\mathrm{mol} \ \mathrm{Cl}}=0.06498 \ \mathrm{g} \ \mathrm{Cl} \nonumber$ $\frac{0.06498 \ \mathrm{g} \ \mathrm{Cl}}{0.1218 \ \mathrm{g} \text { sample }} \times 100=53.35 \% \mathrm{w} / \mathrm{w} \ \mathrm{Cl} \nonumber$ Adding together the weight percents for C, H, and Cl gives a total of 100.01%; thus, the compound contains only these three elements. To determine the compound’s empirical formula we note that a gram of sample contains 0.4515 g of C, 0.0151 g of H and 0.5335 g of Cl. Expressing each element in moles gives 0.0376 moles C, 0.0150 moles H and 0.0150 moles Cl. Hydrogen and chlorine are present in a 1:1 molar ratio. The molar ratio of C to moles of H or Cl is $\frac{\mathrm{mol} \ \mathrm{C}}{\mathrm{mol} \text{ H}} =\frac{\mathrm{mol} \ \mathrm{C}}{\mathrm{mol} \ \mathrm{Cl}}=\frac{0.0376}{0.0150}=2.51 \approx 2.5 \nonumber$ Thus, the simplest, or empirical, formula for the compound is C5H2Cl2. In an indirect volatilization gravimetric analysis, the change in the sample’s weight is proportional to the amount of analyte in the sample. Note that in the following example it is not necessary to apply a conservation of mass to relate the analytical signal to the analyte. Example 8.3.3 A sample of slag from a blast furnace is analyzed for SiO2 by decomposing a 0.5003-g sample with HCl, leaving a residue with a mass of 0.1414 g. After treating with HF and H2SO4, and evaporating the volatile SiF4, a residue with a mass of 0.0183 g remains. Determine the %w/w SiO2 in the sample. Solution The difference in the residue’s mass before and after volatilizing SiF4 gives the mass of SiO2 in the sample; thus the sample contains $0.1414 \ \mathrm{g}-0.0183 \ \mathrm{g}=0.1231 \ \mathrm{g} \ \mathrm{SiO}_{2} \nonumber$ and the %w/w SiO2 is $\frac{0.1231 \ \mathrm{g} \ \mathrm{Si} \mathrm{O}_{2}}{0.5003 \ \mathrm{g} \text { sample }} \times 100=24.61 \% \mathrm{w} / \mathrm{w} \ \mathrm{SiO}_{2} \nonumber$ Exercise 8.3.2 Heating a 0.3317-g mixture of CaC2O4 and MgC2O4 yields a residue of 0.1794 g at 600oC and a residue of 0.1294 g at 1000oC. Calculate the %w/w CaC2O4 in the sample. You may wish to review your answer to Exercise 8.3.1 as you consider this problem. Answer In Exercise 8.3.1 we developed an equation for the mass of CaC2O4•H2O in a mixture of CaC2O4•H2O, MgC2O4•H2O, and inert materials. 
Adapting this equation to a sample that contains CaC2O4, MgC2O4, and inert materials is easy; thus $\mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4}=(0.1794 \ \mathrm{g}-0.1294 \ \mathrm{g}) \times \frac{1 \ \mathrm{mol} \ \mathrm{CO}_{2}}{44.01 \ \mathrm{g} \ \mathrm{CO}_{2}} \times \frac{128.10 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4}}{\mathrm{mol} \ \mathrm{CO}_{2}}=0.1455 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \nonumber$ The %w/w CaC2O4 in the sample is $\frac{0.1455 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4}}{0.3317 \ \mathrm{g} \text { sample }} \times 100=43.86 \% \mathrm{w} / \mathrm{w} \mathrm{CaC}_{2} \mathrm{O}_{4} \nonumber$ Finally, for some quantitative applications we can compare the result for a sample to a similar result obtained using a standard. Example 8.3.4 A 26.23-mg sample of MgC2O4•H2O and inert materials is heated to constant weight at 1200oC, leaving a residue that weighs 20.98 mg. A sample of pure MgC2O4•H2O, when treated in the same fashion, undergoes a 69.08% change in its mass. Determine the %w/w MgC2O4•H2O in the sample. Solution The change in the sample’s mass is 5.25 mg, which corresponds to $5.25 \ \mathrm{mg} \operatorname{lost} \times \frac{100.0 \ \mathrm{mg} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{69.08 \ \mathrm{mg} \text { lost }}=7.60 \ \mathrm{mg} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O} \nonumber$ The %w/w MgC2O4•H2O in the sample is $\frac{7.60 \ \mathrm{mg} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{26.23 \ \mathrm{mg} \text { sample }} \times 100=29.0 \% \mathrm{w} / \mathrm{w} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O} \nonumber$ Evaluating Volatilization Gravimetry The scale of operation, accuracy, and precision of a gravimetric volatilization method are similar to those described in the last section for precipitation gravimetry. The sensitivity of a direct analysis is fixed by the analyte’s chemical form following combustion or volatilization. We can improve the sensitivity of an indirect analysis by choosing conditions that give the largest possible change in mass. For example, the thermogram in Figure 8.3.1 shows us that an indirect analysis for CaC2O4•H2O is more sensitive if we measure the change in mass following ignition at 1000oC than if we ignite the sample at 300oC. Selectivity is not a problem for a direct analysis if we trap the analyte using a selective absorbent trap. A direct analysis based on the residue’s weight following combustion or volatilization is possible if the residue contains only the analyte of interest. As noted earlier, an indirect analysis only is feasible when the change in mass results from the loss of a single volatile product that contains the analyte. Volatilization gravimetric methods are time- and labor-intensive. Equipment needs are few, except when combustion gases must be trapped, or for a thermogravimetric analysis, when specialized instrumentation is needed.
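The quantitative calculations in this section all reduce to converting a measured change in mass into an equivalent mass of analyte using the stoichiometry of the decomposition reaction. The short Python sketch below reproduces the calculation from Exercise 8.3.2; the function name and the way the formula weights are stored are illustrative choices, not part of the original procedure.

```python
# A minimal sketch of the calculation in Exercise 8.3.2: the loss in mass
# between 600 oC and 1000 oC is due only to the CO2 lost when CaCO3
# decomposes to CaO, so it is proportional to the CaC2O4 in the mixture.

FW_CO2 = 44.01      # g/mol
FW_CaC2O4 = 128.10  # g/mol

def g_CaC2O4(m_600, m_1000):
    """Mass of CaC2O4 (g) from the residue masses (g) at 600 oC and 1000 oC."""
    delta_m2 = m_600 - m_1000             # mass of CO2 lost
    return delta_m2 * FW_CaC2O4 / FW_CO2  # 1 mol CO2 per mol CaC2O4

sample = 0.3317                     # g of mixture
mass_Ca = g_CaC2O4(0.1794, 0.1294)  # about 0.1455 g CaC2O4
print(f"{mass_Ca:.4f} g CaC2O4, {100 * mass_Ca / sample:.2f} %w/w")
# prints 0.1455 g CaC2O4, 43.87 %w/w (Exercise 8.3.2 reports 43.86%
# after rounding the intermediate mass to 0.1455 g)
```

The same pattern, a mass difference multiplied by a stoichiometric gravimetric factor, applies to any indirect volatilization analysis in which the identity of the volatile product is known.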
Precipitation and volatilization gravimetric methods require that the analyte, or some other species in the sample, participates in a chemical reaction. In a direct precipitation gravimetric analysis, for example, we convert a soluble analyte into an insoluble form that precipitates from solution. In some situations, however, the analyte already is present in a particulate form that is easy to separate from its liquid, gas, or solid matrix. When such a separation is possible, we can determine the analyte’s mass without relying on a chemical reaction. A particulate is any tiny portion of matter, whether it is a speck of dust, a globule of fat, or a molecule of ammonia. For particulate gravimetry we simply need a method to collect the particles and a balance to measure their mass. Theory and Practice There are two methods for separating a particulate analyte from its matrix. The most common method is filtration, in which we separate solid particulates from their gas, liquid, or solid matrix. A second method, which is useful for gas particles, solutes, and solids, is an extraction. Filtration To separate solid particulates from their matrix we use gravity or apply suction from a vacuum pump or an aspirator to pull the sample through a filter. The type of filter we use depends upon the size of the solid particles and the sample’s matrix. Filters for liquid samples are constructed from a variety of materials, including cellulose fibers, glass fibers, cellulose nitrate, and polytetrafluoroethylene (PTFE). Particle retention depends on the size of the filter’s pores. Cellulose fiber filter papers range in pore size from 30 μm to 2–3 μm. Glass fiber filters, manufactured using chemically inert borosilicate glass, are available with pore sizes between 2.5 μm and 0.3 μm. Membrane filters, which are made from a variety of materials, including cellulose nitrate and PTFE, are available with pore sizes from 5.0 μm to 0.1 μm. For additional information, see our earlier discussion in this chapter on filtering precipitates, and the discussion in Chapter 7 of separations based on size. Solid aerosol particulates are collected using either a single-stage or a multiple-stage filter. In a single-stage system, we pull the gas through a single filter, which retains particles larger than the filter’s pore size. To collect samples from a gas line, we place the filter directly in the line. Atmospheric gases are sampled with a high volume sampler that uses a vacuum pump to pull air through the filter at a rate of approximately 75 m3/h. In either case, we can use the same filtering media for liquid samples to collect aerosol particulates. In a multiple-stage system, a series of filtering units separates the particles into two or more size ranges. The particulates in a solid matrix are separated by size using one or more sieves (Figure 8.4.1 ). Sieves are available in a variety of mesh sizes, ranging from approximately 25 mm to 40 μm. By stacking together sieves of different mesh size, we can isolate particulates into several narrow size ranges. Using the sieves in Figure 8.4.1 , for example, we can separate a solid into particles with diameters >1700 μm, with diameters between 1700 μm and 500 μm, with diameters between 500 μm and 250 μm, and those with a diameter <250 μm. Extraction Filtering limits particulate gravimetry to solid analytes that are easy to separate from their matrix. 
We can extend particulate gravimetry to the analysis of gas phase analytes, solutes, and solids that are difficult to filter if we extract them with a suitable solvent. After the extraction, we evaporate the solvent and determine the analyte’s mass. Alternatively, we can determine the analyte indirectly by measuring the change in the sample’s mass after we extract the analyte. For a more detailed review of extractions, particularly solid-phase extractions, see Chapter 7. Another method for extracting an analyte from its matrix is by adsorption onto a solid substrate, by absorption into a thin polymer film or chemical film coated on a solid substrate, or by chemically binding to a suitable receptor that is covalently bound to a solid substrate (Figure 8.4.2). Adsorption, absorption, and binding occur at the interface between the solution that contains the analyte and the substrate’s surface, the thin film, or the receptor. Although the amount of extracted analyte is too small to measure using a conventional balance, it can be measured using a quartz crystal microbalance. The measurement of mass using a quartz crystal microbalance takes advantage of the piezoelectric effect [(a) Ward, M. D.; Buttry, D. A. Science 1990, 249, 1000–1007; (b) Grate, J. W.; Martin, S. J.; White, R. M. Anal. Chem. 1993, 65, 940A–948A; (c) Grate, J. W.; Martin, S. J.; White, R. M. Anal. Chem. 1993, 65, 987A–996A.]. The application of an alternating electrical field across a quartz crystal induces an oscillatory vibrational motion in the crystal. Every quartz crystal vibrates at a characteristic resonant frequency that depends on the crystal’s properties, including the mass per unit area of any material coated on the crystal’s surface. The change in mass following adsorption, absorption, or binding of the analyte is determined by monitoring the change in the quartz crystal’s characteristic resonant frequency. The exact relationship between the change in frequency and mass is determined by a calibration curve. If you own a wristwatch, there is a good chance that its operation relies on a quartz crystal. The piezoelectric properties of quartz were discovered in 1880 by Paul-Jacques Curie and Pierre Curie. Because the oscillation frequency of a quartz crystal is so precise, it quickly found use in the keeping of time. The first quartz clock was built in 1927 at the Bell Telephone labs, and Seiko introduced the first quartz wristwatches in 1969. Quantitative Applications Particulate gravimetry is important in the environmental analysis of water, air, and soil samples. The analysis for suspended solids in water samples, for example, is accomplished by filtering an appropriate volume of a well-mixed sample through a glass fiber filter and drying the filter to constant weight at 103–105oC. The microbiological testing of water also uses particulate gravimetry. One example is the analysis for coliform bacteria in which an appropriate volume of sample is passed through a sterilized 0.45-μm membrane filter. The filter is placed on a sterilized absorbent pad that is saturated with a culturing medium and incubated for 22–24 hours at 35 ± 0.5oC. Coliform bacteria are identified by the presence of individual bacterial colonies that form during the incubation period (Figure 8.4.3). As with qualitative applications of precipitation gravimetry, the signal in this case is a visual observation of the number of colonies rather than a measurement of mass. 
Total airborne particulates are determined using a high-volume air sampler equipped with either a cellulose fiber or a glass fiber filter. Samples from urban environments require approximately 1 h of sampling time, but samples from rural environments require substantially longer times. Grain size distributions for sediments and soils are used to determine the amount of sand, silt, and clay in a sample. For example, a grain size of 2 mm serves as the boundary between gravel and sand. The grain sizes for the sand–silt and the silt–clay boundaries are 1/16 mm and 1/256 mm, respectively. Several standard quantitative analytical methods for agricultural products are based on measuring the sample’s mass following a selective solvent extraction. For example, the crude fat content in chocolate is determined by extracting with ether for 16 hours in a Soxhlet extractor. After the extraction is complete, the ether is allowed to evaporate and the residue is weighed after drying at 100oC. This analysis also can be accomplished indirectly by weighing a sample before and after extracting with supercritical CO2. Quartz crystal microbalances equipped with thin polymer films or chemical coatings have found numerous quantitative applications in environmental analysis. Methods are reported for the analysis of a variety of gaseous pollutants, including ammonia, hydrogen sulfide, ozone, sulfur dioxide, and mercury. Biochemical particulate gravimetric sensors also have been developed. For example, a piezoelectric immunosensor has been developed that shows a high selectivity for human serum albumin, and is capable of detecting microgram quantities [Muratsugu, M.; Ohta, F.; Miya, Y.; Hosokawa, T.; Kurosawa, S.; Kamo, N.; Ikeda, H. Anal. Chem. 1993, 65, 2933–2937]. Quantitative Calculations The result of a quantitative analysis by particulate gravimetry is just the ratio, using appropriate units, of the amount of analyte relative to the amount of sample. Example 8.4.1 A 200.0-mL sample of water is filtered through a pre-weighed glass fiber filter. After drying to constant weight at 105oC, the filter is found to have increased in mass by 48.2 mg. Determine the sample’s total suspended solids. Solution One ppm is equivalent to one mg of analyte per liter of solution; thus, the total suspended solids for the sample is $\frac{48.2 \ \mathrm{mg} \text { solids }}{0.2000 \ \mathrm{L} \text { sample }}=241 \ \mathrm{ppm} \text { solids } \nonumber$ Evaluating Particulate Gravimetry The scale of operation and the detection limit for particulate gravimetry can be extended beyond that of other gravimetric methods by increasing the size of the sample taken for analysis. This usually is impracticable for other gravimetric methods because it is difficult to manipulate a larger sample through the individual steps of the analysis. With particulate gravimetry, however, the part of the sample that is not analyte is removed when filtering or extracting. Consequently, particulate gravimetry easily is extended to the analysis of trace-level analytes. Except for methods that rely on a quartz crystal microbalance, particulate gravimetry uses the same balances as other gravimetric methods, and is capable of achieving similar levels of accuracy and precision. Because particulate gravimetry is defined in terms of the mass of the particles themselves, the sensitivity of the analysis is given by the balance’s sensitivity. 
Selectivity, on the other hand, is determined either by the filter’s pore size or by the properties of the extracting phase. Because filtration requires only a single step, particulate gravimetric methods based on filtration generally require less time, labor, and capital than other gravimetric methods.
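Because the result of a particulate gravimetric analysis is simply the ratio of the analyte's mass to the amount of sample, the calculation is easy to script. The brief Python sketch below reproduces the total suspended solids calculation from Example 8.4.1; the function name and the unit handling are illustrative assumptions rather than part of a standard method.

```python
def total_suspended_solids(mass_gain_mg, sample_volume_mL):
    """Total suspended solids in mg/L (ppm) from the increase in the
    filter's mass and the volume of water filtered."""
    return mass_gain_mg / (sample_volume_mL / 1000.0)

# Example 8.4.1: a 200.0-mL sample leaves 48.2 mg of solids on the filter.
print(f"{total_suspended_solids(48.2, 200.0):.0f} ppm solids")  # 241 ppm
```

The same one-line ratio, with different units, gives the %w/w fat in the crude fat extraction or the mg/m3 of airborne particulates described earlier in this section.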
1. Starting with the equilibrium constant expressions for reaction 8.2.1, and for reaction 8.2.3, reaction 8.2.4, and reaction 8.2.5, verify that equation 8.2.7 is correct. 2. Equation 8.2.7 explains how the solubility of AgCl varies as a function of the equilibrium concentration of $\text{Cl}^-$. Derive a similar equation that describes the solubility of AgCl as a function of the equilibrium concentration of Ag+. Graph the resulting solubility function and compare it to that shown in figure 8.2.1. 3. Construct a solubility diagram for Zn(OH)2 that takes into account the following soluble zinc-hydroxide complexes: Zn(OH)+, $\text{Zn(OH)}_3^-$, and $\text{Zn(OH)}_4^{2-}$. What is the optimum pH for the quantitative precipitation of Zn(OH)2? For your solubility diagram, plot log(S) on the y-axis and pH on the x-axis. See the appendices for relevant equilibrium constants. 4. Starting with equation 8.2.10, verify that equation 8.2.11 is correct. 5. For each of the following precipitates, use a ladder diagram to identify the pH range where the precipitate has its lowest solubility. See the appendices for relevant equilibrium constants. (a) CaC2O4; (b) PbCrO4; (c) BaSO4; (d) SrCO3; (e) ZnS 6. Mixing solutions of 1.5 M KNO3 and 1.5 M HClO4 produces a precipitate of KClO4. If permanganate ions are present, an inclusion of KMnO4 is possible. Shown below are descriptions of two experiments in which KClO4 is precipitated in the presence of $\text{MnO}_4^-$. Explain why the two experiments lead to the different results described below. Experiment (a). Place 1 mL of 1.5 M KNO3 in a test tube, add 3 drops of 0.1 M KMnO4, and swirl to mix. Add 1 mL of 1.5 M HClO4 dropwise, agitating the solution between drops. Destroy the excess KMnO4 by adding 0.1 M NaHSO3 dropwise. The resulting precipitate of KClO4 has an intense purple color. Experiment (b). Place 1 mL of 1.5 M HClO4 in a test tube, add 3 drops of 0.1 M KMnO4, and swirl to mix. Add 1 mL of 1.5 M KNO3 dropwise, agitating the solution between drops. Destroy the excess KMnO4 by adding 0.1 M NaHSO3 dropwise. The resulting precipitate of KClO4 has a pale purple color. 7. Mixing solutions of Ba(SCN)2 and MgSO4 produces a precipitate of BaSO4. Shown below are the descriptions and results for three experiments using different concentrations of Ba(SCN)2 and MgSO4. Explain why these experiments produce different results. Experiment 1. When equal volumes of 3.5 M Ba(SCN)2 and 3.5 M MgSO4 are mixed, a gelatinous precipitate forms immediately. Experiment 2. When equal volumes of 1.5 M Ba(SCN)2 and 1.5 M MgSO4 are mixed, a curdy precipitate forms immediately. Individual particles of BaSO4 are seen as points under a magnification of $1500 \times$ (a particle size less than 0.2 μm). Experiment 3. When equal volumes of 0.5 mM Ba(SCN)2 and 0.5 mM MgSO4 are mixed, the complete precipitation of BaSO4 requires 2–3 h. Individual crystals of BaSO4 obtain lengths of approximately 5 μm. 8. Aluminum is determined gravimetrically by precipitating Al(OH)3 and isolating Al2O3. A sample that contains approximately 0.1 g of Al is dissolved in 200 mL of H2O, and 5 g of NH4Cl and a few drops of methyl red indicator are added (methyl red is red at pH levels below 4 and yellow at pH levels above 6). The solution is heated to boiling and 1:1 NH3 is added dropwise until the indicator turns yellow, precipitating Al(OH)3. The precipitate is held at the solution’s boiling point for several minutes before filtering and rinsing with a hot solution of 2% w/v NH4NO3. 
The precipitate is then ignited at 1000–1100oC, forming Al2O3. (a) Cite at least two ways in which this procedure encourages the formation of larger particles of precipitate. (b) The ignition step is carried out carefully to ensure the quantitative conversion of Al(OH)3 to Al2O3. What is the effect of an incomplete conversion on the %w/w Al? (c) What is the purpose of adding NH4Cl and methyl red indicator? (d) An alternative procedure for aluminum involves isolating and weighing the precipitate as the 8-hydroxyquinolate, Al(C9H6NO)3. Why might this be a more advantageous form of Al for a gravimetric analysis? Are there any disadvantages? 9. Calcium is determined gravimetrically by precipitating CaC2O4•H2O and isolating CaCO3. After dissolving a sample in 10 mL of water and 15 mL of 6 M HCl, the resulting solution is heated to boiling and a warm solution of excess ammonium oxalate is added. The solution is maintained at 80oC and 6 M NH3 is added dropwise, with stirring, until the solution is faintly alkaline. The resulting precipitate and solution are removed from the heat and allowed to stand for at least one hour. After testing the solution for completeness of precipitation, the sample is filtered, rinsed with 0.1% w/v ammonium oxalate, and dried for one hour at 100–120oC. The precipitate is transferred to a muffle furnace where it is converted to CaCO3 by drying at 500 ± 25oC until constant weight. (a) Why is the precipitate of CaC2O4•H2O converted to CaCO3? (b) In the final step, if the sample is heated at too high a temperature some CaCO3 is converted to CaO. What effect would this have on the reported %w/w Ca? (c) Why is the precipitant, (NH4)2C2O4, added to a hot, acidic solution instead of a cold, alkaline solution? 10. Iron is determined gravimetrically by precipitating as Fe(OH)3 and igniting to Fe2O3. After dissolving a sample in 50 mL of H2O and 10 mL of 6 M HCl, any Fe2+ is converted to Fe3+ by oxidizing with 1–2 mL of concentrated HNO3. The sample is heated to remove the oxides of nitrogen and the solution is diluted to 200 mL. After bringing the solution to a boil, Fe(OH)3 is precipitated by slowly adding 1:1 NH3 until an odor of NH3 is detected. The solution is boiled for an additional minute and the precipitate allowed to settle. The precipitate is then filtered and rinsed with several portions of hot 1% w/v NH4NO3 until no $\text{Cl}^-$ is found in the wash water. Finally, the precipitate is ignited to constant weight at 500–550oC and weighed as Fe2O3. (a) If ignition is not carried out under oxidizing conditions (plenty of O2 present), the final product may contain Fe3O4. What effect will this have on the reported %w/w Fe? (b) The precipitate is washed with a dilute solution of NH4NO3. Why is NH4NO3 added to the wash water? (c) Why does the procedure call for adding NH3 until the odor of ammonia is detected? (d) Describe how you might test the filtrate for $\text{Cl}^-$. 11. Sinha and Shome described a gravimetric method for molybdenum in which it is precipitated as MoO2(C13H10NO2)2 using N-benzoyl-phenylhydroxylamine, C13H11NO2, as the precipitant [Sinha, S. K.; Shome, S. C. Anal. Chim. Acta 1960, 24, 33–36]. The precipitate is weighed after igniting to MoO3. As part of their study, the authors determined the optimum conditions for the analysis. Samples that contained 0.0770 g of Mo each were taken through the procedure while varying the temperature, the amount of precipitant added, and the pH of the solution. The solution volume was held constant at 300 mL for all experiments. 
A summary of their results is shown in the following table. temperature (°C) mass (g) of precipitant volume (mL) of 10 M HCl mass (g) of MoO3 30 0.20 0.9 0.0675 30 0.30 0.9 0.1014 30 0.35 0.9 0.1140 30 0.42 0.9 0.1155 30 0.42 0.3 0.1150 30 0.42 18.0 0.1152 30 0.42 48.0 0.1160 30 0.42 75.0 0.1159 50 0.42 0.9 0.1156 75 0.42 0.9 0.1158 80 0.42 0.9 0.1129 Based on these results, discuss the optimum conditions for determining Mo by this method. Express your results for the precipitant as the minimum %w/v in excess needed to ensure a quantitative precipitation. 12. A sample of an impure iron ore is approximately 55% w/w Fe. If the amount of Fe in the sample is determined gravimetrically by isolating it as Fe2O3, what mass of sample is needed to ensure that we isolate at least 1.0 g of Fe2O3? 13. The concentration of arsenic in an insecticide is determined gravimetrically by precipitating it as MgNH4AsO4 and isolating it as Mg2As2O7. Determine the %w/w As2O3 in a 1.627-g sample of insecticide if it yields 106.5 mg of Mg2As2O7. 14. After preparing a sample of alum, K2SO4•Al2(SO4)3•24H2O, an analyst determines its purity by dissolving a 1.2931-g sample and precipitating the aluminum as Al(OH)3. After filtering, rinsing, and igniting, 0.1357 g of Al2O3 is obtained. What is the purity of the alum preparation? 15. To determine the amount of iron in a dietary supplement, a random sample of 15 tablets with a total weight of 20.505 g is ground into a fine powder. A 3.116-g sample is dissolved and treated to precipitate the iron as Fe(OH)3. The precipitate is collected, rinsed, and ignited to a constant weight as Fe2O3, yielding 0.355 g. Report the iron content of the dietary supplement as g FeSO4•7H2O per tablet. 16. A 1.4639-g sample of limestone is analyzed for Fe, Ca, and Mg. The iron is determined as Fe2O3 yielding 0.0357 g. Calcium is isolated as CaSO4, yielding a precipitate of 1.4058 g, and Mg is isolated as 0.0672 g of Mg2P2O7. Report the amount of Fe, Ca, and Mg in the limestone sample as %w/w Fe2O3, %w/w CaO, and %w/w MgO. 17. The number of ethoxy groups (CH3CH2O–) in an organic compound is determined by the following two reactions. $\mathrm{R}\left(\mathrm{OCH}_{2} \mathrm{CH}_{3}\right)_{x}+x \mathrm{HI} \rightarrow \mathrm{R}(\mathrm{OH})_{x}+x \mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{I} \nonumber$ $\mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{I}+\mathrm{Ag}^{+}+\mathrm{H}_{2} \mathrm{O} \rightarrow \operatorname{AgI}(s)+\mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{OH}\nonumber$ A 36.92-mg sample of an organic compound with an approximate molecular weight of 176 is treated in this fashion, yielding 0.1478 g of AgI. How many ethoxy groups are there in each molecule of the compound? 18. A 516.7-mg sample that contains a mixture of K2SO4 and (NH4)2SO4 is dissolved in water and treated with BaCl2, precipitating the $\text{SO}_4^{2-}$ as BaSO4. The resulting precipitate is isolated by filtration, rinsed free of impurities, and dried to a constant weight, yielding 863.5 mg of BaSO4. What is the %w/w K2SO4 in the sample? 19. The amount of iron and manganese in an alloy is determined by precipitating the metals with 8-hydroxyquinoline, C9H7NO. After weighing the mixed precipitate, the precipitate is dissolved and the amount of 8-hydroxyquinoline determined by another method. In a typical analysis a 127.3-mg sample of an alloy containing iron, manganese, and other metals is dissolved in acid and treated with appropriate masking agents to prevent an interference from other metals. 
The iron and manganese are precipitated and isolated as Fe(C9H6NO)3 and Mn(C9H6NO)2, yielding a total mass of 867.8 mg. The amount of 8-hydroxyquinolate in the mixed precipitate is determined to be 5.276 mmol. Calculate the %w/w Fe and %w/w Mn in the alloy. 20. A 0.8612-g sample of a mixture of NaBr, NaI, and NaNO3 is analyzed by adding AgNO3 and precipitating a 1.0186-g mixture of AgBr and AgI. The precipitate is then heated in a stream of Cl2, which converts it to 0.7125 g of AgCl. Calculate the %w/w NaNO3 in the sample. 21. The earliest determinations of elemental atomic weights were accomplished gravimetrically. To determine the atomic weight of manganese, a carefully purified sample of MnBr2 weighing 7.16539 g is dissolved and the Br precipitated as AgBr, yielding 12.53112 g. What is the atomic weight for Mn if the atomic weights for Ag and Br are taken to be 107.868 and 79.904, respectively? 22. While working as a laboratory assistant you prepared 0.4 M solutions of AgNO3, Pb(NO3)2, BaCl2, KI and Na2SO4. Unfortunately, you became distracted and forgot to label the solutions before leaving the laboratory. Realizing your error, you label the solutions A–E and perform all possible binary mixtures of the five solutions, obtaining the results shown in the figure below (key: NP means no precipitate formed, W means a white precipitate formed, and Y means a yellow precipitate formed). Identify solutions A–E. A B C D E A NP Y NP W B Y W W C NP NP D W 23. A solid sample has approximately equal amounts of two or more of the following soluble salts: AgNO3, ZnCl2, K2CO3, MgSO4, Ba(C2H3O2)2, and NH4NO3. A sample of the solid, sufficient to give at least 0.04 moles of any single salt, is added to 100 mL of water, yielding a white precipitate and a clear solution. The precipitate is collected and rinsed with water. When a portion of the precipitate is placed in dilute HNO3 it completely dissolves, leaving a colorless solution. A second portion of the precipitate is placed in dilute HCl, yielding a solid and a clear solution; when its filtrate is treated with excess NH3, a white precipitate forms. Identify the salts that must be present in the sample, the salts that must be absent, and the salts for which there is insufficient information to make this determination [Adapted from Sorum, C. H.; Lagowski, J. J. Introduction to Semimicro Qualitative Analysis, Prentice-Hall: Englewood Cliffs, N. J., 5th Ed., 1977, p. 285]. 24. Two methods have been proposed for the analysis of pyrite, FeS2, in impure samples of the ore. In the first method, the sulfur in FeS2 is determined by oxidizing it to $\text{SO}_4^{2-}$ and precipitating it as BaSO4. In the second method, the iron in FeS2 is determined by precipitating the iron as Fe(OH)3 and isolating it as Fe2O3. Which of these methods provides the more sensitive determination for pyrite? What other factors should you consider in choosing between these methods? 25. A sample of impure pyrite that is approximately 90–95% w/w FeS2 is analyzed by oxidizing the sulfur to $\text{SO}_4^{2-}$ and precipitating it as BaSO4. How many grams of the sample should you take to ensure that you obtain at least 1.0 g of BaSO4? 26. A series of samples that contain any possible combination of KCl, NaCl, and NH4Cl is to be analyzed by adding AgNO3 and precipitating AgCl. What is the minimum volume of 5% w/v AgNO3 necessary to precipitate completely the chloride in any 0.5-g sample? 27. 
If a precipitate of known stoichiometry does not form, a gravimetric analysis is still feasible if we can establish experimentally the mole ratio between the analyte and the precipitate. Consider, for example, the precipitation gravimetric analysis of Pb as PbCrO4 [Grote, F. Z. Anal. Chem. 1941, 122, 395–398]. (a) For each gram of Pb, how many grams of PbCrO4 will form, assuming the reaction is stoichiometric? (b) In a study of this procedure, Grote found that 1.568 g of PbCrO4 formed for each gram of Pb. What is the apparent stoichiometry between Pb and PbCrO4? (c) Does failing to account for the actual stoichiometry lead to a positive determinate error or a negative determinate error? 28. Determine the uncertainty for the gravimetric analysis described in example 8.2.1. The expected accuracy for a gravimetric method is 0.1–0.2%. What additional sources of error might account for the difference between your estimated uncertainty and the expected accuracy? 29. A 38.63-mg sample of potassium ozonide, KO3, is heated to 70oC for 1 h, undergoing a weight loss of 7.10 mg. A 29.6-mg sample of impure KO3 experiences a 4.86-mg weight loss when treated under similar conditions. What is the %w/w KO3 in the sample? 30. The water content of an 875.4-mg sample of cheese is determined with a moisture analyzer. What is the %w/w H2O in the cheese if the final mass was found to be 545.8 mg? 31. Representative Method 8.3.1 describes a procedure for determining Si in ores and alloys. In this analysis a weight loss of 0.21 g corresponds to 0.1 g of Si. Show that this relationship is correct. 32. The iron in an organometallic compound is determined by treating a 0.4873-g sample with HNO3 and heating to volatilize the organic material. After ignition, the residue of Fe2O3 weighs 0.2091 g. (a) What is the %w/w Fe in this compound? (b) The carbon and hydrogen in a second sample of the compound are determined by a combustion analysis. When a 0.5123-g sample is carried through the analysis, 1.2119 g of CO2 and 0.2482 g of H2O are collected. What are the %w/w C and %w/w H in this compound and what is the compound’s empirical formula? 33. A polymer’s ash content is determined by placing a weighed sample in a Pt crucible previously brought to a constant weight. The polymer is melted using a Bunsen burner until the volatile vapor ignites and then allowed to burn until a non-combustible residue remains. The residue then is brought to constant weight at 800oC in a muffle furnace. The following data were collected for two samples of a polymer resin. polymer A g crucible g crucible + polymer g crucible + ash replicate 1 19.1458 21.2287 19.7717 replicate 2 15.9193 17.9522 16.5310 replicate 3 15.6992 17.6660 16.2909 polymer B g crucible g crucible + polymer g crucible + ash replicate 1 19.1457 21.0693 19.7187 replicate 2 15.6991 17.8273 16.3327 replicate 3 15.9196 17.9037 16.5110 (a) For each polymer, determine the mean and the standard deviation for the %w/w ash. (b) Is there any evidence at $\alpha = 0.05$ for a significant difference between the two polymers? See the appendices for statistical tables. 34. In the presence of water vapor the surface of zirconia, ZrO2, chemically adsorbs H2O, forming surface hydroxyls, ZrOH (additional water is physically adsorbed as H2O). When heated above 200oC, the surface hydroxyls convert to H2O(g), releasing one molecule of water for every two surface hydroxyls. Below 200oC only physically adsorbed water is lost. Nawrocki, et al. 
used thermogravimetry to determine the density of surface hydroxyls on a sample of zirconia that was heated to 700oC and cooled in a desiccator containing humid N2 [Nawrocki, J.; Carr, P. W.; Annen, M. J.; Froelicher, S. Anal. Chim. Acta 1996, 327, 261–266]. Heating the sample from 200oC to 900oC released 0.006 g of H2O for every gram of dehydroxylated ZrO2. Given that the zirconia had a surface area of 33 m2/g and that one molecule of H2O forms two surface hydroxyls, calculate the density of surface hydroxyls in μmol/m2. 35. The concentration of airborne particulates in an industrial workplace is determined by pulling the air for 20 min through a single-stage air sampler equipped with a glass-fiber filter at a rate of 75 m3/h. At the end of the sampling period, the filter’s mass is found to have increased by 345.2 mg. What is the concentration of particulates in the air sample in mg/m3 and mg/L? 36. The fat content of potato chips is determined indirectly by weighing a sample before and after extracting the fat with supercritical CO2. The following data were obtained for the analysis of potato chips [Fat Determination by SFE, ISCO, Inc. Lincoln, NE]. sample number initial mass (g) final mass (g) 1 1.1661 0.9253 2 1.1723 0.9252 3 1.2525 0.9850 4 1.2280 0.9562 5 1.2837 1.0119 (a) Determine the mean and standard deviation for the %w/w fat. (b) This sample of potato chips is known to have a fat content of 22.7% w/w. Is there any evidence for a determinate error at $\alpha = 0.05$? See the appendices for statistical tables. 37. Delumyea and McCleary reported results for the %w/w organic material in sediment samples collected at different depths from a cove on the St. Johns River in Jacksonville, FL [Delumyea, R. D.; McCleary, D. L. J. Chem. Educ. 1993, 70, 172–173]. After collecting a sediment core, they sectioned it into 2-cm increments. Each increment was treated using the following procedure: • the sediment was placed in 50 mL of deionized water and the resulting slurry filtered through preweighed filter paper • the filter paper and the sediment were placed in a preweighed evaporating dish and dried to a constant weight in an oven at 110oC • the evaporating dish with the filter paper and the sediment were transferred to a muffle furnace where the filter paper and any organic material in the sample were removed by ashing • the inorganic residue remaining after ashing was weighed Using the following data, determine the %w/w organic matter as a function of the average depth for each increment. Prepare a plot showing how the %w/w organic matter varies with depth and comment on your results. depth (cm) mass filter paper (g) mass dish (g) mass filter paper, dish and sediment after drying (g) mass filter paper, dish, and sediment after ashing (g) 0–2 1.590 43.21 52.10 49.49 2–4 1.745 40.62 48.83 46.00 4–6 1.619 41.23 52.86 47.84 6–8 1.611 42.10 50.59 47.13 8–10 1.658 43.62 51.88 47.53 10–12 1.628 43.24 49.45 45.31 12–14 1.633 43.08 47.92 44.20 14–16 1.630 43.96 58.31 55.53 16–18 1.636 43.36 54.37 52.75 38. Yao, et al. described a method for the quantitative analysis of thiourea based on its reaction with I2 [Yao, S. F.; He, F. J.; Nie, L. H. Anal. Chim. Acta 1992, 268, 311–314]. 
$\mathrm{CS}\left(\mathrm{NH}_{2}\right)_{2}+4 \mathrm{I}_{2}+6 \mathrm{H}_{2} \mathrm{O} \longrightarrow\left(\mathrm{NH}_{4}\right)_{2} \mathrm{SO}_{4}+8 \mathrm{HI}+\mathrm{CO}_{2} \nonumber$ The procedure calls for placing a 100-μL aqueous sample that contains thiourea in a 60-mL separatory funnel and adding 10 mL of a pH 7 buffer and 10 mL of 12 μM I2 in CCl4. The contents of the separatory funnel are shaken and the organic and aqueous layers allowed to separate. The organic layer, which contains the excess I2, is transferred to the surface of a piezoelectric crystal on which a thin layer of Au has been deposited. After allowing the I2 to adsorb to the Au, the CCl4 is removed and the crystal’s frequency shift, $\Delta f$, measured. The following data is reported for a series of thiourea standards. [thiourea] (M) $\Delta f$ (Hz) [thiourea] (M) $\Delta f$ (Hz) $3.00 \times 10^{-7}$ 74.6 $1.50 \times 10^{-6}$ 327 $5.00 \times 10^{-7}$ 120 $2.50 \times 10^{-6}$ 543 $7.00 \times 10^{-7}$ 159 $3.50 \times 10^{-6}$ 789 $9.00 \times 10^{-7}$ 205 $5.00 \times 10^{-6}$ 1089 (a) Characterize this method with respect to the scale of operation shown in figure 3.4.1 of Chapter 3. (b) Prepare a calibration curve and use a regression analysis to determine the relationship between the crystal’s frequency shift and the concentration of thiourea. (c) If a sample that contains an unknown amount of thiourea gives a $\Delta f$ of 176 Hz, what is the molar concentration of thiourea in the sample? (d) What is the 95% confidence interval for the concentration of thiourea in this sample assuming one replicate? See the appendices for statistical tables.
The following set of experiments introduces students to the applications of gravimetry. • Burrows, H. D.; Ellis, H. A.; Odilora, C. A. “The Dehydrochlorination of PVC,” J. Chem. Educ. 1995, 72, 448–450. • Carmosini, N.; Ghoreshy, S.; Koether, M. C. “The Gravimetric Analysis of Nickel Using a Microwave Oven,” J. Chem. Educ. 1997, 74, 986–987. • Harris, T. M. “Revitalizing the Gravimetric Determination in Quantitative Analysis Laboratory,” J. Chem. Educ. 1995, 72, 355–356. • Henrickson, C. H.; Robinson, P. R. “Gravimetric Determination of Calcium as CaC2O4•H2O,” J. Chem. Educ. 1979, 56, 341–342. • Shaver, L. A. “Determination of Phosphates by the Gravimetric Quimociac Technique,” J. Chem. Educ. 2008, 85, 1097–1098. • Snow, N. H.; Dunn, M.; Patel, S. “Determination of Crude Fat in Food Products by Supercritical Fluid Extraction and Gravimetric Analysis,” J. Chem. Educ. 1997, 74, 1108–1111. • Thompson, R. Q.; Ghadiali, M. “Microwave Drying of Precipitates for Gravimetric Analysis,” J. Chem. Educ. 1993, 70, 170–171. • Wynne, A. M. “The Thermal Decomposition of Urea,” J. Chem. Educ. 1987, 64, 180–182. The following resources provide a general history of gravimetry. • A History of Analytical Chemistry; Laitinen, H. A.; Ewing, G. W., Eds.; The Division of Analytical Chemistry of the American Chemical Society: Washington, D. C., 1977, pp. 10–24. • Beck, C. M. “Classical Analysis: A Look at the Past, Present, and Future,” Anal. Chem. 1991, 63, 993A–1003A; Anal. Chem. 1994, 66, 224A–239A. Consult the following texts for additional examples of inorganic and organic gravimetric methods. • Bassett, J.; Denney, R. C.; Jeffery, G. H.; Mendham, J. Vogel’s Textbook of Quantitative Inorganic Analysis, Longman: London, 4th Ed., 1981. • Erdey, L. Gravimetric Analysis, Pergamon: Oxford, 1965. • Steymark, A. Quantitative Organic Microanalysis, The Blakiston Co.: NY, 1951. • Wendlandt, W. W. Thermal Methods of Analysis, 2nd Ed. Wiley: NY. 1986. For a review of isotope dilution mass spectrometry see the following article. • Fassett, J. D.; Paulsen, P. J. “Isotope Dilution Mass Spectrometry for Accurate Elemental Analysis,” Anal. Chem. 1989, 61, 643A–649A. 8.07: Chapter Summary and Key Terms Chapter Summary In a gravimetric analysis, a measurement of mass or a change in mass provides quantitative information about the analyte. The most common form of gravimetry uses a precipitation reaction to generate a product whose mass is proportional to the amount of analyte. In many cases the precipitate includes the analyte; however, an indirect analysis in which the analyte causes the precipitation of another compound also is possible. Precipitation gravimetric procedures must be carefully controlled to produce precipitates that are easy to filter, free from impurities, and of known stoichiometry. In volatilization gravimetry, thermal or chemical energy decomposes the sample containing the analyte. The mass of residue that remains after decomposition, the mass of volatile products collected using a suitable trap, or a change in mass due to the loss of volatile material are all gravimetric measurements. When the analyte is already present in a particulate form that is easy to separate from its matrix, then a particulate gravimetric analysis is feasible. Examples include the determination of dissolved solids and the determination of fat in foods. 
Key Terms coagulation, conservation of mass, coprecipitate, definitive technique, digestion, direct analysis, electrogravimetry, gravimetry, homogeneous precipitation, ignition, inclusion, indirect analysis, occlusion, particulate gravimetry, peptization, precipitant, precipitation gravimetry, quartz crystal microbalance, relative supersaturation, reprecipitation, supernatant, surface adsorbate, thermogram, thermogravimetry, volatilization gravimetry
Titrimetry, in which volume serves as the analytical signal, first appears as an analytical method in the early eighteenth century. Titrimetric methods were not well received by the analytical chemists of that era because they could not duplicate the accuracy and precision of a gravimetric analysis. Not surprisingly, few standard texts from that era include titrimetric methods of analysis. Precipitation gravimetry first developed as an analytical method without a general theory of precipitation. An empirical relationship between a precipitate’s mass and the mass of analyte in a sample—what analytical chemists call a gravimetric factor—was determined experimentally by taking a known mass of analyte through the procedure. Today, we recognize this as an early example of an external standardization. Gravimetric factors were not calculated using the stoichiometry of a precipitation reaction because chemical formulas and atomic weights were not yet available! Unlike gravimetry, the development and acceptance of titrimetry required a deeper understanding of stoichiometry, of thermodynamics, and of chemical equilibria. By the 1900s, the accuracy and precision of titrimetric methods were comparable to that of gravimetric methods, establishing titrimetry as an accepted analytical technique. • 9.1: Overview of Titrimetry In titrimetry we add a reagent, called the titrant, to a solution that contains another reagent, called the titrand, and allow them to react. Despite their difference in chemistry, all titrations share several common features. Before we consider individual titrimetric methods in greater detail, let’s take a moment to consider some of these similarities. • 9.2: Acid–Base Titrations In the overview to this chapter we noted that a titration’s end point should coincide with its equivalence point. To understand the relationship between an acid–base titration’s end point and its equivalence point we must know how the titrand’s pH changes during a titration. • 9.3: Complexation Titrations The earliest examples of metal–ligand complexation titrations are Liebig’s determinations, in the 1850s, of cyanide and chloride using, respectively, $\text{Ag}^+$ and $\text{Hg}^{2+}$ as the titrant. Practical applications were slow to develop because many metals and ligands form a series of metal–ligand complexes. In 1945, Schwarzenbach introduced EDTA as a titrant. The availability of a ligand that gives a single endpoint made complexation titrimetry a practical analytical method. • 9.4: Redox Titrations Analytical titrations using oxidation–reduction reactions were introduced shortly after the development of acid–base titrimetry. A titrant can serve as its own indicator if its oxidized and its reduced forms differ significantly in color, which initially limited redox titrations to a few titrants. Other titrants require a separate indicator. The first such indicator, diphenylamine, was introduced in the 1920s. Other redox indicators soon followed increasing the applicability of redox titrimetry. • 9.5: Precipitation Titrations Thus far in this chapter we have examined titrimetric methods based on acid–base, complexation, and oxidation–reduction reactions. A reaction in which the analyte and titrant form an insoluble precipitate also can serve as the basis for a titration. We call this type of titration a precipitation titration. • 9.6: Problems End-of-chapter problems to test your understanding of the topics in this chapter. 
• 9.7: Additional Resources A compendium of resources to accompany topics in this chapter. • 9.8: Chapter Summary and Key Terms Summary of this chapter's main topics and a list of key terms introduced in this chapter. 09: Titrimetric Methods In titrimetry we add a reagent, called the titrant, to a solution that contains another reagent, called the titrand, and allow them to react. The type of reaction provides us with a simple way to divide titrimetry into four categories: acid–base titrations, in which an acidic or basic titrant reacts with a titrand that is a base or an acid; complexometric titrations , which are based on metal–ligand complexation; redox titrations, in which the titrant is an oxidizing or reducing agent; and precipitation titrations, in which the titrand and titrant form a precipitate. We will deliberately avoid the term analyte at this point in our introduction to titrimetry. Although in most titrations the analyte is the titrand, there are circumstances where the analyte is the titrant. Later, when we discuss specific titrimetric methods, we will use the term analyte where appropriate. Despite their difference in chemistry, all titrations share several common features. Before we consider individual titrimetric methods in greater detail, let’s take a moment to consider some of these similarities. As you work through this chapter, this overview will help you focus on the similarities between different titrimetric methods. You will find it easier to understand a new analytical method when you can see its relationship to other similar methods. Equivalence Points and End Points If a titration is to give an accurate result we must combine the titrand and the titrant in stoichiometrically equivalent amounts. We call this stoichiometric mixture the equivalence point. Unlike precipitation gravimetry, where we add the precipitant in excess, an accurate titration requires that we know the exact volume of titrant at the equivalence point, Veq. The product of the titrant’s equivalence point volume and its molarity, MT, is equal to the moles of titrant that react with the titrand. $\text { moles titrant }=M_{T} \times V_{e q} \nonumber$ If we know the stoichiometry of the titration reaction, then we can calculate the moles of titrand. Unfortunately, for most titration reactions there is no obvious sign when we reach the equivalence point. Instead, we stop adding the titrant at an end point of our choosing. Often this end point is a change in the color of a substance, called an indicator, that we add to the titrand’s solution. The difference between the end point’s volume and the equivalence point’s volume is a determinate titration error. If the end point and the equivalence point volumes coincide closely, then this error is insignificant and is safely ignored. Clearly, selecting an appropriate end point is of critical importance. Volume as a Signal Instead of measuring the titrant’s volume, we may choose to measure its mass. Although generally we can measure mass more precisely than we can measure volume, the simplicity of a volumetric titration makes it the more popular choice. Almost any chemical reaction can serve as a titrimetric method provided that it meets the following four conditions. The first condition is that we must know the stoichiometry between the titrant and the titrand. If this is not the case, then we cannot convert the moles of titrant used to reach the end point to the moles of titrand in our sample. 
Second, the titration reaction effectively must proceed to completion; that is, the stoichiometric mixing of the titrant and the titrand must result in their complete reaction. Third, the titration reaction must occur rapidly. If we add the titrant faster than it can react with the titrand, then the end point and the equivalence point will differ significantly. Finally, we must have a suitable method for accurately determining the end point. These are significant limitations and, for this reason, there are several common titration strategies. Depending on how we are detecting the endpoint, we may stop the titration too early or too late. If the end point is a function of the titrant’s concentration, then adding the titrant too quickly leads to an early end point. On the other hand, if the end point is a function of the titrand's concentration, then the end point exceeds the equivalence point. A simple example of a titration is an analysis for Ag+ using thiocyanate, SCN, as a titrant. $\mathrm{Ag}^{+}(a q)+\mathrm{SCN}^{-}(a q)\rightleftharpoons\mathrm{Ag}(\mathrm{SCN})(s) \nonumber$ This reaction occurs quickly and with a known stoichiometry, which satisfies two of our requirements. To indicate the titration’s end point, we add a small amount of Fe3+ to the analyte’s solution before we begin the titration. When the reaction between Ag+ and SCN is complete, formation of the red-colored Fe(SCN)2+ complex signals the end point. This is an example of a direct titration since the titrant reacts directly with the analyte. This is an example of a precipitation titration. You will find more information about precipitation titrations later in this chapter. If the titration’s reaction is too slow, if a suitable indicator is not available, or if there is no useful direct titration reaction, then an indirect analysis may be possible. Suppose you wish to determine the concentration of formaldehyde, H2CO, in an aqueous solution. The oxidation of H2CO by $\text{I}_3^-$ $\mathrm{H}_{2} \mathrm{CO}(a q)+\mathrm{I}_{3}^-(a q)+3 \mathrm{OH}^{-}(a q)\rightleftharpoons\mathrm{HCO}_{2}^{-}(a q)+3 \mathrm{I}^{-}(a q)+2 \mathrm{H}_{2} \mathrm{O}(1) \nonumber$ is a useful reaction, but it is too slow for a titration. If we add a known excess of $\text{I}_3^-$ and allow its reaction with H2CO to go to completion, we can titrate the unreacted $\text{I}_3^-$ with thiosulfate, $\text{S}_2\text{O}_3^{2-}$. $\mathrm{I}_{3}^{-}(a q)+2 \mathrm{S}_{2} \mathrm{O}_{3}^{2-}(a q)\rightleftharpoons\mathrm{S}_{4} \mathrm{O}_{6}^{2-}(a q)+3 \mathrm{I}^{-}(a q) \nonumber$ The difference between the initial amount of $\text{I}_3^-$ and the amount in excess gives us the amount of $\text{I}_3^-$ that reacts with the formaldehyde. This is an example of a back titration. This is an example of a redox titration. You will find more information about redox titrations later in this chapter. Calcium ions play an important role in many environmental systems. A direct analysis for Ca2+ might take advantage of its reaction with the ligand ethylenediaminetetraacetic acid (EDTA), which we represent here as Y4–. $\mathrm{Ca}^{2+}(a q)+\mathrm{Y}^{4-}(a q)\rightleftharpoons\mathrm{CaY}^{2-}(a q) \nonumber$ Unfortunately, for most samples this titration does not have a useful indicator. Instead, we react the Ca2+ with an excess of MgY2– $\mathrm{Ca}^{2+}(a q)+\mathrm{MgY}^{2-}(a q)\rightleftharpoons\mathrm{Ca} \mathrm{Y}^{2-}(a q)+\mathrm{Mg}^{2+}(a q) \nonumber$ releasing an amount of Mg2+ equivalent to the amount of Ca2+ in the sample. 
Because the titration of Mg2+ with EDTA $\mathrm{Mg}^{2+}(a q)+\mathrm{Y}^{4-}(a q)\rightleftharpoons\mathrm{MgY}^{2-}(a q) \nonumber$ has a suitable end point, we can complete the analysis. The amount of EDTA used in the titration provides an indirect measure of the amount of Ca2+ in the original sample. Because the species we are titrating was displaced by the analyte, we call this a displacement titration. MgY2– is the Mg2+–EDTA metal–ligand complex. You can prepare a solution of MgY2– by combining equimolar solutions of Mg2+ and EDTA. This is an example of a complexation titration. You will find more information about complexation titrations later in this chapter. If a suitable reaction with the analyte does not exist it may be possible to generate a species that we can titrate. For example, we can determine the sulfur content of coal by using a combustion reaction to convert sulfur to sulfur dioxide $\mathrm{S}(s)+\mathrm{O}_{2}(g) \rightarrow \mathrm{SO}_{2}(g) \nonumber$ and then convert the SO2 to sulfuric acid, H2SO4, by bubbling it through an aqueous solution of hydrogen peroxide, H2O2. $\mathrm{SO}_{2}(g)+\mathrm{H}_{2} \mathrm{O}_{2}(a q) \longrightarrow \mathrm{H}_{2} \mathrm{SO}_{4}(a q) \nonumber$ Titrating H2SO4 with NaOH $\mathrm{H}_{2} \mathrm{SO}_{4}(a q)+2 \mathrm{NaOH}(a q)\rightleftharpoons2 \mathrm{H}_{2} \mathrm{O}(l )+\mathrm{Na}_{2} \mathrm{SO}_{4}(a q) \nonumber$ provides an indirect determination of sulfur. This is an example of an acid–base titration. You will find more information about acid–base titrations later in this chapter. Titration Curves To find a titration’s end point, we need to monitor some property of the reaction that has a well-defined value at the equivalence point. For example, the equivalence point for a titration of HCl with NaOH occurs at a pH of 7.0. A simple method for finding the equivalence point is to monitor the titration mixture’s pH using a pH electrode, stopping the titration when we reach a pH of 7.0. Alternatively, we can add an indicator to the titrand’s solution that changes color at a pH of 7.0. Why a pH of 7.0 is the equivalence point for this titration is a topic we will cover later in the section on acid–base titrations. Suppose the only available indicator changes color at a pH of 6.8. Is the difference between this end point and the equivalence point small enough that we safely can ignore the titration error? To answer this question we need to know how the pH changes during the titration. A titration curve provides a visual picture of how a property of the titration reaction changes as we add the titrant to the titrand. The titration curve in Figure 9.1.1 , for example, was obtained by suspending a pH electrode in a solution of 0.100 M HCl (the titrand) and monitoring the pH while adding 0.100 M NaOH (the titrant). A close examination of this titration curve should convince you that an end point pH of 6.8 produces a negligible titration error. Selecting a pH of 11.6 as the end point, however, produces an unacceptably large titration error. For the titration curve in Figure 9.1.1 , the volume of titrant to reach a pH of 6.8 is 24.99995 mL, a titration error of $-2.00 \times 10^{-4}$% relative to the equivalence point of 25.00 mL. Typically, we can read the volume only to the nearest ±0.01 mL, which means this uncertainty is too small to affect our results. The volume of titrant to reach a pH of 11.6 is 27.07 mL, or a titration error of +8.28%. This is a significant error. 
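The two titration errors quoted here are simple relative differences. The short Python sketch below reproduces them; it is illustrative only, not part of the original method, and the function name is arbitrary.

```python
# A minimal sketch (not from the original text) of the relative titration
# error discussed above; the function name is arbitrary.
def titration_error(v_end, v_eq):
    """Relative titration error, in percent, for an end point volume v_end
    and an equivalence point volume v_eq, both in mL."""
    return 100 * (v_end - v_eq) / v_eq

# end point at pH 6.8 (24.99995 mL) vs. the 25.00 mL equivalence point
print(f"{titration_error(24.99995, 25.00):+.1e} %")   # -2.0e-04 %
# end point at pH 11.6 (27.07 mL)
print(f"{titration_error(27.07, 25.00):+.2f} %")      # +8.28 %
```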
The shape of the titration curve in Figure 9.1.1 is not unique to an acid–base titration. Any titration curve that follows the change in concentration of a species in the titration reaction (plotted logarithmically) as a function of the titrant’s volume has the same general sigmoidal shape. Several additional examples are shown in Figure 9.1.2 . The titrand’s or the titrant’s concentration is not the only property we can use to record a titration curve. Other parameters, such as the temperature or absorbance of the titrand’s solution, may provide a useful end point signal. Many acid–base titration reactions, for example, are exothermic. As the titrant and the titrand react, the temperature of the titrand’s solution increases. Once we reach the equivalence point, further additions of titrant do not produce as exothermic a response. Figure 9.1.3 shows a typical thermometric titration curve where the intersection of the two linear segments indicates the equivalence point. The Buret The only essential equipment for an acid–base titration is a means for delivering the titrant to the titrand’s solution. The most common method for delivering titrant is a buret (Figure 9.1.4 ), which is a long, narrow tube with graduated markings and equipped with a stopcock for dispensing the titrant. The buret’s small internal diameter provides a better defined meniscus, making it easier to read precisely the titrant’s volume. Burets are available in a variety of sizes and tolerances (Table 9.1.1 ), with the choice of buret determined by the needs of the analysis. You can improve a buret’s accuracy by calibrating it over several intermediate ranges of volumes using the method described in Chapter 5 for calibrating pipets. Calibrating a buret corrects for variations in the buret’s internal diameter. Table 9.1.1 . Specifications for Volumetric Burets volume (mL) class subdivision (mL) tolerance ($\pm$) 5 A 0.01 ±0.01 B 0.01 ±0.01 10 A 0.02 ±0.02 B 0.02 ±0.04 25 A 0.1 ±0.03 B 0.1 ±0.06 50 A 0.1 ±0.05 B 0.1 ±0.10 100 A 0.2 ±0.10 B 0.2 ±0.20 An automated titration uses a pump to deliver the titrant at a constant flow rate (Figure 9.1.5 ). Automated titrations offer the additional advantage of using a microcomputer for data storage and analysis.
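Table 9.1.1 also lets us compare burets on a relative basis. The Python sketch below is illustrative only; the dictionary simply restates the class A tolerances from the table and converts each into a percent relative uncertainty at the buret's full capacity.

```python
# Percent relative tolerance at full capacity for class A burets, using
# the tolerances from Table 9.1.1. Illustrative only; the dictionary name
# and structure are not part of the original text.
class_a_tolerance = {5: 0.01, 10: 0.02, 25: 0.03, 50: 0.05, 100: 0.10}  # mL: +/- mL

for volume, tol in class_a_tolerance.items():
    print(f"{volume} mL class A buret: +/-{100 * tol / volume:.2f}% at full volume")
```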
Before 1800, most acid–base titrations used H2SO4, HCl, or HNO3 as acidic titrants, and K2CO3 or Na2CO3 as basic titrants. A titration’s end point was determined using litmus as an indicator, which is red in acidic solutions and blue in basic solutions, or by the cessation of CO2 effervescence when neutralizing $\text{CO}_3^{2-}$. Early examples of acid–base titrimetry include determining the acidity or alkalinity of solutions, and determining the purity of carbonates and alkaline earth oxides. The determination of acidity and alkalinity continue to be important applications of acid–base titrimetry. We will take a closer look at these applications later in this section. Three limitations slowed the development of acid–base titrimetry: the lack of a strong base titrant for the analysis of weak acids, the lack of suitable indicators, and the absence of a theory of acid–base reactivity. The introduction, in 1846, of NaOH as a strong base titrant extended acid–base titrimetry to the determination of weak acids. The synthesis of organic dyes provided many new indicators. Phenolphthalein, for example, was first synthesized by Bayer in 1871 and used as an indicator for acid–base titrations in 1877. Despite the increased availability of indicators, the absence of a theory of acid–base reactivity made it difficult to select an indicator. The development of equilibrium theory in the late 19th century led to significant improvements in the theoretical understanding of acid–base chemistry, and, in turn, of acid–base titrimetry. Sørenson’s establishment of the pH scale in 1909 provided a rigorous means to compare indicators. The determination of acid–base dissociation constants made it possible to calculate a theoretical titration curve, as outlined by Bjerrum in 1914. For the first time analytical chemists had a rational method for selecting an indicator, making acid–base titrimetry a useful alternative to gravimetry. Acid–Base Titration Curves In the overview to this chapter we noted that a titration’s end point should coincide with its equivalence point. To understand the relationship between an acid–base titration’s end point and its equivalence point we must know how the titrand’s pH changes during a titration. In this section we will learn how to calculate a titration curve using the equilibrium calculations from Chapter 6. We also will learn how to sketch a good approximation of any acid–base titration curve using a limited number of simple calculations. Titrating Strong Acids and Strong Bases For our first titration curve, let’s consider the titration of 50.0 mL of 0.100 M HCl using a titrant of 0.200 M NaOH. When a strong base and a strong acid react the only reaction of importance is $\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q) \rightarrow 2 \mathrm{H}_{2} \mathrm{O}(\mathrm{l}) \label{9.1}$ Although we have not written reaction \ref{9.1} as an equilibrium reaction, it is at equilibrium; however, because its equilibrium constant is large—it is (Kw)–1 or $1.00 \times 10^{14}$—we can treat reaction \ref{9.1} as though it goes to completion. The first task is to calculate the volume of NaOH needed to reach the equivalence point, Veq. At the equivalence point we know from reaction \ref{9.1} that \begin{aligned} \text { moles } \mathrm{HCl}=& \text { moles } \mathrm{NaOH} \ M_{a} \times V_{a} &=M_{b} \times V_{b} \end{aligned} \nonumber where the subscript ‘a’ indicates the acid, HCl, and the subscript ‘b’ indicates the base, NaOH. 
The volume of NaOH needed to reach the equivalence point is $V_{e q}=V_{b}=\frac{M_{a} V_{a}}{M_{b}}=\frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{(0.200 \ \mathrm{M})}=25.0 \ \mathrm{mL} \nonumber$ Before the equivalence point, HCl is present in excess and the pH is determined by the concentration of unreacted HCl. At the start of the titration the solution is 0.100 M in HCl, which, because HCl is a strong acid, means the pH is $\mathrm{pH}=-\log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=-\log \left[\text{HCl} \right] = -\log (0.100)=1.00 \nonumber$ After adding 10.0 mL of NaOH the concentration of excess HCl is $[\text{HCl}] = \frac {(\text{mol HCl})_\text{initial} - (\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_a V_a - M_b V_b} {V_a + V_b} \nonumber$ $[\mathrm{HCl}]=\frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})-(0.200 \ \mathrm{M})(10.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0500 \ \mathrm{M} \nonumber$ and the pH increases to 1.30. At the equivalence point the moles of HCl and the moles of NaOH are equal. Since neither the acid nor the base is in excess, the pH is determined by the dissociation of water. $\begin{array}{c}{K_{w}=1.00 \times 10^{-14}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right]=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}} \ {\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=1.00 \times 10^{-7}}\end{array} \nonumber$ Thus, the pH at the equivalence point is 7.00. For volumes of NaOH greater than the equivalence point, the pH is determined by the concentration of excess OH. For example, after adding 30.0 mL of titrant the concentration of OH is $[\text{OH}^-] = \frac {(\text{mol NaOH})_\text{added} - (\text{mol HCl})_\text{initial}} {\text{total volume}} = \frac {M_b V_b - M_a V_a} {V_a + V_b} \nonumber$ $\left[\mathrm{OH}^{-}\right]=\frac{(0.200 \ \mathrm{M})(30.0 \ \mathrm{mL})-(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{30.0 \ \mathrm{mL}+50.0 \ \mathrm{mL}}=0.0125 \ \mathrm{M} \nonumber$ To find the concentration of H3O+ we use the Kw expression $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{\left[\mathrm{OH}^{-}\right]}=\frac{1.00 \times 10^{-14}}{0.0125}=8.00 \times 10^{-13} \ \mathrm{M} \nonumber$ to find that the pH is 12.10. Table 9.2.1 and Figure 9.2.1 show additional results for this titration curve. You can use this same approach to calculate the titration curve for the titration of a strong base with a strong acid, except the strong base is in excess before the equivalence point and the strong acid is in excess after the equivalence point. Table 9.2.1 . Titration of 50.0 mL of 0.100 M HCl with 0.200 M NaOH volume of NaOH (mL) pH volume of NaOH (mL) pH 0.00 1.00 26.0 11.42 5.00 1.14 28.0 11.89 10.0 1.30 30.0 12.10 15.0 1.51 35.0 12.37 20.0 1.85 40.0 12.52 22.0 2.08 45.0 12.63 24.0 2.57 50.0 12.70 25.0 7.00 Exercise 9.2.1 Construct a titration curve for the titration of 25.0 mL of 0.125 M NaOH with 0.0625 M HCl. Answer The volume of HCl needed to reach the equivalence point is $V_{e q}=V_{a}=\frac{M_{b} V_{b}}{M_{a}}=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{(0.0625 \ \mathrm{M})}=50.0 \ \mathrm{mL} \nonumber$ Before the equivalence point, NaOH is present in excess and the pH is determined by the concentration of unreacted OH. 
For example, after adding 10.0 mL of HCl $\begin{array}{c}{\left[\mathrm{OH}^{-}\right]=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})-(0.0625 \mathrm{M})(10.0 \ \mathrm{mL})}{25.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0714 \ \mathrm{M}} \ {\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{w}}{\left[\mathrm{OH}^{-}\right]}=\frac{1.00 \times 10^{-14}}{0.0714 \ \mathrm{M}}=1.40 \times 10^{-13} \ \mathrm{M}}\end{array} \nonumber$ the pH is 12.85. For the titration of a strong base with a strong acid the pH at the equivalence point is 7.00. For volumes of HCl greater than the equivalence point, the pH is determined by the concentration of excess HCl. For example, after adding 70.0 mL of titrant the concentration of HCl is $[\mathrm{HCl}]=\frac{(0.0625 \ \mathrm{M})(70.0 \ \mathrm{mL})-(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{70.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}}=0.0132 \ \mathrm{M} \nonumber$ giving a pH of 1.88. Some additional results are shown here. volume of HCl (mL) pH volume of HCl (mL) pH 0 13.10 60 2.13 10 12.85 70 1.88 20 12.62 80 1.75 30 12.36 90 1.66 40 11.98 100 1.60 50 7.00 Titrating a Weak Acid with a Strong Base For this example, let’s consider the titration of 50.0 mL of 0.100 M acetic acid, CH3COOH, with 0.200 M NaOH. Again, we start by calculating the volume of NaOH needed to reach the equivalence point; thus $\operatorname{mol} \ \mathrm{CH}_{3} \mathrm{COOH}=\mathrm{mol} \ \mathrm{NaOH} \nonumber$ $M_{a} \times V_{a}=M_{b} \times V_{b} \nonumber$ $V_{e q}=V_{b}=\frac{M_{a} V_{a}}{M_{b}}=\frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{(0.200 \ \mathrm{M})}=25.0 \ \mathrm{mL} \nonumber$ Before we begin the titration the pH is that for a solution of 0.100 M acetic acid. Because acetic acid is a weak acid, we calculate the pH using the method outlined in Chapter 6 $\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber$ $K_{a}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{CH}_{3} \mathrm{COO}^-\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]}=\frac{(x)(x)}{0.100-x}=1.75 \times 10^{-5} \nonumber$ finding that the pH is 2.88. Adding NaOH converts a portion of the acetic acid to its conjugate base, CH3COO. $\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{OH}^{-}(a q) \longrightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \label{9.2}$ Because the equilibrium constant for reaction \ref{9.2} is quite large $K=K_{\mathrm{a}} / K_{\mathrm{w}}=1.75 \times 10^{9} \nonumber$ we can treat the reaction as if it goes to completion. Any solution that contains comparable amounts of a weak acid, HA, and its conjugate weak base, A, is a buffer. As we learned in Chapter 6, we can calculate the pH of a buffer using the Henderson–Hasselbalch equation. 
$\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{\left[\mathrm{A}^{-}\right]}{[\mathrm{HA}]} \nonumber$ Before the equivalence point the concentration of unreacted acetic acid is $\left[\text{CH}_3\text{COOH}\right] = \frac {(\text{mol CH}_3\text{COOH})_\text{initial} - (\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_a V_a - M_b V_b} {V_a + V_b} \nonumber$ and the concentration of acetate is $[\text{CH}_3\text{COO}^-] = \frac {(\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_b V_b} {V_a + V_b} \nonumber$ For example, after adding 10.0 mL of NaOH the concentrations of CH3COOH and CH3COO are $\left[\mathrm{CH}_{3} \mathrm{COOH}\right]=\frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})-(0.200 \ \mathrm{M})(10.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}} = 0.0500 \text{ M} \nonumber$ $\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]=\frac{(0.200 \ \mathrm{M})(10.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0333 \ \mathrm{M} \nonumber$ which gives us a pH of $\mathrm{pH}=4.76+\log \frac{0.0333 \ \mathrm{M}}{0.0500 \ \mathrm{M}}=4.58 \nonumber$ At the equivalence point the moles of acetic acid initially present and the moles of NaOH added are identical. Because their reaction effectively proceeds to completion, the predominate ion in solution is CH3COO, which is a weak base. To calculate the pH we first determine the concentration of CH3COO $\left[\mathrm{CH}_{3} \mathrm{COO}^-\right]=\frac{(\mathrm{mol} \ \mathrm{NaOH})_{\mathrm{added}}}{\text { total volume }}= \frac{(0.200 \ \mathrm{M})(25.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}}=0.0667 \ \mathrm{M} \nonumber$ Alternatively, we can calculate acetate’s concentration using the initial moles of acetic acid; thus $\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]=\frac{\left(\mathrm{mol} \ \mathrm{CH}_{3} \mathrm{COOH}\right)_{\mathrm{initial}}}{\text { total volume }} = \frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}} = 0.0667 \text{ M} \nonumber$ Next, we calculate the pH of the weak base as shown earlier in Chapter 6 $\mathrm{CH}_{3} \mathrm{COO}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{OH}^{-}(a q)+\mathrm{CH}_{3} \mathrm{COOH}(a q) \nonumber$ $K_{\mathrm{b}}=\frac{\left[\mathrm{OH}^{-}\right]\left[\mathrm{CH}_{3} \mathrm{COOH}\right]}{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}=\frac{(x)(x)}{0.0667-x}=5.71 \times 10^{-10} \nonumber$ $x=\left[\mathrm{OH}^{-}\right]=6.17 \times 10^{-6} \ \mathrm{M} \nonumber$ $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{\left[\mathrm{OH}^{-}\right]}=\frac{1.00 \times 10^{-14}}{6.17 \times 10^{-6}}=1.62 \times 10^{-9} \ \mathrm{M} \nonumber$ finding that the pH at the equivalence point is 8.79. After the equivalence point, the titrant is in excess and the titration mixture is a dilute solution of NaOH. We can calculate the pH using the same strategy as in the titration of a strong acid with a strong base. For example, after adding 30.0 mL of NaOH the concentration of OH is $\left[\mathrm{OH}^{-}\right]=\frac{(0.200 \ \mathrm{M})(30.0 \ \mathrm{mL})-(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{30.0 \ \mathrm{mL}+50.0 \ \mathrm{mL}}=0.0125 \ \mathrm{M} \nonumber$ $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{\left[\mathrm{OH}^{-}\right]}=\frac{1.00 \times 10^{-14}}{0.0125}=8.00 \times 10^{-13} \ \mathrm{M} \nonumber$ giving a pH of 12.10. Table 9.2.2 and Figure 9.2.2 show additional results for this titration. 
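The calculations summarized in Table 9.2.2 are straightforward to script. The following Python sketch is not part of the original text; it simply applies the approximations described above (the weak acid equilibrium before the titration begins, the Henderson–Hasselbalch equation in the buffer region, hydrolysis of acetate at the equivalence point, and excess strong base after it), and the function and variable names are arbitrary.

```python
import math

# Sketch of the weak acid-strong base calculations above, for 50.0 mL of
# 0.100 M acetic acid titrated with 0.200 M NaOH. Illustrative only.
Ka, Kw = 1.75e-5, 1.00e-14
Ma, Va = 0.100, 50.0   # acid molarity and initial volume (mL)
Mb = 0.200             # NaOH molarity

def ph_after_adding(Vb):
    Veq = Ma * Va / Mb
    if Vb == 0:                                    # weak acid only
        h = (-Ka + math.sqrt(Ka**2 + 4 * Ka * Ma)) / 2
    elif math.isclose(Vb, Veq):                    # acetate only
        Kb, cb = Kw / Ka, Ma * Va / (Va + Vb)
        oh = (-Kb + math.sqrt(Kb**2 + 4 * Kb * cb)) / 2
        h = Kw / oh
    elif Vb < Veq:                                 # CH3COOH/CH3COO- buffer
        return -math.log10(Ka) + math.log10((Mb * Vb) / (Ma * Va - Mb * Vb))
    else:                                          # excess strong base
        h = Kw / ((Mb * Vb - Ma * Va) / (Va + Vb))
    return -math.log10(h)

for Vb in (0.0, 10.0, 25.0, 30.0):
    print(f"{Vb:5.1f} mL NaOH -> pH {ph_after_adding(Vb):.2f}")
```

Running the sketch for 0.0, 10.0, 25.0, and 30.0 mL of NaOH returns pH values of 2.88, 4.58, 8.79, and 12.10, which match the corresponding entries in Table 9.2.2 .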
You can use this same approach to calculate the titration curve for the titration of a weak base with a strong acid, except the initial pH is determined by the weak base, the pH at the equivalence point by its conjugate weak acid, and the pH after the equivalence point by excess strong acid. Table 9.2.2 . Titration of 50.0 mL of 0.100 M Acetic Acid with 0.200 M NaOH volume of NaOH (mL) pH volume of NaOH (mL) pH 0.00 2.88 26.0 11.43 5.00 4.16 28.0 11.89 10.0 4.58 30.0 12.10 15.0 4.94 35.0 12.37 20.0 5.36 40.0 12.52 22.0 5.63 45.0 12.63 24.0 6.14 50.0 12.70 25.0 8.79 Exercise 9.2.2 Construct a titration curve for the titration of 25.0 mL of 0.125 M NH3 with 0.0625 M HCl. Answer The volume of HCl needed to reach the equivalence point is $V_{e q}=V_{a}=\frac{M_{b} V_{b}}{M_{a}}=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{(0.0625 \ \mathrm{M})}=50.0 \ \mathrm{mL} \nonumber$ Before adding HCl the pH is that for a solution of 0.125 M NH3. $K_{\mathrm{b}}=\frac{[\mathrm{OH}^-]\left[\mathrm{NH}_{4}^{+}\right]}{\left[\mathrm{NH}_{3}\right]}=\frac{(x)(x)}{0.125-x}=1.75 \times 10^{-5} \nonumber$ $x=\left[\mathrm{OH}^{-}\right]=1.48 \times 10^{-3} \ \mathrm{M} \nonumber$ $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{[\mathrm{OH}^-]}=\frac{1.00 \times 10^{-14}}{1.48 \times 10^{-3} \ \mathrm{M}}=6.76 \times 10^{-12} \ \mathrm{M} \nonumber$ The pH at the beginning of the titration, therefore, is 11.17. Before the equivalence point the pH is determined by an $\text{NH}_3/\text{NH}_4^+$ buffer. For example, after adding 10.0 mL of HCl $\left[\mathrm{NH}_{3}\right]=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})-(0.0625 \ \mathrm{M})(10.0 \ \mathrm{mL})}{25.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0714 \ \mathrm{M} \nonumber$ $\left[\mathrm{NH}_{4}^{+}\right]=\frac{(0.0625 \ \mathrm{M})(10.0 \ \mathrm{mL})}{25.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0179 \ \mathrm{M} \nonumber$ $\mathrm{pH}=9.244+\log \frac{0.0714 \ \mathrm{M}}{0.0179 \ \mathrm{M}}=9.84 \nonumber$ At the equivalence point the predominant ion in solution is $\text{NH}_4^+$. To calculate the pH we first determine the concentration of $\text{NH}_4^+$ $\left[\mathrm{NH}_{4}^{+}\right]=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{25.0 \ \mathrm{mL}+50.0 \ \mathrm{mL}}=0.0417 \ \mathrm{M} \nonumber$ and then calculate the pH $K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{NH}_{3}\right]}{\left[\mathrm{NH}_{4}^{+}\right]}=\frac{(x)(x)}{0.0417-x}=5.70 \times 10^{-10} \nonumber$ obtaining a value of 5.31. After the equivalence point, the pH is determined by the excess HCl. For example, after adding 70.0 mL of HCl $[\mathrm{HCl}]=\frac{(0.0625 \ \mathrm{M})(70.0 \ \mathrm{mL})-(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{70.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}}=0.0132 \ \mathrm{M} \nonumber$ and the pH is 1.88. Some additional results are shown here. volume of HCl (mL) pH volume of HCl (mL) pH 0 11.17 60 2.13 10 9.84 70 1.88 20 9.42 80 1.75 30 9.07 90 1.66 40 8.64 100 1.60 50 5.31 We can extend this approach for calculating a weak acid–strong base titration curve to reactions that involve multiprotic acids or bases, and mixtures of acids or bases. As the complexity of the titration increases, however, the necessary calculations become more time consuming. Not surprisingly, a variety of algebraic and spreadsheet approaches are available to aid in constructing titration curves. The following papers provide information on algebraic approaches to calculating titration curves: (a) Willis, C. J. J. Chem. Educ.
1981, 58, 659–663; (b) Nakagawa, K. J. Chem. Educ. 1990, 67, 673–676; (c) Gordus, A. A. J. Chem. Educ. 1991, 68, 759–761; (d) de Levie, R. J. Chem. Educ. 1993, 70, 209–217; (e) Chaston, S. J. Chem. Educ. 1993, 70, 878–880; (f) de Levie, R. Anal. Chem. 1996, 68, 585–590. The following papers provide information on the use of spreadsheets to generate titration curves: (a) Currie, J. O.; Whiteley, R. V. J. Chem. Educ. 1991, 68, 923–926; (b) Breneman, G. L.; Parker, O. J. J. Chem. Educ. 1992, 69, 46–47; (c) Carter, D. R.; Frye, M. S.; Mattson, W. A. J. Chem. Educ. 1993, 70, 67–71; (d) Freiser, H. Concepts and Calculations in Analytical Chemistry, CRC Press: Boca Raton, 1992. Sketching an Acid–Base Titration Curve To evaluate the relationship between a titration’s equivalence point and its end point we need to construct only a reasonable approximation of the exact titration curve. In this section we demonstrate a simple method for sketching an acid–base titration curve. Our goal is to sketch the titration curve quickly, using as few calculations as possible. Let’s use the titration of 50.0 mL of 0.100 M CH3COOH with 0.200 M NaOH to illustrate our approach. This is the same example that we used to develop the calculations for a weak acid–strong base titration curve. You can review the results of that calculation in Table 9.2.2 and in Figure 9.2.2 . We begin by calculating the titration’s equivalence point volume, which, as we determined earlier, is 25.0 mL. Next we draw our axes, placing pH on the y-axis and the titrant’s volume on the x-axis. To indicate the equivalence point volume, we draw a vertical line that intersects the x-axis at 25.0 mL of NaOH. Figure 9.2.3 a shows the first step in our sketch. Before the equivalence point the titrand’s pH is determined by a buffer of acetic acid, CH3COOH, and acetate, CH3COO. Although we can calculate a buffer’s pH using the Henderson–Hasselbalch equation, we can avoid this calculation by making a simple assumption. You may recall from Chapter 6 that a buffer operates over a pH range that extends approximately ±1 pH unit on either side of the weak acid’s pKa value. The pH is at the lower end of this range, pH = pKa – 1, when the weak acid’s concentration is $10 \times$ greater than that of its conjugate weak base. The buffer reaches its upper pH limit, pH = pKa + 1, when the weak acid’s concentration is $10 \times$ smaller than that of its conjugate weak base. When we titrate a weak acid or a weak base, the buffer spans a range of volumes from approximately 10% of the equivalence point volume to approximately 90% of the equivalence point volume. The actual values are 9.09% and 90.9%, but for our purpose, using 10% and 90% is more convenient; that is, after all, one advantage of an approximation! Figure 9.2.3 b shows the second step in our sketch. First, we superimpose acetic acid’s ladder diagram on the y-axis, including its buffer range, using its pKa value of 4.76. Next, we add two points, one for the pH at 10% of the equivalence point volume (a pH of 3.76 at 2.5 mL) and one for the pH at 90% of the equivalence point volume (a pH of 5.76 at 22.5 mL). The third step is to add two points after the equivalence point. The pH after the equivalence point is fixed by the concentration of excess titrant, NaOH. Calculating the pH of a strong base is straightforward, as we saw earlier. Figure 9.2.3 c includes points (see Table 9.2.2 ) for the pH after adding 30.0 mL and after adding 40.0 mL of NaOH. 
Next, we draw a straight line through each pair of points, extending each line through the vertical line that represents the equivalence point’s volume (Figure 9.2.3 d). Finally, we complete our sketch by drawing a smooth curve that connects the three straight-line segments (Figure 9.2.3 e). A comparison of our sketch to the exact titration curve (Figure 9.2.3 f) shows that they are in close agreement. Exercise 9.2.3 Sketch a titration curve for the titration of 25.0 mL of 0.125 M NH3 with 0.0625 M HCl and compare to the result from Exercise 9.2.2 . Answer The figure below shows a sketch of the titration curve. The black dots and curve are the approximate sketch of the titration curve. The points in red are the calculations from Exercise 9.2.2 . The two black points before the equivalence point (VHCl = 5 mL, pH = 10.24 and VHCl = 45 mL, pH= 8.24) are plotted using the pKa of 9.244 for $\text{NH}_4^+$. The two black points after the equivalence point (VHCl = 60 mL, pH = 2.13 and VHCl = 80 mL, pH= 1.75 ) are from the answer to Exercise 9.2.2 . As shown in the following example, we can adapt this approach to any acid–base titration, including those where exact calculations are more challenging, including the titration of polyprotic weak acids and bases, and the titration of mixtures of weak acids or weak bases. Example 9.2.1 Sketch titration curves for the following two systems: (a) the titration of 50.0 mL of 0.050 M H2A, a diprotic weak acid with a pKa1 of 3 and a pKa2 of 7; and (b) the titration of a 50.0 mL mixture that contains 0.075 M HA, a weak acid with a pKa of 3, and 0.025 M HB, a weak acid with a pKa of 7. For both titrations, assume that the titrant is 0.10 M NaOH. Solution Figure 9.2.4 a shows the titration curve for H2A, including the ladder diagram for H2A on the y-axis, the two equivalence points at 25.0 mL and at 50.0 mL, two points before each equivalence point, two points after the last equivalence point, and the straight-lines used to sketch the final titration curve. Before the first equivalence point the pH is controlled by a buffer of H2A and HA. An HA/A2– buffer controls the pH between the two equivalence points. After the second equivalence point the pH reflects the concentration of excess NaOH. Figure 9.2.4 b shows the titration curve for the mixture of HA and HB. Again, there are two equivalence points; however, in this case the equivalence points are not equally spaced because the concentration of HA is greater than that for HB. Because HA is the stronger of the two weak acids it reacts first; thus, the pH before the first equivalence point is controlled by a buffer of HA and A. Between the two equivalence points the pH reflects the titration of HB and is determined by a buffer of HB and B. After the second equivalence point excess NaOH determines the pH. Exercise 9.2.4 Sketch the titration curve for 50.0 mL of 0.050 M H2A, a diprotic weak acid with a pKa1 of 3 and a pKa2 of 4, using 0.100 M NaOH as the titrant. The fact that pKa2 falls within the buffer range of pKa1 presents a challenge that you will need to consider. Answer The figure below shows a sketch of the titration curve. The titration curve has two equivalence points, one at 25.0 mL $(\text{H}_2\text{A} \rightarrow \text{HA}^-)$ and one at 50.0 mL ($\text{HA}^- \rightarrow \text{A}^{2-}$). 
In sketching the curve, we plot two points before the first equivalence point using the pKa1 of 3 for H2A $V_{\mathrm{NaOH}}=2.5 \ \mathrm{mL}, \mathrm{pH}=2 \text { and } V_{\mathrm{NaOH}}=22.5 \ \mathrm{mL}, \mathrm{pH}=4 \nonumber$ two points between the equivalence points using the pKa2 of 4 for HA– $V_{\mathrm{NaOH}}=27.5 \ \mathrm{mL}, \mathrm{pH}=3, \text { and } V_{\mathrm{NaOH}}=47.5 \ \mathrm{mL}, \mathrm{pH}=5 \nonumber$ and two points after the second equivalence point $V_{\mathrm{NaOH}}=70 \ \mathrm{mL}, \mathrm{pH}=12.22 \text { and } V_{\mathrm{NaOH}}=90 \ \mathrm{mL}, \mathrm{pH}=12.46 \nonumber$ Drawing a smooth curve through these points presents us with the following dilemma—the pH appears to increase as the titrant’s volume approaches the first equivalence point and then appears to decrease as it passes through the first equivalence point. This is, of course, absurd; as we add NaOH the pH cannot decrease. Instead, we model the titration curve before the second equivalence point by drawing a straight line from the first point (VNaOH = 2.5 mL, pH = 2) to the fourth point (VNaOH = 47.5 mL, pH = 5), ignoring the second and third points. The result is a reasonable approximation of the exact titration curve. Selecting and Evaluating the End Point Earlier we made an important distinction between a titration’s end point and its equivalence point. The difference between these two terms is important and deserves repeating. An equivalence point, which occurs when we react stoichiometrically equal amounts of the analyte and the titrant, is a theoretical value, not an experimental one. A titration’s end point is an experimental result that represents our best estimate of the equivalence point. Any difference between a titration’s equivalence point and its corresponding end point is a source of determinate error. Where is the Equivalence Point? Earlier we learned how to calculate the pH at the equivalence point for the titration of a strong acid with a strong base, and for the titration of a weak acid with a strong base. We also learned how to sketch a titration curve with only a minimum of calculations. Can we also locate the equivalence point without performing any calculations? The answer, as you might guess, often is yes! For most acid–base titrations the inflection point—the point on a titration curve that has the greatest slope—very nearly coincides with the titration’s equivalence point. The red arrows in Figure 9.2.4 , for example, identify the equivalence points for the titration curves in Example 9.2.1 . An inflection point actually precedes its corresponding equivalence point by a small amount, with the error approaching 0.1% for weak acids and weak bases with dissociation constants smaller than $10^{-9}$, or for very dilute solutions [Meites, L.; Goldman, J. A. Anal. Chim. Acta 1963, 29, 472–479]. The principal limitation of an inflection point is that it must be present and easy to identify. For some titrations the inflection point is missing or difficult to find. Figure 9.2.5 , for example, demonstrates the effect of a weak acid’s dissociation constant, Ka, on the shape of its titration curve. An inflection point is visible, even if barely so, for acid dissociation constants larger than $10^{-9}$, but is missing when Ka is $10^{-11}$. An inflection point also may be missing or difficult to see if the analyte is a multiprotic weak acid or weak base with successive dissociation constants that are similar in magnitude. 
To appreciate why this is true let’s consider the titration of a diprotic weak acid, H2A, with NaOH. During the titration the following two reactions occur. $\mathrm{H}_{2} \mathrm{A}(a q)+\mathrm{OH}^{-}(a q) \longrightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{HA}^{-}(a q) \label{9.3}$ $\mathrm{HA}^{-}(a q)+\mathrm{OH}^{-}(a q) \rightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{A}^{2-}(a q) \label{9.4}$ To see two distinct inflection points, reaction \ref{9.3} must essentially be complete before reaction \ref{9.4} begins. Figure 9.2.6 shows titration curves for three diprotic weak acids. The titration curve for maleic acid, for which Ka1 is approximately $20000 \times$ larger than Ka2, has two distinct inflection points. Malonic acid, on the other hand, has acid dissociation constants that differ by a factor of approximately 690. Although malonic acid’s titration curve shows two inflection points, the first is not as distinct as the second. Finally, the titration curve for succinic acid, for which the two Ka values differ by a factor of only $27 \times$, has only a single inflection point that corresponds to the neutralization of $\text{HC}_4\text{H}_4\text{O}_4^-$ to $\text{C}_4\text{H}_4\text{O}_4^{2-}$. In general, we can detect separate inflection points when successive acid dissociation constants differ by a factor of at least 500 (a $\Delta$pKa of at least 2.7). The same holds true for mixtures of weak acids or mixtures of weak bases. To detect separate inflection points when titrating a mixture of weak acids, their Ka values must differ by a factor of at least 500. Finding the End Point with an Indicator One interesting group of weak acids and weak bases is organic dyes. Because an organic dye has at least one highly colored conjugate acid–base species, its titration results in a change in both its pH and its color. We can use this change in color to indicate the end point of a titration provided that it occurs at or near the titration’s equivalence point. As an example, let’s consider an indicator for which the acid form, HIn, is yellow and the base form, In–, is red. The color of the indicator’s solution depends on the relative concentrations of HIn and In–. To understand the relationship between pH and color we use the indicator’s acid dissociation reaction $\mathrm{HIn}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\operatorname{In}^{-}(a q) \nonumber$ and its equilibrium constant expression. $K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{In}^{-}\right]}{[\mathrm{HIn}]} \label{9.5}$ Taking the negative log of each side of Equation \ref{9.5} and rearranging to solve for pH leaves us with an equation that relates the solution’s pH to the relative concentrations of HIn and In–. $\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{[\mathrm{In}^-]}{[\mathrm{HIn}]} \label{9.6}$ If we can detect HIn and In– with equal ease, then the transition from yellow-to-red (or from red-to-yellow) reaches its midpoint, which is orange, when the concentrations of HIn and In– are equal, or when the pH is equal to the indicator’s pKa. If the indicator’s pKa and the pH at the equivalence point are identical, then titrating until the indicator turns orange is a suitable end point. Unfortunately, we rarely know the exact pH at the equivalence point. In addition, determining when the concentrations of HIn and In– are equal is difficult if the indicator’s change in color is subtle. 
We can establish the range of pHs over which the average analyst observes a change in the indicator’s color by making two assumptions: that the indicator’s color is yellow if the concentration of HIn is $10 \times$ greater than that of In– and that its color is red if the concentration of HIn is $10 \times$ smaller than that of In–. Substituting these inequalities into Equation \ref{9.6} $\begin{array}{l}{\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{1}{10}=\mathrm{p} K_{\mathrm{a}}-1} \ {\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{10}{1}=\mathrm{p} K_{\mathrm{a}}+1}\end{array} \nonumber$ shows that the indicator changes color over a pH range that extends ±1 unit on either side of its pKa. As shown in Figure 9.2.7 , the indicator is yellow when the pH is less than pKa – 1 and it is red when the pH is greater than pKa + 1. For pH values between pKa – 1 and pKa + 1 the indicator’s color passes through various shades of orange. The properties of several common acid–base indicators are listed in Table 9.2.3 . Table 9.2.3 . Properties of Selected Acid–Base Indicators indicator acid color base color pH range pKa cresol red red yellow 0.2–1.8 thymol blue red yellow 1.2–2.8 1.7 bromophenol blue yellow blue 3.0–4.6 4.1 methyl orange red yellow 3.4–4.4 3.7 Congo red blue red 3.0–5.0 bromocresol green yellow blue 3.8–5.4 4.7 methyl red red yellow 4.2–6.3 5.0 bromocresol purple yellow purple 5.2–6.8 6.1 litmus red blue 5.0–8.0 bromothymol blue yellow blue 6.0–7.6 7.1 phenol red yellow red 6.8–8.4 7.8 cresol red yellow red 7.2–8.8 8.2 thymol blue yellow red 8.0–9.6 8.9 phenolphthalein colorless red 8.3–10.0 9.6 alizarin yellow R yellow orange-red 10.1–12.0 You may wonder why an indicator’s pH range, such as that for phenolphthalein, is not equally distributed around its pKa value. The explanation is simple. Figure 9.2.7 presents an idealized view in which our sensitivity to the indicator’s two colors is equal. For some indicators only the weak acid or the weak base is colored. For other indicators both the weak acid and the weak base are colored, but one form is easier to see. In either case, the indicator’s pH range is skewed in the direction of the indicator’s less colored form. Thus, phenolphthalein’s pH range is skewed in the direction of its colorless form, shifting the pH range to values lower than those suggested by Figure 9.2.7 . The relatively broad range of pHs over which an indicator changes color places additional limitations on its ability to signal a titration’s end point. To minimize a determinate titration error, the indicator’s entire pH range must fall within the rapid change in pH near the equivalence point. For example, in Figure 9.2.8 we see that phenolphthalein is an appropriate indicator for the titration of 50.0 mL of 0.050 M acetic acid with 0.10 M NaOH. Bromothymol blue, on the other hand, is an inappropriate indicator because its change in color begins well before the initial sharp rise in pH, and, as a result, spans a relatively large range of volumes. The early change in color increases the probability of obtaining an inaccurate result, and the range of possible end point volumes increases the probability of obtaining imprecise results. Exercise 9.2.5 Suggest a suitable indicator for the titration of 25.0 mL of 0.125 M NH3 with 0.0625 M HCl. You constructed a titration curve for this titration in Exercise 9.2.2 and Exercise 9.2.3 . 
Answer The pH at the equivalence point is 5.31 (see Exercise 9.2.2 ) and the sharp part of the titration curve extends from a pH of approximately 7 to a pH of approximately 4. Of the indicators in Table 9.2.3 , methyl red is the best choice because its pKa value of 5.0 is closest to the equivalence point’s pH and because the pH range of 4.2–6.3 for its change in color will not produce a significant titration error. Finding the End Point by Monitoring pH An alternative approach for locating a titration’s end point is to monitor the titration’s progress using a sensor whose signal is a function of the analyte’s concentration. The result is a plot of the entire titration curve, which we can use to locate the end point with a minimal error. A pH electrode is the obvious sensor for monitoring an acid–base titration and the result is a potentiometric titration curve. For example, Figure 9.2.9 a shows a small portion of the potentiometric titration curve for the titration of 50.0 mL of 0.050 M CH3COOH with 0.10 M NaOH, which focuses on the region that contains the equivalence point. The simplest method for finding the end point is to locate the titration curve’s inflection point, which is shown by the arrow. This is also the least accurate method, particularly if the titration curve has a shallow slope at the equivalence point. See Chapter 11 for more details about pH electrodes. Another method for locating the end point is to plot the first derivative of the titration curve, which gives its slope at each point along the x-axis. Examine Figure 9.2.9 a and consider how the titration curve’s slope changes as we approach, reach, and pass the equivalence point. Because the slope reaches its maximum value at the inflection point, the first derivative shows a spike at the equivalence point (Figure 9.2.9 b). The second derivative of a titration curve can be more useful than the first derivative because the equivalence point intersects the volume axis. Figure 9.2.9 c shows the resulting titration curve. Suppose we have the following three points on our titration curve: volume (mL) pH 23.65 6.00 23.91 6.10 24.13 6.20 Mathematically, we can approximate the first derivative as $\Delta \text{pH} / \Delta V$, where $\Delta \text{pH}$ is the change in pH between successive additions of titrant. Using the first two points, the first derivative is $\frac{\Delta \mathrm{pH}}{\Delta V}=\frac{6.10-6.00}{23.91-23.65}=0.385 \nonumber$ which we assign to the average of the two volumes, or 23.78 mL. For the second and third points, the first derivative is 0.455 and the average volume is 24.02 mL. volume (mL) $\Delta \text{pH}$ 23.78 0.385 24.02 0.455 We can approximate the second derivative as $\Delta (\Delta \text{pH} / \Delta V) / \Delta V$, or $\Delta^2 \text{pH} / \Delta V^2$. Using the two points from our calculation of the first derivative, the second derivative is $\frac{\Delta^{2} \mathrm{p} \mathrm{H}}{\Delta V^{2}}=\frac{0.455-0.385}{24.02-23.78}=0.292 \nonumber$ which we assign to the average of the two volumes, or 23.90 mL. Note that calculating the first derivative comes at the expense of losing one piece of information (three points become two points), and calculating the second derivative comes at the expense of losing two pieces of information. Derivative methods are particularly useful when titrating a sample that contains more than one analyte. 
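The finite-difference arithmetic in this worked example is easy to automate. The Python sketch below is illustrative only and the names are arbitrary; applying the same helper function twice gives the first and second derivatives.

```python
# Sketch of the finite-difference calculation in the worked example above.
# Illustrative only; not part of the original text.
volumes = [23.65, 23.91, 24.13]    # mL of titrant
ph_values = [6.00, 6.10, 6.20]

def derivative(x, y):
    """Return midpoint x values and the slopes dy/dx between successive points."""
    mids = [(x[i] + x[i + 1]) / 2 for i in range(len(x) - 1)]
    slopes = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]
    return mids, slopes

v1, d1 = derivative(volumes, ph_values)          # first derivative
print([round(v, 2) for v in v1], [round(s, 3) for s in d1])
# [23.78, 24.02] [0.385, 0.455]

v2, d2 = derivative(v1, d1)                      # second derivative
print([round(v, 2) for v in v2], [round(s, 3) for s in d2])
# [23.9] [0.291]; the text's 0.292 reflects rounding of the first-derivative values
```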
If we rely on indicators to locate the end points, then we usually must complete separate titrations for each analyte so that we can see the change in color for each end point. If we record the titration curve, however, then a single titration is sufficient. The precision with which we can locate the end point also makes derivative methods attractive for an analyte that has a poorly defined normal titration curve. Derivative methods work well only if we record sufficient data during the rapid increase in pH near the equivalence point. This usually is not a problem if we use an automatic titrator, such as the one seen earlier in Figure 9.1.5. Because the pH changes so rapidly near the equivalence point—a change of several pH units over a span of several drops of titrant is not unusual—a manual titration does not provide enough data for a useful derivative titration curve. A manual titration does contain an abundance of data during the more gently rising portions of the titration curve before and after the equivalence point. This data also contains information about the titration curve’s equivalence point. Consider again the titration of acetic acid, CH3COOH, with NaOH. At any point during the titration acetic acid is in equilibrium with H3O+ and CH3COO $\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber$ for which the equilibrium constant is $K_{a}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]} \nonumber$ Before the equivalence point the concentrations of CH3COOH and CH3COO are $[\text{CH}_3\text{COOH}] = \frac {(\text{mol CH}_3\text{COOH})_\text{initial} - (\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_a V_a - M_b V_b} {V_a + V_b} \nonumber$ $[\text{CH}_3\text{COO}^-] = \frac {(\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_b V_b} {V_a + V_b} \nonumber$ Substituting these equations into the Ka expression and rearranging leaves us with $K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left(M_{b} V_{b}\right) /\left(V_{a}+V_{b}\right)}{\left\{M_{a} V_{a}-M_{b} V_{b}\right\} /\left(V_{a}+V_{b}\right)} \nonumber$ $K_{a} M_{a} V_{a}-K_{a} M_{b} V_{b}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left(M_{b} V_{b}\right) \nonumber$ $\frac{K_{a} M_{a} V_{a}}{M_{b}}-K_{a} V_{b}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] V_{b} \nonumber$ Finally, recognizing that the equivalence point volume is $V_{eq}=\frac{M_{a} V_{a}}{M_{b}} \nonumber$ leaves us with the following equation. $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \times V_{b}=K_{\mathrm{a}} V_{eq}-K_{\mathrm{a}} V_{b} \nonumber$ For volumes of titrant before the equivalence point, a plot of $V_b \times [\text{H}_3\text{O}^+]$ versus Vb is a straight-line with an x-intercept of Veq and a slope of –Ka. Figure 9.2.9 d shows a typical result. This method of data analysis, which converts a portion of a titration curve into a straight-line, is a Gran plot. Values of Ka determined by this method may have a substantial error if the effect of activity is ignored. See Chapter 6.9 for a discussion of activity. Finding the End Point by Monitoring Temperature The reaction between an acid and a base is exothermic. Heat generated by the reaction is absorbed by the titrand, which increases its temperature. 
Monitoring the titrand’s temperature as we add the titrant provides us with another method for recording a titration curve and identifying the titration’s end point (Figure 9.2.10 ). Before we add the titrant, any change in the titrand’s temperature is the result of warming or cooling as it equilibrates with the surroundings. Adding titrant initiates the exothermic acid–base reaction and increases the titrand’s temperature. This part of a thermometric titration curve is called the titration branch. The temperature continues to rise with each addition of titrant until we reach the equivalence point. After the equivalence point, any change in temperature is due to the titrant’s enthalpy of dilution and the difference between the temperatures of the titrant and titrand. Ideally, the equivalence point is a distinct intersection of the titration branch and the excess titrant branch. As shown in Figure 9.2.10 , however, a thermometric titration curve usually shows curvature near the equivalence point due to an incomplete neutralization reaction or to the excessive dilution of the titrand and the titrant during the titration. The latter problem is minimized by using a titrant that is 10–100 times more concentrated than the analyte, although this results in a very small end point volume and a larger relative error. If necessary, the end point is found by extrapolation. Although not a common method for monitoring an acid–base titration, a thermometric titration has one distinct advantage over the direct or indirect monitoring of pH. As discussed earlier, the use of an indicator or the monitoring of pH is limited by the magnitude of the relevant equilibrium constants. For example, titrating boric acid, H3BO3, with NaOH does not provide a sharp end point when monitoring pH because boric acid’s Ka of $5.8 \times 10^{-10}$ is too small (Figure 9.2.11 a). Because boric acid’s enthalpy of neutralization is fairly large, –42.7 kJ/mole, its thermometric titration curve provides a useful endpoint (Figure 9.2.11 b). Titrations in Nonaqueous Solvents Thus far we have assumed that the titrant and the titrand are aqueous solutions. Although water is the most common solvent for acid–base titrimetry, switching to a nonaqueous solvent can improve a titration’s feasibility. For an amphoteric solvent, SH, the autoprotolysis constant, Ks, relates the concentration of its protonated form, $\text{SH}_2^+$, to its deprotonated form, S \begin{aligned} 2 \mathrm{SH} &\rightleftharpoons\mathrm{SH}_{2}^{+}+\mathrm{S}^{-} \ K_{\mathrm{s}} &=\left[\mathrm{SH}_{2}^{+}\right][\mathrm{S}^-] \end{aligned} \nonumber and the solvent’s pH and pOH are $\begin{array}{l}{\mathrm{pH}=-\log \left[\mathrm{SH}_{2}^{+}\right]} \ {\mathrm{pOH}=-\log \left[\mathrm{S}^{-}\right]}\end{array} \nonumber$ You should recognize that Kw is just specific form of Ks when the solvent is water. The most important limitation imposed by Ks is the change in pH during a titration. To understand why this is true, let’s consider the titration of 50.0 mL of $1.0 \times 10^{-4}$ M HCl using $1.0 \times 10^{-4}$ M NaOH as the titrant. Before the equivalence point, the pH is determined by the untitrated strong acid. 
For example, when the volume of NaOH is 90% of Veq, the concentration of H3O+ is $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{M_{a} V_{a}-M_{b} V_{b}}{V_{a}+V_{b}} = \frac{\left(1.0 \times 10^{-4} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})-\left(1.0 \times 10^{-4} \ \mathrm{M}\right)(45.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+45.0 \ \mathrm{mL}} = 5.3 \times 10^{-6} \ \mathrm{M} \nonumber$ and the pH is 5.3. When the volume of NaOH is 110% of Veq, the concentration of OH is $\left[\mathrm{OH}^{-}\right]=\frac{M_{b} V_{b}-M_{a} V_{a}}{V_{a}+V_{b}} = \frac{\left(1.0 \times 10^{-4} \ \mathrm{M}\right)(55.0 \ \mathrm{mL})-\left(1.0 \times 10^{-4} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{55.0 \ \mathrm{mL}+50.0 \ \mathrm{mL}} = 4.8 \times 10^{-6} \ \mathrm{M} \nonumber$ and the pOH is 5.3. The titrand’s pH is $\mathrm{pH}=\mathrm{p} K_{w}-\mathrm{pOH}=14.0-5.3=8.7 \nonumber$ and the change in the titrand’s pH as the titration goes from 90% to 110% of Veq is $\Delta \mathrm{pH}=8.7-5.3=3.4 \nonumber$ If we carry out the same titration in a nonaqueous amphiprotic solvent that has a Ks of $1.0 \times 10^{-20}$, the pH after adding 45.0 mL of NaOH is still 5.3. However, the pH after adding 55.0 mL of NaOH is $\mathrm{pH}=\mathrm{p} K_{s}-\mathrm{pOH}=20.0-5.3=14.7 \nonumber$ In this case the change in pH $\Delta \mathrm{pH}=14.7-5.3=9.4 \nonumber$ is significantly greater than that obtained when the titration is carried out in water. Figure 9.2.12 shows the titration curves in both the aqueous and the nonaqueous solvents. Another parameter that affects the feasibility of an acid–base titration is the titrand’s dissociation constant. Here, too, the solvent plays an important role. The strength of an acid or a base is a relative measure of how easy it is to transfer a proton from the acid to the solvent or from the solvent to the base. For example, HF, with a Ka of $6.8 \times 10^{-4}$, is a better proton donor than CH3COOH, for which Ka is $1.75 \times 10^{-5}$. The strongest acid that can exist in water is the hydronium ion, H3O+. HCl and HNO3 are strong acids because they are better proton donors than H3O+ and essentially donate all their protons to H2O, leveling their acid strength to that of H3O+. In a different solvent HCl and HNO3 may not behave as strong acids. If we place acetic acid in water the dissociation reaction $\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}( l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber$ does not proceed to a significant extent because CH3COO is a stronger base than H2O and H3O+ is a stronger acid than CH3COOH. If we place acetic acid in a solvent that is a stronger base than water, such as ammonia, then the reaction $\mathrm{CH}_{3} \mathrm{COOH}+\mathrm{NH}_{3}\rightleftharpoons\mathrm{NH}_{4}^{+}+\mathrm{CH}_{3} \mathrm{COO}^{-} \nonumber$ proceeds to a greater extent. In fact, both HCl and CH3COOH are strong acids in ammonia. All other things being equal, the strength of a weak acid increases if we place it in a solvent that is more basic than water, and the strength of a weak base increases if we place it in a solvent that is more acidic than water. In some cases, however, the opposite effect is observed. For example, the pKb for NH3 is 4.75 in water and it is 6.40 in the more acidic glacial acetic acid. In contradiction to our expectations, NH3 is a weaker base in the more acidic solvent. 
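The effect of the solvent's autoprotolysis constant on the change in pH near the equivalence point is easy to verify numerically. The Python sketch below is illustrative only and uses arbitrary names; it reproduces the pH at 90% and 110% of Veq for the titration of 50.0 mL of $1.0 \times 10^{-4}$ M HCl with $1.0 \times 10^{-4}$ M NaOH, first in water (pKs = 14.0) and then in the hypothetical amphiprotic solvent with a pKs of 20.0 discussed above.

```python
import math

# Sketch of the solvent comparison above. Illustrative only; names are arbitrary.
Ma, Va, Mb = 1.0e-4, 50.0, 1.0e-4   # HCl molarity, HCl volume (mL), NaOH molarity

def delta_ph(pKs):
    Veq = Ma * Va / Mb                       # 50.0 mL
    Vb_low, Vb_high = 0.9 * Veq, 1.1 * Veq   # 45.0 and 55.0 mL of NaOH
    h_low = (Ma * Va - Mb * Vb_low) / (Va + Vb_low)     # excess strong acid
    s_high = (Mb * Vb_high - Ma * Va) / (Va + Vb_high)  # excess strong base
    ph_low = -math.log10(h_low)
    ph_high = pKs - (-math.log10(s_high))    # pH = pKs - pOH
    return ph_low, ph_high, ph_high - ph_low

for pKs in (14.0, 20.0):
    lo, hi, d = delta_ph(pKs)
    print(f"pKs = {pKs:.0f}: pH {lo:.1f} -> {hi:.1f}, delta pH = {d:.1f}")
# pKs = 14: pH 5.3 -> 8.7, delta pH = 3.4
# pKs = 20: pH 5.3 -> 14.7, delta pH = 9.4
```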
A full description of the solvent’s effect on the pKa of weak acid or the pKb of a weak base is beyond the scope of this text. You should be aware, however, that a titration that is not feasible in water may be feasible in a different solvent. Representative Method 9.2.1: Determination of Protein in Bread The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical acid–base titrimetric method. Although each method is unique, the following description of the determination of protein in bread provides an instructive example of a typical procedure. The description here is based on Method 13.86 as published in Official Methods of Analysis, 8th Ed., Association of Official Agricultural Chemists: Washington, D. C., 1955. Description of the Methods This method is based on a determination of %w/w nitrogen using the Kjeldahl method. The protein in a sample of bread is oxidized to $\text{NH}_4^+$ using hot concentrated H2SO4. After making the solution alkaline, which converts $\text{NH}_4^+$ to NH3, the ammonia is distilled into a flask that contains a known amount of HCl. The amount of unreacted HCl is determined by a back titration using a standard strong base titrant. Because different cereal proteins contain similar amounts of nitrogen—on average there are 5.7 g protein for every gram of nitrogen—we multiply the experimentally determined %w/w N by a factor of 5.7 gives the %w/w protein in the sample. Procedure Transfer a 2.0-g sample of bread, which previously has been air-dried and ground into a powder, to a suitable digestion flask along with 0.7 g of a HgO catalyst, 10 g of K2SO4, and 25 mL of concentrated H2SO4. Bring the solution to a boil. Continue boiling until the solution turns clear and then boil for at least an additional 30 minutes. After cooling the solution below room temperature, remove the Hg2+ catalyst by adding 200 mL of H2O and 25 mL of 4% w/v K2S. Add a few Zn granules to serve as boiling stones and 25 g of NaOH. Quickly connect the flask to a distillation apparatus and distill the NH3 into a collecting flask that contains a known amount of standardized HCl. The tip of the condenser must be placed below the surface of the strong acid. After the distillation is complete, titrate the excess strong acid with a standard solution of NaOH using methyl red as an indicator (Figure 9.2.13 ). Questions 1. Oxidizing the protein converts all of its nitrogen to $\text{NH}_4^+$. Why is the amount of nitrogen not determined by directly titrating the $\text{NH}_4^+$ with a strong base? There are two reasons for not directly titrating the ammonium ion. First, because $\text{NH}_4^+$ is a very weak acid (its Ka is $5.6 \times 10^{-10}$), its titration with NaOH has a poorly-defined end point. Second, even if we can determine the end point with acceptable accuracy and precision, the solution also contains a substantial concentration of unreacted H2SO4. The presence of two acids that differ greatly in concentration makes for a difficult analysis. If the titrant’s concentration is similar to that of H2SO4, then the equivalence point volume for the titration of $\text{NH}_4^+$ is too small to measure reliably. On the other hand, if the titrant’s concentration is similar to that of $\text{NH}_4^+$, the volume needed to neutralize the H2SO4 is unreasonably large. 2. Ammonia is a volatile compound as evidenced by the strong smell of even dilute solutions. This volatility is a potential source of determinate error. 
Is this determinate error negative or positive? Any loss of NH3 is loss of nitrogen and, therefore, a loss of protein. The result is a negative determinate error. 3. Identify the steps in this procedure that minimize the determinate error from the possible loss of NH3. Three specific steps minimize the loss of ammonia: (1) the solution is cooled below room temperature before we add NaOH; (2) after we add NaOH, the digestion flask is quickly connected to the distillation apparatus; and (3) we place the condenser’s tip below the surface of the HCl to ensure that the NH3 reacts with the HCl before it is lost through volatilization. 4. How does K2S remove Hg2+, and why is its removal important? Adding sulfide precipitates Hg2+ as HgS. This is important because NH3 forms stable complexes with many metal ions, including Hg2+. Any NH3 that reacts with Hg2+ is not collected during distillation, providing another source of determinate error. Quantitative Applications Although many quantitative applications of acid–base titrimetry have been replaced by other analytical methods, a few important applications continue to find use. In this section we review the general application of acid–base titrimetry to the analysis of inorganic and organic compounds, with an emphasis on applications in environmental and clinical analysis. First, however, we discuss the selection and standardization of acidic and basic titrants. Selecting and Standardizing a Titrant The most common strong acid titrants are HCl, HClO4, and H2SO4. Solutions of these titrants usually are prepared by diluting a commercially available concentrated stock solution. Because the concentration of a concentrated acid is known only approximately, the titrant’s concentration is determined by standardizing against one of the primary standard weak bases listed in Table 9.2.4 . The nominal concentrations of the concentrated stock solutions are 12.1 M HCl, 11.7 M HClO4, and 18.0 M H2SO4. The actual concentrations of these acids are given as %w/v and vary slightly from lot-to-lot. Table 9.2.4 . Selected Primary Standards for Standardizing Strong Acid and Strong Base Titrants titrant type primary standard titration reaction comment strong acid Na2CO3 $\mathrm{Na}_{2} \mathrm{CO}_{3}+2 \mathrm{H}_{3} \mathrm{O}^{+} \rightarrow \mathrm{H}_{2} \mathrm{CO}_{3}+2 \mathrm{Na}^{+}+2 \mathrm{H}_{2} \mathrm{O}$ a strong acid (HOCH2)3CNH2 $\left(\mathrm{HOCH}_{2}\right)_{3} \mathrm{CNH}_{2}+\mathrm{H}_{3} \mathrm{O}^{+} \longrightarrow\left(\mathrm{HOCH}_{2}\right)_{3} \mathrm{CNH}_{3}^{+}+\mathrm{H}_{2} \mathrm{O}$ b strong acid Na2B4O7 $\mathrm{Na}_{2} \mathrm{B}_{4} \mathrm{O}_{7}+2 \mathrm{H}_{3} \mathrm{O}^{+}+3 \mathrm{H}_{2} \mathrm{O} \rightarrow 2 \mathrm{Na}^{+}+4 \mathrm{H}_{3} \mathrm{BO}_{3}$ strong base KHC8H4O4 $\mathrm{KHC}_{8} \mathrm{H}_{4} \mathrm{O}_{4}+\mathrm{OH}^{-} \rightarrow \mathrm{K}^{+}+\mathrm{C}_{8} \mathrm{H}_{4} \mathrm{O}_{4}^{-}+\mathrm{H}_{2} \mathrm{O}$ c strong base C6H5COOH $\mathrm{C}_{6} \mathrm{H}_{5} \mathrm{COOH}+\mathrm{OH}^{-} \rightarrow \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{COO}^{-}+\mathrm{H}_{2} \mathrm{O}$ d strong base KH(IO3)2 $\mathrm{KH}\left(\mathrm{IO}_{3}\right)_{2}+\mathrm{OH}^{-} \rightarrow \mathrm{K}^{+}+2 \mathrm{IO}_{3}^{-}+\mathrm{H}_{2} \mathrm{O}$ (a) The end point for this titration is improved by titrating to the second equivalence point, boiling the solution to expel CO2, and retitrating to the second equivalence point. 
The reaction in this case is $\mathrm{Na}_{2} \mathrm{CO}_{3}+2 \mathrm{H}_{3} \mathrm{O}^{+} \rightarrow \mathrm{CO}_{2}+2 \mathrm{Na}^{+}+3 \mathrm{H}_{2} \mathrm{O} \nonumber$ (b) Tris-(hydroxymethyl)aminomethane often goes by the shorter name of TRIS or THAM. (c) Potassium hydrogen phthalate often goes by the shorter name of KHP. (d) Because it is not very soluble in water, dissolve benzoic acid in a small amount of ethanol before diluting with water. The most common strong base titrant is NaOH, which is available both as an impure solid and as an approximately 50% w/v solution. Solutions of NaOH are standardized against any of the primary weak acid standards listed in Table 9.2.4 . Using NaOH as a titrant is complicated by potential contamination from the following reaction between dissolved CO2 and OH. $\mathrm{CO}_{2}(a q)+2 \mathrm{OH}^{-}(a q) \rightarrow \mathrm{CO}_{3}^{2-}(a q)+\mathrm{H}_{2} \mathrm{O}( l) \label{9.7}$ Any solution in contact with the atmosphere contains a small amount of CO2(aq) from the equilibrium $\mathrm{CO}_{2}(g)\rightleftharpoons\mathrm{CO}_{2}(a q) \nonumber$ During the titration, NaOH reacts both with the titrand and with CO2, which increases the volume of NaOH needed to reach the titration’s end point. This is not a problem if the end point pH is less than 6. Below this pH the $\text{CO}_3^{2-}$ from reaction \ref{9.7} reacts with H3O+ to form carbonic acid. $\mathrm{CO}_{3}^{2-}(a q)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q) \rightarrow 2 \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{H}_{2} \mathrm{CO}_{3}(a q) \label{9.8}$ Combining reaction \ref{9.7} and reaction \ref{9.8} gives an overall reaction that does not include OH. $\mathrm{CO}_{2}(a q)+\mathrm{H}_{2} \mathrm{O}(l ) \longrightarrow \mathrm{H}_{2} \mathrm{CO}_{3}(a q) \nonumber$ Under these conditions the presence of CO2 does not affect the quantity of OH used in the titration and is not a source of determinate error. If the end point pH is between 6 and 10, however, the neutralization of $\text{CO}_3^{2-}$ requires one proton $\mathrm{CO}_{3}^{2-}(a q)+\mathrm{H}_{3} \mathrm{O}^{+}(a q) \rightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{HCO}_{3}^{-}(a q) \nonumber$ and the net reaction between CO2 and OH is $\mathrm{CO}_{2}(a q)+\mathrm{OH}^{-}(a q) \rightarrow \mathrm{HCO}_{3}^{-}(a q) \nonumber$ Under these conditions some OH is consumed in neutralizing CO2, which results in a determinate error. We can avoid the determinate error if we use the same end point pH for both the standardization of NaOH and the analysis of our analyte, although this is not always practical. Solid NaOH is always contaminated with carbonate due to its contact with the atmosphere, and we cannot use it to prepare a carbonate-free solution of NaOH. Solutions of carbonate-free NaOH are prepared from 50% w/v NaOH because Na2CO3 is insoluble in concentrated NaOH. When CO2 is absorbed, Na2CO3 precipitates and settles to the bottom of the container, which allows access to the carbonate-free NaOH. When preparing a solution of NaOH, be sure to use water that is free from dissolved CO2. Briefly boiling the water expels CO2; after it cools, the water is used to prepare carbonate-free solutions of NaOH. A solution of carbonate-free NaOH is relatively stable if we limit its contact with the atmosphere. Standard solutions of sodium hydroxide are not stored in glass bottles as NaOH reacts with glass to form silicate; instead, store such solutions in polyethylene bottles. 
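As an illustration of the standardization calculation itself, the short Python sketch below converts a mass of KHP and an end point volume into the molarity of a NaOH titrant using the 1:1 stoichiometry in Table 9.2.4. The mass and volume used here are invented for the example and are not data from an actual standardization.

```python
# Minimal sketch of standardizing NaOH against KHP (KHC8H4O4, FW 204.22 g/mol),
# which reacts 1:1 with OH- (see Table 9.2.4). The numbers below are assumed
# values chosen only to illustrate the calculation.
FW_KHP = 204.22                         # g/mol

def naoh_molarity(mass_khp_g, v_naoh_mL):
    """Molarity of NaOH from the mass of KHP titrated and the end point volume."""
    mol_khp = mass_khp_g / FW_KHP       # mol KHP = mol NaOH (1:1 stoichiometry)
    return mol_khp / (v_naoh_mL / 1000)

print(f"{naoh_molarity(0.8082, 39.64):.4f} M NaOH")   # about 0.0998 M for these values
```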
Inorganic Analysis Acid–base titrimetry is a standard method for the quantitative analysis of many inorganic acids and bases. A standard solution of NaOH is used to determine the concentration of inorganic acids, such as H3PO4 or H3AsO4, and inorganic bases, such as Na2CO3, are analyzed using a standard solution of HCl. If an inorganic acid or base is too weak to be analyzed by an aqueous acid–base titration, it may be possible to complete the analysis by adjusting the solvent or by an indirect analysis. For example, when analyzing boric acid, H3BO3, by titrating with NaOH, accuracy is limited by boric acid’s small acid dissociation constant of $5.8 \times 10^{-10}$. Boric acid’s Ka value increases to $1.5 \times 10^{-4}$ in the presence of mannitol, because it forms a stable complex with the borate ion, which results in a sharper end point and a more accurate titration. Similarly, the analysis of ammonium salts is limited by the ammonium ion’s small acid dissociation constant of $5.7 \times 10^{-10}$. We can determine $\text{NH}_4^+$ indirectly by using a strong base to convert it to NH3, which is removed by distillation and titrated with HCl. Because NH3 is a stronger weak base than $\text{NH}_4^+$ is a weak acid (its Kb is $1.58 \times 10^{-5}$), the titration has a sharper end point. We can analyze a neutral inorganic analyte if we can first convert it into an acid or a base. For example, we can determine the concentration of $\text{NO}_3^-$ by reducing it to NH3 in a strongly alkaline solution using Devarda’s alloy, a mixture of 50% w/w Cu, 45% w/w Al, and 5% w/w Zn. $3 \mathrm{NO}_{3}^{-}(a q)+8 \mathrm{Al}(s)+5 \mathrm{OH}^{-}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l) \rightarrow 8 \mathrm{AlO}_{2}^{-}(a q)+3 \mathrm{NH}_{3}(a q) \nonumber$ The NH3 is removed by distillation and titrated with HCl. Alternatively, we can titrate $\text{NO}_3^-$ as a weak base by placing it in an acidic nonaqueous solvent, such as anhydrous acetic acid, and using HClO4 as a titrant. Acid–base titrimetry continues to be listed as a standard method for the determination of alkalinity, acidity, and free CO2 in waters and wastewaters. Alkalinity is a measure of a sample’s capacity to neutralize acids. The most important sources of alkalinity are OH, $\text{HCO}_3^-$, and $\text{CO}_3^{2-}$, although other weak bases, such as phosphate, may contribute to the overall alkalinity. Total alkalinity is determined by titrating to a fixed end point pH of 4.5 (or to the bromocresol green end point) using a standard solution of HCl or H2SO4. Results are reported as mg CaCO3/L. Although a variety of strong bases and weak bases may contribute to a sample’s alkalinity, a single titration cannot distinguish between the possible sources. Reporting the total alkalinity as if CaCO3 is the only source provides a means for comparing the acid-neutralizing capacities of different samples. When the sources of alkalinity are limited to OH, $\text{HCO}_3^-$, and $\text{CO}_3^{2-}$, separate titrations to a pH of 4.5 (or the bromocresol green end point) and a pH of 8.3 (or the phenolphthalein end point) allow us to determine which species are present and their respective concentrations. Titration curves for OH, $\text{HCO}_3^-$, and $\text{CO}_3^{2-}$ are shown in Figure 9.2.14 . For a solution that contains OH alkalinity only, the volume of strong acid needed to reach each of the two end points is identical (Figure 9.2.14 a). 
When the only source of alkalinity is $\text{CO}_3^{2-}$, the volume of strong acid needed to reach the end point at a pH of 4.5 is exactly twice that needed to reach the end point at a pH of 8.3 (Figure 9.2.14 b). If a solution contains $\text{HCO}_3^-$ alkalinity only, the volume of strong acid needed to reach the end point at a pH of 8.3 is zero, but that for the pH 4.5 end point is greater than zero (Figure 9.2.14 c). A mixture of OH and $\text{CO}_3^{2-}$ or a mixture of $\text{HCO}_3^-$ and $\text{CO}_3^{2-}$ also is possible. Consider, for example, a mixture of OH and $\text{CO}_3^{2-}$. The volume of strong acid to titrate OH is the same whether we titrate to a pH of 8.3 or a pH of 4.5. Titrating $\text{CO}_3^{2-}$ to a pH of 4.5, however, requires twice as much strong acid as titrating to a pH of 8.3. Consequently, when we titrate a mixture of these two ions, the volume of strong acid needed to reach a pH of 4.5 is less than twice that needed to reach a pH of 8.3. For a mixture of $\text{HCO}_3^-$ and $\text{CO}_3^{2-}$ the volume of strong acid needed to reach a pH of 4.5 is more than twice that needed to reach a pH of 8.3. Table 9.2.5 summarizes the relationship between the sources of alkalinity and the volumes of titrant needed to reach the two end points. A mixture of OH and $\text{HCO}_3^-$ is unstable with respect to the formation of $\text{CO}_3^{2-}$. Problem 15 in the end-of-chapter problems asks you to explain why this is true. Table 9.2.5 . Relationship Between End Point Volumes and Sources of Alkalinity source of alkalinity relationship between end point volumes OH $V_{\mathrm{pH} \ 4.5}=V_{\mathrm{pH} \ 8.3}$ $\text{CO}_3^{2-}$ $V_{\mathrm{pH} \ 4.5}=2 \times V_{\mathrm{pH} \ 8.3}$ $\text{HCO}_3^-$ $V_{\mathrm{pH} \ 4.5}>0 ; V_{\mathrm{pH} \ 8.3}=0$ OHand $\text{CO}_3^{2-}$ $V_{\mathrm{pH} \ 4.5}<2 \times V_{\mathrm{pH} \ 8.3}$ $\text{CO}_3^{2-}$ and $\text{HCO}_3^-$ $V_{\mathrm{pH} \ 4.5}>2 \times V_{\mathrm{pH} \ 8.3}$ Acidity is a measure of a water sample’s capacity to neutralize base and is divided into strong acid and weak acid acidity. Strong acid acidity from inorganic acids such as HCl, HNO3, and H2SO4 is common in industrial effluents and in acid mine drainage. Weak acid acidity usually is dominated by the formation of H2CO3 from dissolved CO2, but also includes contributions from hydrolyzable metal ions such as Fe3+, Al3+, and Mn2+. In addition, weak acid acidity may include a contribution from organic acids. Acidity is determined by titrating with a standard solution of NaOH to a fixed pH of 3.7 (or the bromothymol blue end point) and to a fixed pH of 8.3 (or the phenolphthalein end point). Titrating to a pH of 3.7 provides a measure of strong acid acidity, and titrating to a pH of 8.3 provides a measure of total acidity. Weak acid acidity is the difference between the total acidity and the strong acid acidity. Results are expressed as the amount of CaCO3 that can be neutralized by the sample’s acidity. An alternative approach for determining strong acid and weak acid acidity is to obtain a potentiometric titration curve and use a Gran plot to determine the two equivalence points. This approach has been used, for example, to determine the forms of acidity in atmospheric aerosols [Ferek, R. J.; Lazrus, A. L.; Haagenson, P. L.; Winchester, J. W. Environ. Sci. Technol. 1983, 17, 315–324]. As is the case with alkalinity, acidity is reported as mg CaCO3/L. 
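The relationships in Table 9.2.5 translate into a simple decision rule. The following Python sketch is one way to encode them; the 2% tolerance for deciding when two volumes are equal is an arbitrary choice made here to allow for uncertainty in locating the end points.

```python
def alkalinity_source(v_ph45, v_ph83, rel_tol=0.02):
    """Classify the source of alkalinity from the volumes of strong acid
    needed to reach pH 4.5 and pH 8.3 (see Table 9.2.5). The relative
    tolerance allows for uncertainty in the measured end point volumes."""
    def close(a, b):
        return abs(a - b) <= rel_tol * max(a, b)
    if v_ph83 <= rel_tol * v_ph45:       # essentially no titrant needed to reach pH 8.3
        return "HCO3- only"
    if close(v_ph45, v_ph83):
        return "OH- only"
    if close(v_ph45, 2 * v_ph83):
        return "CO3 2- only"
    if v_ph45 < 2 * v_ph83:
        return "OH- and CO3 2-"
    return "CO3 2- and HCO3-"

# for example, 18.67 mL of acid to reach pH 8.3 and 48.12 mL to reach pH 4.5
print(alkalinity_source(v_ph45=48.12, v_ph83=18.67))   # CO3 2- and HCO3-
```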
Water in contact with either the atmosphere or with carbonate-bearing sediments contains free CO2 in equilibrium with CO2(g) and with aqueous H2CO3, $\text{HCO}_3^-$ and $\text{CO}_3^{2-}$. The concentration of free CO2 is determined by titrating with a standard solution of NaOH to the phenolphthalein end point, or to a pH of 8.3, with results reported as mg CO2/L. This analysis essentially is the same as that for the determination of total acidity and is used only for water samples that do not contain strong acid acidity. Free CO2 is the same thing as CO2(aq). Organic Analysis Acid–base titrimetry continues to have a small, but important role for the analysis of organic compounds in pharmaceutical, biochemical, agricultural, and environmental laboratories. Perhaps the most widely employed acid–base titration is the Kjeldahl analysis for organic nitrogen. Examples of analytes determined by a Kjeldahl analysis include caffeine and saccharin in pharmaceutical products, proteins in foods, and the analysis of nitrogen in fertilizers, sludges, and sediments. Any nitrogen present in a –3 oxidation state is oxidized quantitatively to $\text{NH}_4^+$. Because some aromatic heterocyclic compounds, such as pyridine, are difficult to oxidize, a catalyst is used to ensure a quantitative oxidation. Nitrogen in other oxidation states, such as nitro and azo nitrogens, is oxidized to N2, which results in a negative determinate error. Including a reducing agent, such as salicylic acid, converts this nitrogen to a –3 oxidation state, eliminating this source of error. Table 9.2.6 provides additional examples in which an element is converted quantitatively into a titratable acid or base. Table 9.2.6 . Selected Elemental Analyses Based on an Acid–Base Titration element convert to... reaction producing titratable species titration details N NH3 (g) NH3 (aq) + HCl (aq) $\rightarrow$ $\text{NH}_4^+$ (aq) + Cl(aq) add HCl in excess and back titrate with NaOH S SO2(g) SO2 (g) + H2O2 (aq) $\rightarrow$ H2SO4 (aq) titrate H2SO4 with NaOH C CO2 (g) CO2 (g) + Ba(OH)2 (aq) $\rightarrow$ BaCO3 (s) + H2O (l) add excess Ba(OH)2 and back titrate with HCl Cl HCl (g) titrate HCl with NaOH F SiF4 (g) 3SiF4 (aq) + 2H2O (l) $\rightarrow$ 2H2SiF6 (aq) + SiO2 (s) titrate H2SiF6 with NaOH the species that is titrated is shown in bold Several organic functional groups are weak acids or weak bases. Carboxylic (–COOH), sulfonic (–SO3H) and phenolic (–C6H5OH) functional groups are weak acids that are titrated successfully in either aqueous or non-aqueous solvents. Sodium hydroxide is the titrant of choice for aqueous solutions. Nonaqueous titrations often are carried out in a basic solvent, such as ethylenediamine, using tetrabutylammonium hydroxide, (C4H9)4NOH, as the titrant. Aliphatic and aromatic amines are weak bases that are titrated using HCl in aqueous solutions, or HClO4 in glacial acetic acid. Other functional groups are analyzed indirectly following a reaction that produces or consumes an acid or base. Typical examples are shown in Table 9.2.7 . Table 9.2.7 . 
Selected Acid–Base Titrimetric Methods for Organic Functional Groups Based on the Production or Consumption of Acid or Base functional group reaction producing titratable species titration details ester RCOOR' (aq) + OH (aq) $\rightarrow$ RCOO (aq) + HOR' (aq) titrate OH with HCl carbonyl R2CO (aq) + NH2OH•HCl (aq) $\rightarrow$ R2CNOH (aq) + HCl (aq) + H2O (l) titrate HCl with NaOH alcohol [1]: (CH3CO)2O + ROH $\rightarrow$ CH3COOR + CH3COOH [2]: (CH3CO)2O + H2O $\rightarrow$ 2CH3COOH titrate CH3COOH with NaOH; a blank titration of acetic anhydride, (CH3CO)2O, corrects for the contribution of reaction [2] the species that is titrated is shown in bold for alcohols, reaction [1] is carried out in pyridine to prevent the hydrolysis of acetic anhydride by water. After reaction [1] is complete, water is added to convert any unreacted acetic anhydride to acetic acid (reaction [2]) Many pharmaceutical compounds are weak acids or weak bases that are analyzed by an aqueous or a nonaqueous acid–base titration; examples include salicylic acid, phenobarbital, caffeine, and sulfanilamide. Amino acids and proteins are analyzed in glacial acetic acid using HClO4 as the titrant. For example, a procedure for determining the amount of nutritionally available protein uses an acid–base titration of lysine residues [(a) Molnár-Perl, I.; Pintée-Szakács, M. Anal. Chim. Acta 1987, 202, 159–166; (b) Barbosa, J.; Bosch, E.; Cortina, J. L.; Rosés, M. Anal. Chim. Acta 1992, 256, 177–181]. Quantitative Calculations The quantitative relationship between the titrand and the titrant is determined by the titration reaction’s stoichiometry. If the titrand is polyprotic, then we must know to which equivalence point we are titrating. The following example illustrates how we can use a ladder diagram to determine a titration reaction’s stoichiometry. Example 9.2.2 A 50.00-mL sample of a citrus drink requires 17.62 mL of 0.04166 M NaOH to reach the phenolphthalein end point. Express the sample’s acidity as grams of citric acid, C6H8O7, per 100 mL. Solution Because citric acid is a triprotic weak acid, we first must determine if the phenolphthalein end point corresponds to the first, second, or third equivalence point. Citric acid’s ladder diagram is shown in Figure 9.2.15 a. Based on this ladder diagram, the first equivalence point is between a pH of 3.13 and a pH of 4.76, the second equivalence point is between a pH of 4.76 and a pH of 6.40, and the third equivalence point is at a pH greater than 6.40. 
Because phenolphthalein’s end point pH is 8.3–10.0 (see Table 9.2.3 ), the titration must proceed to the third equivalence point and the titration reaction is $\mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7}(a q)+3 \mathrm{OH}^{-}(a q) \longrightarrow \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{O}_{7}^{3-}(a q)+3 \mathrm{H}_{2} \mathrm{O}(l) \nonumber$ To reach the equivalence point, each mole of citric acid consumes three moles of NaOH; thus $(0.04166 \ \mathrm{M} \ \mathrm{NaOH})(0.01762 \ \mathrm{L} \ \mathrm{NaOH})=7.3405 \times 10^{-4} \ \mathrm{mol} \ \mathrm{NaOH} \nonumber$ $7.3405 \times 10^{-4} \ \mathrm{mol} \ \mathrm{NaOH} \times \frac{1 \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7}}{3 \ \mathrm{mol} \ \mathrm{NaOH}}= 2.4468 \times 10^{-4} \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7} \nonumber$ $2.4468 \times 10^{-4} \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7} \times \frac{192.1 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7}}{\mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7}}=0.04700 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7} \nonumber$ Because this is the amount of citric acid in a 50.00 mL sample, the concentration of citric acid in the citrus drink is 0.09400 g/100 mL. The complete titration curve is shown in Figure 9.2.15 b. Exercise 9.2.6 Your company recently received a shipment of salicylic acid, C7H6O3, for use in the production of acetylsalicylic acid (aspirin). You can accept the shipment only if the salicylic acid is more than 99% pure. To evaluate the shipment’s purity, you dissolve a 0.4208-g sample in water and titrate to the phenolphthalein end point, using 21.92 mL of 0.1354 M NaOH. Report the shipment’s purity as %w/w C7H6O3. Salicylic acid is a diprotic weak acid with pKa values of 2.97 and 13.74. Answer Because salicylic acid is a diprotic weak acid, we must first determine to which equivalence point it is being titrated. Using salicylic acid’s pKa values as a guide, the pH at the first equivalence point is between 2.97 and 13.74, and the second equivalence points is at a pH greater than 13.74. From Table 9.2.3 , phenolphthalein’s end point is in the pH range 8.3–10.0. The titration, therefore, is to the first equivalence point for which the moles of NaOH equal the moles of salicylic acid; thus $(0.1354 \ \mathrm{M})(0.02192 \ \mathrm{L})=2.968 \times 10^{-3} \ \mathrm{mol} \ \mathrm{NaOH} \nonumber$ $2.968 \times 10^{-3} \ \mathrm{mol} \ \mathrm{NaOH} \times \frac{1 \ \mathrm{mol} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}}{\mathrm{mol} \ \mathrm{NaOH}} \times \frac{138.12 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}}{\mathrm{mol} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}}=0.4099 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3} \nonumber$ $\frac{0.4099 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}}{0.4208 \ \mathrm{g} \text { sample }} \times 100=97.41 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3} \nonumber$ Because the purity of the sample is less than 99%, we reject the shipment. In an indirect analysis the analyte participates in one or more preliminary reactions, one of which produces or consumes acid or base. Despite the additional complexity, the calculations are straightforward. Example 9.2.3 The purity of a pharmaceutical preparation of sulfanilamide, C6H4N2O2S, is determined by oxidizing the sulfur to SO2 and bubbling it through H2O2 to produce H2SO4. 
The acid is titrated to the bromothymol blue end point using a standard solution of NaOH. Calculate the purity of the preparation given that a 0.5136-g sample requires 48.13 mL of 0.1251 M NaOH. Solution The bromothymol blue end point has a pH range of 6.0–7.6. Sulfuric acid is a diprotic acid, with a pKa2 of 1.99 (the first Ka value is very large and the acid dissociation reaction goes to completion, which is why H2SO4 is a strong acid). The titration, therefore, proceeds to the second equivalence point and the titration reaction is $\mathrm{H}_{2} \mathrm{SO}_{4}(a q)+2 \mathrm{OH}^{-}(a q) \longrightarrow 2 \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{SO}_{4}^{2-}(a q) \nonumber$ Using the titration results, there are $(0.1251 \ \mathrm{M} \ \mathrm{NaOH})(0.04813 \ \mathrm{L} \ \mathrm{NaOH})=6.021 \times 10^{-3} \ \mathrm{mol} \ \mathrm{NaOH} \nonumber$ $6.021 \times 10^{-3} \text{ mol NaOH} \times \frac{1 \text{ mol } \mathrm{H}_{2} \mathrm{SO}_{4}} {2 \text{ mol NaOH}} = 3.010 \times 10^{-3} \text{ mol } \mathrm{H}_{2} \mathrm{SO}_{4} \nonumber$ produced when the SO2 is bubbled through H2O2. Because all the sulfur in H2SO4 comes from the sulfanilamide, we can use a conservation of mass to determine the amount of sulfanilamide in the sample. $3.010 \times 10^{-3} \ \mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{4} \times \frac{1 \ \mathrm{mol} \text{ S}}{\mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{4}} \times \ \frac{1 \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S}}{\mathrm{mol} \text{ S}} \times \frac{168.17 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S}}{\mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S}}= 0.5062 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S} \nonumber$ $\frac{0.5062 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S}}{0.5136 \ \mathrm{g} \text { sample }} \times 100=98.56 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S} \nonumber$ Exercise 9.2.7 The concentration of NO2 in air is determined by passing the sample through a solution of H2O2, which oxidizes NO2 to HNO3, and titrating the HNO3 with NaOH. What is the concentration of NO2, in mg/L, if a 5.0 L sample of air requires 9.14 mL of 0.01012 M NaOH to reach the methyl red end point? Answer The moles of HNO3 produced by pulling the sample through H2O2 is $(0.01012 \ \mathrm{M})(0.00914 \ \mathrm{L}) \times \frac{1 \ \mathrm{mol} \ \mathrm{HNO}_{3}}{\mathrm{mol} \ \mathrm{NaOH}}=9.25 \times 10^{-5} \ \mathrm{mol} \ \mathrm{HNO}_{3} \nonumber$ A conservation of mass on nitrogen requires that each mole of NO2 produces one mole of HNO3; thus, the mass of NO2 in the sample is $9.25 \times 10^{-5} \ \mathrm{mol} \ \mathrm{HNO}_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{NO}_{2}}{\mathrm{mol} \ \mathrm{HNO}_{3}} \times \frac{46.01 \ \mathrm{g} \ \mathrm{NO}_{2}}{\mathrm{mol} \ \mathrm{NO}_{2}}=4.26 \times 10^{-3} \ \mathrm{g} \ \mathrm{NO}_{2} \nonumber$ and the concentration of NO2 is $\frac{4.26 \times 10^{-3} \ \mathrm{g} \ \mathrm{NO}_{2}}{5.0 \ \mathrm{L} \text { air }} \times \frac{1000 \ \mathrm{mg}}{\mathrm{g}}=0.852 \ \mathrm{mg} \ \mathrm{NO}_{2} / \mathrm{L} \text { air } \nonumber$ 
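Both of these indirect analyses follow the same pattern: convert the end point volume to moles of titrant, map those moles onto the analyte through the reaction stoichiometry, and convert to mass. The short Python sketch below reproduces the arithmetic of Example 9.2.3 and Exercise 9.2.7; small differences in the last digit reflect rounding of intermediate results.

```python
# Example 9.2.3: sulfanilamide determined indirectly as H2SO4
mol_NaOH = 0.1251 * 0.04813              # moles of NaOH at the end point
mol_H2SO4 = mol_NaOH / 2                 # H2SO4 + 2OH- -> SO4 2- + 2H2O
g_sulfanilamide = mol_H2SO4 * 168.17     # one mole of S per mole of sulfanilamide
print(f"{100 * g_sulfanilamide / 0.5136:.1f} %w/w sulfanilamide")   # about 98.6

# Exercise 9.2.7: NO2 in air determined indirectly as HNO3
mol_HNO3 = 0.01012 * 0.00914             # moles of NaOH = moles of HNO3 (1:1)
mg_NO2 = mol_HNO3 * 46.01 * 1000         # one mole of NO2 per mole of HNO3
print(f"{mg_NO2 / 5.0:.2f} mg NO2/L air")                           # about 0.85
```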
For a back titration we must consider two acid–base reactions. Again, the calculations are straightforward. Example 9.2.4 The amount of protein in a sample of cheese is determined by a Kjeldahl analysis for nitrogen. After digesting a 0.9814-g sample of cheese, the nitrogen is oxidized to $\text{NH}_4^+$, converted to NH3 with NaOH, and the NH3 distilled into a collection flask that contains 50.00 mL of 0.1047 M HCl. The excess HCl is back titrated with 0.1183 M NaOH, requiring 22.84 mL to reach the bromothymol blue end point. Report the %w/w protein in the cheese assuming there are 6.38 grams of protein for every gram of nitrogen in most dairy products. Solution The HCl in the collection flask reacts with two bases $\mathrm{HCl}(a q)+\mathrm{NH}_{3}(a q) \rightarrow \mathrm{NH}_{4}^{+}(a q)+\mathrm{Cl}^{-}(a q) \nonumber$ $\mathrm{HCl}(a q)+\mathrm{OH}^{-}(a q) \rightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{Cl}^{-}(a q) \nonumber$ The collection flask originally contains $(0.1047 \ \mathrm{M \ HCl})(0.05000 \ \mathrm{L \ HCl})=5.235 \times 10^{-3} \mathrm{mol} \ \mathrm{HCl} \nonumber$ of which $(0.1183 \ \mathrm{M} \ \mathrm{NaOH})(0.02284 \ \mathrm{L} \ \mathrm{NaOH}) \times \frac{1 \ \mathrm{mol} \ \mathrm{HCl}}{\mathrm{mol} \ \mathrm{NaOH}}=2.702 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \nonumber$ react with NaOH. The difference between the total moles of HCl and the moles of HCl that react with NaOH is the moles of HCl that react with NH3. $5.235 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl}-2.702 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} =2.533 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \nonumber$ Because all the nitrogen in NH3 comes from the sample of cheese, we use a conservation of mass to determine the grams of nitrogen in the sample. $2.533 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \times \frac{1 \ \mathrm{mol} \ \mathrm{NH}_{3}}{\mathrm{mol} \ \mathrm{HCl}} \times \frac{14.01 \ \mathrm{g} \ \mathrm{N}}{\mathrm{mol} \ \mathrm{NH}_{3}}=0.03549 \ \mathrm{g} \ \mathrm{N} \nonumber$ The mass of protein, therefore, is $0.03549 \ \mathrm{g} \ \mathrm{N} \times \frac{6.38 \ \mathrm{g} \text { protein }}{\mathrm{g} \ \mathrm{N}}=0.2264 \ \mathrm{g} \text { protein } \nonumber$ and the % w/w protein is $\frac{0.2264 \ \mathrm{g} \text { protein }}{0.9814 \ \mathrm{g} \text { sample }} \times 100=23.1 \ \% \mathrm{w} / \mathrm{w} \text { protein } \nonumber$ Exercise 9.2.8 Limestone consists mainly of CaCO3, with traces of iron oxides and other metal oxides. To determine the purity of a limestone, a 0.5143-g sample is dissolved using 10.00 mL of 1.396 M HCl. After heating to expel CO2, the excess HCl was titrated to the phenolphthalein end point, requiring 39.96 mL of 0.1004 M NaOH. Report the sample’s purity as %w/w CaCO3. Answer The total moles of HCl used in this analysis is $(1.396 \ \mathrm{M})(0.01000 \ \mathrm{L})=1.396 \times 10^{-2} \ \mathrm{mol} \ \mathrm{HCl} \nonumber$ Of the total moles of HCl $(0.1004 \ \mathrm{M} \ \mathrm{NaOH})(0.03996 \ \mathrm{L}) \times \frac{1 \ \mathrm{mol} \ \mathrm{HCl}}{\mathrm{mol} \ \mathrm{NaOH}} =4.012 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \nonumber$ are consumed in the back titration with NaOH, which means that $1.396 \times 10^{-2} \ \mathrm{mol} \ \mathrm{HCl}-4.012 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \ =9.95 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \nonumber$ react with the CaCO3. 
Because $\text{CO}_3^{2-}$ is dibasic, each mole of CaCO3 consumes two moles of HCl; thus $9.95 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \times \frac{1 \ \mathrm{mol} \ \mathrm{CaCO}_{3}}{2 \ \mathrm{mol} \ \mathrm{HCl}} \times \ \frac{100.09 \ \mathrm{g} \ \mathrm{CaCO}_{3}}{\mathrm{mol} \ \mathrm{CaCO}_{3}}=0.498 \ \mathrm{g} \ \mathrm{CaCO}_{3} \nonumber$ $\frac{0.498 \ \mathrm{g} \ \mathrm{CaCO}_{3}}{0.5143 \ \mathrm{g} \text { sample }} \times 100=96.8 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{CaCO}_{3} \nonumber$ Earlier we noted that we can use an acid–base titration to analyze a mixture of acids or bases by titrating to more than one equivalence point. The concentration of each analyte is determined by accounting for its contribution to each equivalence point. Example 9.2.5 The alkalinity of natural waters usually is controlled by OH, $\text{HCO}_3^-$, and $\text{CO}_3^{2-}$, present singularly or in combination. Titrating a 100.0-mL sample to a pH of 8.3 requires 18.67 mL of 0.02812 M HCl. A second 100.0-mL aliquot requires 48.12 mL of the same titrant to reach a pH of 4.5. Identify the sources of alkalinity and their concentrations in milligrams per liter. Solution Because the volume of titrant to reach a pH of 4.5 is more than twice that needed to reach a pH of 8.3, we know from Table 9.2.5 , that the sample’s alkalinity is controlled by $\text{CO}_3^{2-}$ and $\text{HCO}_3^-$. Titrating to a pH of 8.3 neutralizes $\text{CO}_3^{2-}$ to $\text{HCO}_3^-$ $\mathrm{CO}_{3}^{2-}(a q)+\mathrm{HCl}(a q) \rightarrow \mathrm{HCO}_{3}^{-}(a q)+\mathrm{Cl}^{-}(a q) \nonumber$ but there is no reaction between the titrant and $\text{HCO}_3^-$ (see Figure 9.2.14 ). The concentration of $\text{CO}_3^{2-}$ in the sample, therefore, is ${(0.02812 \ \mathrm{M \ HCl})(0.01867 \ \mathrm{L \ HCl}) \times} {\frac{1 \ \mathrm{mol} \ \mathrm{CO}_3^{2-}}{\mathrm{mol} \ \mathrm{HCl}}=5.250 \times 10^{-4} \ \mathrm{mol} \ \mathrm{CO}_{3}^{2-}} \nonumber$ $\frac{5.250 \times 10^{-4} \ \mathrm{mol} \ \mathrm{CO}_{3}^{2-}}{0.1000 \ \mathrm{L}} \times \frac{60.01 \ \mathrm{g} \ \mathrm{CO}_{3}^{2-}}{\mathrm{mol} \ \mathrm{CO}_{3}^{2-}} \times \frac{1000 \ \mathrm{mg}}{\mathrm{g}}=315.1 \ \mathrm{mg} / \mathrm{L} \nonumber$ Titrating to a pH of 4.5 neutralizes $\text{CO}_3^{2-}$ to H2CO3 and neutralizes $\text{HCO}_3^-$ to H2CO3 (see Figure 9.2.14 ). $\begin{array}{l}{\mathrm{CO}_{3}^{2-}(a q)+2 \mathrm{HCl}(a q) \rightarrow \mathrm{H}_{2} \mathrm{CO}_{3}(a q)+2 \mathrm{Cl}^{-}(a q)} \ {\mathrm{HCO}_{3}^{-}(a q)+\mathrm{HCl}(a q) \rightarrow \mathrm{H}_{2} \mathrm{CO}_{3}(a q)+\mathrm{Cl}^{-}(a q)}\end{array} \nonumber$ Because we know how many moles of $\text{CO}_3^{2-}$ are in the sample, we can calculate the volume of HCl it consumes. ${5.250 \times 10^{-4} \ \mathrm{mol} \ \mathrm{CO}_{3}^{2-} \times \frac{2 \ \mathrm{mol} \ \mathrm{HCl}}{\mathrm{mol} \ \mathrm{CO}_{3}^{2-}} \times} {\frac{1 \ \mathrm{L} \ \mathrm{HCl}}{0.02812 \ \mathrm{mol} \ \mathrm{HCl}} \times \frac{1000 \ \mathrm{mL}}{\mathrm{L}}=37.34 \ \mathrm{mL} \ \mathrm{HCl}} \nonumber$ This leaves 48.12 mL–37.34 mL, or 10.78 mL of HCl to react with $\text{HCO}_3^-$. 
The amount of $\text{HCO}_3^-$ in the sample is ${(0.02812 \ \mathrm{M \ HCl})(0.01078 \ \mathrm{L} \ \mathrm{HCl}) \times} {\frac{1 \ \mathrm{mol} \ \mathrm{HCO}_{3}^{-}}{\mathrm{mol} \ \mathrm{HCl}}=3.031 \times 10^{-4} \ \mathrm{mol} \ \mathrm{HCO}_{3}^{-}} \nonumber$ which corresponds to a concentration of $\frac{3.031 \times 10^{-4} \ \mathrm{mol} \ \mathrm{HCO}_{3}^{-}}{0.1000 \ \mathrm{L}} \times \frac{61.02 \ \mathrm{g} \ \mathrm{HCO}_{3}^{-}}{\mathrm{mol} \ \mathrm{HCO}_{3}^{-}} \times \frac{1000 \ \mathrm{mg}}{\mathrm{g}}=185.0 \ \mathrm{mg} / \mathrm{L} \nonumber$ The sample contains 315.1 mg $\text{CO}_3^{2-}$/L and 185.0 mg $\text{HCO}_3^-$/L. Exercise 9.2.9 Samples that contain a mixture of the monoprotic weak acids 2–methylanilinium chloride (C7H10NCl, pKa = 4.447) and 3–nitrophenol (C6H5NO3, pKa = 8.39) can be analyzed by titrating with NaOH. A 2.006-g sample requires 19.65 mL of 0.200 M NaOH to reach the bromocresol purple end point and 48.41 mL of 0.200 M NaOH to reach the phenolphthalein end point. Report the %w/w of each compound in the sample. Answer Of the two analytes, 2-methylanilinium is the stronger acid and is the first to react with the titrant. Titrating to the bromocresol purple end point, therefore, provides information about the amount of 2-methylanilinium in the sample. $(0.200\ \mathrm{M} \ \mathrm{NaOH} )(0.01965 \ \mathrm{L}) \times \frac{1 \ \mathrm{mol} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl}}{\mathrm{mol} \ \mathrm{NaOH}} \times \frac{143.61 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl}}{\mathrm{mol} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl}}=0.564 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl} \nonumber$ $\frac{0.564 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl}}{2.006 \ \mathrm{g} \text { sample }} \times 100=28.1 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl} \nonumber$ Titrating from the bromocresol purple end point to the phenolphthalein end point, a total of 48.41 mL – 19.65 mL = 28.76 mL, gives the amount of NaOH that reacts with 3-nitrophenol. The amount of 3-nitrophenol in the sample, therefore, is $(0.200 \ \mathrm{M} \ \mathrm{NaOH}) (0.02876 \ \mathrm{L}) \times \frac{1 \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3}}{\mathrm{mol} \ \mathrm{NaOH}} \times \frac{139.11 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3}}{\mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3}}=0.800 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3} \nonumber$ $\frac{0.800 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3}}{2.006 \ \mathrm{g} \text { sample }} \times 100=39.8 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3} \nonumber$ 
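The bookkeeping in Example 9.2.5 also is easy to script. A minimal Python version of the calculation, using formula weights of 60.01 g/mol for carbonate and 61.02 g/mol for bicarbonate, is shown below.

```python
# Speciation of alkalinity from the two end point volumes in Example 9.2.5.
M_HCl = 0.02812       # molarity of the HCl titrant
V_sample = 0.1000     # L of sample titrated
V_83 = 0.01867        # L of HCl needed to reach pH 8.3
V_45 = 0.04812        # L of HCl needed to reach pH 4.5

mol_CO3 = M_HCl * V_83                    # CO3 2- -> HCO3- by pH 8.3
V_CO3_to_45 = 2 * mol_CO3 / M_HCl         # CO3 2- consumes twice this much HCl by pH 4.5
mol_HCO3 = M_HCl * (V_45 - V_CO3_to_45)   # the remaining HCl reacts with HCO3-

print(f"{1000 * mol_CO3 * 60.01 / V_sample:.1f} mg CO3 2-/L")    # 315.1
print(f"{1000 * mol_HCO3 * 61.02 / V_sample:.1f} mg HCO3-/L")    # 185.0
```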
Qualitative Applications Example 9.2.5 shows how we can use an acid–base titration to determine the forms of alkalinity in waters and their concentrations. We can extend this approach to other systems. For example, if we titrate a sample to the methyl orange end point and the phenolphthalein end point using either a strong acid or a strong base, we can determine which of the following species are present and their concentrations: H3PO4, $\text{H}_2\text{PO}_4^-$, $\text{HPO}_4^{2-}$, $\text{PO}_4^{3-}$, HCl, and NaOH. As outlined in Table 9.2.8 , each species or mixture of species has a unique relationship between the volumes of titrant needed to reach these two end points. Note that mixtures containing three or more of these species are not possible. Use a ladder diagram to convince yourself that mixtures containing three or more of these species are unstable. Table 9.2.8 . Relationship Between End Point Volumes for Mixtures of Phosphate Species with HCl and NaOH solution composition relationship between end point volumes with strong base titrant relationship between end point volumes with strong acid titrant H3PO4 $V_\text{PH} = 2 \times V_\text{MO}$ $\text{H}_2\text{PO}_4^-$ $V_\text{PH} > 0; V_\text{MO} = 0$ $\text{HPO}_4^{2-}$ $V_\text{MO} > 0; V_\text{PH} = 0$ $\text{PO}_4^{3-}$ $V_\text{MO} =2 \times V_\text{PH}$ HCl $V_\text{PH} = V_\text{MO}$ NaOH $V_\text{MO} = V_\text{PH}$ HCl and H3PO4 $V_\text{PH} < 2 \times V_\text{MO}$ H3PO4 and $\text{H}_2\text{PO}_4^-$ $V_\text{PH} > 2 \times V_\text{MO}$ $\text{H}_2\text{PO}_4^-$ and $\text{HPO}_4^{2-}$ $V_\text{PH} > 0; V_\text{MO} = 0$ $V_\text{MO} > 0; V_\text{PH} = 0$ $\text{HPO}_4^{2-}$ and $\text{PO}_4^{3-}$ $V_\text{MO} > 2 \times V_\text{PH}$ $\text{PO}_4^{3-}$ and NaOH $V_\text{MO} < 2 \times V_\text{PH}$ VPH and VMO are, respectively, the volume of titrant at the phenolphthalein and methyl orange end points when no information is provided, the volume at each end point is zero Characterization Applications In addition to a quantitative analysis and a qualitative analysis, we also can use an acid–base titration to characterize the chemical and physical properties of matter. Two useful characterization applications are the determination of a compound’s equivalent weight and the determination of its acid dissociation constant or its base dissociation constant. Equivalent Weights Suppose we titrate a sample of an impure weak acid to a well-defined end point using a monoprotic strong base as the titrant. If we assume the titration involves the transfer of n protons, then the moles of titrant needed to reach the end point is $\text { moles titrant }=\frac{n \text { moles titrant }}{\text { moles analyte }} \times \text { moles analyte } \nonumber$ If we know the analyte’s identity, we can use this equation to determine the amount of analyte in the sample $\text { grams analyte }=\text { moles titrant } \times \frac{1 \text { mole analyte }}{n \text { moles titrant }} \times F W \text { analyte } \nonumber$ where FW is the analyte’s formula weight. But what if we do not know the analyte’s identity? If we titrate a pure sample of the analyte, we can obtain some useful information that may help us establish its identity. Because we do not know the number of protons that are titrated, we let n = 1 and replace the analyte’s formula weight with its equivalent weight (EW) $\text { grams analyte }=\text { moles titrant } \times \frac{1 \text { equivalent analyte }}{1 \text { mole titrant }} \times E W \text { analyte } \nonumber$ where $F W=n \times E W \nonumber$ Example 9.2.6 A 0.2521-g sample of an unknown weak acid is titrated with 0.1005 M NaOH, requiring 42.68 mL to reach the phenolphthalein end point. Determine the compound’s equivalent weight. Which of the following compounds is most likely to be the unknown weak acid? 
acid formula formula weight (g/mol) type ascorbic acid C8H8O6 176.1 monoprotic malonic acid C3H4O4 104.1 diprotic succinic acid C4H6O4 118.1 diprotic citric acid C6H8O7 192.1 triprotic Solution The moles of NaOH needed to reach the end point is $(0.1005 \ \mathrm{M} \ \mathrm{NaOH})(0.04268 \ \mathrm{L} \ \mathrm{NaOH})=4.289 \times 10^{-3} \ \mathrm{mol} \ \mathrm{NaOH} \nonumber$ The equivalents of weak acid are the same as the moles of NaOH used in the titration; thus, the analyte’s equivalent weight is $E W=\frac{0.2521 \ \mathrm{g}}{4.289 \times 10^{-3} \text { equivalents }}=58.78 \ \mathrm{g} / \mathrm{equivalent} \nonumber$ The possible formula weights for the weak acid are 58.78 g/mol (n = 1), 117.6 g/mol (n = 2), and 176.3 g/mol (n = 3). If the analyte is a monoprotic weak acid, then its formula weight is 58.78 g/mol, eliminating ascorbic acid as a possibility. If it is a diprotic weak acid, then the analyte’s formula weight is either 58.78 g/mol or 117.6 g/mol, depending on whether the weak acid was titrated to its first or its second equivalence point. Succinic acid, with a formula weight of 118.1 g/mol, is a possibility, but malonic acid is not. If the analyte is a triprotic weak acid, then its formula weight is 58.78 g/mol, 117.6 g/mol, or 176.3 g/mol. None of these values is close to the formula weight for citric acid, eliminating it as a possibility. Only succinic acid provides a possible match. Exercise 9.2.10 Figure 9.2.16 shows the potentiometric titration curve for the titration of a 0.500-g sample of an unknown weak acid. The titrant is 0.1032 M NaOH. What is the weak acid’s equivalent weight? Answer The first of the two visible end points is approximately 37 mL of NaOH. The analyte’s equivalent weight, therefore, is $(0.1032 \ \mathrm{M} \ \mathrm{NaOH})(0.037 \ \mathrm{L}) \times \frac{1 \text { equivalent }}{\mathrm{mol} \ \mathrm{NaOH}}=3.8 \times 10^{-3} \text { equivalents } \nonumber$ $E W=\frac{0.5000 \ \mathrm{g}}{3.8 \times 10^{-3} \text { equivalents }}=1.3 \times 10^{2} \ \mathrm{g} / \mathrm{equivalent} \nonumber$ 
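The screening logic in Example 9.2.6 lends itself to a short script. The Python sketch below computes the equivalent weight and compares n × EW to each candidate's formula weight; the candidate list comes from the example, while the 1% matching tolerance is a choice made here for illustration.

```python
# Equivalent weight screening for Example 9.2.6.
candidates = {"ascorbic acid": (176.1, 1), "malonic acid": (104.1, 2),
              "succinic acid": (118.1, 2), "citric acid": (192.1, 3)}

equivalents = 0.1005 * 0.04268          # mol NaOH = equivalents of acid titrated
EW = 0.2521 / equivalents               # g/equivalent
print(f"EW = {EW:.2f} g/equivalent")    # about 58.8

for name, (FW, n_protons) in candidates.items():
    # an acid titrated to its kth equivalence point has FW = k * EW, with k <= n_protons
    for k in range(1, n_protons + 1):
        if abs(k * EW - FW) / FW < 0.01:
            print(f"possible match: {name} titrated to equivalence point {k}")
```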
Equilibrium Constants Another application of acid–base titrimetry is the determination of a weak acid’s or a weak base’s dissociation constant. Consider, for example, a solution of acetic acid, CH3COOH, for which the dissociation constant is $K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]} \nonumber$ When the concentrations of CH3COOH and CH3COO are equal, the Ka expression reduces to Ka = [H3O+], or pH = pKa. If we titrate a solution of acetic acid with NaOH, the pH equals the pKa when the volume of NaOH is approximately 1⁄2Veq. As shown in Figure 9.2.17 , a potentiometric titration curve provides a reasonable estimate of acetic acid’s pKa. Recall that pH = pKa is a step on a ladder diagram, which divides the pH axis into two regions, one where the weak acid is the predominate species, and one where its conjugate weak base is the predominate species. This method provides a reasonable estimate for a weak acid’s pKa if the acid is neither too strong nor too weak. These limitations are easy to appreciate if we consider two limiting cases. For the first limiting case, let’s assume the weak acid, HA, is more than 50% dissociated before the titration begins (a relatively large Ka value); in this case the concentration of HA before the equivalence point is always less than the concentration of A and there is no point on the titration curve where [HA] = [A]. At the other extreme, if the acid is too weak, then less than 50% of the weak acid reacts with the titrant at the equivalence point. In this case the concentration of HA before the equivalence point is always greater than that of A. Determining the pKa by the half-equivalence point method overestimates its value if the acid is too strong and underestimates its value if the acid is too weak. Exercise 9.2.11 Use the potentiometric titration curve in Figure 9.2.16 to estimate the pKa values for the weak acid in Exercise 9.2.10 . Answer At 1⁄2Veq, or approximately 18.5 mL, the pH is approximately 2.2; thus, we estimate that the analyte’s pKa is 2.2. A second approach for determining a weak acid’s pKa is to use a Gran plot. For example, earlier in this chapter we derived the following equation for the titration of a weak acid with a strong base. $\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \times V_{b}=K_{a} V_{e q}-K_{a} V_{b} \nonumber$ A plot of [H3O+] $\times$ Vb versus Vb for volumes less than the equivalence point yields a straight line with a slope of –Ka. Other linearizations have been developed that use the entire titration curve or that require no assumptions [(a) Gonzalez, A. G.; Asuero, A. G. Anal. Chim. Acta 1992, 256, 29–33; (b) Papanastasiou, G.; Ziogas, I.; Kokkindis, G. Anal. Chim. Acta 1993, 277, 119–135]. This approach to determining an acidity constant has been used to study the acid–base properties of humic acids, which are naturally occurring, large molecular weight organic acids with multiple acidic sites. In one study a humic acid was found to have six titratable sites, three of which were identified as carboxylic acids, two of which were believed to be secondary or tertiary amines, and one of which was identified as a phenolic group [Alexio, L. M.; Godinho, O. E. S.; da Costa, W. F. Anal. Chim. Acta 1992, 257, 35–39]. Values of Ka determined by this method may have a substantial error if the effect of activity is ignored. See Chapter 6.9 for a discussion of activity. 
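To see how the Gran linearization behaves, the following Python sketch generates synthetic titration data in the buffer region using the approximation [H3O+] ≈ Ka(Veq – Vb)/Vb and then recovers Ka and Veq from a linear fit of [H3O+] × Vb versus Vb. The acid (0.10 M acetic acid, Ka = 1.75 × 10–5) and the titrant concentration are illustrative choices made here; with real pH measurements the points scatter about the line, but the slope and intercept are interpreted in the same way.

```python
import numpy as np

# Synthetic data: 50.0 mL of 0.10 M weak acid (Ka = 1.75e-5) titrated with
# 0.10 M strong base, so Veq = 50.0 mL. In the buffer region
# [H3O+] ~ Ka * (Veq - Vb) / Vb, which rearranges to the Gran equation
# [H3O+] * Vb = Ka * Veq - Ka * Vb.
Ka_true, Veq_true = 1.75e-5, 50.0
Vb = np.linspace(10.0, 40.0, 13)             # titrant volumes before Veq, in mL
h3o = Ka_true * (Veq_true - Vb) / Vb         # [H3O+] from the buffer approximation

slope, intercept = np.polyfit(Vb, h3o * Vb, 1)   # fit [H3O+]*Vb versus Vb
Ka_fit = -slope                                   # slope = -Ka
Veq_fit = intercept / Ka_fit                      # intercept = Ka * Veq
print(f"Ka = {Ka_fit:.2e}, pKa = {-np.log10(Ka_fit):.2f}, Veq = {Veq_fit:.1f} mL")
```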
Evaluation of Acid–Base Titrimetry Scale of Operation In an acid–base titration, the volume of titrant needed to reach the equivalence point is proportional to the moles of titrand. Because the pH of the titrand or the titrant is a function of its concentration, the change in pH at the equivalence point—and thus the feasibility of an acid–base titration—depends on their respective concentrations. Figure 9.2.18 , for example, shows a series of titration curves for the titration of several concentrations of HCl with equimolar solutions of NaOH. For titrand and titrant concentrations smaller than 10–3 M, the change in pH at the end point is too small to provide an accurate and a precise result. Acid–base titrimetry is an example of a total analysis technique in which the signal is proportional to the absolute amount of analyte. See Chapter 3 for a discussion of the difference between total analysis techniques and concentration techniques. A minimum concentration of 10–3 M places limits on the smallest amount of analyte we can analyze successfully. For example, suppose our analyte has a formula weight of 120 g/mol. To successfully monitor the titration’s end point using an indicator or a pH probe, the titrand needs an initial volume of approximately 25 mL. If we assume the analyte’s formula weight is 120 g/mol, then each sample must contain at least 3 mg of analyte. For this reason, acid–base titrations generally are limited to major and minor analytes. We can extend the analysis of gases to trace analytes by pulling a large volume of the gas through a suitable collection solution. We need a volume of titrand sufficient to cover the tip of the pH probe or to allow for an easy observation of the indicator’s color. A volume of 25 mL is not an unreasonable estimate of the minimum volume. One goal of analytical chemistry is to extend analyses to smaller samples. Here we describe two interesting approaches to titrating μL and pL samples. In one experimental design (Figure 9.2.19 ), samples of 20–100 μL are held by capillary action between a flat-surface pH electrode and a stainless steel sample stage [Steele, A.; Hieftje, G. M. Anal. Chem. 1984, 56, 2884–2888]. The titrant is added using the oscillations of a piezoelectric ceramic device to move an angled glass rod in and out of a tube connected to a reservoir that contains the titrant. Each time the glass tube is withdrawn an approximately 2 nL microdroplet of titrant is released. The microdroplets are allowed to fall onto the sample, with mixing accomplished by spinning the sample stage at 120 rpm. A total of 450 microdroplets, with a combined volume of 0.81–0.84 μL, is dispensed between each pH measurement. In this fashion a titration curve is constructed. This method has been used to titrate solutions of 0.1 M HCl and 0.1 M CH3COOH with 0.1 M NaOH. Absolute errors ranged from a minimum of +0.1% to a maximum of –4.1%, with relative standard deviations from 0.15% to 4.7%. Samples as small as 20 μL were titrated successfully. Another approach carries out the acid–base titration in a single drop of solution [(a) Gratzl, M.; Yi, C. Anal. Chem. 1993, 65, 2085–2088; (b) Yi, C.; Gratzl, M. Anal. Chem. 1994, 66, 1976–1982; (c) Hui, K. Y.; Gratzl, M. Anal. Chem. 1997, 69, 695–698; (d) Yi, C.; Huang, D.; Gratzl, M. Anal. Chem. 1996, 68, 1580–1584; (e) Xie, H.; Gratzl, M. Anal. Chem. 1996, 68, 3665–3669]. The titrant is delivered using a microburet fashioned from a glass capillary micropipet (Figure 9.2.20 ). The microburet has a 1-2 μm tip filled with an agar gel membrane. The tip of the microburet is placed within a drop of the sample solution, which is suspended in heptane, and the titrant is allowed to diffuse into the sample. The titration’s progress is monitored using an acid–base indicator and the time needed to reach the end point is measured. The rate of the titrant’s diffusion from the microburet is determined by a prior calibration. Once calibrated the end point time is converted to an end point volume. Samples usually consist of picoliter volumes (10–12 liters), with the smallest sample being 0.7 pL. The precision of the titrations is about 2%. Titrations conducted with microliter or picoliter sample volumes require a smaller absolute amount of analyte. For example, diffusional titrations have been conducted on as little as 29 femtomoles (10–15 moles) of nitric acid. Nevertheless, the analyte must be present in the sample at a major or minor level for the titration to give accurate and precise results. Accuracy When working with a macro–major or a macro–minor sample, an acid–base titration can achieve a relative error of 0.1–0.2%. 
The principal limitation to accuracy is the difference between the end point and the equivalence point. Precision An acid–base titration’s relative precision depends primarily on the precision with which we can measure the end point volume and the precision in detecting the end point. Under optimum conditions, an acid–base titration has a relative precision of 0.1–0.2%. We can improve the relative precision by using the largest possible buret and by ensuring we use most of its capacity in reaching the end point. A smaller volume buret is a better choice when using costly reagents, when waste disposal is a concern, or when we must complete the titration quickly to avoid competing chemical reactions. An automatic titrator is particularly useful for titrations that require small volumes of titrant because it provides significantly better precision (typically about ±0.05% of the buret’s volume). The precision of detecting the end point depends on how it is measured and the slope of the titration curve at the end point. With an indicator the precision of the end point signal usually is ±0.03–0.10 mL. Potentiometric end points usually are more precise. Sensitivity For an acid–base titration we can write the following general analytical equation to express the titrant’s volume in terms of the amount of titrand $\text { volume of titrant }=k \times \text { moles of titrand } \nonumber$ where k, the sensitivity, is determined by the stoichiometry between the titrand and the titrant. Consider, for example, the determination of sulfurous acid, H2SO3, by titrating with NaOH to the first equivalence point $\mathrm{H}_{2} \mathrm{SO}_{3}(a q)+\mathrm{OH}^{-}(a q) \rightarrow \mathrm{H}_{2} \mathrm{O}(l )+\mathrm{HSO}_{3}^{-}(a q) \nonumber$ At the equivalence point the relationship between the moles of NaOH and the moles of H2SO3 is $\mathrm{mol} \ \mathrm{NaOH}=\mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{3} \nonumber$ Substituting the titrant’s molarity and volume for the moles of NaOH and rearranging $M_{\mathrm{NaOH}} \times V_{\mathrm{NaOH}}=\mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{3} \nonumber$ $V_{\mathrm{NaOH}}=\frac{1}{M_{\mathrm{NaOH}}} \times \mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{3} \nonumber$ we find that k is $k=\frac{1}{M_{\mathrm{NaOH}}} \nonumber$ There are two ways in which we can improve a titration’s sensitivity. The first, and most obvious, is to decrease the titrant’s concentration because it is inversely proportional to the sensitivity, k. The second approach, which applies only if the titrand is multiprotic, is to titrate to a later equivalence point. If we titrate H2SO3 to its second equivalence point $\mathrm{H}_{2} \mathrm{SO}_{3}(a q)+2 \mathrm{OH}^{-}(a q) \rightarrow 2 \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{SO}_{3}^{2-}(a q)\nonumber$ then each mole of H2SO3 consumes two moles of NaOH $\mathrm{mol} \ \mathrm{NaOH}=2 \times \mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{3} \nonumber$ and the sensitivity becomes $k=\frac{2}{M_{\mathrm{NaOH}}} \nonumber$ In practice, however, any improvement in sensitivity is offset by a decrease in the end point’s precision if a larger volume of titrant requires us to refill the buret. For this reason, standard acid–base titrimetric procedures are written to ensure that a titration uses 60–100% of the buret’s volume. Selectivity Acid–base titrants are not selective. A strong base titrant, for example, reacts with all acids in a sample, regardless of their individual strengths. 
If the titrand contains an analyte and an interferent, then selectivity depends on their relative acid strengths. Let’s consider two limiting situations. If the analyte is a stronger acid than the interferent, then the titrant will react with the analyte before it begins reacting with the interferent. The feasibility of the analysis depends on whether the titrant’s reaction with the interferent affects the accurate location of the analyte’s equivalence point. If the acid dissociation constants are substantially different, the end point for the analyte can be determined accurately. Conversely, if the acid dissociation constants for the analyte and interferent are similar, then there may not be an accurate end point for the analyte. In the latter case a quantitative analysis for the analyte is not possible. In the second limiting situation the analyte is a weaker acid than the interferent. In this case the volume of titrant needed to reach the analyte’s equivalence point is determined by the concentration of both the analyte and the interferent. To account for the interferent’s contribution to the end point, an end point for the interferent must be available. Again, if the acid dissociation constants for the analyte and interferent are significantly different, then the analyte’s determination is possible. If the acid dissociation constants are similar, however, there is only a single equivalence point and we cannot separate the analyte’s and the interferent’s contributions to the equivalence point volume. Time, Cost, and Equipment Acid–base titrations require less time than most gravimetric procedures, but more time than many instrumental methods of analysis, particularly when analyzing many samples. With an automatic titrator, however, concerns about analysis time are less significant. When performing a titration manually our equipment needs—a buret and, perhaps, a pH meter—are few in number, inexpensive, routinely available, and easy to maintain. Automatic titrators are available for between $3000 and $10 000.
The earliest examples of metal–ligand complexation titrations are Liebig’s determinations, in the 1850s, of cyanide and chloride using, respectively, Ag+ and Hg2+ as the titrant. Practical analytical applications of complexation titrimetry were slow to develop because many metals and ligands form a series of metal–ligand complexes. Liebig’s titration of CN with Ag+ was successful because they form a single, stable complex of $\text{Ag(CN)}_2^-$, which results in a single, easily identified end point. Other metal–ligand complexes, such as $\text{CdI}_4^{2-}$, are not analytically useful because they form a series of metal–ligand complexes (CdI+, CdI2(aq), $\text{CdI}_3^-$ and $\text{CdI}_4^{2-}$) that produce a sequence of poorly defined end points. Recall that an acid–base titration curve for a diprotic weak acid has a single end point if its two Ka values are not sufficiently different. See Figure 9.2.6 for an example. In 1945, Schwarzenbach introduced aminocarboxylic acids as multidentate ligands. The most widely used of these new ligands—ethylenediaminetetraacetic acid, or EDTA—forms a strong 1:1 complex with many metal ions. The availability of a ligand that gives a single, easily identified end point made complexation titrimetry a practical analytical method. Chemistry and Properties of EDTA Ethylenediaminetetraacetic acid, or EDTA, is an aminocarboxylic acid. EDTA, the structure of which is shown in Figure 9.3.1 a in its fully deprotonated form, is a Lewis base with six binding sites—the four negatively charged carboxylate groups and the two tertiary amino groups—that can donate up to six pairs of electrons to a metal ion. The resulting metal–ligand complex, in which EDTA forms a cage-like structure around the metal ion (Figure 9.3.1 b), is very stable. The actual number of coordination sites depends on the size of the metal ion; however, all metal–EDTA complexes have a 1:1 stoichiometry. Metal–EDTA Formation Constants To illustrate the formation of a metal–EDTA complex, let’s consider the reaction between Cd2+ and EDTA $\mathrm{Cd}^{2+}(a q)+\mathrm{Y}^{4-}(a q)\rightleftharpoons\mathrm{Cd} \mathrm{Y}^{2-}(a q) \label{9.1}$ where Y4– is a shorthand notation for the fully deprotonated form of EDTA shown in Figure 9.3.1 a. Because the reaction’s formation constant $K_{f}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right]\left[\mathrm{Y}^{4-}\right]}=2.9 \times 10^{16} \label{9.2}$ is large, its equilibrium position lies far to the right. Formation constants for other metal–EDTA complexes are found in Appendix 12. EDTA is a Weak Acid In addition to its properties as a ligand, EDTA is also a weak acid. The fully protonated form of EDTA, H6Y2+, is a hexaprotic weak acid with successive pKa values of $\mathrm{p} K_\text{a1}=0.0 \quad \mathrm{p} K_\text{a2}=1.5 \quad \mathrm{p} K_\text{a3}=2.0 \nonumber$ $\mathrm{p} K_\text{a4}=2.66 \quad \mathrm{p} K_\text{a5}=6.16 \quad \mathrm{p} K_\text{a6}=10.24 \nonumber$ The first four values are for the carboxylic acid protons and the last two values are for the ammonium protons. Figure 9.3.2 shows a ladder diagram for EDTA. The specific form of EDTA in reaction \ref{9.1} is the predominate species only when the pH is more basic than 10.24. Conditional Metal–Ligand Formation Constants The formation constant for CdY2– in Equation \ref{9.2} assumes that EDTA is present as Y4–. Because EDTA has many forms, when we prepare a solution of EDTA we know its total concentration, CEDTA, not the concentration of a specific form, such as Y4–. 
To use Equation \ref{9.2}, we need to rewrite it in terms of CEDTA. At any pH a mass balance on EDTA requires that its total concentration equal the combined concentrations of each of its forms. $C_{\mathrm{EDTA}}=\left[\mathrm{H}_{6} \mathrm{Y}^{2+}\right]+\left[\mathrm{H}_{5} \mathrm{Y}^{+}\right]+\left[\mathrm{H}_{4} \mathrm{Y}\right]+\left[\mathrm{H}_{3} \mathrm{Y}^-\right]+\left[\mathrm{H}_{2} \mathrm{Y}^{2-}\right]+\left[\mathrm{HY}^{3-}\right]+\left[\mathrm{Y}^{4-}\right] \nonumber$ To correct the formation constant for EDTA’s acid–base properties we need to calculate the fraction, $\alpha_{\text{Y}^{4-}}$, of EDTA that is present as Y4–. $\alpha_{\text{Y}^{4-}}=\frac{\left[\text{Y}^{4-}\right]}{C_\text{EDTA}} \label{9.3}$ Table 9.3.1 provides values of $\alpha_{\text{Y}^{4-}}$ for selected pH levels. Solving Equation \ref{9.3} for [Y4–] and substituting into Equation \ref{9.2} for the CdY2– formation constant $K_{\mathrm{f}}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] (\alpha_{\mathrm{Y}^{4-}}) C_{\mathrm{EDTA}}} \nonumber$ and rearranging gives $K_{f}^{\prime}=K_{f} \times \alpha_{\text{Y}^{4-}}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}} \label{9.4}$ where $K_f^{\prime}$ is a pH-dependent conditional formation constant. As shown in Table 9.3.2 , the conditional formation constant for CdY2– becomes smaller and the complex becomes less stable at more acidic pHs. Table 9.3.1 . Values of $\alpha_{\text{Y}^{4-}}$ for EDTA at Selected pH Levels pH $\alpha_{\text{Y}^{4-}}$ pH $\alpha_{\text{Y}^{4-}}$ 1 $1.94 \times 10^{-18}$ 8 $5.68 \times 10^{-3}$ 2 $3.47 \times 10^{-14}$ 9 $5.47 \times 10^{-2}$ 3 $2.66 \times 10^{-11}$ 10 0.367 4 $3.80 \times 10^{-9}$ 11 0.853 5 $3.73 \times 10^{-7}$ 12 0.983 6 $2.37 \times 10^{-5}$ 13 0.988 7 $5.06 \times 10^{-4}$ 14 1.00 Table 9.3.2 . Conditional Formation Constants for $\text{CdY}^{2-}$ pH $\text{K}_f^{\prime}$ pH $\text{K}_f^{\prime}$ 1 $5.6 \times 10^{-2}$ 8 $1.6 \times 10^{14}$ 2 $1.0 \times 10^{3}$ 9 $1.6 \times 10^{15}$ 3 $7.7 \times 10^{5}$ 10 $1.1 \times 10^{16}$ 4 $1.1 \times 10^{8}$ 11 $2.5 \times 10^{16}$ 5 $1.1 \times 10^{10}$ 12 $2.9 \times 10^{16}$ 6 $6.9 \times 10^{11}$ 13 $2.9 \times 10^{16}$ 7 $1.5 \times 10^{13}$ 14 $2.9 \times 10^{16}$ EDTA Competes With Other Ligands To maintain a constant pH during a complexation titration we usually add a buffering agent. If one of the buffer’s components is a ligand that binds with Cd2+, then EDTA must compete with the ligand for Cd2+. For example, an $\text{NH}_4^+ / \text{NH}_3$ buffer includes NH3, which forms several stable Cd2+–NH3 complexes. Because EDTA forms a stronger complex with Cd2+ than does NH3, it displaces NH3; however, the stability of the Cd2+–EDTA complex decreases. We can account for the effect of an auxiliary complexing agent, such as NH3, in the same way we accounted for the effect of pH. Before adding EDTA, the mass balance on Cd2+, CCd, is $C_{\mathrm{Cd}} = \left[\mathrm{Cd}^{2+}\right] + \left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}\right] + \left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{2}^{2+}\right] + \left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{3}^{2+}\right] + \left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{4}^{2+}\right] \nonumber$ and the fraction of uncomplexed Cd2+, $\alpha_{Cd^{2+}}$, is $\alpha_{\mathrm{Cd}^{2+}}=\frac{\left[\mathrm{Cd}^{2+}\right]}{C_{\mathrm{Cd}}} \label{9.5}$ The value of $\alpha_{\mathrm{Cd}^{2+}}$ depends on the concentration of NH3. 
Contrast this with $\alpha_{\text{Y}^{4-}}$, which depends on pH. Solving Equation \ref{9.5} for [Cd2+] and substituting into Equation \ref{9.4} gives $K_{f}^{\prime}=K_{f} \times \alpha_{Y^{4-}} = \frac {[\text{CdY}^{2-}]} {\alpha_{\text{Cd}^{2+}} C_\text{Cd} C_\text{EDTA}} \nonumber$ Because the concentration of NH3 in a buffer essentially is constant, we can rewrite this equation $K_{f}^{\prime \prime}=K_{f} \times \alpha_{\mathrm{Y}^{4-}} \times \alpha_{\mathrm{Cd}^{2+}}=\frac{\left[\mathrm{CdY}^{2-}\right]}{C_{\mathrm{Cd}} C_{\mathrm{EDTA}}} \label{9.6}$ to give a conditional formation constant, $K_f^{\prime \prime}$, that accounts for both pH and the auxiliary complexing agent’s concentration. Table 9.3.3 provides values of $\alpha_{\text{M}^{2+}}$ for several metal ion when NH3 is the complexing agent. Table 9.3.3 . Values of $\alpha_{\text{M}^{2+}}$ for Selected Concentrations of Ammonia [NH3] (M) $\alpha_{\text{Ca}^{2+}}$ $\alpha_{\text{Cd}^{2+}}$ $\alpha_{\text{Co}^{2+}}$ $\alpha_{\text{Cu}^{2+}}$ $\alpha_{\text{Mg}^{2+}}$ $\alpha_{\text{Ni}^{2+}}$ $\alpha_{\text{Zn}^{2+}}$ 1 $5.50 \times 10^{-1}$ $6.09 \times 10^{-8}$ $1.00 \times 10^{-6}$ $3.79 \times 10^{-14}$ $1.76 \times 10^{-1}$ $9.20 \times 10^{-10}$ $3.95 \times 10^{-10}$ 0.5 $7.36 \times 10^{-1}$ $1.05 \times 10^{-6}$ $2.22 \times 10^{-5}$ $6.86 \times 10^{-13}$ $4.13 \times 10^{-1}$ $3.44 \times 10^{-8}$ $6.27 \times 10^{-9}$ 0.1 $9.39 \times 10^{-1}$ $3.51 \times 10^{-4}$ $6.64 \times 10^{-3}$ $4.63 \times 10^{-10}$ $8.48 \times 10^{-1}$ $5.12 \times 10^{-5}$ $3.68 \times 10^{-6}$ 0.05 $9.69 \times 10^{-1}$ $2.72 \times 10^{-3}$ $3.54 \times 10^{-2}$ $7.17 \times 10^{-9}$ $9.22 \times 10^{-1}$ $6.37 \times 10^{-4}$ $5.45 \times 10^{-5}$ 0.01 $9.94 \times 10^{-1}$ $8.81 \times 10^{-2}$ $3.55 \times 10^{-1}$ $3.22 \times 10^{-6}$ $9.84 \times 10^{-1}$ $4.32 \times 10^{-2}$ $1.82 \times 10^{-2}$ 0.005 $9.97 \times 10^{-1}$ $2.27 \times 10^{-1}$ $5.68 \times 10^{-1}$ $3.62 \times 10^{-5}$ $9.92 \times 10^{-1}$ $1.36 \times 10^{-1}$ $1.27 \times 10^{-1}$ 0.001 $9.99 \times 10^{-1}$ $6.09 \times 10^{-1}$ $8.84 \times 10^{-1}$ $4.15 \times 10^{-4}$ $9.98 \times 10^{-1}$ $5.76 \times 10^{-1}$ $7.48 \times 10^{-1}$ Complexometric EDTA Titration Curves Now that we know something about EDTA’s chemical properties, we are ready to evaluate its usefulness as a titrant. To do so we need to know the shape of a complexometric titration curve. In chapter 9.2 we learned that an acid–base titration curve shows how the titrand’s pH changes as we add titrant. The analogous result for a complexation titration shows the change in pM, where M is the metal ion’s concentration, as a function of the volume of EDTA. In this section we will learn how to calculate a titration curve using the equilibrium calculations from Chapter 6. We also will learn how to sketch a good approximation of any complexation titration curve using a limited number of simple calculations. pM = –log[M2+] Calculating the Titration Curve Let’s calculate the titration curve for 50.0 mL of $5.00 \times 10^{-3}$ M Cd2+ using a titrant of 0.0100 M EDTA. Furthermore, let’s assume the titrand is buffered to a pH of 10 using a buffer that is 0.0100 M in NH3. Because the pH is 10, some of the EDTA is present in forms other than Y4–. In addition, EDTA will compete with NH3 for the Cd2+. To evaluate the titration curve, therefore, we first need to calculate the conditional formation constant for CdY2–. 
From Table 9.3.1 and Table 9.3.3 we find that $\alpha_{\text{Y}^{4-}}$ is 0.367 at a pH of 10, and that $\alpha_{\text{Cd}^{2+}}$ is 0.0881 when the concentration of NH3 is 0.0100 M. Using these values, the conditional formation constant is $K_{f}^{\prime \prime}=K_{f} \times \alpha_{\text{Y}^{4-}} \times \alpha_{\text{Cd}^{2+}}=\left(2.9 \times 10^{16}\right)(0.367)(0.0881)=9.4 \times 10^{14} \nonumber$ Because $K_f^{\prime \prime}$ is so large, we can treat the titration reaction $\mathrm{Cd}^{2+}(a q)+\mathrm{Y}^{4-}(a q) \longrightarrow \mathrm{CdY}^{2-}(a q) \nonumber$ as if it proceeds to completion. The next task is to determine the volume of EDTA needed to reach the equivalence point. At the equivalence point we know that the moles of EDTA added must equal the moles of Cd2+ in our sample; thus $\operatorname{mol} \mathrm{EDTA}=M_{\mathrm{EDTA}} \times V_{\mathrm{EDTA}}=M_{\mathrm{Cd}} \times V_{\mathrm{Cd}}=\mathrm{mol} \ \mathrm{Cd}^{2+} \nonumber$ Substituting in known values, we find that it requires $V_{eq}=V_{\mathrm{EDTA}}=\frac{M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{M_{\mathrm{EDTA}}}=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{0.0100 \ \mathrm{M}}=25.0 \ \mathrm{mL} \nonumber$ of EDTA to reach the equivalence point. Before the equivalence point, Cd2+ is present in excess and pCd is determined by the concentration of unreacted Cd2+. Because not all unreacted Cd2+ is free—some is complexed with NH3—we must account for the presence of NH3. For example, after adding 5.0 mL of EDTA, the total concentration of Cd2+ is $C_{\mathrm{Cd}} = \frac {(\text{mol Cd}^{2+})_\text{initial} - (\text{mol EDTA})_\text{added}} {\text{total volume}} = \frac {M_\text{Cd}V_\text{Cd} - M_\text{EDTA}V_\text{EDTA}} {V_\text{Cd} + V_\text{EDTA}} \nonumber$ $C_{\mathrm{Cd}}=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})-(0.0100 \ \mathrm{M})(5.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+5.0 \ \mathrm{mL}} \nonumber$ $C_{\mathrm{Cd}}=3.64 \times 10^{-3} \ \mathrm{M} \nonumber$ To calculate the concentration of free Cd2+ we use Equation \ref{9.5} $\left[\mathrm{Cd}^{2+}\right]=\alpha_{\mathrm{Cd}^{2+}} \times C_{\mathrm{Cd}}=(0.0881)\left(3.64 \times 10^{-3} \ \mathrm{M}\right)=3.21 \times 10^{-4} \ \mathrm{M} \nonumber$ which gives a pCd of $\mathrm{pCd}=-\log \left[\mathrm{Cd}^{2+}\right]=-\log \left(3.21 \times 10^{-4}\right)=3.49 \nonumber$ At the equivalence point all Cd2+ initially in the titrand is now present as CdY2–. The concentration of Cd2+, therefore, is determined by the dissociation of the CdY2– complex. First, we calculate the concentration of CdY2–. $\left[\mathrm{CdY}^{2-}\right]=\frac{\left(\mathrm{mol} \ \mathrm{Cd}^{2+}\right)_{\mathrm{initial}}}{\text { total volume }} = \frac {M_\text{Cd}V_\text{Cd}} {V_\text{Cd} + V_\text{EDTA}} \nonumber$ $\left[\mathrm{CdY}^{2-}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}}=3.33 \times 10^{-3} \ \mathrm{M} \nonumber$ Next, we solve for the concentration of Cd2+ in equilibrium with CdY2–. $K_{\mathrm{f}}^{\prime \prime}=\frac{\left[\mathrm{CdY}^{2-}\right]}{C_{\mathrm{Cd}} C_{\mathrm{EDTA}}}=\frac{3.33 \times 10^{-3}-x}{(x)(x)}=9.5 \times 10^{14} \nonumber$ $x=C_{\mathrm{Cd}}=1.87 \times 10^{-9} \ \mathrm{M} \nonumber$ In calculating that [CdY2–] at the equivalence point is $3.33 \times 10^{-3}$ M, we assumed the reaction between Cd2+ and EDTA went to completion. 
Here we let the system relax back to equilibrium, increasing CCd and CEDTA from 0 to x, and decreasing the concentration of CdY2– by x. Once again, to find the concentration of uncomplexed Cd2+ we must account for the presence of NH3; thus $\left[\mathrm{Cd}^{2+}\right]=\alpha_{\mathrm{Cd}^{2+}} \times C_{\mathrm{Cd}}=(0.0881)\left(1.87 \times 10^{-9} \ \mathrm{M}\right)=1.64 \times 10^{-10} \ \mathrm{M} \nonumber$ and pCd is 9.78 at the equivalence point. After the equivalence point, EDTA is in excess and the concentration of Cd2+ is determined by the dissociation of the CdY2– complex. First, we calculate the concentrations of CdY2– and of unreacted EDTA. For example, after adding 30.0 mL of EDTA the concentration of CdY2– is $\left[\mathrm{CdY}^{2-}\right]=\frac{\left(\mathrm{mol} \mathrm{Cd}^{2+}\right)_{\mathrm{initial}}}{\text { total volume }} = \frac{M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{V_{\mathrm{Cd}}+V_{\mathrm{EDTA}}} \nonumber$ $\left[\mathrm{CdY}^{2-}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+30.0 \ \mathrm{mL}}=3.12 \times 10^{-3} \ \mathrm{M} \nonumber$ and the concentration of EDTA is $C_{\mathrm{EDTA}} = \frac {(\text{mol EDTA})_\text{added} - (\text{mol Cd}^{2+})_\text{initial}} {\text{total volume}} = \frac{M_{\mathrm{EDTA}} V_{\mathrm{EDTA}}-M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{V_{\mathrm{Cd}}+V_{\mathrm{EDTA}}} \nonumber$ $C_{\text{EDTA}} = \frac {(0.0100 \text{ M})(30.0 \text{ mL}) - (5.00 \times 10^{-3} \text{ M})(50.0 \text{ mL})} {50.0 \text{ mL} + 30.0 \text{ mL}} \nonumber$ $C_{\mathrm{EDTA}}=6.25 \times 10^{-4} \ \mathrm{M} \nonumber$ Substituting into Equation \ref{9.6} and solving for [Cd2+] gives $\frac{\left[\mathrm{CdY}^{2-}\right]}{C_{\mathrm{Cd}} C_{\mathrm{EDTA}}} = \frac{3.12 \times 10^{-3} \ \mathrm{M}}{C_{\mathrm{Cd}}\left(6.25 \times 10^{-4} \ \mathrm{M}\right)} = 9.5 \times 10^{14} \nonumber$ $C_{\text{Cd}} = 5.27 \times 10^{-15} \text{ M} \nonumber$ $\left[ \text{Cd}^{2+} \right] = \alpha_{\text{Cd}^{2+}} \times C_{\text{Cd}} = (0.0881)(5.27 \times 10^{-15} \text{ M}) = 4.64 \times 10^{-16} \text{ M} \nonumber$ a pCd of 15.33. Table 9.3.4 and Figure 9.3.3 show additional results for this titration. After the equivalence point we know the equilibrium concentrations of CdY2- and of EDTA in all its forms, CEDTA. We can solve for CCd using $K_f^{\prime \prime}$ and then calculate [Cd2+] using $\alpha_{\text{Cd}^{2+}}$. Because we used the same conditional formation constant, $K_f^{\prime \prime}$, for other calculations in this section, this is the approach used here as well. There is a second method for calculating [Cd2+] after the equivalence point. Because the calculation uses only [CdY2-] and CEDTA, we can use $K_f^{\prime}$ instead of $K_f^{\prime \prime}$; thus $\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}}=\alpha_{\mathrm{Y}^{4-}} \times K_{\mathrm{f}} \nonumber$ $\frac{3.13 \times 10^{-3} \ \mathrm{M}}{\left[\mathrm{Cd}^{2+}\right]\left(6.25 \times 10^{-4}\right)}=(0.367)\left(2.9 \times 10^{16}\right) \nonumber$ Solving gives [Cd2+] = $4.71 \times 10^{-16}$ M and a pCd of 15.33. We will use this approach when we learn how to sketch a complexometric titration curve. Table 9.3.4 . 
Titration of 50.0 mL of $5.00 \times 10^{-3}$ M $\text{Cd}^{2+}$ with 0.0100 M EDTA at a pH of 10 and in the Presence of 0.0100 M $\text{NH}_3$ volume of EDTA (mL) pCd volume of EDTA (mL) pCd 0.00 3.36 27.0 14.95 5.00 3.49 30.0 15.33 10.0 3.66 35.0 15.61 15.0 3.87 40.0 15.76 20.0 4.20 45.0 15.86 23.0 4.62 50.0 15.94 25.0 9.78 Exercise 9.3.1 Calculate titration curves for the titration of 50.0 mL of $5.00 \times 10^{-3}$ M Cd2+ with 0.0100 M EDTA (a) at a pH of 10 and (b) at a pH of 7. Neither titration includes an auxiliary complexing agent. Compare your results with Figure 9.3.3 and comment on the effect of pH on the titration of Cd2+ with EDTA. Answer Let’s begin with the calculations at a pH of 10 where some of the EDTA is present in forms other than Y4–. To evaluate the titration curve, therefore, we need the conditional formation constant for CdY2–, which, from Table 9.3.2 is $K_f^{\prime} = 1.1 \times 10^{16}$. Note that the conditional formation constant is larger in the absence of an auxiliary complexing agent. The titration’s equivalence point requires $V_{e q}=V_{\mathrm{EDTA}}=\frac{M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{M_{\mathrm{EDTA}}}=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{(0.0100 \ \mathrm{M})}=25.0 \ \mathrm{mL} \nonumber$ of EDTA. Before the equivalence point, Cd2+ is present in excess and pCd is determined by the concentration of unreacted Cd2+. For example, after adding 5.00 mL of EDTA, the total concentration of Cd2+ is $\left[\mathrm{Cd}^{2+}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})-(0.0100 \ \mathrm{M})(5.00 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+5.00 \ \mathrm{mL}} \nonumber$ which gives [Cd2+] as $3.64 \times 10^{-3}$ and pCd as 2.43. At the equivalence point all Cd2+ initially in the titrand is now present as CdY2–. The concentration of Cd2+, therefore, is determined by the dissociation of the CdY2– complex. First, we calculate the concentration of CdY2–. $\left[\mathrm{CdY}^{2-}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+25.00 \ \mathrm{mL}}=3.33 \times 10^{-3} \ \mathrm{M} \nonumber$ Next, we solve for the concentration of Cd2+ in equilibrium with CdY2–. $K_{f}^{\prime}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}}=\frac{3.33 \times 10^{-3}-x}{(x)(x)}=1.1 \times 10^{16} \nonumber$ Solving gives [Cd2+] as $5.50 \times 10^{-10}$ M or a pCd of 9.26 at the equivalence point. After the equivalence point, EDTA is in excess and the concentration of Cd2+ is determined by the dissociation of the CdY2– complex. First, we calculate the concentrations of CdY2– and of unreacted EDTA. 
For example, after adding 30.0 mL of EDTA $\left[\mathrm{CdY}^{2-}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+30.00 \ \mathrm{mL}}=3.12 \times 10^{-3} \ \mathrm{M} \nonumber$ $C_{\mathrm{EDTA}}=\frac{(0.0100 \ \mathrm{M})(30.00 \ \mathrm{mL})-\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+30.00 \ \mathrm{mL}} \nonumber$ $C_{\mathrm{EDTA}}=6.25 \times 10^{-4} \ \mathrm{M} \nonumber$ Substituting into the equation for the conditional formation constant $K_{f}^{\prime}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}}=\frac{3.12 \times 10^{-3} \ \mathrm{M}}{(\mathrm{x})\left(6.25 \times 10^{-4} \ \mathrm{M}\right)}=1.1 \times 10^{16} \nonumber$ and solving for [Cd2+] gives $4.54 \times 10^{-16}$ M or a pCd of 15.34. The calculations at a pH of 7 are identical, except the conditional formation constant for CdY2– is $1.5 \times 10^{13}$ instead of $1.1 \times 10^{16}$. The following table summarizes results for these two titrations as well as the results from Table 9.3.4 for the titration of Cd2+ at a pH of 10 in the presence of 0.0100 M NH3 as an auxiliary complexing agent. Volume of EDTA (mL) pCd at pH 10 pCd at pH 10 w/ 0.0100 M NH3 pCd at pH 7 0 2.30 3.36 2.30 5.00 2.43 3.49 2.43 10.0 2.60 3.66 2.60 15.0 2.81 3.87 2.81 20.0 3.15 4.20 3.15 23.0 3.56 4.62 3.56 25.0 9.26 9.77 7.83 27.0 14.94 14.95 12.08 30.0 15.34 15.33 12.48 35.0 15.61 15.61 12.78 40.0 15.76 15.76 12.95 45.0 15.86 15.86 13.08 50.0 15.94 15.94 13.18 Examining these results allows us to draw several conclusions. First, in the absence of an auxiliary complexing agent the titration curve before the equivalence point is independent of pH (compare columns 2 and 4). Second, for any pH, the titration curve after the equivalence point is the same regardless of whether an auxiliary complexing agent is present (compare columns 2 and 3). Third, the largest change in pCd through the equivalence point occurs at higher pHs and in the absence of an auxiliary complexing agent. For example, from 23.0 mL to 27.0 mL of EDTA the change in pCd is 11.38 at a pH of 10, 10.33 at a pH of 10 in the presence of 0.0100 M NH3, and 8.52 at a pH of 7. Sketching an EDTA Titration Curve To evaluate the relationship between a titration’s equivalence point and its end point, we need to construct only a reasonable approximation of the exact titration curve. In this section we demonstrate a simple method for sketching a complexation titration curve. Our goal is to sketch the titration curve quickly, using as few calculations as possible. Let’s use the titration of 50.0 mL of $5.00 \times 10^{-3}$ M Cd2+ with 0.0100 M EDTA in the presence of 0.0100 M NH3 to illustrate our approach. This is the same example we used in developing the calculations for a complexation titration curve. You can review the results of that calculation in Table 9.3.4 and Figure 9.3.3 . We begin by calculating the titration’s equivalence point volume, which, as we determined earlier, is 25.0 mL. Next, we draw our axes, placing pCd on the y-axis and the titrant’s volume on the x-axis. To indicate the equivalence point’s volume, we draw a vertical line that intersects the x-axis at 25.0 mL of EDTA. Figure 9.3.4 a shows the result of the first step in our sketch. Before the equivalence point, Cd2+ is present in excess and pCd is determined by the concentration of unreacted Cd2+. 
Because not all unreacted Cd2+ is free—some is complexed with NH3—we must account for the presence of NH3. The calculations are straightforward, as we saw earlier. Figure 9.3.4 b shows the pCd after adding 5.00 mL and 10.0 mL of EDTA. The third step in sketching our titration curve is to add two points after the equivalence point. Here the concentration of Cd2+ is controlled by the dissociation of the Cd2+–EDTA complex. Beginning with the conditional formation constant $K_{f}^{\prime}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}} = \alpha_{\text{Y}^{4-}} \times K_{f}=(0.367)\left(2.9 \times 10^{16}\right)=1.1 \times 10^{16} \nonumber$ we take the log of each side and rearrange, arriving at $\log K_{f}^{\prime}=-\log \left[\mathrm{Cd}^{2+}\right]+\log \frac{\left[\mathrm{CdY}^{2-}\right]}{C_{\mathrm{EDTA}}} \nonumber$ $\mathrm{pCd}=\log K_{f}^{\prime}+\log \frac{C_{\mathrm{EDTA}}}{\left[\mathrm{CdY}^{2-}\right]} \nonumber$ Recall that we can use either of our two possible conditional formation constants, $K_f^{\prime}$ or $K_f^{\prime \prime}$, to determine the composition of the system at equilibrium. Note that after the equivalence point, the titrand is a metal–ligand complexation buffer, with pCd determined by CEDTA and [CdY2–]. The buffer is at its lower limit of $\text{pCd} = \log{K_f^{\prime}} - 1$ when $\frac{C_{\mathrm{EDTA}}}{\left[\mathrm{CdY}^{2-}\right]} = \frac {(\text{mol EDTA})_\text{added} - (\text{mol Cd}^{2+})_\text{initial}} {(\text{mol Cd}^{2+})_\text{initial}} = \frac {1} {10} \nonumber$ Making appropriate substitutions and solving, we find that $\frac{M_{\mathrm{EDTA}} V_{\mathrm{EDTA}}-M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{M_{\mathrm{Cd}} V_{\mathrm{Cd}}}=\frac{1}{10} \nonumber$ $M_{\mathrm{EDTA}} V_{\mathrm{EDTA}}-M_{\mathrm{Cd}} V_{\mathrm{Cd}}=0.1 \times M_{\mathrm{Cd}} V_{\mathrm{Cd}} \nonumber$ $V_{\mathrm{EDTA}}=\frac{1.1 \times M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{M_{\mathrm{EDTA}}}=1.1 \times V_{e q} \nonumber$ Thus, when the titration reaches 110% of the equivalence point volume, pCd is $\log{K_f^{\prime}} - 1$. A similar calculation should convince you that pCd is $\log{K_f^{\prime}} + 1$ when the volume of EDTA is $2 \times V_\text{eq}$. Figure 9.3.4 c shows the third step in our sketch. First, we add a ladder diagram for the CdY2– complex, including its buffer range, using its $\log{K_f^{\prime}}$ value of 16.04. Next, we add two points, one for pCd at 110% of Veq (a pCd of 15.04 at 27.5 mL) and one for pCd at 200% of Veq (a pCd of 16.04 at 50.0 mL). Next, we draw a straight line through each pair of points, extending each line through the vertical line that indicates the equivalence point’s volume (Figure 9.3.4 d). Finally, we complete our sketch by drawing a smooth curve that connects the three straight-line segments (Figure 9.3.4 e). A comparison of our sketch to the exact titration curve (Figure 9.3.4 f) shows that they are in close agreement. Our treatment here is general and applies to any complexation titration using EDTA as a titrant. Exercise 9.3.2 Sketch titration curves for the titration of 50.0 mL of $5.00 \times 10^{-3}$ M Cd2+ with 0.0100 M EDTA (a) at a pH of 10 and (b) at a pH of 7. Compare your sketches to the calculated titration curves from Exercise 9.3.1 . Answer The figure below shows a sketch of the titration curves. 
The two black points before the equivalence point (VEDTA = 5 mL, pCd = 2.43 and VEDTA = 15 mL, pCd = 2.81) are the same for both pHs and taken from the results of Exercise 9.3.1 . The two black points after the equivalence point for a pH of 7 (VEDTA = 27.5 mL, pCd = 12.2 and VEDTA = 50 mL, pCd = 13.2) are plotted using the $\log{K_f^{\prime}}$ of 13.2 for CdY2-. The two points after the equivalence point for a pH of 10 (VEDTA = 27.5 mL, pCd = 15.0 and VEDTA = 50 mL, pCd = 16.0) are plotted using the $\log{K_f^{\prime}}$ of 16.0 for CdY2-. The points in red are the calculations from Exercise 9.3.1 for a pH of 10, and the points in green are the calculations from Exercise 9.3.1 for a pH of 7. Selecting and Evaluating the End Point The equivalence point of a complexation titration occurs when we react stoichiometrically equivalent amounts of the titrand and titrant. As is the case for an acid–base titration, we estimate the equivalence point for a complexation titration using an experimental end point. A variety of methods are available for locating the end point, including indicators and sensors that respond to a change in the solution conditions. Finding the End Point with an Indicator Most indicators for complexation titrations are organic dyes—known as metallochromic indicators—that form stable complexes with metal ions. The indicator, $\text{In}^{m-}$, is added to the titrand’s solution where it forms a stable complex with the metal ion, $\text{MIn}^{n-}$. As we add EDTA it reacts first with free metal ions, and then displaces the indicator from $\text{MIn}^{n-}$. $\text{MIn}^{n-}(aq) + \text{Y}^{4-}(aq) \rightarrow \text{MY}^{2-}(aq) + \text{In}^{m-}(aq) \nonumber$ If $\text{MIn}^{n-}$ and $\text{In}^{m-}$ have different colors, then the change in color signals the end point. The accuracy of an indicator’s end point depends on the strength of the metal–indicator complex relative to the strength of the metal–EDTA complex. If the metal–indicator complex is too strong, the change in color occurs after the equivalence point. If the metal–indicator complex is too weak, however, the end point occurs before we reach the equivalence point. Most metallochromic indicators also are weak acids. One consequence of this is that the conditional formation constant for the metal–indicator complex depends on the titrand’s pH. This provides some control over an indicator’s titration error because we can adjust the strength of a metal–indicator complex by adjusting the pH at which we carry out the titration. Unfortunately, because the indicator is a weak acid, the color of the uncomplexed indicator also may change with pH. Figure 9.3.5 , for example, shows the color of the indicator calmagite as a function of pH and pMg, where H2In, HIn2–, and In3– are different forms of the uncomplexed indicator, and MgIn is the Mg2+–calmagite complex. Because the color of calmagite’s metal–indicator complex is red, its use as a metallochromic indicator has a practical pH range of approximately 8.5–11 where the uncomplexed indicator, HIn2–, has a blue color. Table 9.3.5 provides examples of metallochromic indicators and the metal ions and pH conditions for which they are useful. Even if a suitable indicator does not exist, it often is possible to complete an EDTA titration by introducing a small amount of a secondary metal–EDTA complex if the secondary metal ion forms a stronger complex with the indicator and a weaker complex with EDTA than the analyte. For example, calmagite has a poor end point when titrating Ca2+ with EDTA. 
Adding a small amount of Mg2+–EDTA to the titrand gives a sharper end point. Because Ca2+ forms a stronger complex with EDTA, it displaces Mg2+, which then forms the red-colored Mg2+–calmagite complex. At the titration’s end point, EDTA displaces Mg2+ from the Mg2+–calmagite complex, signaling the end point by the presence of the uncomplexed indicator’s blue form. Table 9.3.5 . Selected Metallochromic Indicators indicator pH range metal ions indicator pH range metal ions calmagite 8.5–11 Ba, Ca, Mg, Zn eriochrome Black T 7.5–10.5 Ba, Ca, Mg, Zn eriochrome Blue Black R 8–12 Ca, Mg, Zn, Cu PAN 2–11 Cd, Cu, Zn murexide 6–13 Ca, Ni, Cu salicylic acid 2–3 Fe all metal ions carry a +2 charge except for iron, which is +3 metal ions in italic font have poor end points Finding the End Point By Monitoring Absorbance An important limitation when using a metallochromic indicator is that we must be able to see the indicator’s change in color at the end point. This may be difficult if the solution is already colored. For example, when titrating Cu2+ with EDTA, ammonia is used to adjust the titrand’s pH. The intensely colored $\text{Cu(NH}_3)_4^{2+}$ complex obscures the indicator’s color, making an accurate determination of the end point difficult. Other absorbing species present within the sample matrix may also interfere. This often is a problem when analyzing clinical samples, such as blood, or environmental samples, such as natural waters. If at least one species in a complexation titration absorbs electromagnetic radiation, then we can identify the end point by monitoring the titrand’s absorbance at a carefully selected wavelength. For example, we can identify the end point for a titration of Cu2+ with EDTA in the presence of NH3 by monitoring the titrand’s absorbance at a wavelength of 745 nm, where the $\text{Cu(NH}_3)_4^{2+}$ complex absorbs strongly. At the beginning of the titration the absorbance is at a maximum. As we add EDTA, however, the reaction $\text{Cu(NH}_3)_4^{2+}(aq) + \text{Y}^{4-} \rightleftharpoons \text{CuY}^{2-}(aq) + 4\text{NH}_3(aq) \nonumber$ decreases the concentration of $\text{Cu(NH}_3)_4^{2+}$ and decreases the absorbance until we reach the equivalence point. After the equivalence point the absorbance essentially remains unchanged. The resulting spectrophotometric titration curve is shown in Figure 9.3.6 a. Note that the titration curve’s y-axis is not the measured absorbance, Ameas, but a corrected absorbance, Acorr $A_\text{corr} = A_\text{meas} \times \frac {V_\text{EDTA} + V_\text{Cu}} {V_\text{Cu}} \nonumber$ where VEDTA and VCu are, respectively, the volumes of EDTA and Cu2+. Correcting the absorbance for the titrand’s dilution ensures that the spectrophotometric titration curve consists of linear segments that we can extrapolate to find the end point. Other common spectrophotometric titration curves are shown in Figures 9.3.6 b-f. Representative Method 9.3.1: Determination of Hardness of Water and Wastewater The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical complexation titrimetric method. Although each method is unique, the following description of the determination of the hardness of water provides an instructive example of a typical procedure. The description here is based on Method 2340C as published in Standard Methods for the Examination of Water and Wastewater, 20th Ed., American Public Health Association: Washington, D. C., 1998. 
Description of the Method The operational definition of water hardness is the total concentration of cations in a sample that can form an insoluble complex with soap. Although most divalent and trivalent metal ions contribute to hardness, the two most important metal ions are Ca2+ and Mg2+. Hardness is determined by titrating with EDTA at a buffered pH of 10. Calmagite is used as an indicator. Hardness is reported as mg CaCO3/L. Procedure Select a volume of sample that requires less than 15 mL of titrant to keep the analysis time under 5 minutes and, if necessary, dilute the sample to 50 mL with distilled water. Adjust the sample’s pH by adding 1–2 mL of a pH 10 buffer that contains a small amount of Mg2+–EDTA. Add 1–2 drops of indicator and titrate with a standard solution of EDTA until the red-to-blue end point is reached (Figure 9.3.7 ). Questions 1. Why is the sample buffered to a pH of 10? What problems might you expect at a higher pH or a lower pH? Of the two primary cations that contribute to hardness, Mg2+ forms the weaker complex with EDTA and is the last cation to react with the titrant. Calmagite is a useful indicator because it gives a distinct end point when titrating Mg2+ (see Table 9.3.5 ). Because of calmagite’s acid–base properties, the range of pMg values over which the indicator changes color depends on the titrand’s pH (Figure 9.3.5 ). Figure 9.3.8 shows the titration curve for a 50-mL solution of $10^{-3}$ M Mg2+ with $10^{-2}$ M EDTA at pHs of 9, 10, and 11. Superimposed on each titration curve is the range of conditions for which the average analyst will observe the end point. At a pH of 9 an early end point is possible, which results in a negative determinate error. A late end point and a positive determinate error are possible if the pH is 11. 2. Why is a small amount of the Mg2+–EDTA complex added to the buffer? The titration’s end point is signaled by the indicator calmagite. The indicator’s end point with Mg2+ is distinct, but its change in color when titrating Ca2+ does not provide a good end point (see Table 9.3.5 ). If the sample does not contain any Mg2+ as a source of hardness, then the titration’s end point is poorly defined, which leads to an inaccurate and imprecise result. Adding a small amount of Mg2+–EDTA to the buffer ensures that the titrand includes at least some Mg2+. Because Ca2+ forms a stronger complex with EDTA, it displaces Mg2+ from the Mg2+–EDTA complex, freeing the Mg2+ to bind with the indicator. This displacement is stoichiometric, so the total concentration of hardness cations remains unchanged. The displacement by EDTA of Mg2+ from the Mg2+–indicator complex signals the titration’s end point. 3. Why does the procedure specify that the titration take no longer than 5 minutes? A time limitation suggests there is a kinetically-controlled interference, possibly arising from a competing chemical reaction. In this case the interference is the possible precipitation of CaCO3 at a pH of 10. Quantitative Applications Although many quantitative applications of complexation titrimetry have been replaced by other analytical methods, a few important applications continue to find relevance. In this section we review the general application of complexation titrimetry with an emphasis on applications from the analysis of water and wastewater. First, however, we discuss the selection and standardization of complexation titrants. Selection and Standardization of Titrants EDTA is a versatile titrant that can be used to analyze virtually all metal ions. 
Although EDTA is the usual titrant when the titrand is a metal ion, it cannot be used to titrate anions, for which Ag+ or Hg2+ are suitable titrants. Solutions of EDTA are prepared from its soluble disodium salt, Na2H2Y•2H2O, and standardized by titrating against a solution made from the primary standard CaCO3. Solutions of Ag+ and Hg2+ are prepared using AgNO3 and Hg(NO3)2, both of which are secondary standards. Standardization is accomplished by titrating against a solution prepared from primary standard grade NaCl. Inorganic Analytes Complexation titrimetry continues to be listed as a standard method for the determination of hardness, Ca2+, CN–, and Cl– in waters and wastewaters. The evaluation of hardness was described earlier in Representative Method 9.3.1. The determination of Ca2+ is complicated by the presence of Mg2+, which also reacts with EDTA. To prevent an interference the pH is adjusted to 12–13, which precipitates Mg2+ as Mg(OH)2. Titrating with EDTA using murexide or Eriochrome Blue Black R as the indicator gives the concentration of Ca2+. Cyanide is determined at concentrations greater than 1 mg/L by making the sample alkaline with NaOH and titrating with a standard solution of AgNO3 to form the soluble $\text{Ag(CN)}_2^-$ complex. The end point is determined using p-dimethylaminobenzalrhodamine as an indicator, with the solution turning from a yellow to a salmon color in the presence of excess Ag+. Chloride is determined by titrating with Hg(NO3)2, forming HgCl2(aq). The sample is acidified to a pH of 2.3–3.8 and diphenylcarbazone, which forms a colored complex with excess Hg2+, serves as the indicator. The pH indicator xylene cyanol FF is added to ensure that the pH is within the desired range. The initial solution is a greenish blue, and the titration is carried out to a purple end point. Quantitative Calculations The quantitative relationship between the titrand and the titrant is determined by the titration reaction’s stoichiometry. For a titration using EDTA, the stoichiometry is always 1:1. Example 9.3.1 The concentration of a solution of EDTA is determined by standardizing against a solution of Ca2+ prepared using a primary standard of CaCO3. A 0.4071-g sample of CaCO3 is transferred to a 500-mL volumetric flask, dissolved using a minimum of 6 M HCl, and diluted to volume. After transferring a 50.00-mL portion of this solution to a 250-mL Erlenmeyer flask, the pH is adjusted by adding 5 mL of a pH 10 NH3–NH4Cl buffer that contains a small amount of Mg2+–EDTA. After adding calmagite as an indicator, the solution is titrated with the EDTA, requiring 42.63 mL to reach the end point. Report the molar concentration of EDTA in the titrant. Solution The primary standard of Ca2+ has a concentration of $\frac {0.4071 \text{ g CaCO}_3}{0.5000 \text{ L}} \times \frac {1 \text{ mol Ca}^{2+}}{100.09 \text{ g CaCO}_3} = 8.135 \times 10^{-3} \text{ M Ca}^{2+} \nonumber$ The moles of Ca2+ in the titrand are $8.135 \times 10^{-3} \text{ M Ca}^{2+} \times 0.05000 \text{ L} = 4.068 \times 10^{-4} \text{ mol Ca}^{2+} \nonumber$ which means that $4.068 \times 10^{-4}$ moles of EDTA are used in the titration. The molarity of EDTA in the titrant is $\frac {4.068 \times 10^{-4} \text{ mol Ca}^{2+}}{0.04263 \text{ L}} = 9.543 \times 10^{-3} \text{ M EDTA} \nonumber$ Exercise 9.3.3 A 100.0-mL sample is analyzed for hardness using the procedure outlined in Representative Method 9.3.1, requiring 23.63 mL of 0.0109 M EDTA. Report the sample’s hardness as mg CaCO3/L. 
Answer In an analysis for hardness we treat the sample as if Ca2+ is the only metal ion that reacts with EDTA. The moles of Ca2+ in the sample, therefore, are $(0.0109 \text{ M EDTA})(0.02363 \text{ L}) \times \frac {1 \text{ mol Ca}^{2+}}{\text{mol EDTA}} = 2.58 \times 10^{-4} \text{ mol Ca}^{2+} \nonumber$ which is equivalent to $2.58 \times 10^{-4} \text{ mol Ca}^{2+} \times \frac {1 \text{ mol CaCO}_3}{\text{mol Ca}^{2+}} \times \frac {100.09 \text{ g CaCO}_3}{\text{mol CaCO}_3} = 0.0258 \text{ g CaCO}_3 \nonumber$ and the sample’s hardness is $\frac {0.0258 \text{ g CaCO}_3}{0.1000 \text{ L}} \times \frac {1000 \text{ mg}}{\text{g}} = 258 \text{ mg CaCO}_3\text{/L} \nonumber$ As shown in the following example, we can extend this calculation to complexation reactions that use other titrants. Example 9.3.2 To test a 100.0-mL sample of water from a freshwater aquifer for the encroachment of sea water, its concentration of Cl– is determined by titrating with 0.0516 M Hg(NO3)2. The sample is acidified and titrated to the diphenylcarbazone end point, requiring 6.18 mL of the titrant. Report the concentration of Cl–, in mg/L, in the aquifer. Solution The reaction between Cl– and Hg2+ produces a metal–ligand complex of HgCl2(aq). Each mole of Hg2+ reacts with 2 moles of Cl–; thus $\frac {0.0516 \text{ mol Hg(NO}_3)_2}{\text{L}} \times 0.00618 \text{ L} \times \frac {2 \text{ mol Cl}^-}{\text{mol Hg(NO}_3)_2} \times \frac {35.453 \text{ g Cl}^-}{\text{mol Cl}^-} = 0.0226 \text{ g Cl}^- \nonumber$ are in the sample. The concentration of Cl– in the sample is $\frac {0.0226 \text{ g Cl}^-}{0.1000 \text{ L}} \times \frac {1000 \text{ mg}}{\text{g}} = 226 \text{ mg/L} \nonumber$ Exercise 9.3.4 A 0.4482-g sample of impure NaCN is titrated with 0.1018 M AgNO3, requiring 39.68 mL to reach the end point. Report the purity of the sample as %w/w NaCN. Answer The titration of CN– with Ag+ produces the metal-ligand complex $\text{Ag(CN)}_2^-$; thus, each mole of AgNO3 reacts with two moles of NaCN. The mass of NaCN in the sample is $(0.1018 \text{ M AgNO}_3)(0.03968 \text{ L}) \times \frac {2 \text{ mol NaCN}}{\text{mol AgNO}_3} \times \frac {49.01 \text{ g NaCN}}{\text{mol NaCN}} = 0.3959 \text{ g NaCN} \nonumber$ and the purity of the sample is $\frac {0.3959 \text{ g NaCN}}{0.4482 \text{ g sample}} \times 100 = 88.33 \text{% w/w NaCN} \nonumber$ Finally, complex titrations involving multiple analytes or back titrations are possible. Example 9.3.3 An alloy of chromel that contains Ni, Fe, and Cr is analyzed by a complexation titration using EDTA as the titrant. A 0.7176-g sample of the alloy is dissolved in HNO3 and diluted to 250 mL in a volumetric flask. A 50.00-mL aliquot of the sample, treated with pyrophosphate to mask the Fe and Cr, requires 26.14 mL of 0.05831 M EDTA to reach the murexide end point. A second 50.00-mL aliquot is treated with hexamethylenetetramine to mask the Cr. Titrating with 0.05831 M EDTA requires 35.43 mL to reach the murexide end point. Finally, a third 50.00-mL aliquot is treated with 50.00 mL of 0.05831 M EDTA, and back titrated to the murexide end point with 6.21 mL of 0.06316 M Cu2+. Report the weight percents of Ni, Fe, and Cr in the alloy. Solution The stoichiometry between EDTA and each metal ion is 1:1. For each of the three titrations, therefore, we can write an equation that relates the moles of EDTA to the moles of metal ions that are titrated. 
titration 1: mol Ni = mol EDTA; titration 2: mol Ni + mol Fe = mol EDTA; titration 3: mol Ni + mol Fe + mol Cr + mol Cu = mol EDTA. We use the first titration to determine the moles of Ni in our 50.00-mL portion of the dissolved alloy. The titration uses $\frac {0.05831 \text{ mol EDTA}}{\text{L}} \times 0.02614 \text{ L} = 1.524 \times 10^{-3} \text{ mol EDTA} \nonumber$ which means the sample contains $1.524 \times 10^{-3}$ mol Ni. Having determined the moles of EDTA that react with Ni, we use the second titration to determine the amount of Fe in the sample. The second titration uses $\frac {0.05831 \text{ mol EDTA}}{\text{L}} \times 0.03543 \text{ L} = 2.066 \times 10^{-3} \text{ mol EDTA} \nonumber$ of which $1.524 \times 10^{-3}$ mol are used to titrate Ni. This leaves $5.42 \times 10^{-4}$ mol of EDTA to react with Fe; thus, the sample contains $5.42 \times 10^{-4}$ mol of Fe. Finally, we can use the third titration to determine the amount of Cr in the alloy. The third titration uses $\frac {0.05831 \text{ mol EDTA}}{\text{L}} \times 0.0500 \text{ L} = 2.916 \times 10^{-3} \text{ mol EDTA} \nonumber$ of which $1.524 \times 10^{-3}$ mol are used to titrate Ni and $5.42 \times 10^{-4}$ mol are used to titrate Fe. This leaves $8.50 \times 10^{-4}$ mol of EDTA to react with Cu and Cr. The amount of EDTA that reacts with Cu is $\frac {0.06316 \text{ mol Cu}^{2+}}{\text{L}} \times 0.00621 \text{ L} \times \frac {1 \text{ mol EDTA}}{\text{mol Cu}^{2+}} = 3.92 \times 10^{-4} \text{ mol EDTA} \nonumber$ leaving $4.58 \times 10^{-4}$ mol of EDTA to react with Cr. The sample, therefore, contains $4.58 \times 10^{-4}$ mol of Cr. Having determined the moles of Ni, Fe, and Cr in a 50.00-mL portion of the dissolved alloy, we can calculate the %w/w of each analyte in the alloy, remembering that each 50.00-mL aliquot represents one-fifth of the 250-mL volumetric flask. $\frac {1.524 \times 10^{-3} \text{ mol Ni}}{50.00 \text{ mL}} \times 250.0 \text{ mL} \times \frac {58.69 \text{ g Ni}}{\text{mol Ni}} = 0.4472 \text{ g Ni} \nonumber$ $\frac {0.4472 \text{ g Ni}}{0.7176 \text{ g sample}} \times 100 = 62.32 \text{% w/w Ni} \nonumber$ $\frac {5.42 \times 10^{-4} \text{ mol Fe}}{50.00 \text{ mL}} \times 250.0 \text{ mL} \times \frac {55.845 \text{ g Fe}}{\text{mol Fe}} = 0.151 \text{ g Fe} \nonumber$ $\frac {0.151 \text{ g Fe}}{0.7176 \text{ g sample}} \times 100 = 21.0 \text{% w/w Fe} \nonumber$ $\frac {4.58 \times 10^{-4} \text{ mol Cr}}{50.00 \text{ mL}} \times 250.0 \text{ mL} \times \frac {51.996 \text{ g Cr}}{\text{mol Cr}} = 0.119 \text{ g Cr} \nonumber$ $\frac {0.119 \text{ g Cr}}{0.7176 \text{ g sample}} \times 100 = 16.6 \text{% w/w Cr} \nonumber$ Exercise 9.3.5 An indirect complexation titration with EDTA can be used to determine the concentration of sulfate, $\text{SO}_4^{2-}$, in a sample. A 0.1557-g sample is dissolved in water and any sulfate present is precipitated as BaSO4 by adding Ba(NO3)2. After filtering and rinsing the precipitate, it is dissolved in 25.00 mL of 0.02011 M EDTA. The excess EDTA is titrated with 0.01113 M Mg2+, requiring 4.23 mL to reach the end point. Calculate the %w/w Na2SO4 in the sample. Answer The total moles of EDTA used in this analysis is $(0.02011 \text{ M EDTA})(0.02500 \text{ L}) = 5.028 \times 10^{-4} \text{ mol EDTA} \nonumber$ Of this, $(0.01113 \text{ M Mg}^{2+})(0.00423 \text{ L}) \times \frac {1 \text{ mol EDTA}}{\text{mol Mg}^{2+}} = 4.708 \times 10^{-5} \text{ mol EDTA} \nonumber$ are consumed in the back titration with Mg2+, which means that $5.028 \times 10^{-4} \text{ mol EDTA} - 4.708 \times 10^{-5} \text{ mol EDTA} = 4.557 \times 10^{-4} \text{ mol EDTA} \nonumber$ react with the BaSO4. 
Each mole of BaSO4 reacts with one mole of EDTA; thus $4.557 \times 10^{-4} \text{ mol EDTA} \times \frac {1 \text{ mol BaSO}_4}{\text{mol EDTA}} \times \frac {1 \text{ mol Na}_2\text{SO}_4}{\text{mol BaSO}_4} \times \frac {142.04 \text{ g Na}_2\text{SO}_4}{\text{mol Na}_2\text{SO}_4} = 0.06473 \text{ g Na}_2\text{SO}_4 \nonumber$ $\frac{0.06473 \text{ g Na}_2\text{SO}_4}{0.1557 \text{ g sample}} \times 100 = 41.57 \text{% w/w Na}_2\text{SO}_4 \nonumber$ Evaluation of Complexation Titrimetry The scale of operations, accuracy, precision, sensitivity, time, and cost of a complexation titration are similar to those described earlier for acid–base titrations. Complexation titrations, however, are more selective. Although EDTA forms strong complexes with most metal ions, by carefully controlling the titrand’s pH we can analyze samples that contain two or more analytes. The reason we can use pH to provide selectivity is shown in Figure 9.3.9 a. A titration of Ca2+ at a pH of 9 has a distinct break in the titration curve because the conditional formation constant for CaY2– of $2.6 \times 10^9$ is large enough to ensure that the reaction of Ca2+ and EDTA goes to completion. At a pH of 3, however, the conditional formation constant of 1.23 is so small that very little Ca2+ reacts with the EDTA. Suppose we need to analyze a mixture of Ni2+ and Ca2+. Both analytes react with EDTA, but their conditional formation constants differ significantly. If we adjust the pH to 3 we can titrate Ni2+ with EDTA without titrating Ca2+ (Figure 9.3.9 b). When the titration is complete, we adjust the titrand’s pH to 9 and titrate the Ca2+ with EDTA. A spectrophotometric titration is a particularly useful approach for analyzing a mixture of analytes. For example, as shown in Figure 9.3.10 , we can determine the concentrations of two metal ions if there is a difference between the absorbance of the two metal-ligand complexes.
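To tie together the calculations developed in this section, the following Python sketch, which is an illustration added here and not part of the methods cited above, reproduces representative entries from Table 9.3.4 for the titration of 50.0 mL of $5.00 \times 10^{-3}$ M Cd2+ with 0.0100 M EDTA at a pH of 10 in the presence of 0.0100 M NH3. The values of $\alpha_{\text{Y}^{4-}}$ and $\alpha_{\text{Cd}^{2+}}$ are taken from Table 9.3.1 and Table 9.3.3; the function name pCd and the printed volumes are our choices.

import math

Kf = 2.9e16            # formation constant for CdY2-
alpha_Y = 0.367        # fraction of EDTA present as Y4- at pH 10 (Table 9.3.1)
alpha_Cd = 0.0881      # fraction of Cd present as free Cd2+ in 0.0100 M NH3 (Table 9.3.3)
Kf2 = Kf * alpha_Y * alpha_Cd                # conditional formation constant Kf''
M_Cd, V_Cd, M_EDTA = 5.00e-3, 50.0, 0.0100   # molarities and titrand volume (mL)

def pCd(V_EDTA):
    """pCd after adding V_EDTA mL of titrant."""
    mol_Cd, mol_EDTA = M_Cd * V_Cd, M_EDTA * V_EDTA   # millimoles
    V_tot = V_Cd + V_EDTA
    if math.isclose(mol_EDTA, mol_Cd):       # at the equivalence point
        CdY = mol_Cd / V_tot                 # all Cd2+ present as CdY2-
        C_Cd = math.sqrt(CdY / Kf2)          # Kf'' = [CdY2-]/(C_Cd * C_EDTA) with C_Cd = C_EDTA
    elif mol_EDTA < mol_Cd:                  # before the equivalence point: excess Cd2+
        C_Cd = (mol_Cd - mol_EDTA) / V_tot
    else:                                    # after the equivalence point: excess EDTA
        CdY = mol_Cd / V_tot
        C_EDTA = (mol_EDTA - mol_Cd) / V_tot
        C_Cd = CdY / (Kf2 * C_EDTA)
    return -math.log10(alpha_Cd * C_Cd)      # [Cd2+] = alpha_Cd * C_Cd

for V in (0.0, 5.0, 10.0, 25.0, 30.0):
    print(f"{V:5.1f} mL of EDTA: pCd = {pCd(V):.2f}")

The printed values (3.36, 3.49, 3.66, 9.78, and 15.33) match the corresponding entries in Table 9.3.4. Because the pH and NH3 dependence is carried entirely by the conditional formation constant, changing alpha_Y and alpha_Cd is enough to generate the curves compared in Exercise 9.3.1.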
Analytical titrations using oxidation–reduction reactions were introduced shortly after the development of acid–base titrimetry. The earliest redox titration took advantage of chlorine’s oxidizing power. In 1787, Claude Berthollet introduced a method for the quantitative analysis of chlorine water (a mixture of Cl2, HCl, and HOCl) based on its ability to oxidize indigo, a dye that is colorless in its oxidized state. In 1814, Joseph Gay-Lussac developed a similar method to determine chlorine in bleaching powder. In both methods the end point is a change in color. Before the equivalence point the solution is colorless due to the oxidation of indigo. After the equivalence point, however, unreacted indigo imparts a permanent color to the solution. The number of redox titrimetric methods increased in the mid-1800s with the introduction of $\text{MnO}_4^-$, $\text{Cr}_2\text{O}_7^{2-}$, and I2 as oxidizing titrants, and of Fe2+ and $\text{S}_2\text{O}_3^{2-}$ as reducing titrants. Even with the availability of these new titrants, redox titrimetry was slow to develop due to the lack of suitable indicators. A titrant can serve as its own indicator if its oxidized and its reduced forms differ significantly in color. For example, the intensely purple $\text{MnO}_4^-$ ion serves as its own indicator since its reduced form, Mn2+, is almost colorless. Other titrants require a separate indicator. The first such indicator, diphenylamine, was introduced in the 1920s. Other redox indicators soon followed, increasing the applicability of redox titrimetry. Redox Titration Curves To evaluate a redox titration we need to know the shape of its titration curve. In an acid–base titration or a complexation titration, the titration curve shows how the concentration of H3O+ (as pH) or Mn+ (as pM) changes as we add titrant. For a redox titration it is convenient to monitor the titration reaction’s potential instead of the concentration of one species. You may recall from Chapter 6 that the Nernst equation relates a solution’s potential to the concentrations of reactants and products that participate in the redox reaction. Consider, for example, a titration in which a titrand in a reduced state, Ared, reacts with a titrant in an oxidized state, Box. $A_{red} + B_{ox} \rightleftharpoons B_{red} + A_{ox} \nonumber$ where Aox is the titrand’s oxidized form, Bred is the titrant’s reduced form, and the stoichiometry between the two is 1:1. The reaction’s potential, Erxn, is the difference between the reduction potentials for each half-reaction. $E_{rxn} = E_{B_{ox}/B_{red}} - E_{A_{ox}/A_{red}} \nonumber$ After each addition of titrant the reaction between the titrand and the titrant reaches a state of equilibrium. Because the potential at equilibrium is zero, the titrand’s and the titrant’s reduction potentials are identical. $E_{B_{ox}/B_{red}} = E_{A_{ox}/A_{red}} \nonumber$ This is an important observation as it allows us to use either half-reaction to monitor the titration’s progress. Before the equivalence point the titration mixture consists of appreciable quantities of the titrand’s oxidized and reduced forms. The concentration of unreacted titrant, however, is very small. The potential, therefore, is easier to calculate if we use the Nernst equation for the titrand’s half-reaction $E = E_{A_{ox}/A_{red}} = E_{A_{ox}/A_{red}}^{\circ} - \frac{RT}{nF}\ln{\frac{[A_{red}]}{[A_{ox}]}} \nonumber$ After the equivalence point it is easier to calculate the potential using the Nernst equation for the titrant’s half-reaction. 
$E = E_{B_{ox}/B_{red}} = E_{B_{ox}/B_{red}}^{\circ} - \frac{RT}{nF}\ln{\frac{[B_{red}]}{[B_{ox}]}} \nonumber$ Although the Nernst equation is written in terms of the half-reaction’s standard state potential, a matrix-dependent formal potential often is used in its place. See Appendix 13 for the standard state potentials and formal potentials for selected half-reactions. Calculating the Titration Curve Let’s calculate the titration curve for the titration of 50.0 mL of 0.100 M Fe2+ with 0.100 M Ce4+ in a matrix of 1 M HClO4. The reaction in this case is $\text{Fe}^{2+}(aq) + \text{Ce}^{4+}(aq) \rightleftharpoons \text{Ce}^{3+}(aq) + \text{Fe}^{3+}(aq) \label{9.1}$ Because the equilibrium constant for reaction \ref{9.1} is very large—it is approximately $6 \times 10^{15}$—we may assume that the analyte and titrant react completely. In 1 M HClO4, the formal potential for the reduction of Fe3+ to Fe2+ is +0.767 V, and the formal potential for the reduction of Ce4+ to Ce3+ is +1.70 V. The first task is to calculate the volume of Ce needed to reach the titration’s equivalence point. From the reaction’s stoichiometry we know that $\text{mol Fe}^{2+} = M_\text{Fe}V_\text{Fe} = M_\text{Ce}V_\text{Ce} = \text{mol Ce}^{4+} \nonumber$ Solving for the volume of Ce4+ gives the equivalence point volume as $V_{eq} = V_\text{Ce} = \frac{M_\text{Fe}V_\text{Fe}}{M_\text{Ce}} = \frac{(0.100 \text{ M})(50.0 \text{ mL})}{(0.100 \text{ M})} = 50.0 \text{ mL} \nonumber$ Before the equivalence point, the concentration of unreacted Fe2+ and the concentration of Fe3+ are easy to calculate. For this reason we find the potential using the Nernst equation for the Fe3+/Fe2+ half-reaction. $E = +0.767 \text{ V} - 0.05916 \log{\frac{[\text{Fe}^{2+}]}{[\text{Fe}^{3+}]}} \label{9.2}$ For example, the concentrations of Fe2+ and Fe3+ after adding 10.0 mL of titrant are $[\text{Fe}^{2+}] = \frac{(\text{mol Fe}^{2+})_\text{initial} - (\text{mol Ce}^{4+})_\text{added}}{\text{total volume}} = \frac{M_\text{Fe}V_\text{Fe} - M_\text{Ce}V_\text{Ce}}{V_\text{Fe} + V_\text{Ce}} \nonumber$ $[\text{Fe}^{2+}] = \frac{(0.100 \text{ M})(50.0 \text{ mL}) - (0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 6.67 \times 10^{-2} \text{ M} \nonumber$ $[\text{Fe}^{3+}] = \frac{(\text{mol Ce}^{4+})_\text{added}}{\text{total volume}} = \frac{M_\text{Ce}V_\text{Ce}}{V_\text{Fe} + V_\text{Ce}} \nonumber$ $[\text{Fe}^{3+}] = \frac{(0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 1.67 \times 10^{-2} \text{ M} \nonumber$ Substituting these concentrations into Equation \ref{9.2} gives the potential as $E = +0.767 \text{ V} - 0.05916 \log{\frac{6.67 \times 10^{-2}}{1.67 \times 10^{-2}}} = +0.731 \text{ V} \nonumber$ After the equivalence point, the concentration of Ce3+ and the concentration of excess Ce4+ are easy to calculate. For this reason we find the potential using the Nernst equation for the Ce4+/Ce3+ half-reaction in a manner similar to that used above to calculate potentials before the equivalence point. 
$E = +1.70 \text{ V} - 0.05916 \log{\frac{[\text{Ce}^{3+}]}{[\text{Ce}^{4+}]}} \label{9.3}$ For example, after adding 60.0 mL of titrant, the concentrations of Ce3+ and Ce4+ are $[\text{Ce}^{3+}] = \frac{(\text{mol Fe}^{2+})_\text{initial}}{\text{total volume}} = \frac{M_\text{Fe}V_\text{Fe}}{V_\text{Fe}+V_\text{Ce}} \nonumber$ $[\text{Ce}^{3+}] = \frac{(0.100 \text{ M})(50.0 \text{ mL})}{50.0 \text{ mL} + 60.0 \text{ mL}} = 4.55 \times 10^{-2} \text{ M} \nonumber$ $[\text{Ce}^{4+}] = \frac{(\text{mol Ce}^{4+})_\text{added}-(\text{mol Fe}^{2+})_\text{initial}}{\text{total volume}} = \frac{M_\text{Ce}V_\text{Ce}-M_\text{Fe}V_\text{Fe}}{V_\text{Fe}+V_\text{Ce}} \nonumber$ $[\text{Ce}^{4+}] = \frac{(0.100 \text{ M})(60.0 \text{ mL})-(0.100 \text{ M})(50.0 \text{ mL})}{50.0 \text{ mL} + 60.0 \text{ mL}} = 9.09 \times 10^{-3} \text{ M} \nonumber$ Substituting these concentrations into Equation \ref{9.3} gives a potential of $E = +1.70 \text{ V} - 0.05916 \log{\frac{4.55 \times 10^{-2} \text{ M}}{9.09 \times 10^{-3} \text{ M}}} = +1.66 \text{ V} \nonumber$ At the titration’s equivalence point, the potential, Eeq, is the same in Equation \ref{9.2} and Equation \ref{9.3}. Adding together the two equations gives $2E_{eq} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} - 0.05916 \log{\frac{[\text{Fe}^{2+}][\text{Ce}^{3+}]}{[\text{Fe}^{3+}][\text{Ce}^{4+}]}} \nonumber$ Because [Fe2+] = [Ce4+] and [Ce3+] = [Fe3+] at the equivalence point, the log term has a value of zero and the equivalence point’s potential is $E_{eq} = \frac{E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ}}{2} = \frac{0.767 \text{ V} + 1.70 \text{ V}}{2} = +1.23 \text{ V} \nonumber$ Additional results for this titration curve are shown in Table 9.4.1 and Figure 9.4.1 . Table 9.4.1 . Data for the Titration of 50.0 mL of 0.100 M $\text{Fe}^{2+}$ with 0.100 M $\text{Ce}^{4+}$ volume of Ce4+ (mL) E (V) volume of Ce4+ (mL) E (V) 10.0 0.731 60.0 1.66 20.0 0.757 70.0 1.68 30.0 0.777 80.0 1.69 40.0 0.803 90.0 1.69 50.0 1.23 100.0 1.70 Exercise 9.4.1 Calculate the titration curve for the titration of 50.0 mL of 0.0500 M Sn2+ with 0.100 M Tl3+. Both the titrand and the titrant are 1.0 M in HCl. The titration reaction is $\text{Sn}^{2+}(aq) + \text{Tl}^{3+} \rightleftharpoons \text{Tl}^+(aq) + \text{Sn}^{4+}(aq) \nonumber$ Answer The volume of Tl3+ needed to reach the equivalence point is $V_{eq} = V_\text{Tl} = \frac{M_\text{Sn}V_\text{Sn}}{M_\text{Tl}} = \frac{(0.050 \text{ M})(50.0 \text{ mL})}{(0.100 \text{ M})} = 25.0 \text{ mL} \nonumber$ Before the equivalence point, the concentration of unreacted Sn2+ and the concentration of Sn4+ are easy to calculate. For this reason we find the potential using the Nernst equation for the Sn4+/Sn2+ half-reaction. For example, the concentrations of Sn2+ and Sn4+ after adding 10.0 mL of titrant are $[\text{Sn}^{2+}] = \frac{(0.050 \text{ M})(50.0 \text{ mL}) - (0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 0.0250 \text{ M} \nonumber$ $[\text{Sn}^{4+}] = \frac{(0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 0.0167 \text{ M} \nonumber$ and the potential is $E = +0.139 \text{ V} - \frac{0.05916}{2} \log{\frac{0.0250 \text{ M}}{0.0167 \text{ M}}} = +0.134 \text{ V} \nonumber$ After the equivalence point, the concentration of Tl+ and the concentration of excess Tl3+ are easy to calculate. 
For this reason we find the potential using the Nernst equation for the Tl3+/Tl+ half-reaction. For example, after adding 40.0 mL of titrant, the concentrations of Tl+ and Tl3+ are $[\text{Tl}^{+}] = \frac{(0.050 \text{ M})(50.0 \text{ mL})}{50.0 \text{ mL} + 40.0 \text{ mL}} = 0.0278 \text{ M} \nonumber$ $[\text{Tl}^{3+}] = \frac{(0.100 \text{ M})(40.0 \text{ mL}) - (0.050 \text{ M})(50.0 \text{ mL})}{50.0 \text{ mL} + 40.0 \text{ mL}} = 0.0167 \text{ M} \nonumber$ and the potential is $E = +0.77 \text{ V} - \frac{0.05916}{2} \log{\frac{0.0278 \text{ M}}{0.0167 \text{ M}}} = +0.76 \text{ V} \nonumber$ At the titration’s equivalence point, the potential, Eeq, is $E_{eq} = \frac{0.139 \text{ V} + 0.77 \text{ V}}{2} = +0.45 \text{ V} \nonumber$ Some additional results are shown here. volume of Tl3+ (mL) E (V) volume of Tl3+ (mL) E (V) 5.00 0.121 30.0 0.75 10.0 0.134 35.0 0.75 15.0 0.144 40.0 0.76 20.0 0.157 45.0 0.76 25.0 0.45 Sketching a Redox Titration Curve To evaluate the relationship between a titration’s equivalence point and its end point we need to construct only a reasonable approximation of the exact titration curve. In this section we demonstrate a simple method for sketching a redox titration curve. Our goal is to sketch the titration curve quickly, using as few calculations as possible. Let’s use the titration of 50.0 mL of 0.100 M Fe2+ with 0.100 M Ce4+ in a matrix of 1 M HClO4. This is the same example that we used in developing the calculations for a redox titration curve. You can review the results of that calculation in Table 9.4.1 and Figure 9.4.1 . We begin by calculating the titration’s equivalence point volume, which, as we determined earlier, is 50.0 mL. Next, we draw our axes, placing the potential, E, on the y-axis and the titrant’s volume on the x-axis. To indicate the equivalence point’s volume, we draw a vertical line that intersects the x-axis at 50.0 mL of Ce4+. Figure 9.4.2 a shows the result of the first step in our sketch. Before the equivalence point, the potential is determined by a redox buffer of Fe2+ and Fe3+. Although we can calculate the potential using the Nernst equation, we can avoid this calculation if we make a simple assumption. You may recall from Chapter 6 that a redox buffer operates over a range of potentials that extends approximately ±(0.05916/n) unit on either side of $E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ}$. The potential at the buffer’s lower limit is $E = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} - 0.05916 \nonumber$ when the concentration of Fe2+ is $10 \times$ greater than that of Fe3+. The buffer reaches its upper potential of $E = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 0.05916 \nonumber$ when the concentration of Fe2+ is $10 \times$ smaller than that of Fe3+. The redox buffer spans a range of volumes from approximately 10% of the equivalence point volume to approximately 90% of the equivalence point volume. Figure 9.4.2 b shows the second step in our sketch. First, we superimpose a ladder diagram for Fe on the y-axis, using its $E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ}$ value of 0.767 V and including the buffer’s range of potentials. Next, we add points for the potential at 10% of Veq (a potential of 0.708 V at 5.0 mL) and for the potential at 90% of Veq (a potential of 0.826 V at 45.0 mL). We used a similar approach when sketching the acid–base titration curve for the titration of acetic acid with NaOH; see Chapter 9.2 for details. 
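The exact potentials in Table 9.4.1 are easy to verify with the same Nernst-equation bookkeeping used above. The short Python sketch below is an illustration added to this discussion, not part of the original text; the function name potential and the printed volumes are our choices, and the formal potentials are those given earlier for 1 M HClO4. Because the dilution factor cancels in each concentration ratio, the millimole ratios are sufficient.

import math

E_Fe, E_Ce = 0.767, 1.70      # formal potentials in 1 M HClO4, V
M_Fe, V_Fe, M_Ce = 0.100, 50.0, 0.100

def potential(V_Ce):
    """Solution potential (V) after adding V_Ce mL of Ce4+."""
    mol_Fe, mol_Ce = M_Fe * V_Fe, M_Ce * V_Ce      # millimoles
    if math.isclose(mol_Ce, mol_Fe):               # equivalence point; both couples are n = 1
        return (E_Fe + E_Ce) / 2
    if mol_Ce < mol_Fe:                            # excess Fe2+: use the Fe3+/Fe2+ couple
        return E_Fe - 0.05916 * math.log10((mol_Fe - mol_Ce) / mol_Ce)
    return E_Ce - 0.05916 * math.log10(mol_Fe / (mol_Ce - mol_Fe))   # excess Ce4+

for V in (10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 100.0):
    print(f"{V:5.1f} mL of Ce4+: E = {potential(V):+.3f} V")

The printed potentials reproduce the entries in Table 9.4.1 to within rounding.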
The third step in sketching our titration curve is to add two points after the equivalence point. Here the potential is controlled by a redox buffer of Ce3+ and Ce4+. The redox buffer is at its lower limit of $E = E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} - 0.05916 \nonumber$ when the titrant reaches 110% of the equivalence point volume and the potential is $E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ}$ when the volume of Ce is $2 \times V_{eq}$. We used a similar approach when sketching the complexation titration curve for the titration of Mg2+ with EDTA; see Chapter 9.3 for details. Figure 9.4.2 c shows the third step in our sketch. First, we superimpose a ladder diagram for Ce on the y-axis, using its $E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ}$ value of 1.70 V and including the buffer’s range. Next, we add points representing the potential at 110% of Veq (a value of 1.66 V at 55.0 mL) and at 200% of Veq (a value of 1.70 V at 100.0 mL). Next, we draw a straight line through each pair of points, extending the line through the vertical line that indicates the equivalence point’s volume (Figure 9.4.2 d). Finally, we complete our sketch by drawing a smooth curve that connects the three straight-line segments (Figure 9.4.2 e). A comparison of our sketch to the exact titration curve (Figure 9.4.2 f) shows that they are in close agreement. Exercise 9.4.2 Sketch the titration curve for the titration of 50.0 mL of 0.0500 M Sn2+ with 0.100 M Tl3+. Both the titrand and the titrant are 1.0 M in HCl. The titration reaction is $\text{Sn}^{2+}(aq) + \text{Tl}^{3+}(aq) \rightleftharpoons \text{Tl}^{+}(aq) + \text{Sn}^{4+}(aq) \nonumber$ Compare your sketch to your calculated titration curve from Exercise 9.4.1 . Answer The figure below shows a sketch of the titration curve. The two points before the equivalence point VTl = 2.5 mL, E = +0.109 V and VTl = 22.5 mL, E = +0.169 V are plotted using the redox buffer for Sn4+/Sn2+, which spans a potential range of +0.139 ± 0.05916/2. The two points after the equivalence point VTl = 27.5 mL, E = +0.74 V and VTl = 50 mL, E = +0.77 V are plotted using the redox buffer for Tl3+/Tl+, which spans the potential range of +0.77 ± 0.05916/2. The black dots and curve are the approximate sketch of the titration curve. The points in red are the calculations from Exercise 9.4.1 . Selecting and Evaluating the End Point A redox titration’s equivalence point occurs when we react stoichiometrically equivalent amounts of titrand and titrant. As is the case for acid–base titrations and complexation titrations, we estimate the equivalence point of a redox titration using an experimental end point. A variety of methods are available for locating a redox titration’s end point, including indicators and sensors that respond to a change in the solution conditions. Where is the Equivalence Point For an acid–base titration or a complexometric titration the equivalence point is almost identical to the inflection point on the steeply rising part of the titration curve. If you look back at Figure 9.2.2 and Figure 9.3.3, you will see that the inflection point is in the middle of this steep rise in the titration curve, which makes it relatively easy to find the equivalence point when you sketch these titration curves. We call this a symmetric equivalence point. If the stoichiometry of a redox titration is 1:1—that is, one mole of titrant reacts with each mole of titrand—then the equivalence point is symmetric.
If the titration reaction’s stoichiometry is not 1:1, then the equivalence point is closer to the top or to the bottom of the titration curve’s sharp rise. In this case we have an asymmetric equivalence point. Example 9.4.1 Derive a general equation for the equivalence point’s potential when titrating Fe2+ with $\text{MnO}_4^-$. $5\text{Fe}^{2+}(aq) + \text{MnO}_4^-(aq) + 8\text{H}^+(aq) \rightarrow 5\text{Fe}^{3+}(aq) + \text{Mn}^{2+}(aq) + 4\text{H}_2\text{O}(l) \nonumber$ Solution The half-reactions for the oxidation of Fe2+ and the reduction of $\text{MnO}_4^-$ are $\text{Fe}^{2+}(aq) \rightarrow \text{Fe}^{3+}(aq) + e^- \nonumber$ $\text{MnO}_4^-(aq) + 8\text{H}^+(aq) + 5 e^- \rightarrow \text{Mn}^{2+}(aq) + 4\text{H}_2\text{O}(l) \nonumber$ for which the Nernst equations are $E = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} - 0.05916 \log{\frac{[\text{Fe}^{2+}]}{[\text{Fe}^{3+}]}} \nonumber$ $E = E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ} - \frac{0.05916}{5} \log{\frac{[\text{Mn}^{2+}]}{[\text{MnO}_4^{-}][\text{H}^+]^8}} \nonumber$ Before we add together these two equations we must multiply the second equation by 5 so that we can combine the log terms; thus $6E_{eq} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ} - 0.05916 \log{\frac{[\text{Fe}^{2+}][\text{Mn}^{2+}]}{[\text{Fe}^{3+}][\text{MnO}_4^{-}][\text{H}^+]^8}} \nonumber$ At the equivalence point we know that $[\text{Fe}^{2+}] = 5 \times [\text{MnO}_4^-] \text{ and } [\text{Fe}^{3+}] = 5 \times [\text{Mn}^{2+}] \nonumber$ Substituting these equalities into the previous equation and rearranging gives us a general equation for the potential at the equivalence point. $6E_{eq} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ} - 0.05916 \log{\frac{5[\text{MnO}_4^{-}][\text{Mn}^{2+}]}{5[\text{Mn}^{2+}][\text{MnO}_4^{-}][\text{H}^+]^8}} \nonumber$ $E_{eq} = \frac{E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ}}{6} - \frac{0.05916}{6} \log{\frac{1}{[\text{H}^+]^8}} \nonumber$ $E_{eq} = \frac{E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ}}{6} + \frac{0.05916 \times 8}{6} \log{[\text{H}^+]} \nonumber$ $E_{eq} = \frac{E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ}}{6} - 0.07888 \text{pH} \nonumber$ Our equation for the equivalence point has two terms. The first term is a weighted average of the titrand’s and the titrant’s standard state potentials, in which the weighting factors are the number of electrons in their respective half-reactions. The second term shows that Eeq for this titration is pH-dependent. At a pH of 1 (in H2SO4), for example, the equivalence point has a potential of $E_{eq} = \frac{0.768 + 5 \times 1.51}{6} - 0.07888 \times 1 = 1.31 \text{ V} \nonumber$ Figure 9.4.3 shows a typical titration curve for the titration of Fe2+ with $\text{MnO}_4^-$. Note that the titration’s equivalence point is asymmetrical. Exercise 9.4.3 Derive a general equation for the equivalence point’s potential for the titration of U4+ with Ce4+. The unbalanced reaction is $\text{Ce}^{4+}(aq) + \text{U}^{4+}(aq) \rightarrow \text{UO}_2^{2+}(aq) + \text{Ce}^{3+}(aq) \nonumber$ What is the equivalence point’s potential if the pH is 1?
Answer The two half reactions are $\text{Ce}^{4+}(aq) + e^- \rightarrow \text{Ce}^{3+}(aq) \nonumber$ $\text{U}^{4+}(aq) +2\text{H}_2\text{O}(l) \rightarrow \text{UO}_2^{2+}(aq) + 4\text{H}^+(aq) +2e^- \nonumber$ for which the Nernst equations are $E = E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} - 0.05916 \log{\frac{[\text{Ce}^{3+}]}{[\text{Ce}^{4+}]}} \nonumber$ $E = E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ} - \frac{0.05916}{2} \log{\frac{[\text{U}^{4+}]}{[\text{UO}_2^{2+}][\text{H}^+]^4}} \nonumber$ Before adding these two equations together we must multiply the second equation by 2 so that we can combine the log terms; thus $3E = E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ} - 0.05916 \log{\frac{[\text{Ce}^{3+}][\text{U}^{4+}]}{[\text{Ce}^{4+}][\text{UO}_2^{2+}][\text{H}^+]^4}} \nonumber$ At the equivalence point we know that $[\text{Ce}^{3+}] = 2 \times [\text{UO}_2^{2+}] \text{ and } [\text{Ce}^{4+}] = 2 \times [\text{U}^{4+}] \nonumber$ Substituting these equalities into the previous equation and rearranging gives us a general equation for the potential at the equivalence point. $3E = E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ} - 0.05916 \log{\frac{2[\text{UO}_2^{2+}][\text{U}^{4+}]}{2[\text{U}^{4+}][\text{UO}_2^{2+}][\text{H}^+]^4}} \nonumber$ $E = \frac{E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ}}{3} - \frac{0.05916}{3} \log{\frac{1}{[\text{H}^+]^4}} \nonumber$ $E = \frac{E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ}}{3} + \frac{0.05916 \times 4}{3} \log{[\text{H}^+]} \nonumber$ $E = \frac{E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ}}{3} - 0.07888\text{pH} \nonumber$ At a pH of 1 the equivalence point has a potential of $E = \frac{1.72 + 2 \times 0.327}{3} - 0.07888 \times 1 = +0.712 \text{ V} \nonumber$ Finding the End Point With an Indicator Three types of indicators are used to signal a redox titration’s end point. The oxidized and reduced forms of some titrants, such as $\text{MnO}_4^-$, have different colors. A solution of $\text{MnO}_4^-$ is intensely purple. In an acidic solution, however, permanganate’s reduced form, Mn2+, is nearly colorless. When using $\text{MnO}_4^-$ as a titrant, the titrand’s solution remains colorless until the equivalence point. The first drop of excess $\text{MnO}_4^-$ produces a permanent tinge of purple, signaling the end point. Some indicators form a colored compound with a specific oxidized or reduced form of the titrant or the titrand. Starch, for example, forms a dark purple complex with $\text{I}_3^-$. We can use this distinct color to signal the presence of excess $\text{I}_3^-$ as a titrant—a change in color from colorless to purple—or the completion of a reaction that consumes $\text{I}_3^-$ as the titrand—a change in color from purple to colorless. Another example of a specific indicator is thiocyanate, SCN, which forms the soluble red-colored complex of Fe(SCN)2+ in the presence of Fe3+. The most important class of indicators consists of substances that do not participate in the redox titration, but whose oxidized and reduced forms differ in color. When we add a redox indicator to the titrand, the indicator imparts a color that depends on the solution’s potential. As the solution’s potential changes with the addition of titrant, the indicator eventually changes oxidation state and changes color, signaling the end point.
To understand the relationship between potential and an indicator’s color, consider its reduction half-reaction $\text{In}_\text{ox} + ne^- \rightleftharpoons \text{In}_\text{red} \nonumber$ where Inox and Inred are, respectively, the indicator’s oxidized and reduced forms. For simplicity, Inox and Inred are shown without specific charges. Because there is a change in oxidation state, Inox and Inred cannot both be neutral. The Nernst equation for this half-reaction is $E = E_{\text{In}_\text{ox}/\text{In}_\text{red}}^{\circ} - \frac{0.05916}{n} \log{\frac{[\text{In}_\text{red}]}{[\text{In}_\text{ox}]}} \nonumber$ As shown in Figure 9.4.4 , if we assume the indicator’s color changes from that of Inox to that of Inred when the ratio [Inred]/[Inox] changes from 0.1 to 10, then the end point occurs when the solution’s potential is within the range $E = E_{\text{In}_\text{ox}/\text{In}_\text{red}}^{\circ} \pm \frac{0.05916}{n} \nonumber$ This is the same approach we took in considering acid–base indicators and complexation indicators. A partial list of redox indicators is shown in Table 9.4.2 . Examples of an appropriate and an inappropriate indicator for the titration of Fe2+ with Ce4+ are shown in Figure 9.4.5 . Table 9.4.2 . Selected Examples of Redox Indicators indicator color of Inox color of Inred Eo (V) indigo tetrasulfate blue colorless 0.36 methylene blue blue colorless 0.53 diphenylamine violet colorless 0.75 diphenylamine sulfonic acid red-violet colorless 0.85 tris(2,2'-bipyridine)iron pale blue red 1.120 ferroin pale blue red 1.147 tris(5-nitro-1,10-phenanthroline)iron pale blue red-violet 1.25 Other Methods for Finding the End Point Another method for locating a redox titration’s end point is a potentiometric titration in which we monitor the change in potential while we add the titrant to the titrand. The end point is found by visually examining the titration curve. The simplest experimental design for a potentiometric titration consists of a Pt indicator electrode whose potential is governed by the titrand’s or the titrant’s redox half-reaction, and a reference electrode that has a fixed potential. Other methods for locating the titration’s end point include thermometric titrations and spectrophotometric titrations. You will find a further discussion of potentiometry in Chapter 11. Representative Method 9.4.1: Determination of Total Chlorine Residual The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical redox titrimetric method. Although each method is unique, the following description of the determination of the total chlorine residual in water provides an instructive example of a typical procedure. The description here is based on Method 4500-Cl B as published in Standard Methods for the Examination of Water and Wastewater, 20th Ed., American Public Health Association: Washington, D. C., 1998. Description of the Method The chlorination of a public water supply produces several chlorine-containing species, the combined concentration of which is called the total chlorine residual. Chlorine is present in a variety of chemical states, including the free residual chlorine, which consists of Cl2, HOCl and OCl, and the combined chlorine residual, which consists of NH2Cl, NHCl2, and NCl3. The total chlorine residual is determined by using the oxidizing power of chlorine to convert I to $\text{I}_3^-$. The amount of $\text{I}_3^-$ formed is then determined by titrating with Na2S2O3 using starch as an indicator.
Regardless of its form, the total chlorine residual is reported as if Cl2 is the only source of chlorine, and is reported as mg Cl/L. Procedure Select a volume of sample that requires less than 20 mL of Na2S2O3 to reach the end point. Using glacial acetic acid, acidify the sample to a pH between 3 and 4, and add about 1 gram of KI. Titrate with Na2S2O3 until the yellow color of $\text{I}_3^-$ begins to disappear. Add 1 mL of a starch indicator solution and continue titrating until the blue color of the starch–$\text{I}_3^-$ complex disappears (Figure 9.4.6 ). Use a blank titration to correct the volume of titrant needed to reach the end point for reagent impurities. Questions 1. Is this an example of a direct or an indirect analysis? This is an indirect analysis because the chlorine-containing species do not react with the titrant. Instead, the total chlorine residual oxidizes I to $\text{I}_3^-$, and the amount of $\text{I}_3^-$ is determined by titrating with Na2S2O3. 2. Why does the procedure rely on an indirect analysis instead of directly titrating the chlorine-containing species using KI as a titrant? Because the total chlorine residual consists of six different species, a titration with I does not have a single, well-defined equivalence point. By converting the chlorine residual to an equivalent amount of $\text{I}_3^-$, the indirect titration with Na2S2O3 has a single, useful equivalence point. Even if the total chlorine residual is from a single species, such as HOCl, a direct titration with KI is impractical. Because the product of the titration, $\text{I}_3^-$, imparts a yellow color, the titrand’s color would change with each addition of titrant, making it difficult to find a suitable indicator. 3. Both oxidizing and reducing agents can interfere with this analysis. Explain the effect of each type of interferent on the total chlorine residual. An interferent that is an oxidizing agent converts additional I to $\text{I}_3^-$. Because this extra $\text{I}_3^-$ requires an additional volume of Na2S2O3 to reach the end point, we overestimate the total chlorine residual. If the interferent is a reducing agent, it reduces back to I some of the $\text{I}_3^-$ produced by the reaction between the total chlorine residual and iodide; as a result, we underestimate the total chlorine residual. Quantitative Applications Although many quantitative applications of redox titrimetry have been replaced by other analytical methods, a few important applications continue to find relevance. In this section we review the general application of redox titrimetry with an emphasis on environmental, pharmaceutical, and industrial applications. We begin, however, with a brief discussion of selecting and characterizing redox titrants, and methods for controlling the titrand’s oxidation state. Adjusting the Titrand's Oxidation State If a redox titration is to be used in a quantitative analysis, the titrand initially must be present in a single oxidation state. For example, iron is determined by a redox titration in which Ce4+ oxidizes Fe2+ to Fe3+. Depending on the sample and the method of sample preparation, iron initially may be present in both the +2 and +3 oxidation states. Before titrating, we must reduce any Fe3+ to Fe2+ if we want to determine the total concentration of iron in the sample. This type of pretreatment is accomplished using an auxiliary reducing agent or oxidizing agent. A metal that is easy to oxidize—such as Zn, Al, and Ag—can serve as an auxiliary reducing agent.
The metal, as a coiled wire or powder, is added to the sample where it reduces the titrand. Because any unreacted auxiliary reducing agent will react with the titrant, it is removed before we begin the titration by removing the coiled wire or by filtering. An alternative method for using an auxiliary reducing agent is to immobilize it in a column. To prepare a reduction column an aqueous slurry of the finely divided metal is packed in a glass tube equipped with a porous plug at the bottom. The sample is placed at the top of the column and moves through the column under the influence of gravity or vacuum suction. The length of the reduction column and the flow rate are selected to ensure the analyte’s complete reduction. Two common reduction columns are used. In the Jones reductor the column is filled with amalgamated zinc, Zn(Hg), which is prepared by briefly placing Zn granules in a solution of HgCl2. Oxidation of zinc $\text{Zn(Hg)}(s) \rightarrow \text{Zn}^{2+}(aq) + \text{Hg}(l) + 2e^- \nonumber$ provides the electrons for reducing the titrand. In the Walden reductor the column is filled with granular Ag metal. The solution containing the titrand is acidified with HCl and passed through the column where the oxidation of silver $\text{Ag}(s) + \text{Cl}^- (aq) \rightarrow \text{AgCl}(s) + e^- \nonumber$ provides the necessary electrons for reducing the titrand. Table 9.4.3 provides a summary of several applications of reduction columns. Table 9.4.3 . Examples of Reactions for Reducing a Titrand's Oxidation State Using a Reduction Column oxidized titrand Walden reductor Jones reductor Cr3+ $\text{Cr}^{3+}(aq)+e^- \rightarrow \text{Cr}^{2+}(aq)$ Cu2+ $\text{Cu}^{2+}(aq)+e^- \rightarrow \text{Cu}^{+}(aq)$ $\text{Cu}^{2+}(aq)+2e^- \rightarrow \text{Cu}(s)$ Fe3+ $\text{Fe}^{3+}(aq)+e^- \rightarrow \text{Fe}^{2+}(aq)$ $\text{Fe}^{3+}(aq)+e^- \rightarrow \text{Fe}^{2+}(aq)$ TiO2+ $\text{TiO}^{2+} (aq) + 2\text{H}^+ (aq) + e^- \rightarrow \text{Ti}^{3+} (aq) + \text{H}_2\text{O}(l)$ $\text{MoO}_2^{2+}$ $\text{MoO}_2^{2+}(aq) + e^- \rightarrow \text{MoO}_2^+(aq)$ $\text{MoO}_2^{2+}(aq) + 4\text{H}^+(aq) + 3 e^- \rightarrow \text{Mo}^{3+}(aq) + 2\text{H}_2\text{O}(l)$ $\text{VO}_2^{+}$ $\text{VO}_2^{+}(aq) + 2\text{H}^+(aq) + e^- \rightarrow \text{VO}^{2+}(aq) + \text{H}_2\text{O}(l)$ $\text{VO}_2^{+}(aq) + 4\text{H}^+(aq) + 3e^- \rightarrow \text{V}^{2+}(aq) + 2\text{H}_2\text{O}(l)$ Several reagents are used as auxiliary oxidizing agents, including ammonium peroxydisulfate, (NH4)2S2O8, and hydrogen peroxide, H2O2. Peroxydisulfate is a powerful oxidizing agent $\text{S}_2\text{O}_8^{2-}(aq) + 2e^- \rightarrow 2\text{SO}_4^{2-}(aq) \nonumber$ that is capable of oxidizing Mn2+ to $\text{MnO}_4^-$, Cr3+ to $\text{Cr}_2\text{O}_7^{2-}$, and Ce3+ to Ce4+. Excess peroxydisulfate is destroyed by briefly boiling the solution. The reduction of hydrogen peroxide in an acidic solution $\text{H}_2\text{O}_2(aq) + 2\text{H}^+(aq) + 2e^- \rightarrow 2\text{H}_2\text{O}(l) \nonumber$ provides another method for oxidizing a titrand. Excess H2O2 is destroyed by briefly boiling the solution. Selecting and Standardizing a Titrant If it is to be used quantitatively, the titrant’s concentration must remain stable during the analysis. Because a titrant in a reduced state is susceptible to air oxidation, most redox titrations use an oxidizing agent as the titrant. There are several common oxidizing titrants, including $\text{MnO}_4^-$, Ce4+, $\text{Cr}_2\text{O}_7^{2-}$, and $\text{I}_3^-$.
Which titrant is used often depends on how easily it oxidizes the titrand. A titrand that is a weak reducing agent needs a strong oxidizing titrant if the titration reaction is to have a suitable end point. The two strongest oxidizing titrants are $\text{MnO}_4^-$ and Ce4+, for which the reduction half-reactions are $\text{MnO}_4^-(aq) + 8\text{H}^+(aq) + 5e^- \rightleftharpoons \text{Mn}^{2+}(aq) + 4\text{H}_2\text{O}(l) \nonumber$ $\text{Ce}^{4+}(aq) + e^- \rightleftharpoons \text{Ce}^{3+}(aq) \nonumber$ A solution of Ce4+ in 1 M H2SO4 usually is prepared from the primary standard cerium ammonium nitrate, Ce(NO3)4•2NH4NO3. When prepared using a reagent grade material, such as Ce(OH)4, the solution is standardized against a primary standard reducing agent such as Na2C2O4 or Fe2+ (prepared from iron wire) using ferroin as an indicator. Despite its availability as a primary standard and its ease of preparation, Ce4+ is not used as frequently as $\text{MnO}_4^-$ because it is more expensive. The standardization reactions are $\text{Ce}^{4+}(aq) + \text{Fe}^{2+}(aq) \rightarrow \text{Fe}^{3+}(aq) + \text{Ce}^{3+}(aq) \nonumber$ $2\text{Ce}^{4+}(aq) + \text{H}_2\text{C}_2\text{O}_4(aq) \rightarrow 2\text{Ce}^{3+}(aq) + 2\text{CO}_2(g) + 2\text{H}^+(aq) \nonumber$ A solution of $\text{MnO}_4^-$ is prepared from KMnO4, which is not available as a primary standard. An aqueous solution of permanganate is thermodynamically unstable due to its ability to oxidize water. $4\text{MnO}_4^-(aq) + 2\text{H}_2\text{O}(l) \rightleftharpoons 4\text{MnO}_2(s) + 3\text{O}_2 (g) + 4\text{OH}^-(aq) \nonumber$ This reaction is catalyzed by the presence of MnO2, Mn2+, heat, light, and the presence of acids and bases. A moderately stable solution of permanganate is prepared by boiling it for an hour and filtering through a sintered glass filter to remove any solid MnO2 that precipitates. Standardization is accomplished against a primary standard reducing agent such as Na2C2O4 or Fe2+ (prepared from iron wire), with the pink color of excess $\text{MnO}_4^-$ signaling the end point. A solution of $\text{MnO}_4^-$ prepared in this fashion is stable for 1–2 weeks, although you should recheck the standardization periodically. The standardization reactions are $\text{MnO}_4^-(aq) + 5\text{Fe}^{2+}(aq) + 8\text{H}^+(aq) \rightarrow \text{Mn}^{2+}(aq) + 5\text{Fe}^{3+}(aq) + 4\text{H}_2\text{O}(l) \nonumber$ $2\text{MnO}_4^-(aq) + 5\text{H}_2\text{C}_2\text{O}_4(aq) + 6\text{H}^+(aq) \rightarrow 2\text{Mn}^{2+}(aq) + 10\text{CO}_2(g) + 8\text{H}_2\text{O}(l) \nonumber$ Potassium dichromate is a relatively strong oxidizing agent whose principal advantages are its availability as a primary standard and its long term stability when in solution. It is not, however, as strong an oxidizing agent as $\text{MnO}_4^-$ or Ce4+, which makes it less useful when the titrand is a weak reducing agent. Its reduction half-reaction is $\text{Cr}_2\text{O}_7^{2-}(aq) + 14\text{H}^+(aq) + 6e^- \rightleftharpoons 2\text{Cr}^{3+}(aq) + 7\text{H}_2\text{O}(l) \nonumber$ Although a solution of $\text{Cr}_2\text{O}_7^{2-}$ is orange and a solution of Cr3+ is green, neither color is intense enough to serve as a useful indicator. Diphenylamine sulfonic acid, whose oxidized form is red-violet and reduced form is colorless, gives a very distinct end point signal with $\text{Cr}_2\text{O}_7^{2-}$. Iodine is another important oxidizing titrant. 
Because it is a weaker oxidizing agent than $\text{MnO}_4^-$, Ce4+, and $\text{Cr}_2\text{O}_7^{2-}$, it is useful only when the titrand is a stronger reducing agent. This apparent limitation, however, makes I2 a more selective titrant for the analysis of a strong reducing agent in the presence of a weaker reducing agent. The reduction half-reaction for I2 is $\text{I}_2(aq) + 2e^- \rightleftharpoons 2\text{I}^-(aq) \nonumber$ Because iodine is not very soluble in water, solutions are prepared by adding an excess of I. The complexation reaction $\text{I}_2(aq) + \text{I}^-(aq) \rightleftharpoons \text{I}_3^-(aq) \nonumber$ increases the solubility of I2 by forming the more soluble triiodide ion, $\text{I}_3^-$. Even though iodine is present as $\text{I}_3^-$ instead of I2, the number of electrons in the reduction half-reaction is unaffected. $\text{I}_3^-(aq) + 2e^- \rightleftharpoons 3\text{I}^-(aq) \nonumber$ Solutions of $\text{I}_3^-$ normally are standardized against Na2S2O3 using starch as a specific indicator for $\text{I}_3^-$. The standardization reaction is $\text{I}_3^-(aq) + 2\text{S}_2\text{O}_3^{2-}(aq) \rightarrow 3\text{I}^-(aq) + 2\text{S}_4\text{O}_6^{2-} (aq) \nonumber$ An oxidizing titrant such as $\text{MnO}_4^-$, Ce4+, $\text{Cr}_2\text{O}_7^{2-}$, and $\text{I}_3^-$ is used when the titrand is in a reduced state. If the titrand is in an oxidized state, we can first reduce it with an auxiliary reducing agent and then complete the titration using an oxidizing titrant. Alternatively, we can titrate it using a reducing titrant. Iodide is a relatively strong reducing agent that could serve as a reducing titrant except that its solutions are susceptible to the air-oxidation of I to $\text{I}_3^-$. $3\text{I}^-(aq) \rightleftharpoons \text{I}_3^- (aq) + 2e^- \nonumber$ A freshly prepared solution of KI is clear, but after a few days it may show a faint yellow coloring due to the presence of $\text{I}_3^-$. Instead, adding an excess of KI reduces the titrand and releases a stoichiometric amount of $\text{I}_3^-$. The amount of $\text{I}_3^-$ produced is then determined by a back titration using thiosulfate, $\text{S}_2\text{O}_3^{2-}$, as a reducing titrant. $2\text{S}_2\text{O}_3^{2-}(aq) \rightleftharpoons \text{S}_4\text{O}_6^{2-}(aq) + 2e^- \nonumber$ Solutions of $\text{S}_2\text{O}_3^{2-}$ are prepared using Na2S2O3•5H2O and are standardized before use. Standardization is accomplished by dissolving a carefully weighed portion of the primary standard KIO3 in an acidic solution that contains an excess of KI. The reaction between $\text{IO}_3^-$ and I $\text{IO}_3^-(aq) + 8\text{I}^-(aq) + 6\text{H}^+(aq) \rightarrow 3\text{I}_3^-(aq) + 3\text{H}_2\text{O}(l) \nonumber$ liberates a stoichiometric amount of $\text{I}_3^-$. By titrating this $\text{I}_3^-$ with thiosulfate, using starch as a visual indicator, we can determine the concentration of $\text{S}_2\text{O}_3^{2-}$ in the titrant. The standardization titration is $\text{I}_3^-(aq) + 2\text{S}_2\text{O}_3^{2-}(aq) \rightarrow 3\text{I}^-(aq) + \text{S}_4\text{O}_6^{2-}(aq) \nonumber$ which is the same reaction used to standardize solutions of $\text{I}_3^-$. This approach to standardizing solutions of $\text{S}_2\text{O}_3^{2-}$ is similar to that used in the determination of the total chlorine residual outlined in Representative Method 9.4.1. Although thiosulfate is one of the few reducing titrants that is not readily oxidized by contact with air, it is subject to a slow decomposition to bisulfite and elemental sulfur.
If used over a period of several weeks, a solution of thiosulfate is restandardized periodically. Several forms of bacteria are able to metabolize thiosulfate, which leads to a change in its concentration. This problem is minimized by adding a preservative such as HgI2 to the solution. Another useful reducing titrant is ferrous ammonium sulfate, Fe(NH4)2(SO4)2•6H2O, in which iron is present in the +2 oxidation state. A solution of Fe2+ is susceptible to air-oxidation, but when prepared in 0.5 M H2SO4 it remains stable for as long as a month. Periodic restandardization with K2Cr2O7 is advisable. Ferrous ammonium sulfate is used as the titrant in a direct analysis of the titrand, or, it is added to the titrand in excess and the amount of Fe3+ produced determined by back titrating with a standard solution of Ce4+ or $\text{Cr}_2\text{O}_7^{2-}$. Inorganic Analysis One of the most important applications of redox titrimetry is evaluating the chlorination of public water supplies. Representative Method 9.4.1, for example, describes an approach for determining the total chlorine residual using the oxidizing power of chlorine to oxidize I to $\text{I}_3^-$. The amount of $\text{I}_3^-$ is determined by back titrating with $\text{S}_2\text{O}_3^{2-}$. The efficiency of chlorination depends on the form of the chlorinating species. There are two contributions to the total chlorine residual—the free chlorine residual and the combined chlorine residual. The free chlorine residual includes forms of chlorine that are available for disinfecting the water supply. Examples of species that contribute to the free chlorine residual include Cl2, HOCl and OCl. The combined chlorine residual includes those species in which chlorine is in its reduced form and, therefore, no longer capable of providing disinfection. Species that contribute to the combined chlorine residual are NH2Cl, NHCl2 and NCl3. When a sample of iodide-free chlorinated water is mixed with an excess of the indicator N,N-diethyl-p-phenylenediamine (DPD), the free chlorine oxidizes a stoichiometric portion of DPD to its red-colored form. The oxidized DPD is then back-titrated to its colorless form using ferrous ammonium sulfate as the titrant. The volume of titrant is proportional to the free residual chlorine. Having determined the free chlorine residual in the water sample, a small amount of KI is added, which catalyzes the reduction of monochloramine, NH2Cl, and oxidizes a portion of the DPD back to its red-colored form. Titrating the oxidized DPD with ferrous ammonium sulfate yields the amount of NH2Cl in the sample. The amount of dichloramine and trichloramine are determined in a similar fashion. The methods described above for determining the total, free, or combined chlorine residual also are used to establish a water supply’s chlorine demand. Chlorine demand is defined as the quantity of chlorine needed to react completely with any substance that can be oxidized by chlorine, while also maintaining the desired chlorine residual. It is determined by adding progressively greater amounts of chlorine to a set of samples drawn from the water supply and determining the total, free, or combined chlorine residual. Another important example of redox titrimetry, which finds applications in both public health and environmental analysis, is the determination of dissolved oxygen. 
In natural waters, such as lakes and rivers, the level of dissolved O2 is important for two reasons: it is the most readily available oxidant for the biological oxidation of inorganic and organic pollutants; and it is necessary for the support of aquatic life. In a wastewater treatment plant dissolved O2 is essential for the aerobic oxidation of waste materials. If the concentration of dissolved O2 falls below a critical value, aerobic bacteria are replaced by anaerobic bacteria, and the oxidation of organic waste produces undesirable gases, such as CH4 and H2S. One standard method for determining dissolved O2 in natural waters and wastewaters is the Winkler method. A sample of water is collected without exposing it to the atmosphere, which might change the concentration of dissolved O2. The sample first is treated with a solution of MnSO4 and then with a solution of NaOH and KI. Under these alkaline conditions the dissolved oxygen oxidizes Mn2+ to MnO2. $2\text{Mn}^{2+}(aq) + 4\text{OH}^-(aq) + \text{O}_2(g) \rightarrow 2\text{MnO}_2(s) + 2\text{H}_2\text{O}(l) \nonumber$ After the reaction is complete, the solution is acidified with H2SO4. Under the now acidic conditions, I is oxidized to $\text{I}_3^-$ by MnO2. $\text{MnO}_2(s) + 3\text{I}^-(aq) + 4\text{H}^+(aq) \rightarrow \text{Mn}^{2+}(aq) + \text{I}_3^-(aq) + 2\text{H}_2\text{O}(l) \nonumber$ The amount of $\text{I}_3^-$ that forms is determined by titrating with $\text{S}_2\text{O}_3^{2-}$ using starch as an indicator. The Winkler method is subject to a variety of interferences and several modifications to the original procedure have been proposed. For example, $\text{NO}_2^-$ interferes because it reduces $\text{I}_3^-$ to I under acidic conditions. This interference is eliminated by adding sodium azide, NaN3, which reduces $\text{NO}_2^-$ to N2. Other reducing agents, such as Fe2+, are eliminated by pretreating the sample with KMnO4 and destroying any excess permanganate with K2C2O4. Another important example of redox titrimetry is the determination of water in nonaqueous solvents. The titrant for this analysis is known as the Karl Fischer reagent and consists of a mixture of iodine, sulfur dioxide, pyridine, and methanol. Because the concentration of pyridine is sufficiently large, I2 and SO2 react with pyridine (py) to form the complexes py•I2 and py•SO2. When added to a sample that contains water, I2 is reduced to I and SO2 is oxidized to SO3. $\text{py}\cdot\text{I}_2 + \text{py}\cdot\text{SO}_2 + \text{H}_2\text{O} + 2\text{py} \rightarrow 2\text{py}\cdot\text{HI} + \text{py}\cdot\text{SO}_3 \nonumber$ Methanol is included to prevent the further reaction of py•SO3 with water. The titration’s end point is signaled when the solution changes from the product’s yellow color to the brown color of the Karl Fischer reagent. Organic Analysis Redox titrimetry also is used for the analysis of organic analytes. One important example is the determination of the chemical oxygen demand (COD) of natural waters and wastewaters. The COD is a measure of the quantity of oxygen necessary to oxidize completely all the organic matter in a sample to CO2 and H2O. Because no attempt is made to correct for organic matter that is decomposed biologically, or for slow decomposition kinetics, the COD always overestimates a sample’s true oxygen demand. 
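For the Winkler analysis, the overall stoichiometry is worth making explicit: each mole of O2 produces two moles of MnO2, which liberate two moles of $\text{I}_3^-$, which in turn consume four moles of $\text{S}_2\text{O}_3^{2-}$. The short Python sketch below is an illustration only; the sample volume and titrant molarity are hypothetical values chosen to show how a titration result converts into mg O2/L.

```python
# A minimal sketch (hypothetical sample volume and titrant molarity) that converts
# a Winkler titration result into a dissolved O2 concentration.
# Stoichiometry: 1 mol O2 -> 2 mol MnO2 -> 2 mol I3- -> 4 mol S2O3^2-

M_thio = 0.0250      # mol/L Na2S2O3 (assumed)
V_thio = 7.25        # mL of titrant to reach the starch end point (assumed)
V_sample = 0.200     # L of water sample (assumed)

mol_thio = M_thio * V_thio / 1000
mol_O2 = mol_thio / 4                            # 4 mol S2O3^2- per mol O2
mg_O2_per_L = mol_O2 * 32.00 * 1000 / V_sample   # 32.00 g/mol O2

print(f"dissolved O2 = {mg_O2_per_L:.2f} mg/L")  # 7.25 mg/L for these values
```

With these particular values the result in mg O2/L equals the titrant volume in mL, a convenience that follows directly from the arithmetic; any other combination of sample volume and titrant molarity simply scales through the same calculation.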
The determination of COD is particularly important in the management of industrial wastewater treatment facilities where it is used to monitor the release of organic-rich wastes into municipal sewer systems or into the environment. A sample’s COD is determined by refluxing it in the presence of excess K2Cr2O7, which serves as the oxidizing agent. The solution is acidified with H2SO4, using Ag2SO4 to catalyze the oxidation of low molecular weight fatty acids. Mercuric sulfate, HgSO4, is added to complex any chloride that is present, which prevents the precipitation of the Ag+ catalyst as AgCl. Under these conditions, the efficiency for oxidizing organic matter is 95–100%. After refluxing for two hours, the solution is cooled to room temperature and the excess $\text{Cr}_2\text{O}_7^{2-}$ is determined by a back titration using ferrous ammonium sulfate as the titrant and ferroin as the indicator. Because it is difficult to remove completely all traces of organic matter from the reagents, a blank titration is performed. The difference in the amount of ferrous ammonium sulfate needed to titrate the sample and the blank is proportional to the COD. Iodine has been used as an oxidizing titrant for a number of compounds of pharmaceutical interest. Earlier we noted that the reaction of $\text{S}_2\text{O}_3^{2-}$ with $\text{I}_3^-$ produces the tetrathionate ion, $\text{S}_4\text{O}_6^{2-}$. The tetrathionate ion is actually a dimer that consists of two thiosulfate ions connected through a disulfide (–S–S–) linkage. In the same fashion, $\text{I}_3^-$ is used to titrate mercaptans of the general formula RSH, forming the dimer RSSR as a product. The amino acid cysteine also can be titrated with $\text{I}_3^-$. The product of this titration is cystine, which is a dimer of cysteine. Triiodide also is used for the analysis of ascorbic acid (vitamin C) by oxidizing the enediol functional group to an alpha diketone and for the analysis of reducing sugars, such as glucose, by oxidizing the aldehyde functional group to a carboxylate ion in a basic solution. An organic compound that contains a hydroxyl, a carbonyl, or an amine functional group adjacent to a hydroxyl or a carbonyl group can be oxidized using metaperiodate, $\text{IO}_4^-$, as an oxidizing titrant. $\text{IO}_4^-(aq) + \text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{IO}_3^-(aq) + 2\text{OH}^-(aq) \nonumber$ A two-electron oxidation cleaves the C–C bond between the two functional groups, with hydroxyl groups oxidized to aldehydes or ketones, carbonyl groups oxidized to carboxylic acids, and amines oxidized to an aldehyde and an amine (ammonia if a primary amine). The analysis is conducted by adding a known excess of $\text{IO}_4^-$ to the solution that contains the analyte and allowing the oxidation to take place for approximately one hour at room temperature. When the oxidation is complete, an excess of KI is added, which converts any unreacted $\text{IO}_4^-$ to $\text{IO}_3^-$ and $\text{I}_3^-$. $\text{IO}_4^-(aq) + 3\text{I}^-(aq) + \text{H}_2\text{O}(l) \rightarrow \text{IO}_3^-(aq) + \text{I}_3^-(aq) + 2\text{OH}^-(aq) \nonumber$ The $\text{I}_3^-$ is then determined by titrating with $\text{S}_2\text{O}_3^{2-}$ using starch as an indicator. Quantitative Calculations The quantitative relationship between the titrand and the titrant is determined by the stoichiometry of the titration reaction.
If you are unsure of the balanced reaction, you can deduce its stoichiometry by remembering that the electrons in a redox reaction are conserved. Example 9.4.2 The amount of Fe in a 0.4891-g sample of an ore is determined by titrating with K2Cr2O7. After dissolving the sample in HCl, the iron is brought into a +2 oxidation state using a Jones reductor. Titration to the diphenylamine sulfonic acid end point requires 36.92 mL of 0.02153 M K2Cr2O7. Report the ore’s iron content as %w/w Fe2O3. Solution Because we are not provided with the titration reaction, we will use a conservation of electrons to deduce the stoichiometry. During the titration the analyte is oxidized from Fe2+ to Fe3+, and the titrant is reduced from $\text{Cr}_2\text{O}_7^{2-}$ to Cr3+. Oxidizing Fe2+ to Fe3+ requires a single electron. Reducing $\text{Cr}_2\text{O}_7^{2-}$, in which each chromium is in the +6 oxidation state, to Cr3+ requires three electrons per chromium, for a total of six electrons. A conservation of electrons for the titration, therefore, requires that each mole of K2Cr2O7 reacts with six moles of Fe2+. The moles of K2Cr2O7 used to reach the end point is $(0.02153 \text{ M})(0.03692 \text{ L}) = 7.949 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \nonumber$ which means the sample contains $7.949 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \times \frac{6 \text{ mol Fe}^{2+}}{\text{mol K}_2\text{Cr}_2\text{O}_7} = 4.769 \times 10^{-3} \text{ mol Fe}^{2+} \nonumber$ Thus, the %w/w Fe2O3 in the sample of ore is $4.769 \times 10^{-3} \text{ mol Fe}^{2+} \times \frac{1 \text{ mol Fe}_2\text{O}_3}{2 \text{ mol Fe}^{2+}} \times \frac{159.69 \text{g Fe}_2\text{O}_3}{\text{mol Fe}_2\text{O}_3} = 0.3808 \text{ g Fe}_2\text{O}_3 \nonumber$ $\frac{0.3808 \text{ g Fe}_2\text{O}_3}{0.4891 \text{ g sample}} \times 100 = 77.86 \text{% w/w Fe}_2\text{O}_3 \nonumber$ Although we can deduce the stoichiometry between the titrant and the titrand in Example 9.4.2 without balancing the titration reaction, the balanced reaction $\text{K}_2\text{Cr}_2\text{O}_7(aq) + 6\text{Fe}^{2+}(aq) + 14\text{H}^+(aq) \rightarrow 2\text{Cr}^{3+}(aq) + 2\text{K}^+(aq) + 6\text{Fe}^{3+}(aq) + 7\text{H}_2\text{O}(l) \nonumber$ does provide useful information. For example, the presence of H+ reminds us that the reaction must take place in an acidic solution. Exercise 9.4.4 The purity of a sample of sodium oxalate, Na2C2O4, is determined by titrating with a standard solution of KMnO4. If a 0.5116-g sample requires 35.62 mL of 0.0400 M KMnO4 to reach the titration’s end point, what is the %w/w Na2C2O4 in the sample. Answer Because we are not provided with a balanced reaction, let’s use a conservation of electrons to deduce the stoichiometry. Oxidizing $\text{C}_2\text{O}_4^{2-}$, in which each carbon has a +3 oxidation state, to CO2, in which carbon has an oxidation state of +4, requires one electron per carbon or a total of two electrons for each mole of $\text{C}_2\text{O}_4^{2-}$. Reducing $\text{MnO}_4^-$, in which each manganese is in the +7 oxidation state, to Mn2+ requires five electrons. A conservation of electrons for the titration, therefore, requires that two moles of KMnO4 (10 moles of e-) react with five moles of Na2C2O4 (10 moles of e-). 
The moles of KMnO4 used to reach the end point is $(0.0400 \text{ M KMnO}_4)(0.03562 \text{ L})=1.42 \times 10^{-3} \text{ mol KMnO}_4 \nonumber$ which means the sample contains $1.42 \times 10^{-3} \text{ mol KMnO}_4 \times \frac{5 \text{ mol Na}_2\text{C}_2\text{O}_4}{2 \text{ mol KMnO}_4} = 3.55 \times 10^{-3} \text{ mol Na}_2\text{C}_2\text{O}_4 \nonumber$ Thus, the %w/w Na2C2O4 in the sample is $3.55 \times 10^{-3} \text{ mol Na}_2\text{C}_2\text{O}_4 \times \frac{134.00 \text{ g Na}_2\text{C}_2\text{O}_4}{\text{mol Na}_2\text{C}_2\text{O}_4} = 0.476 \text{ g Na}_2\text{C}_2\text{O}_4 \nonumber$ $\frac{0.476 \text{ g Na}_2\text{C}_2\text{O}_4}{0.5116 \text{ g sample}} \times 100 = 93.0 \text{% w/w Na}_2\text{C}_2\text{O}_4 \nonumber$ As shown in the following two examples, we can easily extend this approach to an analysis that requires an indirect analysis or a back titration. Example 9.4.3 A 25.00-mL sample of a liquid bleach is diluted to 1000 mL in a volumetric flask. A 25-mL portion of the diluted sample is transferred by pipet into an Erlenmeyer flask that contains an excess of KI, reducing the OCl to Cl and producing $\text{I}_3^-$. The liberated $\text{I}_3^-$ is determined by titrating with 0.09892 M Na2S2O3, requiring 8.96 mL to reach the starch indicator end point. Report the %w/v NaOCl in the sample of bleach. Solution To determine the stoichiometry between the analyte, NaOCl, and the titrant, Na2S2O3, we need to consider both the reaction between OCl and I, and the titration of $\text{I}_3^-$ with Na2S2O3. First, in reducing OCl to Cl the oxidation state of chlorine changes from +1 to –1, requiring two electrons. The oxidation of three I to form $\text{I}_3^-$ releases two electrons as the oxidation state of each iodine changes from –1 in I to –1⁄3 in $\text{I}_3^-$. A conservation of electrons, therefore, requires that each mole of OCl produces one mole of $\text{I}_3^-$. Second, in the titration reaction, $\text{I}_3^-$ is reduced to I and $\text{S}_2\text{O}_3^{2-}$ is oxidized to $\text{S}_4\text{O}_6^{2-}$. Reducing $\text{I}_3^-$ to 3I requires two electrons as each iodine changes from an oxidation state of –1⁄3 to –1. In oxidizing $\text{S}_2\text{O}_3^{2-}$ to $\text{S}_4\text{O}_6^{2-}$, each sulfur changes its oxidation state from +2 to +2.5, releasing one electron for each $\text{S}_2\text{O}_3^{2-}$. A conservation of electrons, therefore, requires that each mole of $\text{I}_3^-$ reacts with two moles of $\text{S}_2\text{O}_3^{2-}$. Finally, because each mole of OCl produces one mole of $\text{I}_3^-$, and each mole of $\text{I}_3^-$ reacts with two moles of $\text{S}_2\text{O}_3^{2-}$, we know that every mole of NaOCl in the sample ultimately results in the consumption of two moles of Na2S2O3. The moles of Na2S2O3 used to reach the titration’s end point is $(0.09892 \text{ M})(0.00896 \text{ L}) = 8.86 \times 10^{-4} \text{ mol Na}_2\text{S}_2\text{O}_3 \nonumber$ which means the sample contains $8.86 \times 10^{-4} \text{ mol Na}_2\text{S}_2\text{O}_3 \times \frac{1 \text{ mol NaOCl}}{2 \text{ mol Na}_2\text{S}_2\text{O}_3} \times \frac{74.44 \text{ g NaOCl}}{\text{mol NaOCl}} = 0.03299 \text{ g NaOCl} \nonumber$ Thus, the %w/v NaOCl in the diluted sample is $\frac{0.03299 \text{ g NaOCl}}{25.00 \text{ mL}} \times 100 = 0.132 \text{% w/v NaOCl} \nonumber$ Because the bleach was diluted by a factor of $40 \times$ (25 mL to 1000 mL), the concentration of NaOCl in the bleach is 5.28% w/v.
The balanced reactions for this analysis are: $\text{OCl}^-(aq) + 3\text{I}^-(aq) + 2\text{H}^+(aq) \rightarrow \text{I}_3^-(aq) + \text{Cl}^-(aq) + \text{H}_2\text{O}(l) \nonumber$ $\text{I}_3^-(aq) + 2\text{S}_2\text{O}_3^{2-}(aq) \rightarrow \text{S}_4\text{O}_6^{2-}(aq) + 3\text{I}^-(aq) \nonumber$ Example 9.4.4 The amount of ascorbic acid, C6H8O6, in orange juice is determined by oxidizing ascorbic acid to dehydroascorbic acid, C6H6O6, with a known amount of $\text{I}_3^-$, and back titrating the excess $\text{I}_3^-$ with Na2S2O3. A 5.00-mL sample of filtered orange juice is treated with 50.00 mL of 0.01023 M $\text{I}_3^-$. After the oxidation is complete, 13.82 mL of 0.07203 M Na2S2O3 is needed to reach the starch indicator end point. Report the concentration of ascorbic acid in mg/100 mL. Solution For a back titration we need to determine the stoichiometry between $\text{I}_3^-$ and the analyte, C6H8O6, and between $\text{I}_3^-$ and the titrant, Na2S2O3. The latter is easy because we know from Example 9.4.3 that each mole of $\text{I}_3^-$ reacts with two moles of Na2S2O3. In oxidizing ascorbic acid to dehydroascorbic acid, the oxidation state of carbon changes from +2⁄3 in C6H8O6 to +1 in C6H6O6. Each carbon releases 1⁄3 of an electron, or a total of two electrons per ascorbic acid. As we learned in Example 9.4.3 , reducing $\text{I}_3^-$ requires two electrons; thus, a conservation of electrons requires that each mole of ascorbic acid consumes one mole of $\text{I}_3^-$. The total moles of $\text{I}_3^-$ that react with C6H8O6 and with Na2S2O3 is $(0.01023 \text{ M})(0.05000 \text{ L}) = 5.115 \times 10^{-4} \text{ mol I}_3^- \nonumber$ The back titration consumes $0.01382 \text{ L Na}_2\text{S}_2\text{O}_3 \times \frac{0.07203 \text{ mol Na}_2\text{S}_2\text{O}_3}{\text{ L Na}_2\text{S}_2\text{O}_3} \times \frac{1 \text{ mol I}_3^-}{2 \text{ mol Na}_2\text{S}_2\text{O}_3} = 4.977 \times 10^{-4} \text{ mol I}_3^- \nonumber$ Subtracting the moles of $\text{I}_3^-$ that react with Na2S2O3 from the total moles of $\text{I}_3^-$ gives the moles reacting with ascorbic acid. $5.115 \times 10^{-4} \text{ mol I}_3^- - 4.977 \times 10^{-4} \text{ mol I}_3^- = 1.38 \times 10^{-5} \text{ mol I}_3^- \nonumber$ The grams of ascorbic acid in the 5.00-mL sample of orange juice is $1.38 \times 10^{-5} \text{ mol I}_3^- \times \frac{1 \text{ mol C}_6\text{H}_8\text{O}_6}{\text{mol I}_3^-} \times \frac{176.12 \text{ g C}_6\text{H}_8\text{O}_6}{\text{mol C}_6\text{H}_8\text{O}_6} = 2.43 \times 10^{-3} \text{ g C}_6\text{H}_8\text{O}_6 \nonumber$ There are 2.43 mg of ascorbic acid in the 5.00-mL sample, or 48.6 mg per 100 mL of orange juice. The balanced reactions for this analysis are: $\text{C}_6\text{H}_8\text{O}_6(aq) + \text{I}_3^- (aq) \rightarrow 3\text{I}^-(aq) + \text{C}_6\text{H}_6\text{O}_6(aq) + 2\text{H}^+(aq) \nonumber$ $\text{I}_3^-(aq) + 2\text{S}_2\text{O}_3^{2-}(aq) \rightarrow \text{S}_4\text{O}_6^{2-}(aq) + 3\text{I}^-(aq) \nonumber$ Exercise 9.4.5 A quantitative analysis for ethanol, C2H6O, is accomplished by a redox back titration. Ethanol is oxidized to acetic acid, C2H4O2, using excess dichromate, $\text{Cr}_2\text{O}_7^{2-}$, which is reduced to Cr3+. The excess dichromate is titrated with Fe2+, giving Cr3+ and Fe3+ as products. In a typical analysis, a 5.00-mL sample of a brandy is diluted to 500 mL in a volumetric flask. A 10.00-mL sample is taken and the ethanol is removed by distillation and collected in 50.00 mL of an acidified solution of 0.0200 M K2Cr2O7.
A back titration of the unreacted $\text{Cr}_2\text{O}_7^{2-}$ requires 21.48 mL of 0.1014 M Fe2+. Calculate the %w/v ethanol in the brandy. Answer For a back titration we need to determine the stoichiometry between $\text{Cr}_2\text{O}_7^{2-}$ and the analyte, C2H6O, and between $\text{Cr}_2\text{O}_7^{2-}$ and the titrant, Fe2+. In oxidizing ethanol to acetic acid, the oxidation state of carbon changes from –2 in C2H6O to 0 in C2H4O2. Each carbon releases two electrons, or a total of four electrons per C2H6O. In reducing $\text{Cr}_2\text{O}_7^{2-}$, in which each chromium has an oxidation state of +6, to Cr3+, each chromium gains three electrons, for a total of six electrons per $\text{Cr}_2\text{O}_7^{2-}$. Oxidation of Fe2+ to Fe3+ requires one electron. A conservation of electrons requires that each mole of K2Cr2O7 (6 moles of e) reacts with six moles of Fe2+ (6 moles of e), and that four moles of K2Cr2O7 (24 moles of e) react with six moles of C2H6O (24 moles of e). The total moles of K2Cr2O7 that react with C2H6O and with Fe2+ is $(0.0200 \text{ M K}_2\text{Cr}_2\text{O}_7)(0.05000 \text{ L})=1.00 \times 10^{-3} \text{ mol K}_2\text{Cr}_2\text{O}_7 \nonumber$ The back titration with Fe2+ consumes $(0.1014 \text{ M Fe}^{2+})(0.02148 \text{ L}) \times \frac{1 \text{ mol K}_2\text{Cr}_2\text{O}_7}{6 \text{ mol Fe}^{2+}} = 3.63 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \nonumber$ Subtracting the moles of K2Cr2O7 that react with Fe2+ from the total moles of K2Cr2O7 gives the moles that react with the analyte. $(1.00 \times 10^{-3} \text{ mol K}_2\text{Cr}_2\text{O}_7) - (3.63 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7) = 6.37 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \nonumber$ The grams of ethanol in the 10.00-mL sample of diluted brandy is $6.37 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \times \frac{6 \text{ mol C}_2\text{H}_6\text{O}}{4 \text{ mol K}_2\text{Cr}_2\text{O}_7} \times \frac{46.07 \text{ g C}_2\text{H}_6\text{O}}{\text{mol C}_2\text{H}_6\text{O}} = 0.0440 \text{ g C}_2\text{H}_6\text{O} \nonumber$ The %w/v C2H6O in the brandy is $\frac{0.0440 \text{ g C}_2\text{H}_6\text{O}}{10.0 \text{ mL diluted brandy}} \times \frac{500.0 \text{ mL diluted brandy}}{5.00 \text{ mL brandy}} \times 100 = 44.0 \text{% w/v C}_2\text{H}_6\text{O} \nonumber$ Evaluation of Redox Titrimetry The scale of operations, accuracy, precision, sensitivity, time, and cost of a redox titration are similar to those described earlier in this chapter for an acid–base or a complexation titration. As with an acid–base titration, we can extend a redox titration to the analysis of a mixture of analytes if there is a significant difference in their oxidation or reduction potentials. Figure 9.4.7 shows an example of the titration curve for a mixture of Fe2+ and Sn2+ using Ce4+ as the titrant. A titration of a mixture of analytes is possible if their standard state potentials or formal potentials differ by at least 200 mV.
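A quick numerical check of this rule of thumb, using the formal potentials that appear earlier in this section, is sketched below in Python; the helper function is illustrative only.

```python
# A minimal sketch (illustration only) of the 200 mV rule of thumb for deciding
# whether two analytes in a mixture give distinct end points.

def distinct_end_points(E1, E2, min_separation=0.200):
    """True if the two couples' potentials (in V) differ by at least min_separation."""
    return abs(E1 - E2) >= min_separation

E_Sn = 0.139   # Sn4+/Sn2+ in 1 M HCl (V), from Exercise 9.4.1
E_Fe = 0.767   # Fe3+/Fe2+ (V), from the Fe2+/Ce4+ example

# The separation is 0.628 V, so a mixture of Sn2+ and Fe2+ titrated with Ce4+
# shows two distinct end points, consistent with Figure 9.4.7.
print(distinct_end_points(E_Sn, E_Fe))   # True
```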
Thus far we have examined titrimetric methods based on acid–base, complexation, and oxidation–reduction reactions. A reaction in which the analyte and titrant form an insoluble precipitate also can serve as the basis for a titration. We call this type of titration a precipitation titration. One of the earliest precipitation titrations—developed at the end of the eighteenth century—was the analysis of K2CO3 and K2SO4 in potash. Calcium nitrate, Ca(NO3)2, was used as the titrant, which forms a precipitate of CaCO3 and CaSO4. The titration’s end point was signaled by noting when the addition of titrant ceased to generate additional precipitate. The importance of precipitation titrimetry as an analytical method reached its zenith in the nineteenth century when several methods were developed for determining Ag+ and halide ions. Titration Curves A precipitation titration curve follows the change in either the titrand’s or the titrant’s concentration as a function of the titrant’s volume. As we did for other titrations, we first show how to calculate the titration curve and then demonstrate how we can sketch a reasonable approximation of the titration curve. Calculating the Titration Curve Let’s calculate the titration curve for the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3. The reaction in this case is $\text{Ag}^+(aq) + \text{Cl}^-(aq) \rightleftharpoons \text{AgCl}(s) \nonumber$ Because the reaction’s equilibrium constant is so large $K = (K_\text{sp})^{-1} = (1.8 \times 10^{-10})^{-1} = 5.6 \times 10^9 \nonumber$ we may assume that Ag+ and Cl react completely. By now you are familiar with our approach to calculating a titration curve. The first task is to calculate the volume of Ag+ needed to reach the equivalence point. The stoichiometry of the reaction requires that $\text{mol Ag}^+ = M_\text{Ag}V_\text{Ag} = M_\text{Cl}V_\text{Cl} = \text{mol Cl}^- \nonumber$ Solving for the volume of Ag+ $V_{eq} = V_\text{Ag} = \frac{M_\text{Cl}V_\text{Cl}}{M_\text{Ag}} = \frac{(0.0500 \text{ M})(50.0 \text{ mL})}{0.100 \text{ M}} = 25.0 \text{ mL} \nonumber$ shows that we need 25.0 mL of Ag+ to reach the equivalence point. Before the equivalence point the titrand, Cl, is in excess. The concentration of unreacted Cl after we add 10.0 mL of Ag+, for example, is $[\text{Cl}^-] = \frac{(\text{mol Cl}^-)_\text{initial} - (\text{mol Ag}^+)_\text{added}}{\text{total volume}} = \frac{M_\text{Cl}V_\text{Cl} - M_\text{Ag}V_\text{Ag}}{V_\text{Cl} + V_\text{Ag}} \nonumber$ $[\text{Cl}^-] = \frac{(0.0500 \text{ M})(50.0 \text{ mL}) - (0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 2.50 \times 10^{-2} \text{ M} \nonumber$ which corresponds to a pCl of 1.60. At the titration’s equivalence point, we know that the concentrations of Ag+ and Cl are equal. To calculate the concentration of Cl we use the Ksp for AgCl; thus $K_\text{sp} = [\text{Ag}^+][\text{Cl}^-] = (x)(x) = 1.8 \times 10^{-10} \nonumber$ Solving for x gives [Cl] as $1.3 \times 10^{-5}$ M, or a pCl of 4.89. After the equivalence point, the titrant is in excess. We first calculate the concentration of excess Ag+ and then use the Ksp expression to calculate the concentration of Cl. 
For example, after adding 35.0 mL of titrant $[\text{Ag}^+] = \frac{(\text{mol Ag}^+)_\text{added} - (\text{mol Cl}^-)_\text{initial}}{\text{total volume}} = \frac{M_\text{Ag}V_\text{Ag} - M_\text{Cl}V_\text{Cl}}{V_\text{Ag} + V_\text{Cl}} \nonumber$ $[\text{Ag}^+] = \frac{(0.100 \text{ M})(35.0 \text{ mL}) - (0.0500 \text{ M})(50.0 \text{ mL})}{35.0 \text{ mL} + 50.0 \text{ mL}} = 1.18 \times 10^{-2} \text{ M} \nonumber$ $[\text{Cl}^-] = \frac{K_\text{sp}}{[\text{Ag}^+]} = \frac{1.8 \times 10^{-10}}{1.18 \times 10^{-2}} = 1.5 \times 10^{-8} \text{ M} \nonumber$ or a pCl of 7.82. Additional results for the titration curve are shown in Table 9.5.1 and Figure 9.5.1 . Table 9.5.1 . Titration of 50.0 mL of 0.0500 M NaCl with 0.100 M $\text{AgNO}_3$ volume of AgNO3 (mL) pCl volume of AgNO3 (mL) pCl 0.00 1.30 30.0 7.54 5.00 1.44 35.0 7.82 10.0 1.60 40.0 7.97 15.0 1.81 45.0 8.07 20.0 2.15 50.0 8.14 25.0 4.89 Exercise 9.5.1 When calculating a precipitation titration curve, you can choose to follow the change in the titrant’s concentration or the change in the titrand’s concentration. Calculate the titration curve for the titration of 50.0 mL of 0.0500 M AgNO3 with 0.100 M NaCl as pAg versus VNaCl, and as pCl versus VNaCl. Answer The first task is to calculate the volume of NaCl needed to reach the equivalence point; thus $V_{eq} = V_\text{NaCl} = \frac{M_\text{Ag}V_\text{Ag}}{M_\text{NaCl}} = \frac{(0.0500 \text{ M})(50.0 \text{ mL})}{0.100 \text{ M}} = 25.0 \text{ mL} \nonumber$ Before the equivalence point the titrand, Ag+, is in excess. The concentration of unreacted Ag+ after adding 10.0 mL of NaCl, for example, is $[\text{Ag}^+] = \frac{(0.0500 \text{ M})(50.0 \text{ mL}) - (0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 2.50 \times 10^{-2} \text{ M} \nonumber$ which corresponds to a pAg of 1.60. To find the concentration of Cl we use the Ksp for AgCl; thus $[\text{Cl}^-] = \frac{K_\text{sp}}{[\text{Ag}^+]} = \frac{1.8 \times 10^{-10}}{2.50 \times 10^{-2}} = 7.2 \times 10^{-9} \text{ M} \nonumber$ or a pCl of 8.14. At the titration’s equivalence point, we know that the concentrations of Ag+ and Cl are equal. To calculate their concentrations we use the Ksp expression for AgCl; thus $K_\text{sp} = [\text{Ag}^+][\text{Cl}^-] = (x)(x) = 1.8 \times 10^{-10} \nonumber$ Solving for x gives the concentration of Ag+ and the concentration of Cl as $1.3 \times 10^{-5}$ M, or a pAg and a pCl of 4.89. After the equivalence point, the titrant is in excess. For example, after adding 35.0 mL of titrant $[\text{Cl}^-] = \frac{(0.100 \text{ M})(35.0 \text{ mL}) - (0.0500 \text{ M})(50.0 \text{ mL})}{35.0 \text{ mL} + 50.0 \text{ mL}} = 1.18 \times 10^{-2} \text{ M} \nonumber$ or a pCl of 1.93. To find the concentration of Ag+ we use the Ksp for AgCl; thus $[\text{Ag}^+] = \frac{K_\text{sp}}{[\text{Cl}^-]} = \frac{1.8 \times 10^{-10}}{1.18 \times 10^{-2}} = 1.5 \times 10^{-8} \text{ M} \nonumber$ or a pAg of 7.82. The following table summarizes additional results for this titration. volume of NaCl (mL) pAg pCl 0 1.30 5.00 1.44 8.31 10.0 1.60 8.14 15.0 1.81 7.93 20.0 2.15 7.60 25.0 4.89 4.89 30.0 7.54 2.20 35.0 7.82 1.93 40.0 7.97 1.78 45.0 8.07 1.68 50.0 8.14 1.60 Sketching the Titration Curve To evaluate the relationship between a titration’s equivalence point and its end point we need to construct only a reasonable approximation of the exact titration curve. In this section we demonstrate a simple method for sketching a precipitation titration curve.
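As with the redox titration curve earlier, the exact values in Table 9.5.1 can be generated with a short script before we turn to the sketch itself. The following Python sketch is an illustration only; it uses the excess-reagent concentrations before and after the equivalence point and the Ksp of AgCl at the equivalence point.

```python
# A minimal sketch (illustration only) that reproduces Table 9.5.1: the titration
# of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3, with Ksp(AgCl) = 1.8e-10.
import math

M_Cl, V_Cl = 0.0500, 50.0   # titrand molarity (M) and volume (mL)
M_Ag = 0.100                # titrant molarity (M)
Ksp = 1.8e-10

def pCl(V_Ag):
    """pCl after adding V_Ag mL of AgNO3."""
    total_L = (V_Cl + V_Ag) / 1000
    mol_Cl = M_Cl * V_Cl / 1000
    mol_Ag = M_Ag * V_Ag / 1000
    if math.isclose(mol_Ag, mol_Cl):      # equivalence point: [Cl-] = sqrt(Ksp)
        conc_Cl = math.sqrt(Ksp)
    elif mol_Ag < mol_Cl:                 # excess Cl-
        conc_Cl = (mol_Cl - mol_Ag) / total_L
    else:                                 # excess Ag+; AgCl solubility sets [Cl-]
        conc_Cl = Ksp / ((mol_Ag - mol_Cl) / total_L)
    return -math.log10(conc_Cl)

for V in (0.0, 10.0, 25.0, 35.0, 50.0):
    print(f"{V:5.1f} mL   pCl = {pCl(V):.2f}")
```

The only entry that differs slightly from Table 9.5.1 is the equivalence point, where the script returns 4.87 because it does not round [Cl] to $1.3 \times 10^{-5}$ M before taking the logarithm.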
Our goal is to sketch the titration curve quickly, using as few calculations as possible. Let’s use the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3. This is the same example that we used in developing the calculations for a precipitation titration curve. You can review the results of that calculation in Table 9.5.1 and Figure 9.5.1. We begin by calculating the titration’s equivalence point volume, which, as we determined earlier, is 25.0 mL. Next we draw our axes, placing pCl on the y-axis and the titrant’s volume on the x-axis. To indicate the equivalence point’s volume, we draw a vertical line that intersects the x-axis at 25.0 mL of AgNO3. Figure 9.5.2 a shows the result of this first step in our sketch. Before the equivalence point, Cl is present in excess and pCl is determined by the concentration of unreacted Cl. As we learned earlier, the calculations are straightforward. Figure 9.5.2 b shows pCl after adding 10.0 mL and 20.0 mL of AgNO3. After the equivalence point, Ag+ is in excess and the concentration of Cl is determined by the solubility of AgCl. Again, the calculations are straightforward. Figure 9.5.2 c shows pCl after adding 30.0 mL and 40.0 mL of AgNO3. Next, we draw a straight line through each pair of points, extending them through the vertical line that represents the equivalence point’s volume (Figure 9.5.2 d). Finally, we complete our sketch by drawing a smooth curve that connects the three straight-line segments (Figure 9.5.2 e). A comparison of our sketch to the exact titration curve (Figure 9.5.2 f) shows that they are in close agreement.

Selecting and Evaluating the End Point

At the beginning of this section we noted that the first precipitation titration used the cessation of precipitation to signal the end point. At best, this is a cumbersome method for detecting a titration’s end point. Before precipitation titrimetry became practical, better methods for identifying the end point were necessary.

Finding the End Point With an Indicator

There are three general types of indicators for a precipitation titration, each of which changes color at or near the titration’s equivalence point. The first type of indicator is a species that forms a precipitate with the titrant. In the Mohr method for Cl using Ag+ as a titrant, for example, a small amount of K2CrO4 is added to the titrand’s solution. The titration’s end point is the formation of a reddish-brown precipitate of Ag2CrO4. The Mohr method was first published in 1855 by Karl Friedrich Mohr. Because $\text{CrO}_4^{2-}$ imparts a yellow color to the solution, which might obscure the end point, only a small amount of K2CrO4 is added. As a result, the end point is always later than the equivalence point. To compensate for this positive determinate error, an analyte-free reagent blank is analyzed to determine the volume of titrant needed to effect a change in the indicator’s color. Subtracting the end point for the reagent blank from the titrand’s end point gives the titration’s end point. Because $\text{CrO}_4^{2-}$ is a weak base, the titrand’s solution is made slightly alkaline. If the solution is too acidic, chromate is present as $\text{HCrO}_4^{-}$ instead of $\text{CrO}_4^{2-}$, and the Ag2CrO4 end point is delayed. The pH also must be less than 10 to avoid the precipitation of silver hydroxide. A second type of indicator uses a species that forms a colored complex with the titrant or the titrand.
In the Volhard method for Ag+ using KSCN as the titrant, for example, a small amount of Fe3+ is added to the titrand’s solution. The titration’s end point is the formation of the reddish-colored Fe(SCN)2+ complex. The titration is carried out in an acidic solution to prevent the precipitation of Fe3+ as Fe(OH)3. The Volhard method was first published in 1874 by Jacob Volhard. The third type of end point uses a species that changes color when it adsorbs to the precipitate. In the Fajans method for Cl using Ag+ as a titrant, for example, the anionic dye dichlorofluorescein is added to the titrand’s solution. Before the end point, the precipitate of AgCl has a negative surface charge due to the adsorption of excess Cl. Because dichlorofluorescein also carries a negative charge, it is repelled by the precipitate and remains in solution where it has a greenish-yellow color. After the end point, the surface of the precipitate carries a positive surface charge due to the adsorption of excess Ag+. Dichlorofluorescein now adsorbs to the precipitate’s surface where its color is pink. This change in the indicator’s color signals the end point. The Fajans method was first published in the 1920s by Kasimir Fajans.

Finding the End Point Potentiometrically

Another method for locating the end point is a potentiometric titration in which we monitor the change in the titrant’s or the titrand’s concentration using an ion-selective electrode. The end point is found by visually examining the titration curve. For a discussion of potentiometry and ion-selective electrodes, see Chapter 11.

Quantitative Applications

Although precipitation titrimetry rarely is listed as a standard method of analysis, it is useful as a secondary analytical method to verify other analytical methods. Most precipitation titrations use Ag+ as either the titrand or the titrant. A titration in which Ag+ is the titrant is called an argentometric titration. Table 9.5.2 provides a list of several typical precipitation titrations.

Table 9.5.2. Representative Examples of Precipitation Titrations

titrand                        titrant           end point
$\text{AsO}_4^{3-}$            AgNO3 and KSCN    Volhard
Br                             AgNO3             Mohr or Fajans
                               AgNO3 and KSCN    Volhard
Cl                             AgNO3             Mohr or Fajans
                               AgNO3 and KSCN    Volhard*
$\text{CO}_3^{2-}$             AgNO3 and KSCN    Volhard*
$\text{C}_2\text{O}_4^{2-}$    AgNO3 and KSCN    Volhard*
$\text{CrO}_4^{2-}$            AgNO3 and KSCN    Volhard*
I                              AgNO3             Fajans
                               AgNO3 and KSCN    Volhard
$\text{PO}_4^{3-}$             AgNO3 and KSCN    Volhard*
S2–                            AgNO3 and KSCN    Volhard*
SCN                            AgNO3 and KSCN    Volhard*

When two titrants are listed (AgNO3 and KSCN), the analysis is by a back titration; the first titrant, AgNO3, is added in excess and the excess is titrated using the second titrant, KSCN. For those Volhard methods identified with an asterisk (*), the precipitated silver salt is removed before carrying out the back titration.

Quantitative Calculations

The quantitative relationship between the titrand and the titrant is determined by the stoichiometry of the titration reaction. If you are unsure of the balanced reaction, you can deduce the stoichiometry from the precipitate’s formula. For example, in forming a precipitate of Ag2CrO4, each mole of $\text{CrO}_4^{2-}$ reacts with two moles of Ag+.

Example 9.5.1

A mixture containing only KCl and NaBr is analyzed by the Mohr method. A 0.3172-g sample is dissolved in 50 mL of water and titrated to the Ag2CrO4 end point, requiring 36.85 mL of 0.1120 M AgNO3. A blank titration requires 0.71 mL of titrant to reach the same end point. Report the %w/w KCl in the sample.
Solution To find the moles of titrant reacting with the sample, we first need to correct for the reagent blank; thus $V_\text{Ag} = 36.85 \text{ mL} - 0.71 \text{ mL} = 36.14 \text{ mL} \nonumber$ $(0.1120 \text{ M})(0.03614 \text{ L}) = 4.048 \times 10^{-3} \text{ mol AgNO}_3 \nonumber$ Titrating with AgNO3 produces a precipitate of AgCl and AgBr. In forming the precipitates, each mole of KCl consumes one mole of AgNO3 and each mole of NaBr consumes one mole of AgNO3; thus $\text{mol KCl + mol NaBr} = 4.048 \times 10^{-3} \text{ mol AgNO}_3 \nonumber$ We are interested in finding the mass of KCl, so let’s rewrite this equation in terms of mass. We know that $\text{mol KCl} = \frac{\text{g KCl}}{74.551 \text{g KCl/mol KCl}} \nonumber$ $\text{mol NaBr} = \frac{\text{g NaBr}}{102.89 \text{g NaBr/mol NaBr}} \nonumber$ which we substitute back into the previous equation $\frac{\text{g KCl}}{74.551 \text{g KCl/mol KCl}} + \frac{\text{g NaBr}}{102.89 \text{g NaBr/mol NaBr}} = 4.048 \times 10^{-3} \nonumber$ Because this equation has two unknowns—g KCl and g NaBr—we need another equation that includes both unknowns. A simple equation takes advantage of the fact that the sample contains only KCl and NaBr; thus, $\text{g NaBr} = 0.3172 \text{ g} - \text{ g KCl} \nonumber$ $\frac{\text{g KCl}}{74.551 \text{g KCl/mol KCl}} + \frac{0.3172 \text{ g} - \text{ g KCl}}{102.89 \text{g NaBr/mol NaBr}} = 4.048 \times 10^{-3} \nonumber$ $1.341 \times 10^{-2}(\text{g KCl}) + 3.083 \times 10^{-3} - 9.719 \times 10^{-3} (\text{g KCl}) = 4.048 \times 10^{-3} \nonumber$ $3.69 \times 10^{-3}(\text{g KCl}) = 9.65 \times 10^{-4} \nonumber$ The sample contains 0.262 g of KCl and the %w/w KCl in the sample is $\frac{0.262 \text{ g KCl}}{0.3172 \text{ g sample}} \times 100 = 82.6 \text{% w/w KCl} \nonumber$ The analysis for I using the Volhard method requires a back titration. A typical calculation is shown in the following example. Example 9.5.2 The %w/w I in a 0.6712-g sample is determined by a Volhard titration. After adding 50.00 mL of 0.05619 M AgNO3 and allowing the precipitate to form, the remaining silver is back titrated with 0.05322 M KSCN, requiring 35.14 mL to reach the end point. Report the %w/w I in the sample. Solution There are two precipitates in this analysis: AgNO3 and I form a precipitate of AgI, and AgNO3 and KSCN form a precipitate of AgSCN. Each mole of I consumes one mole of AgNO3 and each mole of KSCN consumes one mole of AgNO3; thus $\text{mol AgNO}_3 = \text{mol I}^- + \text{mol KSCN} \nonumber$ Solving for the moles of I we find $\text{mol I}^- = \text{mol AgNO}_3 - \text{mol KSCN} = M_\text{Ag} V_\text{Ag} - M_\text{KSCN} V_\text{KSCN} \nonumber$ $\text{mol I}^- = (0.05619 \text{ M})(0.0500 \text{ L}) - (0.05322 \text{ M})(0.03514 \text{ L}) = 9.393 \times 10^{-4} \nonumber$ The %w/w I in the sample is $\frac{(9.393 \times 10^{-4} \text{ mol I}^-) \times \frac{126.9 \text{ g I}^-}{\text{mol I}^-}}{0.6712 \text{ g sample}} \times 100 = 17.76 \text{% w/w I}^- \nonumber$ Exercise 9.5.2 A 1.963-g sample of an alloy is dissolved in HNO3 and diluted to volume in a 100-mL volumetric flask. Titrating a 25.00-mL portion with 0.1078 M KSCN requires 27.19 mL to reach the end point. Calculate the %w/w Ag in the alloy. 
Answer

The titration uses

$(0.1078 \text{ M KSCN})(0.02719 \text{ L}) = 2.931 \times 10^{-3} \text{ mol KSCN} \nonumber$

The stoichiometry between SCN and Ag+ is 1:1; thus, there are

$2.931 \times 10^{-3} \text{ mol Ag}^+ \times \frac{107.87 \text{ g Ag}}{\text{mol Ag}} = 0.3162 \text{ g Ag} \nonumber$

in the 25.00 mL sample. Because this represents 1⁄4 of the total solution, there are $0.3162 \times 4$ or 1.265 g Ag in the alloy. The %w/w Ag in the alloy is

$\frac{1.265 \text{ g Ag}}{1.963 \text{ g sample}} \times 100 = 64.44 \text{% w/w Ag} \nonumber$

Evaluation of Precipitation Titrimetry

The scale of operations, accuracy, precision, sensitivity, time, and cost of a precipitation titration are similar to those described elsewhere in this chapter for acid–base, complexation, and redox titrations. Precipitation titrations also can be extended to the analysis of mixtures provided there is a significant difference in the solubilities of the precipitates. Figure 9.5.3 shows an example of a titration curve for a mixture of I and Cl using Ag+ as a titrant.
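The algebra in Example 9.5.1 and Example 9.5.2 also is easy to script. The following Python sketch (our own illustration, with variable names of our choosing) solves Example 9.5.1’s pair of simultaneous equations for the KCl/NaBr mixture and repeats Example 9.5.2’s back-titration arithmetic.

```python
# Example 9.5.1 (Mohr analysis of a KCl/NaBr mixture) and Example 9.5.2
# (Volhard back titration for iodide), scripted as an illustrative check.
FW_KCl, FW_NaBr = 74.551, 102.89           # formula weights, g/mol
mol_Ag = 0.1120 * (36.85 - 0.71) / 1000    # blank-corrected moles of AgNO3
m_sample = 0.3172                          # mass of the KCl/NaBr sample, g

# g KCl/FW_KCl + (m_sample - g KCl)/FW_NaBr = mol AgNO3, solved for g KCl
g_KCl = (mol_Ag - m_sample / FW_NaBr) / (1 / FW_KCl - 1 / FW_NaBr)
print(f"%w/w KCl = {100 * g_KCl / m_sample:.1f}")

# Example 9.5.2: mol I = mol AgNO3 added - mol KSCN used in the back titration
mol_I = 0.05619 * 0.05000 - 0.05322 * 0.03514
print(f"%w/w I = {100 * mol_I * 126.9 / 0.6712:.2f}")
```

Because the script carries no intermediate rounding, it reports 82.3% w/w KCl rather than the 82.6% obtained in Example 9.5.1 after rounding the mass of KCl to 0.262 g; the iodide result matches Example 9.5.2 at 17.76% w/w.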
Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: Appendix 10: Solubility Products Appendix 11: Acid Dissociation Constants Appendix 12: Metal-Ligand Formation Constants Appendix 13: Standard State Reduction Potentials 1. Calculate or sketch titration curves for the following acid–base titrations. (a) 25.0 mL of 0.100 M NaOH with 0.0500 M HCl (b) 50.0 mL of 0.0500 M HCOOH with 0.100 M NaOH (c) 50.0 mL of 0.100 M NH3 with 0.100 M HCl (d) 50.0 mL of 0.0500 M ethylenediamine with 0.100 M HCl (e) 50.0 mL of 0.0400 M citric acid with 0.120 M NaOH (f) 50.0 mL of 0.0400 M H3PO4 with 0.120 M NaOH 2. Locate the equivalence point(s) for each titration curve in problem 1 and, where feasible, calculate the pH at the equivalence point. What is the stoichiometric relationship between the moles of acid and the moles of base for each of these equivalence points? 3. Suggest an appropriate visual indicator for each of the titrations in problem 1. 4. To sketch the titration curve for a weak acid we approximate the pH at 10% of the equivalence point volume as pKa – 1, and the pH at 90% of the equivalence point volume as pKa + 1. Show that these assumptions are reasonable. 5. Tartaric acid, H2C4H4O6, is a diprotic weak acid with a pKa1 of 3.0 and a pKa2 of 4.4. Suppose you have a sample of impure tartaric acid (purity > 80%), and that you plan to determine its purity by titrating with a solution of 0.1 M NaOH using an indicator to signal the end point. Describe how you will carry out the analysis, paying particular attention to how much sample to use, the desired pH range for the indicator, and how you will calculate the %w/w tartaric acid. Assume your buret has a maximum capacity of 50 mL. 6. The following data for the titration of a monoprotic weak acid with a strong base were collected using an automatic titrator. Prepare normal, first derivative, second derivative, and Gran plot titration curves for this data, and locate the equivalence point for each. volume of NaOH (mL) pH volume of NaOH (mL) pH 0.25 3.0 49.95 7.8 0.86 3.2 49.97 8.0 1.63 3.4 49.98 8.2 2.72 3.6 49.99 8.4 4.29 3.8 50.00 8.7 6.54 4.0 50.01 9.1 9.67 4.2 50.02 9.4 13.79 4.4 50.04 9.6 18.83 4.6 50.06 9.8 24.47 4.8 50.10 10.0 30.15 5.0 50.16 10.2 35.33 5.2 50.25 10.4 39.62 5.4 50.40 10.6 42.91 5.6 50.63 10.8 45.28 5.8 51.01 11.0 46.91 6.0 51.61 11.2 48.01 6.2 52.58 11.4 48.72 6.4 54.15 11.6 49.19 6.6 56.73 11.8 49.48 6.8 61.11 12.0 49.67 7.0 68.83 12.2 49.79 7.2 83.54 12.4 49.87 7.4 116.14 12.6 49.92 7.6 7. Schwartz published the following simulated data for the titration of a $1.02 \times 10^{-4}$ M solution of a monoprotic weak acid (pKa = 8.16) with $1.004 \times 10^{-3}$ M NaOH [Schwartz, L. M. J. Chem. Educ. 1992, 69, 879–883]. The simulation assumes that a 50-mL pipet is used to transfer a portion of the weak acid solution to the titration vessel. A calibration of the pipet shows that it delivers a volume of only 49.94 mL. Prepare normal, first derivative, second derivative, and Gran plot titration curves for this data, and determine the equivalence point for each. How do these equivalence points compare to the expected equivalence point? Comment on the utility of each titration curve for the analysis of very dilute solutions of very weak acids. 
mL of NaOH pH mL of NaOH pH 0.03 6.212 4.79 8.858 0.09 6.504 4.99 8.926 0.29 6.936 5.21 8.994 0.72 7.367 5.41 9.056 1.06 7.567 5.61 9.118 1.32 7.685 5.85 9.180 1.53 7.776 6.05 9.231 1.76 7.863 6.28 9.283 1.97 7.938 6.47 9.327 2.18 8.009 6.71 9.374 2.38 8.077 6.92 9.414 2.60 8.146 7.15 9.451 2.79 8.208 7.36 9.484 3.01 8.273 7.56 9.514 3.19 8.332 7.79 9.545 3.41 8.398 7.99 9.572 3.60 8.458 8.21 9.599 3.80 8.521 8.44 9.624 3.99 8.584 8.64 9.645 4.18 8.650 8.84 9.666 4.40 8.720 9.07 9.688 4.57 8.784 9.27 9.706 8. Calculate or sketch the titration curve for a 50.0 mL solution of a 0.100 M monoprotic weak acid (pKa = 8.0) with 0.1 M strong base in a nonaqueous solvent with Ks = $10^{-20}$. You may assume that the change in solvent does not affect the weak acid’s pKa. Compare your titration curve to the titration curve when water is the solvent. 9. The titration of a mixture of p-nitrophenol (pKa = 7.0) and m-nitrophenol (pKa = 8.3) is followed spectrophotometrically. Neither acid absorbs at a wavelength of 545 nm, but their respective conjugate bases do absorb at this wavelength. The m-nitrophenolate ion has a greater absorbance than an equimolar solution of the p-nitrophenolate ion. Sketch the spectrophotometric titration curve for a 50.00-mL mixture consisting of 0.0500 M p-nitrophenol and 0.0500 M m-nitrophenol with 0.100 M NaOH. Compare your result to the expected potentiometric titration curves. 10. A quantitative analysis for aniline (C6H5NH2, Kb = $3.94 \times 10^{-10}$) is carried out by an acid–base titration using glacial acetic acid as the solvent and HClO4 as the titrant. A known volume of sample that contains 3–4 mmol of aniline is transferred to a 250-mL Erlenmeyer flask and diluted to approximately 75 mL with glacial acetic acid. Two drops of a methyl violet indicator are added, and the solution is titrated with previously standardized 0.1000 M HClO4 (prepared in glacial acetic acid using anhydrous HClO4) until the end point is reached. Results are reported as parts per million aniline. (a) Explain why this titration is conducted using glacial acetic acid as the solvent instead of using water. (b) One problem with using glacial acetic acid as solvent is its relatively high coefficient of thermal expansion of 0.11%/oC. For example, 100.00 mL of glacial acetic acid at 25oC occupies 100.22 mL at 27oC. What is the effect on the reported concentration of aniline if the standardization of HClO4 is conducted at a temperature that is lower than that for the analysis of the unknown? (c) The procedure calls for a sample that contains 3–4 mmoles of aniline. Why is this requirement necessary? Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: Appendix 10: Solubility Products Appendix 11: Acid Dissociation Constants Appendix 12: Metal-Ligand Formation Constants Appendix 13: Standard State Reduction Potentials 11. Using a ladder diagram, explain why the presence of dissolved CO2 leads to a determinate error for the standardization of NaOH if the end point’s pH is between 6–10, but no determinate error if the end point’s pH is less than 6. 12. A water sample’s acidity is determined by titrating to fixed end point pHs of 3.7 and 8.3, with the former providing a measure of the concentration of strong acid and the later a measure of the combined concentrations of strong acid and weak acid. 
Sketch a titration curve for a mixture of 0.10 M HCl and 0.10 M H2CO3 with 0.20 M strong base, and use it to justify the choice of these end points. 13. Ethylenediaminetetraacetic acid, H4Y, is a weak acid with successive acid dissociation constants of 0.010, $2.19 \times 10^{-3}$, $6.92 \times 10^{-7}$, and $5.75 \times 10^{-11}$. The figure below shows a titration curve for H4Y with NaOH. What is the stoichiometric relationship between H4Y and NaOH at the equivalence point marked with the red arrow? 14. A Gran plot method has been described for the quantitative analysis of a mixture that consists of a strong acid and a monoprotic weak acid [(a) Boiani, J. A. J. Chem. Educ. 1986, 63, 724–726; (b) Castillo, C. A.; Jaramillo, A. J. Chem. Educ. 1989, 66, 341]. A 50.00-mL mixture of HCl and CH3COOH is transferred to an Erlenmeyer flask and titrated by using a digital pipet to add successive 1.00-mL aliquots of 0.09186 M NaOH. The progress of the titration is monitored by recording the pH after each addition of titrant. Using the two papers listed above as a reference, prepare a Gran plot for the following data and determine the concentrations of HCl and CH3COOH. mL of NaOH pH mL of NaOH pH mL of NaOH pH 1.00 1.83 24.00 4.45 47.00 12.14 2.00 1.86 25.00 4.53 48.00 12.17 3.00 1.89 26.00 4.61 49.00 12.20 4.00 1.92 27.00 4.69 50.00 12.23 5.00 1.95 28.00 4.76 51.00 12.26 6.00 1.99 29.00 4.84 52.00 12.28 7.00 2.03 30.00 4.93 53.00 12.30 8.00 2.10 31.00 5.02 54.00 12.32 9.00 2.18 32.00 5.13 55.00 12.34 10.00 2.31 33.00 5.23 56.00 12.36 11.00 2.51 34.00 5.37 57.00 12.38 12.00 2.81 35.00 5.52 58.00 12.39 13.00 3.16 36.00 5.75 59.00 12.40 14.00 3.36 37.00 6.14 60.00 12.42 15.00 3.54 38.00 10.30 61.00 12.43 16.00 3.69 39.00 11.31 62.00 12.44 17.00 3.81 40.00 11.58 63.00 12.45 18.00 3.93 41.00 11.74 64.00 12.47 19.00 4.02 42.00 11.85 65.00 12.48 20.00 4.14 43.00 11.93 66.00 12.49 21.00 4.22 44.00 12.00 67.00 12.50 22.00 4.30 45.00 12.05 68.00 12.51 23.00 4.38 46.00 12.10 69.00 12.52 15. Explain why it is not possible for a sample of water to simultaneously have OH and $\text{HCO}_3^-$ as sources of alkalinity. 16. For each of the samples a–e, determine the sources of alkalinity (OH, $\text{HCO}_3^-$, $\text{CO}_3^{2-}$) and their respective concentrations in parts per million In each case a 25.00-mL sample is titrated with 0.1198 M HCl to the bromocresol green and the phenolphthalein end points. sample volume of HCl (mL) to phenolphthalein end point volume of HCl (mL) to the bromocresol green end point a 21.36 21.38 b 5.67 21.13 c 0.00 14.28 d 17.12 34.26 e 21.36 25.69 17. A sample may contain any of the following: HCl, NaOH, H3PO4, $\text{H}_2\text{PO}_4^-$, $\text{HPO}_4^{2-}$, or $\text{PO}_4^{3-}$. The composition of a sample is determined by titrating a 25.00-mL portion with 0.1198 M HCl or 0.1198 M NaOH to the phenolphthalein and to the methyl orange end points. For each of the following samples, determine which species are present and their respective molar concentrations. sample titrant volume (mL) to phenophthalein end point volume (mL) to methyl orange end point a HCl 11.54 35.29 b NaOH 19.79 9.89 c HCl 22.76 22.78 d NaOH 39.42 17.48 18. The protein in a 1.2846-g sample of an oat cereal is determined by a Kjeldahl analysis. The sample is digested with H2SO4, the resulting solution made basic with NaOH, and the NH3 distilled into 50.00 mL of 0.09552 M HCl. The excess HCl is back titrated using 37.84 mL of 0.05992 M NaOH. 
Given that the proteins in grains average 17.54% w/w N, report the %w/w protein in the sample. 19. The concentration of SO2 in air is determined by bubbling a sample of air through a trap that contains H2O2. Oxidation of SO2 by H2O2 results in the formation of H2SO4, which is then determined by titrat-ing with NaOH. In a typical analysis, a sample of air is passed through the peroxide trap at a rate of 12.5 L/min for 60 min and required 10.08 mL of 0.0244 M NaOH to reach the phenolphthalein end point. Calculate the μL/L SO2 in the sample of air. The density of SO2 at the temperature of the air sample is 2.86 mg/mL. 20. The concentration of CO2 in air is determined by an indirect acid–base titration. A sample of air is bubbled through a solution that contains an excess of Ba(OH)2, precipitating BaCO3. The excess Ba(OH)2 is back titrated with HCl. In a typical analysis a 3.5-L sample of air is bubbled through 50.00 mL of 0.0200 M Ba(OH)2. Back titrating with 0.0316 M HCl requires 38.58 mL to reach the end point. Determine the ppm CO2 in the sample of air given that the density of CO2 at the temperature of the sample is 1.98 g/L. Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: Appendix 10: Solubility Products Appendix 11: Acid Dissociation Constants Appendix 12: Metal-Ligand Formation Constants Appendix 13: Standard State Reduction Potentials 21. The purity of a synthetic preparation of methylethyl ketone, C4H8O, is determined by reacting it with hydroxylamine hydrochloride, liberating HCl (see reaction in Table 9.2.7). In a typical analysis a 3.00-mL sample is diluted to 50.00 mL and treated with an excess of hydroxylamine hydrochloride. The liberated HCl is titrated with 0.9989 M NaOH, requiring 32.68 mL to reach the end point. Report the percent purity of the sample given that the density of methylethyl ketone is 0.805 g/mL. 22. Animal fats and vegetable oils are triesters formed from the reaction between glycerol (1,2,3-propanetriol) and three long-chain fatty acids. One of the methods used to characterize a fat or an oil is a determination of its saponification number. When treated with boiling aqueous KOH, an ester saponifies into the parent alcohol and fatty acids (as carboxylate ions). The saponification number is the number of milligrams of KOH required to saponify 1.000 gram of the fat or the oil. In a typical analysis a 2.085-g sample of butter is added to 25.00 mL of 0.5131 M KOH. After saponification is complete the excess KOH is back titrated with 10.26 mL of 0.5000 M HCl. What is the saponification number for this sample of butter? 23. A 250.0-mg sample of an organic weak acid is dissolved in an appropriate solvent and titrated with 0.0556 M NaOH, requiring 32.58 mL to reach the end point. Determine the compound’s equivalent weight. 24. The figure below shows a potentiometric titration curve for a 0.4300-g sample of a purified amino acid that was dissolved in 50.00 mL of water and titrated with 0.1036 M NaOH. Identify the amino acid from the possibilities listed in the table. amino acid formula weight (g/mol) Ka alanine 89.1 $1.35 \times 10^{-10}$ glycine 75.1 $1.67 \times 10^{-10}$ methionine 149.2 $8.9 \times 10^{-10}$ taurine 125.2 $1.8 \times 10^{-9}$ asparagine 150 $1.9 \times 10^{-9}$ leucine 131.2 $1.79 \times 10^{-10}$ phenylalanine 166.2 $4.9 \times 10^{-10}$ valine 117.2 $1.91 \times 10^{-10}$ 25. 
Using its titration curve, determine the acid dissociation constant for the weak acid in problem 9.6. 26. Where in the scale of operations do the microtitration techniques discussed in Chapter 9.7 belong? 27. An acid–base titration can be used to determine an analyte’s equivalent weight, but it cannot be used to determine its formula weight. Explain why. 28. Commercial washing soda is approximately 30–40% w/w Na2CO3. One procedure for the quantitative analysis of washing soda contains the following instructions: Transfer an approximately 4-g sample of the washing soda to a 250-mL volumetric flask. Dissolve the sample in about 100 mL of H2O and then dilute to the mark. Using a pipet, transfer a 25-mL aliquot of this solution to a 125-mL Erlenmeyer flask and add 25 mL of H2O and 2 drops of bromocresol green indicator. Titrate the sample with 0.1 M HCl to the indicator’s end point. What modifications, if any, are necessary if you want to adapt this procedure to evaluate the purity of commercial Na2CO3 that is >98% pure? 29. A variety of systematic and random errors are possible when standardizing a solution of NaOH against the primary weak acid standard potassium hydrogen phthalate (KHP). Identify, with justification, whether the following are sources of systematic error or random error, or if they have no effect on the error. If the error is systematic, then indicate whether the experimentally determined molarity for NaOH is too high or too low. The standardization reaction is $\text{C}_8\text{H}_5\text{O}_4^-(aq) + \text{OH}^-(aq) \rightarrow \text{C}_8\text{H}_4\text{O}_4^{2-}(aq) + \text{H}_2\text{O}(l) \nonumber$ (a) The balance used to weigh KHP is not properly calibrated and always reads 0.15 g too low. (b) The indicator for the titration changes color between a pH of 3–4. (c) An air bubble, which is lodged in the buret’s tip at the beginning of the analysis, dislodges during the titration. (d) Samples of KHP are weighed into separate Erlenmeyer flasks, but the balance is tared only for the first flask. (e) The KHP is not dried before it is used. (f) The NaOH is not dried before it is used. (g) The procedure states that the sample of KHP should be dissolved in 25 mL of water, but it is accidentally dissolved in 35 mL of water. 30. The concentration of o-phthalic acid in an organic solvent, such as n-butanol, is determined by an acid–base titration using aqueous NaOH as the titrant. As the titrant is added, the o-phthalic acid extracts into the aqueous solution where it reacts with the titrant. The titrant is added slowly to allow sufficient time for the extraction to take place. (a) What type of error do you expect if the titration is carried out too quickly? (b) Propose an alternative acid–base titrimetric method that allows for a more rapid determination of the concentration of o-phthalic acid in n-butanol. Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: Appendix 10: Solubility Products Appendix 11: Acid Dissociation Constants Appendix 12: Metal-Ligand Formation Constants Appendix 13: Standard State Reduction Potentials 31. Calculate or sketch titration curves for 50.0 mL of 0.100 M Mg2+ with 0.100 M EDTA at a pH of 7 and 10. Locate the equivalence point for each titration curve. 32. Calculate or sketch titration curves for 25.0 mL of 0.0500 M Cu2+ with 0.025 M EDTA at a pH of 10 and in the presence of 10–3 M and 10–1 M NH3. 
Locate the equivalence point for each titration curve. 33. Sketch the spectrophotometric titration curve for the titration of a mixture of $5.00 \times 10^{-3}$ M Bi3+ and $5.00 \times 10^{-3}$ M Cu2+ with 0.0100 M EDTA. Assume that only the Cu2+–EDTA complex absorbs at the selected wavelength. 34. The EDTA titration of mixtures of Ca2+ and Mg2+ can be followed thermometrically because the formation of the Ca2+–EDTA complex is exothermic and the formation of the Mg2+–EDTA complex is endothermic. Sketch the thermometric titration curve for a mixture of $5.00 \times 10^{-3}$ M Ca2+ and $5.00 \times 10^{-3}$ M Mg2+ using 0.0100 M EDTA as the titrant. The heats of formation for CaY2– and MgY2– are, respectively, –23.9 kJ/mole and 23.0 kJ/mole. 35. EDTA is one member of a class of aminocarboxylate ligands that form very stable 1:1 complexes with metal ions. The following table provides logKf values for the complexes of six such ligands with Ca2+ and Mg2+. Which ligand is the best choice for a direct titration of Ca2+ in the presence of Mg2+?

ligand                                                                      logKf (Mg2+)   logKf (Ca2+)
EDTA: ethylenediaminetetraacetic acid                                       8.7            10.7
HEDTA: N-hydroxyethylethylenediaminetriacetic acid                          7.0            8.0
EEDTA: ethyletherdiaminetetraacetic acid                                    8.3            10.0
EGTA: ethyleneglycol-bis($\beta$-aminoethylether)-N,N'-tetraacetic acid     5.4            10.9
DTPA: diethylenetriaminepentaacetic acid                                    9.0            10.7
CyDTA: cyclohexanediaminetetraacetic acid                                   10.3           12.3

36. The amount of calcium in physiological fluids is determined by a complexometric titration with EDTA. In one such analysis a 0.100-mL sample of a blood serum is made basic by adding 2 drops of NaOH and titrated with 0.00119 M EDTA, requiring 0.268 mL to reach the end point. Report the concentration of calcium in the sample as milligrams Ca per 100 mL. 37. After removing the membranes from an eggshell, the shell is dried and its mass recorded as 5.613 g. The eggshell is transferred to a 250-mL beaker and dissolved in 25 mL of 6 M HCl. After filtering, the solution that contains the dissolved eggshell is diluted to 250 mL in a volumetric flask. A 10.00-mL aliquot is placed in a 125-mL Erlenmeyer flask and buffered to a pH of 10. Titrating with 0.04988 M EDTA requires 44.11 mL to reach the end point. Determine the amount of calcium in the eggshell as %w/w CaCO3. 38. The concentration of cyanide, CN, in a copper electroplating bath is determined by a complexometric titration using Ag+ as the titrant, forming the soluble $\text{Ag(CN)}_2^-$ complex. In a typical analysis a 5.00-mL sample from an electroplating bath is transferred to a 250-mL Erlenmeyer flask, and treated with 100 mL of H2O, 5 mL of 20% w/v NaOH and 5 mL of 10% w/v KI. The sample is titrated with 0.1012 M AgNO3, requiring 27.36 mL to reach the end point as signaled by the formation of a yellow precipitate of AgI. Report the concentration of cyanide as parts per million of NaCN. 39. Before the introduction of EDTA most complexation titrations used Ag+ or CN as the titrant. The analysis for Cd2+, for example, was accomplished indirectly by adding an excess of KCN to form $\text{Cd(CN)}_4^{2-}$, and back-titrating the excess CN with Ag+, forming $\text{Ag(CN)}_2^-$. In one such analysis a 0.3000-g sample of an ore is dissolved and treated with 20.00 mL of 0.5000 M KCN. The excess CN requires 13.98 mL of 0.1518 M AgNO3 to reach the end point. Determine the %w/w Cd in the ore. 40. Solutions that contain both Fe3+ and Al3+ are selectively analyzed for Fe3+ by buffering to a pH of 2 and titrating with EDTA. 
The pH of the solution is then raised to 5 and an excess of EDTA added, resulting in the formation of the Al3+–EDTA complex. The excess EDTA is back-titrated using a standard solution of Fe3+, providing an indirect analysis for Al3+. (a) At a pH of 2, verify that the formation of the Fe3+–EDTA complex is favorable, and that the formation of the Al3+–EDTA complex is not favorable. (b) A 50.00-mL aliquot of a sample that contains Fe3+ and Al3+ is transferred to a 250-mL Erlenmeyer flask and buffered to a pH of 2. A small amount of salicylic acid is added, forming the soluble red-colored Fe3+–salicylic acid complex. The solution is titrated with 0.05002 M EDTA, requiring 24.82 mL to reach the end point as signaled by the disappearance of the Fe3+–salicylic acid complex’s red color. The solution is buffered to a pH of 5 and 50.00 mL of 0.05002 M EDTA is added. After ensuring that the formation of the Al3+–EDTA complex is complete, the excess EDTA is back titrated with 0.04109 M Fe3+, requiring 17.84 mL to reach the end point as signaled by the reappearance of the red-colored Fe3+–salicylic acid complex. Report the molar concentrations of Fe3+ and Al3+ in the sample. Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: Appendix 10: Solubility Products Appendix 11: Acid Dissociation Constants Appendix 12: Metal-Ligand Formation Constants Appendix 13: Standard State Reduction Potentials 41. Prada and colleagues described an indirect method for determining sulfate in natural samples, such as seawater and industrial effluents [Prada, S.; Guekezian, M.; Suarez-Iha, M. E. V. Anal. Chim. Acta 1996, 329, 197–202]. The method consists of three steps: precipitating the sulfate as PbSO4; dissolving the PbSO4 in an ammonical solution of excess EDTA to form the soluble PbY2– complex; and titrating the excess EDTA with a standard solution of Mg2+. The following reactions and equilibrium constants are known $\text{PbSO}_4(s) \rightleftharpoons \text{Pb}^{2+}(aq) + \text{SO}_4^{2-}(aq) \quad K_\text{sp} = 1.6 \times 10^{-8} \nonumber$ $\text{Pb}^{2+}(aq) + \text{Y}^{4-}(aq) \rightleftharpoons \text{PbY}^{2-}(aq) \quad K_\text{f} = 1.1 \times 10^{18} \nonumber$ $\text{Mg}^{2+}(aq) + \text{Y}^{4-}(aq) \rightleftharpoons \text{MgY}^{2-}(aq) \quad K_\text{f} = 4.9 \times 10^{8} \nonumber$ $\text{Zn}^{2+}(aq) + \text{Y}^{4-}(aq) \rightleftharpoons \text{ZnY}^{2-}(aq) \quad K_\text{f} = 3.2 \times 10^{16} \nonumber$ (a) Verify that a precipitate of PbSO4 will dissolve in a solution of Y4–. (b) Sporek proposed a similar method using Zn2+ as a titrant and found that the accuracy frequently was poor [Sporek, K. F. Anal. Chem. 1958, 30, 1030–1032]. One explanation is that Zn2+ might react with the PbY2– complex, forming ZnY2–. Show that this might be a problem when using Zn2+ as a titrant, but that it is not a problem when using Mg2+ as a titrant. Would such a displacement of Pb2+ by Zn2+ lead to the reporting of too much or too little sulfate? (c) In a typical analysis, a 25.00-mL sample of an industrial effluent is carried through the procedure using 50.00 mL of 0.05000 M EDTA. Titrating the excess EDTA requires 12.42 mL of 0.1000 M Mg2+. Report the molar concentration of $\text{SO}_4^{2-}$ in the sample of effluent. 42. Table 9.3.1 provides values for the fraction of EDTA present as Y4–, $\alpha_{\text{Y}^{4-}}$. 
Values of $\alpha_{\text{Y}^{4-}}$ are calculated using the equation

$\alpha_{\text{Y}^{4-}} = \frac{[\text{Y}^{4-}]}{C_\text{EDTA}} \nonumber$

where [Y4-] is the concentration of the fully deprotonated EDTA and CEDTA is the total concentration of EDTA in all of its forms

$C_\text{EDTA} = [\text{H}_6\text{Y}^{2+}]+[\text{H}_5\text{Y}^{+}]+[\text{H}_4\text{Y}]+ [\text{H}_3\text{Y}^{-}] + [\text{H}_2\text{Y}^{2-}] + [\text{HY}^{3-}] + [\text{Y}^{4-}] \nonumber$

Use the following acid dissociation reactions

$\text{H}_6\text{Y}^{2+} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_5\text{Y}^{+}(aq) \quad K_\text{a1} \nonumber$ $\text{H}_5\text{Y}^{+} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_4\text{Y}(aq) \quad K_\text{a2} \nonumber$ $\text{H}_4\text{Y} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_3\text{Y}^{-}(aq) \quad K_\text{a3} \nonumber$ $\text{H}_3\text{Y}^{-} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_2\text{Y}^{2-}(aq) \quad K_\text{a4} \nonumber$ $\text{H}_2\text{Y}^{2-} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{HY}^{3-}(aq) \quad K_\text{a5} \nonumber$ $\text{HY}^{3-} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{Y}^{4-}(aq) \quad K_\text{a6} \nonumber$

to show that

$\alpha_{\text{Y}^{4-}} = \frac{K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4}K_\text{a5}K_\text{a6}}{d} \nonumber$

where

$d = [\text{H}_3\text{O}^+]^6 + [\text{H}_3\text{O}^+]^5K_\text{a1} + [\text{H}_3\text{O}^+]^4K_\text{a1}K_\text{a2} + [\text{H}_3\text{O}^+]^3K_\text{a1}K_\text{a2}K_\text{a3} + [\text{H}_3\text{O}^+]^2K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4} + [\text{H}_3\text{O}^+]K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4}K_\text{a5} + K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4}K_\text{a5}K_\text{a6} \nonumber$

43. Calculate or sketch titration curves for the following redox titration reactions at 25oC. Assume the analyte initially is present at a concentration of 0.0100 M and that a 25.0-mL sample is taken for analysis. The titrant, which is the bold species in each reaction, has a concentration of 0.0100 M. (a) V2+(aq) + Ce4+(aq) $\rightarrow$ V3+(aq) + Ce3+(aq) (b) Sn2+(aq) + 2Ce4+(aq) $\rightarrow$ Sn4+(aq) + 2Ce3+(aq) (c) 5Fe2+(aq) + $\mathbf{MnO}_\mathbf{4}^\mathbf{-}$(aq) + 8H+(aq) $\rightarrow$ 5Fe3+(aq) + Mn2+(aq) + 4H2O(l) at a pH of 1 44. What is the equivalence point for each titration in problem 43? 45. Suggest an appropriate indicator for each titration in problem 43. 46. The iron content of an ore is determined by a redox titration that uses K2Cr2O7 as the titrant. A sample of the ore is dissolved in concentrated HCl using Sn2+ to speed its dissolution by reducing Fe3+ to Fe2+. After the sample is dissolved, Fe2+ and any excess Sn2+ are oxidized to Fe3+ and Sn4+ using $\text{MnO}_4^-$. The iron is then carefully reduced to Fe2+ by adding a 2–3 drop excess of Sn2+. A solution of HgCl2 is added and, if a white precipitate of Hg2Cl2 forms, the analysis is continued by titrating with K2Cr2O7. The sample is discarded without completing the analysis if a precipitate of Hg2Cl2 does not form or if a gray precipitate (due to Hg) forms. (a) Explain why the sample is discarded if a white precipitate of Hg2Cl2 does not form or if a gray precipitate forms. (b) Is a determinate error introduced if the analyst forgets to add Sn2+ in the step where the iron ore is dissolved? 
(c) Is a determinate error introduced if the iron is not quantitatively oxidized back to Fe3+ by the $\text{MnO}_4^-$? 47. The amount of Cr3+ in an inorganic salt is determined by a redox titration. A portion of sample that contains approximately 0.25 g of Cr3+ is accurately weighed and dissolved in 50 mL of H2O. The Cr3+ is oxidized to $\text{Cr}_2\text{O}_7^{2-}$ by adding 20 mL of 0.1 M AgNO3, which serves as a catalyst, and 50 mL of 10%w/v (NH4)2S2O8, which serves as the oxidizing agent. After the reaction is complete, the resulting solution is boiled for 20 minutes to destroy the excess $\text{S}_2\text{O}_8^{2-}$, cooled to room temperature, and diluted to 250 mL in a volumetric flask. A 50-mL portion of the resulting solution is transferred to an Erlenmeyer flask, treated with 50 mL of a standard solution of Fe2+, and acidified with 200 mL of 1 M H2SO4, reducing the $\text{Cr}_2\text{O}_7^{2-}$ to Cr3+. The excess Fe2+ is then determined by a back titration with a standard solution of K2Cr2O7 using an appropriate indicator. The results are reported as %w/w Cr3+. (a) There are several places in the procedure where a reagent’s volume is specified (see italicized text). Which of these measurements must be made using a volumetric pipet. (b) Excess peroxydisulfate, $\text{S}_2\text{O}_8^{2-}$ is destroyed by boiling the solution. What is the effect on the reported %w/w Cr3+ if some of the $\text{S}_2\text{O}_8^{2-}$ is not destroyed during this step? (c) Solutions of Fe2+ undergo slow air oxidation to Fe3+. What is the effect on the reported %w/w Cr3+ if the standard solution of Fe2+ is inadvertently allowed to be partially oxidized? 48. The exact concentration of H2O2 in a solution that is nominally 6% w/v H2O2 is determined by a redox titration using $\text{MnO}_4^-$ as the titrant. A 25-mL aliquot of the sample is transferred to a 250-mL volumetric flask and diluted to volume with distilled water. A 25-mL aliquot of the diluted sample is added to an Erlenmeyer flask, diluted with 200 mL of distilled water, and acidified with 20 mL of 25% v/v H2SO4. The resulting solution is titrated with a standard solution of KMnO4 until a faint pink color persists for 30 s. The results are reported as %w/v H2O2. (a) Many commercially available solutions of H2O2 contain an inorganic or an organic stabilizer to prevent the autodecomposition of the peroxide to H2O and O2. What effect does the presence of this stabilizer have on the reported %w/v H2O2 if it also reacts with $\text{MnO}_4^-$? (b) Laboratory distilled water often contains traces of dissolved organic material that may react with $\text{MnO}_4^-$. Describe a simple method to correct for this potential interference. (c) What modifications to the procedure, if any, are needed if the sample has a nominal concentration of 30% w/v H2O2. 49. The amount of iron in a meteorite is determined by a redox titration using KMnO4 as the titrant. A 0.4185-g sample is dissolved in acid and the liberated Fe3+ quantitatively reduced to Fe2+ using a Walden reductor. Titrating with 0.02500 M KMnO4 requires 41.27 mL to reach the end point. Determine the %w/w Fe2O3 in the sample of meteorite. 50. Under basic conditions, $\text{MnO}_4^-$ is used as a titrant for the analysis of Mn2+, with both the analyte and the titrant forming MnO2. In the analysis of a mineral sample for manganese, a 0.5165-g sample is dissolved and the manganese reduced to Mn2+. The solution is made basic and titrated with 0.03358 M KMnO4, requiring 34.88 mL to reach the end point. 
Calculate the %w/w Mn in the mineral sample. Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: Appendix 10: Solubility Products Appendix 11: Acid Dissociation Constants Appendix 12: Metal-Ligand Formation Constants Appendix 13: Standard State Reduction Potentials 51. The amount of uranium in an ore is determined by an indirect redox titration. The analysis is accomplished by dissolving the ore in sulfuric acid and reducing $\text{UO}_2^+$ to U4+ with a Walden reductor. The solution is treated with an excess of Fe3+, forming Fe2+ and U6+. The Fe2+ is titrated with a standard solution of K2Cr2O7. In a typical analysis a 0.315-g sample of ore is passed through the Walden reductor and treated with 50.00 mL of 0.0125 M Fe3+. Back titrating with 0.00987 M K2Cr2O7 requires 10.52 mL. What is the %w/w U in the sample? 52. The thickness of the chromium plate on an auto fender is determined by dissolving a 30.0-cm2 section in acid and oxidizing Cr3+ to $\text{Cr}_2\text{O}_7^{2-}$ with peroxydisulfate. After removing excess peroxydisulfate by boiling, 500.0 mg of Fe(NH4)2(SO4)2•6H2O is added, reducing the $\text{Cr}_2\text{O}_7^{2-}$ to Cr3+. The excess Fe2+ is back titrated, requiring 18.29 mL of 0.00389 M K2Cr2O7 to reach the end point. Determine the average thickness of the chromium plate given that the density of Cr is 7.20 g/cm3. 53. The concentration of CO in air is determined by passing a known volume of air through a tube that contains I2O5, forming CO2 and I2. The I2 is removed from the tube by distilling it into a solution that contains an excess of KI, producing $\text{I}_3^-$. The $\text{I}_3^-$ is titrated with a standard solution of Na2S2O3. In a typical analysis a 4.79-L sample of air is sampled as described here, requiring 7.17 mL of 0.00329 M Na2S2O3 to reach the end point. If the air has a density of $1.23 \times 10^{-3}$ g/mL, determine the parts per million CO in the air. 54. The level of dissolved oxygen in a water sample is determined by the Winkler method. In a typical analysis a 100.0-mL sample is made basic and treated with a solution of MnSO4, resulting in the formation of MnO2. An excess of KI is added and the solution is acidified, resulting in the formation of Mn2+ and I2. The liberated I2 is titrated with a solution of 0.00870 M Na2S2O3, requiring 8.90 mL to reach the starch indicator end point. Calculate the concentration of dissolved oxygen as parts per million O2. 55. Calculate or sketch the titration curve for the titration of 50.0 mL of 0.0250 M KI with 0.0500 M AgNO3. Prepare separate titration curves using pAg and pI on the y-axis. 56. Calculate or sketch the titration curve for the titration of a 25.0 mL mixture of 0.0500 M KI and 0.0500 M KSCN using 0.0500 M AgNO3 as the titrant. 57. The analysis for Cl using the Volhard method requires a back titration. A known amount of AgNO3 is added, precipitating AgCl. The unreacted Ag+ is determined by back titrating with KSCN. There is a complication, however, because AgCl is more soluble than AgSCN. (a) Why do the relative solubilities of AgCl and AgSCN lead to a titration error? (b) Is the resulting titration error a positive or a negative determinate error? (c) How might you modify the procedure to eliminate this source of determinate error? (d) Is this source of determinate error of concern when using the Volhard method to determine Br? 58. 
Voncina and co-workers suggest that a precipitation titration can be monitored by measuring pH as a function of the volume of titrant if the titrant is a weak base [VonČina, D. B.; DobČnik, D.; GomiŠČek, S. Anal. Chim. Acta 1992, 263, 147–153]. For example, when titrating Pb2+ with K2CrO4 the solution that contains the analyte initially is acidified to a pH of 3.50 using HNO3. Before the equivalence point the concentration of $\text{CrO}_4^{2-}$ is controlled by the solubility product of PbCrO4. After the equivalence point the concentration of $\text{CrO}_4^{2-}$ is determined by the amount of excess titrant. Considering the reactions that control the concentration of $\text{CrO}_4^{2-}$, sketch the expected titration curve of pH versus volume of titrant. 59. A 0.5131-g sample that contains KBr is dissolved in 50 mL of distilled water. Titrating with 0.04614 M AgNO3 requires 25.13 mL to reach the Mohr end point. A blank titration requires 0.65 mL to reach the same end point. Report the %w/w KBr in the sample. 60. A 0.1093-g sample of impure Na2CO3 is analyzed by the Volhard method. After adding 50.00 mL of 0.06911 M AgNO3, the sample is back titrated with 0.05781 M KSCN, requiring 27.36 mL to reach the end point. Report the purity of the Na2CO3 sample. 61. A 0.1036-g sample that contains only BaCl2 and NaCl is dissolved in 50 mL of distilled water. Titrating with 0.07916 M AgNO3 requires 19.46 mL to reach the Fajans end point. Report the %w/w BaCl2 in the sample. Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: Appendix 10: Solubility Products Appendix 11: Acid Dissociation Constants Appendix 12: Metal-Ligand Formation Constants Appendix 13: Standard State Reduction Potentials
The following set of experiments introduce students to the applications of titrimetry. Experiments are grouped into four categories based on the type of reaction (acid–base, complexation, redox, and precipitation). Additional experiments emphasizing potentiometric electrodes are found in Chapter 11. Acid–base Titrimetry • Boiani, J. A. “The Gran Plot Analysis of an Acid Mixture,” J. Chem. Educ. 1986, 63, 724–726. • Castillo, C. A.; Jaramillo, A. “An Alternative Procedure for Titration Curves of a Mixture of Acids of Different Strengths,” J. Chem. Educ. 1989, 66, 341. • Clark, R. W.; White, G. D.; Bonicamp, J. M.; Watts, E. D. “From Titration Data to Buffer Capacities: A Computer Experiment for the Chemistry Lab or Lecture,” J. Chem. Educ. 1995, 72, 746–750. • Clay, J. T.; Walters, E. A.; Brabson, G. D. “A Dibasic Acid Titration for the Physical Chemistry Laboratory” J. Chem. Educ. 1995, 72, 665–667. • Crossno, S. K; Kalbus, L. H.; Kalbus, G. E. “Determinations of Carbon Dioxide by Titration,” J. Chem. Educ. 1996, 73, 175–176. • Flowers, P. A. “Potentiometric Measurement of Transition Ranges and Titration Errors for Acid/Base Indicators,” J. Chem. Educ. 1997, 74, 846–847. • Fuchsam, W. H.; Garg, Sandhya “Acid Content of Beverages,” J. Chem. Educ. 1990, 67, 67–68 • Graham. R.C.; DePew, S. “Determination of Ammonia in Household Cleaners,” J. Chem. Educ. 1983, 60, 765–766. • Kalbus, L. H.; Petrucci, R. H.; Forman, J. E.; Kalbus, G. E. “Titration of Chromate-Dichromate Mixtures,” J. Chem. Educ. 1991, 68, 677–678. • Kooser, A. S.; Jenkins, J. L.; Welch, L. E. “Acid–Base Indicators: A New Look at an Old Topic,” J. Chem. Educ. 2001, 78, 1504–1506. • Kraft, A. “The Determination of the pKa of Multiprotic, Weak Acids by Analyzing Potentiometric Acid–Base Titration Data with Difference Plots,” J. Chem. Educ. 2003, 80, 554–559. • Murphy, J. “Determination of Phosphoric Acid in Cola Beverages,” J. Chem. Educ. 1983, 60, 420–421. • Nyasulu, F.; Barlag, R.; Macklin, J. Chem. Educator 2008, 13, 289–294. • Ophardt, C. E. “Acid Rain Analysis by Standard Addition Titration,” J. Chem. Educ. 1985, 62, 257– 258. • Partanen, J. I.; Kärki, M. H. “Determination of the Thermodynamic Dissociation Constant of a Weak Acid by Potentiometric Acid-Base Titration,” J. Chem. Educ. 1994, 71, A120–A122. • Thompson, R. Q. “Identification of Weak Acids and Bases by Titration with Primary Standards,” J. Chem. Educ. 1988, 65, 179–180. • Tucker, S. A.; Amszi, V. L.; Acree, Jr. W. E. “Studying Acid-Base Equilibria in Two-Phase Solvent Media,” J. Chem. Educ. 1993, 70, 80–82. • Tucker, S. A.; Acree, Jr., W. E. “A Student-Designed Analytical Laboratory Method,” J. Chem. Educ. 1994, 71, 71–74. • Werner, J. A.; Werner, T. C. “Multifunctional Base Unknowns in the Introductory Analytical Chemistry Lab,” J. Chem. Educ. 1991, 68, 600–601. Complexation Titrimetry • Ceretti, H.; Hughes, E. A.; Zalts, A. “The Softening of Hard Water and Complexometric Titrations,” J. Chem. Educ. 1999, 76, 1420–1421. • Fulton, R.; Ross, M.; Schroeder, K. “Spectrophotometric Titration of a Mixture of Calcium and Magnesium,” J. Chem. Educ. 1986, 63, 721–723. • Novick, S. G. “Complexometric Titration of Zinc,” J. Chem. Educ. 1997, 74, 1463. • Olsen, K. G.; Ulicny, L. J. “Reduction of Calcium Concentrations by the Brita Water Filtration System: A Practical Experiment in Titrimetry and Atomic Absorption Spectroscopy,” J. Chem. Educ. 2001, 78, 941. • Smith, R. L.; Popham, R. E. 
“The Quantitative Resolution of a Mixture of Group II Metal Ions by Thermometric Titration with EDTA,” J. Chem. Educ. 1983, 60, 1076–1077. • Yappert, M. C.; DuPré, D. B. “Complexometric Titrations: Competition of Complexing Agents in the Determination of Water Hardness with EDTA,” J. Chem. Educ. 1997, 74, 1422–1423. Redox Titrimetry • Guenther, W. B. “Supertitrations: High-Precision Methods,” J. Chem. Educ. 1988, 65, 1097–1098. • Haddad, P. “Vitamin C Content of Commercial Orange Juices,” J. Chem. Educ. 1977, 54, 192–193. • Harris, D. C.; Hills, M. E.; Hewston, T. A. “Preparation, Iodometric Analysis and Classroom Demonstration of Superconductivity in YBa2Cu3O8–x,” J. Chem. Educ. 1987, 64, 847–850. • Lau, O.-W.; Luk, S.-F.; Cheng, N. L. N.; Woo, H.-O. “Determination of Free Lime in Clinker and Cement by Iodometry,” J. Chem. Educ. 2001, 78, 1671–1673. • Michalowski, T.; Asuero, A. G.; Ponikvar-Svet, M.; Michalowska-Kaczmarczyk, A. M.; Wybraniec, S. “Some Examples of Redox Back Titrations,” Chem. Educator 2014, 19, 217–222. • Phinyocheep, P.; Tang, I. M. “Determination of the Hole Concentration (Copper Valency) in the High Tc Superconductors,” J. Chem. Educ. 1994, 71, A115–A118. • Powell, J. R.; Tucker, S. A.; Acree, Jr., W. E.; Sees, J. A.; Hall, L. M. “A Student-Designed Potentiometric Titration: Quantitative Determination of Iron(II) by Caro’s Acid Titration,” J. Chem. Educ. 1996, 73, 984–986 Precipitation Titrimetry • Ueno, K.; Kina, K. “Colloid Titration - A Rapid Method for the Determination of Charged Colloid,” J. Chem . Educ. 1985, 62, 627–629. For a general history of titrimetry, see the following sources. • A History of Analytical Chemistry; Laitinen, H. A.; Ewing, G. W., Eds.; The Division of Analytical Chemistry of the American Chemical Society: Washington, D. C., 1977, pp. 52–93. • Kolthoff, I. M. “Analytical Chemistry in the USA in the First Quarter of This Century,” Anal. Chem. 1994, 66, 241A–249A. The use of weight instead of volume as a signal for titrimetry is reviewed in the following paper. • Kratochvil, B.; Maitra, C. “Weight Titrations: Past and Present,” Am. Lab. 1983, January, 22–29. A more thorough discussion of non-aqueous titrations, with numerous practical examples, is provided in the following text. • Fritz, J. S. Acid-Base Titrations in Nonaqueous Solvents; Allyn and Bacon, Boston; 1973. The sources listed below provides more details on the how potentiometric titration data may be used to calculate equilibrium constants. • BabiĆ, S.; Horvat, A. J. M.; PavloviĆ, D. M.; KaŠtelan-Macan, M. “Determination of pKa values of active pharmaceutical ingredients,” Trends Anal. Chem. 2007, 26, 1043–1061. • Meloun, M.; Havel, J.; Högfeldt, E. Computation of Solution Equilibria, Ellis Horwood Limited:Chichester, England; 1988. The following provides additional information about Gran plots. • Michalowski, T.; Kupiec, K.; Rymanowski, M. Anal. Chim. Acta 2008, 606, 172–183. • Schwartz, L. M. “Advances in Acid-Base Gran Plot Methodology,” J. Chem. Educ. 1987, 64, 947–950. • Schwartz, L. M. “Uncertainty of a Titration Equivalence Point,” J. Chem. Educ. 1992, 69, 879–883. The following provide additional information about calculating or sketching titration curves. • Barnum, D. “Predicting Acid–Base Titration Curves without Calculations,” J. Chem. Educ. 1999, 76, 938–942. • de Levie, R. “A simple expression for the redox titration curve,” J. Electroanal. Chem. 1992, 323, 347–355. • Gonzálex-Gómez, D.; Rogríguez, D. A.; Cañada-Cañada, F.; Jeong, J. S. 
“A Comprehensive Application to Assist in Acid–Base Titration Self-Learning: An Approach for High School and Undergraduate Students,” J. Chem. Educ. 2015, 92, 855–863. • King, D. W. “A General Approach for Calculating Speciation and Posing Capacity of Redox Systems with Multiple Oxidation States: Application to Redox Titrations and the Generation of pe–pH,” J. Chem. Educ. 2002, 79, 1135–1140. • Smith, G. C.; Hossain, M. M; MacCarthy, P. “3-D Surface Visualization of pH Titration Topos: Equivalence Cliffs, Dilution Ramps, and Buffer Plateaus,” J. Chem. Educ. 2014, 91, 225–231. For a complete discussion of the application of complexation titrimetry see the texts and articles listed below. • Pribil, R. Applied Complexometry, Pergamon Press: Oxford, 1982. • Reilly, C. N.; Schmid, R. W. “Principles of End Point Detection in Chelometric Titrations Using Metal- lochromic Indicators: Characterization of End Point Sharpness,” Anal. Chem. 1959, 31, 887–897. • Ringbom, A. Complexation in Analytical Chemistry, John Wiley and Sons, Inc.: New York, 1963. • Schwarzenbach, G. Complexometric Titrations, Methuen & Co. Ltd: London, 1957. A good source for additional examples of the application of all forms of titrimetry is • Vogel’s Textbook of Quantitative Inorganic Analysis, Longman: London, 4th Ed., 1981 9.08: Chapter Summary and Key Terms Chapter Summary In a titrimetric method of analysis, the volume of titrant that reacts stoichiometrically with a titrand provides quantitative information about the amount of analyte in a sample. The volume of titrant that corresponds to this stoichiometric reaction is called the equivalence point. Experimentally we determine the titration’s end point using an indicator that changes color near the equivalence point. Alternatively, we can locate the end point by monitoring a property of the titrand’s solution—absorbance, potential, and temperature are typical examples—that changes as the titration progresses. In either case, an accurate result requires that the end point closely match the equivalence point. Knowing the shape of a titration curve is critical to evaluating the feasibility of a titrimetric method. Many titrations are direct, in which the analyte participates in the titration as the titrand or the titrant. Other titration strategies are possible when a direct reaction between the analyte and titrant is not feasible. In a back titration a reagent is added in excess to a solution that contains the analyte. When the reaction between the reagent and the analyte is complete, the amount of excess reagent is determined by a titration. In a displacement titration the analyte displaces a reagent, usually from a complex, and the amount of displaced reagent is determined by an appropriate titration. Titrimetric methods have been developed using acid–base, complexation, oxidation–reduction, and precipitation reactions. Acid–base titrations use a strong acid or a strong base as a titrant. The most common titrant for a complexation titration is EDTA. Because of their stability against air oxidation, most redox titrations use an oxidizing agent as a titrant. Titrations with reducing agents also are possible. Precipitation titrations often involve Ag+ as either the analyte or titrant. 
Key Terms acid–base titration, acidity, alkalinity, argentometric titration, asymmetric equivalence point, auxiliary complexing agent, auxiliary oxidizing agent, auxiliary reducing agent, back titration, buret, complexation titration, conditional formation constant, direct titration, displacement titration, end point, equivalence point, Fajans method, formal potential, Gran plot, indicator, Jones reductor, Kjeldahl analysis, leveling, metallochromic indicator, Mohr method, potentiometric titration, precipitation titration, redox indicator, redox titration, spectrophotometric titration, symmetric equivalence point, thermometric titration, titrand, titrant, titration curve, titration error, titrimetry, Volhard method, Walden reductor
An early example of a colorimetric analysis is Nessler’s method for ammonia, which was introduced in 1856. Nessler found that adding an alkaline solution of HgI2 and KI to a dilute solution of ammonia produced a yellow-to-reddish brown colloid, in which the colloid’s color depended on the concentration of ammonia. By visually comparing the color of a sample to the colors of a series of standards, Nessler was able to determine the concentration of ammonia. Colorimetry, in which a sample absorbs visible light, is one example of a spectroscopic method of analysis. At the end of the nineteenth century, spectroscopy was limited to the absorption, emission, and scattering of visible, ultraviolet, and infrared electromagnetic radiation. Since then, spectroscopy has expanded to include other forms of electromagnetic radiation—such as X-rays, microwaves, and radio waves—and other energetic particles—such as electrons and ions. • 10.1: Overview of Spectroscopy The focus of this chapter is on the interaction of ultraviolet, visible, and infrared radiation with matter. Because these techniques use optical materials to disperse and focus the radiation, they often are identified as optical spectroscopies. For convenience we will use the simpler term spectroscopy in place of optical spectroscopy; however, you should understand we will consider only a limited piece of what is a much broader area of analytical techniques. • 10.2: Spectroscopy Based on Absorption In absorption spectroscopy a beam of electromagnetic radiation passes through a sample. Much of the radiation passes through the sample without a loss in intensity. At selected wavelengths, however, the radiation’s intensity is attenuated. This process of attenuation is called absorption. • 10.3: UV/Vis and IR Spectroscopy Earlier we examined Nessler’s method for matching the color of a sample to the color of a standard. Matching colors is labor intensive for the analyst and, not surprisingly, spectroscopic methods of analysis were slow to find favor. With the introduction of photoelectric transducers for ultraviolet and visible radiation, and thermocouples for infrared radiation, modern instrumentation for absorption spectroscopy routinely became available in the 1940s—further progress has been rapid ever since. • 10.4: Atomic Absorption Spectroscopy Gustav Kirchhoff and Robert Bunsen first used atomic absorption in 1859 and 1860 to identify atoms in flames and hot gases. Although atomic emission continued to develop as an analytical technique, progress languished for almost a century before the work of A. C. Walsh and C. T. J. Alkemade in 1955. Commercial instruments were in place by the early 1960s, and the importance of atomic absorption as an analytical technique soon was evident. • 10.5: Emission Spectroscopy An analyte in an excited state possesses an energy, \(E_2\), that is greater than its energy when it is in a lower energy state, \(E_1\). When the analyte returns to its lower energy state, the excess energy, \(\Delta E = E_2 - E_1\), is released as a photon, a process called emission. • 10.6: Photoluminescent Spectroscopy The release of a photon following thermal excitation is called emission and that following the absorption of a photon is called photoluminescence, which is divided into two categories: fluorescence and phosphorescence. • 10.7: Atomic Emission Spectroscopy The focus of this section is on the emission of ultraviolet and visible radiation following the thermal excitation of atoms.
Atomic emission occurs when a valence electron in a higher energy atomic orbital returns to a lower energy atomic orbital. • 10.8: Spectroscopy Based on Scattering The blue color of the sky during the day and the red color of the sun at sunset are the result of light scattered by small particles of dust, molecules of water, and other gases in the atmosphere. The earliest quantitative applications of scattering, which date from the early 1900s, used the elastic scattering of light by colloidal suspensions to determine the concentration of colloidal particles. • 10.9: Problems End-of-chapter problems to test your understanding of topics in this chapter. • 10.10: Additional Resources A compendium of resources to accompany topics in this chapter. • 10.11: Chapter Summary and Key Terms Summary of chapter's main topics and a list of key terms introduced in this chapter. 10: Spectroscopic Methods The focus of this chapter is on the interaction of ultraviolet, visible, and infrared radiation with matter. Because these techniques use optical materials to disperse and focus the radiation, they often are identified as optical spectroscopies. For convenience we will use the simpler term spectroscopy in place of optical spectroscopy; however, you should understand we will consider only a limited piece of what is a much broader area of analytical techniques. Despite the difference in instrumentation, all spectroscopic techniques share several common features. Before we consider individual examples in greater detail, let’s take a moment to consider some of these similarities. As you work through the chapter, this overview will help you focus on the similarities between different spectroscopic methods of analysis. You will find it easier to understand a new analytical method when you can see its relationship to other similar methods. What is Electromagnetic Radiation? Electromagnetic radiation—light—is a form of energy whose behavior is described by the properties of both waves and particles. Some properties of electromagnetic radiation, such as its refraction when it passes from one medium to another (Figure 10.1.1), are explained best when we describe light as a wave. Other properties, such as absorption and emission, are better described by treating light as a particle. The exact nature of electromagnetic radiation remains unclear, as it has since the development of quantum mechanics in the first quarter of the 20th century [Home, D.; Gribbin, J. New Scientist 1991, 2 Nov. 30–33]. Nevertheless, this dual model of wave and particle behavior provides a useful description for electromagnetic radiation. Wave Properties of Electromagnetic Radiation Electromagnetic radiation consists of oscillating electric and magnetic fields that propagate through space along a linear path and with a constant velocity. In a vacuum, electromagnetic radiation travels at the speed of light, c, which is $2.99792 \times 10^8$ m/s. When electromagnetic radiation moves through a medium other than a vacuum, its velocity, v, is less than the speed of light in a vacuum. For radiation traveling through air, the difference between v and c is sufficiently small (<0.1%) that the speed of light to three significant figures, $3.00 \times 10^8$ m/s, is accurate enough for most purposes. The oscillations in the electric field and the magnetic field are perpendicular to each other and to the direction of the wave’s propagation.
Figure 10.1.2 shows an example of plane-polarized electromagnetic radiation, which consists of a single oscillating electric field and a single oscillating magnetic field. An electromagnetic wave is characterized by several fundamental properties, including its velocity, amplitude, frequency, phase angle, polarization, and direction of propagation [Ball, D. W. Spectroscopy 1994, 9(5), 24–25]. For example, the amplitude of the oscillating electric field at any point along the propagating wave is $A_{t}=A_{e} \sin (2 \pi \nu t+\Phi) \nonumber$ where At is the magnitude of the electric field at time t, Ae is the electric field’s maximum amplitude, $\nu$ is the wave’s frequency—the number of oscillations in the electric field per unit time—and $\Phi$ is a phase angle that accounts for the fact that At need not have a value of zero at t = 0. The identical equation for the magnetic field is $A_{t}=A_{m} \sin (2 \pi \nu t+\Phi) \nonumber$ where Am is the magnetic field’s maximum amplitude. Other properties also are useful for characterizing the wave behavior of electromagnetic radiation. The wavelength, $\lambda$, is defined as the distance between successive maxima (see Figure 10.1.2). For ultraviolet and visible electromagnetic radiation the wavelength usually is expressed in nanometers (1 nm = 10–9 m), and for infrared radiation it is expressed in microns (1 μm = 10–6 m). The relationship between wavelength and frequency is $\lambda = \frac {c} {\nu} \nonumber$ Another useful unit is the wavenumber, $\overline{\nu}$, which is the reciprocal of wavelength $\overline{\nu} = \frac {1} {\lambda} \nonumber$ Wavenumbers frequently are used to characterize infrared radiation, with the units given in cm–1. When electromagnetic radiation moves between different media—for example, when it moves from air into water—its frequency, $\nu$, remains constant. Because its velocity depends upon the medium in which it is traveling, the electromagnetic radiation’s wavelength, $\lambda$, changes. If we replace the speed of light in a vacuum, c, with its speed in the medium, $v$, then the wavelength is $\lambda = \frac {v} {\nu} \nonumber$ This change in wavelength as light passes between two media explains the refraction of electromagnetic radiation shown in Figure 10.1.1. Example 10.1.1 In 1817, Josef Fraunhofer studied the spectrum of solar radiation, observing a continuous spectrum with numerous dark lines. Fraunhofer labeled the most prominent of the dark lines with letters. In 1859, Gustav Kirchhoff showed that the D line in the sun’s spectrum was due to the absorption of solar radiation by sodium atoms. The wavelength of the sodium D line is 589 nm. What are the frequency and the wavenumber for this line? Solution The frequency and wavenumber of the sodium D line are $\nu=\frac{c}{\lambda}=\frac{3.00 \times 10^{8} \ \mathrm{m} / \mathrm{s}}{589 \times 10^{-9} \ \mathrm{m}}=5.09 \times 10^{14} \ \mathrm{s}^{-1} \nonumber$ $\overline{\nu}=\frac{1}{\lambda}=\frac{1}{589 \times 10^{-9} \ \mathrm{m}} \times \frac{1 \ \mathrm{m}}{100 \ \mathrm{cm}}=1.70 \times 10^{4} \ \mathrm{cm}^{-1} \nonumber$ Exercise 10.1.1 Another historically important series of spectral lines is the Balmer series of emission lines from hydrogen. One of its lines has a wavelength of 656.3 nm. What are the frequency and the wavenumber for this line?
Answer The frequency and wavenumber for the line are $\nu=\frac{c}{\lambda}=\frac{3.00 \times 10^{8} \ \mathrm{m} / \mathrm{s}}{656.3 \times 10^{-9} \ \mathrm{m}}=4.57 \times 10^{14} \ \mathrm{s}^{-1} \nonumber$ $\overline{\nu}=\frac{1}{\lambda}=\frac{1}{656.3 \times 10^{-9} \ \mathrm{m}} \times \frac{1 \ \mathrm{m}}{100 \ \mathrm{cm}}=1.524 \times 10^{4} \ \mathrm{cm}^{-1} \nonumber$ Particle Properties of Electromagnetic Radiation When matter absorbs electromagnetic radiation it undergoes a change in energy. The interaction between matter and electromagnetic radiation is easiest to understand if we assume that radiation consists of a beam of energetic particles called photons. When a photon is absorbed by a sample it is “destroyed” and its energy is acquired by the sample [Ball, D. W. Spectroscopy 1994, 9(6), 20–21]. The energy of a photon, in joules, is related to its frequency, wavelength, and wavenumber by the following equalities $E=h \nu=\frac{h c}{\lambda}=h c \overline{\nu} \nonumber$ where h is Planck’s constant, which has a value of $6.626 \times 10^{-34}$ Js. Example 10.1.2 What is the energy of a photon from the sodium D line at 589 nm? Solution The photon’s energy is $E=\frac{h c}{\lambda}=\frac{\left(6.626 \times 10^{-34} \ \mathrm{Js}\right)\left(3.00 \times 10^{8} \ \mathrm{m} / \mathrm{s}\right)}{589 \times 10^{-9} \ \mathrm{m}}=3.37 \times 10^{-19} \ \mathrm{J} \nonumber$ Exercise 10.1.2 What is the energy of a photon for the Balmer line at a wavelength of 656.3 nm? Answer The photon’s energy is $E=\frac{h c}{\lambda}=\frac{\left(6.626 \times 10^{-34} \ \mathrm{Js}\right)\left(3.00 \times 10^{8} \ \mathrm{m} / \mathrm{s}\right)}{656.3 \times 10^{-9} \ \mathrm{m}}=3.03 \times 10^{-19} \ \mathrm{J} \nonumber$ The Electromagnetic Spectrum The frequency and the wavelength of electromagnetic radiation vary over many orders of magnitude. For convenience, we divide electromagnetic radiation into different regions—the electromagnetic spectrum—based on the type of atomic or molecular transitions that give rise to the absorption or emission of photons (Figure 10.1.3). The boundaries between the regions of the electromagnetic spectrum are not rigid and overlap between spectral regions is possible. Photons as a Signal Source In the previous section we defined several characteristic properties of electromagnetic radiation, including its energy, velocity, amplitude, frequency, phase angle, polarization, and direction of propagation. A spectroscopic measurement is possible only if the photon’s interaction with the sample leads to a change in one or more of these characteristic properties. We will divide spectroscopy into two broad classes of techniques. In one class of techniques there is a transfer of energy between the photon and the sample. Table 10.1.1 provides a list of several representative examples. Table 10.1.1.
Examples of Spectroscopic Techniques That Involve an Exchange of Energy Between a Photon and the Sample type of energy transfer region of electromagnetic spectrum spectroscopic technique absorption $\gamma$-ray Mossbauer spectroscopy X-ray X-ray absorption spectroscopy UV/Vis UV/Vis spectroscopy IR infrared spectroscopy microwave raman spectroscopy radio wave electron spin resonance nuclear magnetic resonance emission (thermal excitation) UV/Vis atomic emission spectroscopy photoluminescence X-ray X-ray fluorescence UV/Vis fluorescence spectroscopy phosphorescence spectroscopy atomic fluorescence spectroscopy chemiluminescence UV/Vis chemiluminescence spectroscopy techniques discussed in this chapter are shown in italics In absorption spectroscopy a photon is absorbed by an atom or molecule, which undergoes a transition from a lower-energy state to a higher-energy, or excited state (Figure 10.1.4). The type of transition depends on the photon’s energy. The electromagnetic spectrum in Figure 10.1.3, for example, shows that absorbing a photon of visible light promotes one of the atom’s or molecule’s valence electrons to a higher-energy level. When a molecule absorbs infrared radiation, on the other hand, one of its chemical bonds experiences a change in vibrational energy. When a sample absorbs electromagnetic radiation, the number of photons that pass through it decreases. The measurement of this decrease in photons, which we call absorbance, is a useful analytical signal. Note that each energy level in Figure 10.1.4 has a well-defined value because each is quantized. Absorption occurs only when the photon’s energy, $h \nu$, matches the difference in energy, $\Delta E$, between two energy levels. A plot of absorbance as a function of the photon’s energy is called an absorbance spectrum. Figure 10.1.5, for example, shows the absorbance spectrum of cranberry juice. When an atom or molecule in an excited state returns to a lower energy state, the excess energy often is released as a photon, a process we call emission (Figure 10.1.4). There are several ways in which an atom or a molecule may end up in an excited state, including thermal energy, absorption of a photon, or as the result of a chemical reaction. Emission following the absorption of a photon is also called photoluminescence, and that following a chemical reaction is called chemiluminescence. A typical emission spectrum is shown in Figure 10.1.6. Molecules also can release energy in the form of heat. We will return to this point later in the chapter. In the second broad class of spectroscopic techniques, the electromagnetic radiation undergoes a change in amplitude, phase angle, polarization, or direction of propagation as a result of its refraction, reflection, scattering, diffraction, or dispersion by the sample. Several representative spectroscopic techniques are listed in Table 10.1.2. Table 10.1.2.
Examples of Spectroscopic Techniques That Do Not Involve an Exchange of Energy Between a Photon and the Sample (region of electromagnetic spectrum; type of interaction; spectroscopic technique):
• X-ray; diffraction; X-ray diffraction
• UV/Vis; refraction; refractometry
• UV/Vis; scattering; nephelometry, turbidimetry
• UV/Vis; dispersion; optical rotary dispersion
Techniques discussed in this chapter are shown in italics. Basic Components of Spectroscopic Instruments The spectroscopic techniques in Table 10.1.1 and Table 10.1.2 use instruments that share several common basic components, including a source of energy, a means for isolating a narrow range of wavelengths, a detector for measuring the signal, and a signal processor that displays the signal in a form convenient for the analyst. In this section we introduce these basic components. Specific instrument designs are considered in later sections. You will find a more detailed treatment of these components in the additional resources for this chapter. Sources of Energy All forms of spectroscopy require a source of energy. In absorption and scattering spectroscopy this energy is supplied by photons. Emission and photoluminescence spectroscopy use thermal, radiant (photon), or chemical energy to promote the analyte to a suitable excited state. Sources of Electromagnetic Radiation. A source of electromagnetic radiation must provide an output that is both intense and stable. Sources of electromagnetic radiation are classified as either continuum or line sources. A continuum source emits radiation over a broad range of wavelengths, with a relatively smooth variation in intensity (Figure 10.1.7). A line source, on the other hand, emits radiation at selected wavelengths (Figure 10.1.8). Table 10.1.3 provides a list of the most common sources of electromagnetic radiation. Table 10.1.3. Common Sources of Electromagnetic Radiation (source; wavelength region; useful for...):
• H2 and D2 lamp; continuum source from 160–380 nm; molecular absorption
• tungsten lamp; continuum source from 320–2400 nm; molecular absorption
• Xe arc lamp; continuum source from 200–1000 nm; molecular fluorescence
• Nernst glower; continuum source from 0.4–20 µm; molecular absorption
• globar; continuum source from 1–40 µm; molecular absorption
• nichrome wire; continuum source from 0.75–20 µm; molecular absorption
• hollow cathode lamp; line source in UV/Vis; atomic absorption
• Hg vapor lamp; line source in UV/Vis; molecular fluorescence
• laser; line source in UV/Vis/IR; atomic and molecular absorption, fluorescence, and scattering
Sources of Thermal Radiation. The most common sources of thermal energy are flames and plasmas. A flame source uses the combustion of a fuel and an oxidant to achieve temperatures of 2000–3400 K. Plasmas, which are hot, ionized gases, provide temperatures of 6000–10000 K. Chemical Sources of Energy. Exothermic reactions also may serve as a source of energy. In chemiluminescence the analyte is raised to a higher-energy state by means of a chemical reaction, emitting characteristic radiation when it returns to a lower-energy state. When the chemical reaction results from a biological or enzymatic reaction, the emission of radiation is called bioluminescence. Commercially available “light sticks” and the flash of light from a firefly are examples of chemiluminescence and bioluminescence. Wavelength Selection In Nessler’s original colorimetric method for ammonia, which was described at the beginning of the chapter, the sample and several standard solutions of ammonia are placed in separate tall, flat-bottomed tubes.
As shown in Figure 10.1.9, after adding the reagents and allowing the color to develop, the analyst evaluates the color by passing ambient light through the bottom of the tubes and looking down through the solutions. By matching the sample’s color to that of a standard, the analyst is able to determine the concentration of ammonia in the sample. In Figure 10.1.9 every wavelength of light from the source passes through the sample. This is not a problem if there is only one absorbing species in the sample. If the sample contains two components, then a quantitative analysis using Nessler’s original method is impossible unless the standards contain the second component at the same concentration it has in the sample. To overcome this problem, we want to select a wavelength that only the analyte absorbs. Unfortunately, we cannot isolate a single wavelength of radiation from a continuum source, although we can narrow the range of wavelengths that reach the sample. As seen in Figure 10.1.10, a wavelength selector always passes a narrow band of radiation characterized by a nominal wavelength, an effective bandwidth, and a maximum throughput of radiation. The effective bandwidth is defined as the width of the radiation at half of its maximum throughput. The ideal wavelength selector has a high throughput of radiation and a narrow effective bandwidth. A high throughput is desirable because the more photons that pass through the wavelength selector, the stronger the signal and the smaller the background noise. A narrow effective bandwidth provides a higher resolution, with spectral features separated by more than twice the effective bandwidth being resolved. As shown in Figure 10.1.11, these two features of a wavelength selector often are in opposition. A larger effective bandwidth favors a higher throughput of radiation, but provides less resolution. Decreasing the effective bandwidth improves resolution, but at the cost of a noisier signal [Jiang, S.; Parker, G. A. Am. Lab. 1981, October, 38–43]. For a qualitative analysis, resolution usually is more important than noise and a smaller effective bandwidth is desirable; however, in a quantitative analysis less noise usually is desirable. Wavelength Selection Using Filters. The simplest method for isolating a narrow band of radiation is to use an absorption or interference filter. Absorption filters work by selectively absorbing radiation from a narrow region of the electromagnetic spectrum. Interference filters use constructive and destructive interference to isolate a narrow range of wavelengths. A simple example of an absorption filter is a piece of colored glass. A purple filter, for example, removes the complementary color green from 500–560 nm. Commercially available absorption filters provide effective bandwidths of 30–250 nm, although the throughput at the low end of this range often is only 10% of the source’s emission intensity. Interference filters are more expensive than absorption filters, but have narrower effective bandwidths, typically 10–20 nm, with maximum throughputs of at least 40%. Wavelength Selection Using Monochromators. A filter has one significant limitation—because a filter has a fixed nominal wavelength, if we need to make measurements at two different wavelengths, then we must use two different filters. A monochromator is an alternative method for selecting a narrow band of radiation that also allows us to continuously adjust the band’s nominal wavelength.
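As a rough numerical restatement of the resolution criterion given above (two spectral features are resolved when their separation exceeds roughly twice the effective bandwidth), the following Python sketch may help; the function name and the wavelengths are hypothetical.

```python
# Rule-of-thumb check of whether two spectral features are resolved by a
# wavelength selector, based on the criterion stated in the text.

def resolved(lambda1_nm, lambda2_nm, effective_bandwidth_nm):
    """Return True if the separation exceeds twice the effective bandwidth."""
    return abs(lambda1_nm - lambda2_nm) > 2 * effective_bandwidth_nm

# Two absorption bands 25 nm apart: resolved with a 10-nm effective bandwidth,
# but not with a 15-nm effective bandwidth.
print(resolved(500, 525, 10))   # True
print(resolved(500, 525, 15))   # False
```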
The construction of a typical monochromator is shown in Figure 10.1.12 . Radiation from the source enters the monochromator through an entrance slit. The radiation is collected by a collimating mirror, which reflects a parallel beam of radiation to a diffraction grating. The diffraction grating is an optically reflecting surface with a large number of parallel grooves (see insert to Figure 10.1.12 ). The diffraction grating disperses the radiation and a second mirror focuses the radiation onto a planar surface that contains an exit slit. In some monochromators a prism is used in place of the diffraction grating. Radiation exits the monochromator and passes to the detector. As shown in Figure 10.1.12 , a monochromator converts a polychromatic source of radiation at the entrance slit to a monochromatic source of finite effective bandwidth at the exit slit. The choice of which wavelength exits the monochromator is determined by rotating the diffraction grating. A narrower exit slit provides a smaller effective bandwidth and better resolution than does a wider exit slit, but at the cost of a smaller throughput of radiation. Polychromatic means many colored. Polychromatic radiation contains many different wavelengths of light. Monochromatic means one color, or one wavelength. Although the light exiting a monochromator is not strictly of a single wavelength, its narrow effective bandwidth allows us to think of it as monochromatic. Monochromators are classified as either fixed-wavelength or scanning. In a fixed-wavelength monochromator we manually select the wavelength by rotating the grating. Normally a fixed-wavelength monochromator is used for a quantitative analysis where measurements are made at one or two wavelengths. A scanning monochromator includes a drive mechanism that continuously rotates the grating, which allows successive wavelengths of light to exit from the monochromator. A scanning monochromator is used to acquire spectra, and, when operated in a fixed-wavelength mode, for a quantitative analysis. Interferometers. An interferometer provides an alternative approach for wavelength selection. Instead of filtering or dispersing the electromagnetic radiation, an interferometer allows source radiation of all wavelengths to reach the detector simultaneously (Figure 10.1.13 ). Radiation from the source is focused on a beam splitter that reflects half of the radiation to a fixed mirror and transmits the other half to a moving mirror. The radiation recombines at the beam splitter, where constructive and destructive interference determines, for each wavelength, the intensity of light that reaches the detector. As the moving mirror changes position, the wavelength of light that experiences maximum constructive interference and maximum destructive interference also changes. The signal at the detector shows intensity as a function of the moving mirror’s position, expressed in units of distance or time. The result is called an interferogram or a time domain spectrum. The time domain spectrum is converted mathematically, by a process called a Fourier transform, to a spectrum (a frequency domain spectrum) that shows intensity as a function of the radiation’s energy. The mathematical details of the Fourier transform are beyond the level of this textbook. You can consult the chapter’s additional resources for additional information. In comparison to a monochromator, an interferometer has two significant advantages. 
The first advantage, which is termed Jacquinot’s advantage, is the greater throughput of source radiation. Because an interferometer does not use slits and has fewer optical components from which radiation is scattered and lost, the throughput of radiation reaching the detector is $80-200 \times$ greater than that for a monochromator. The result is less noise. The second advantage, which is called Fellgett’s advantage, is a savings in the time needed to obtain a spectrum. Because the detector monitors all frequencies simultaneously, a spectrum takes approximately one second to record, as compared to 10–15 minutes when using a scanning monochromator. Detectors In Nessler’s original method for determining ammonia (Figure 10.1.9) the analyst’s eye serves as the detector, matching the sample’s color to that of a standard. The human eye, of course, has a poor range—it responds only to visible light—and it is not particularly sensitive or accurate. Modern detectors use a sensitive transducer to convert a signal consisting of photons into an easily measured electrical signal. Ideally the detector’s signal, S, is a linear function of the electromagnetic radiation’s power, P, $S=k P+D \nonumber$ where k is the detector’s sensitivity, and D is the detector’s dark current, or the background current when we prevent the source’s radiation from reaching the detector. There are two broad classes of spectroscopic transducers: thermal transducers and photon transducers. Table 10.1.4 provides several representative examples of each class of transducers. Transducer is a general term that refers to any device that converts a chemical or a physical property into an easily measured electrical signal. The retina in your eye, for example, is a transducer that converts photons into an electrical nerve impulse; your eardrum is a transducer that converts sound waves into a different electrical nerve impulse. Table 10.1.4. Examples of Transducers for Spectroscopy (transducer; class; wavelength range; output signal):
• phototube; photon; 200–1000 nm; current
• photomultiplier; photon; 110–1000 nm; current
• Si photodiode; photon; 250–1100 nm; current
• photoconductor; photon; 750–6000 nm; change in resistance
• photovoltaic cell; photon; 400–5000 nm; current or voltage
• thermocouple; thermal; 0.8–40 µm; voltage
• thermistor; thermal; 0.8–40 µm; change in resistance
• pneumatic; thermal; 0.8–1000 µm; membrane displacement
• pyroelectric; thermal; 0.3–1000 µm; current
Photon Transducers. Phototubes and photomultipliers use a photosensitive surface that absorbs radiation in the ultraviolet, visible, or near IR to produce an electrical current that is proportional to the number of photons reaching the transducer (Figure 10.1.14). Other photon detectors use a semiconductor as the photosensitive surface. When the semiconductor absorbs photons, valence electrons move to the semiconductor’s conduction band, producing a measurable current. One advantage of the Si photodiode is that it is easy to miniaturize. Groups of photodiodes are gathered together in a linear array that contains 64–4096 individual photodiodes. With a width of 25 μm per diode, a linear array of 2048 photodiodes requires only 51.2 mm of linear space. By placing a photodiode array along the monochromator’s focal plane, it is possible to monitor simultaneously an entire range of wavelengths. Thermal Transducers. Infrared photons do not have enough energy to produce a measurable current with a photon transducer. A thermal transducer, therefore, is used for infrared spectroscopy.
The absorption of infrared photons increases a thermal transducer’s temperature, changing one or more of its characteristic properties. A pneumatic transducer, for example, is a small tube of xenon gas with an IR transparent window at one end and a flexible membrane at the other end. Photons enter the tube and are absorbed by a blackened surface, increasing the temperature of the gas. As the temperature inside the tube fluctuates, the gas expands and contracts and the flexible membrane moves in and out. Monitoring the membrane’s displacement produces an electrical signal. Signal Processors A transducer’s electrical signal is sent to a signal processor where it is displayed in a form that is more convenient for the analyst. Examples of signal processors include analog or digital meters, recorders, and computers equipped with digital acquisition boards. A signal processor also is used to calibrate the detector’s response, to amplify the transducer’s signal, to remove noise by filtering, or to mathematically transform the signal. If the retina in your eye and the eardrum in your ear are transducers, then your brain is the signal processor.
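To connect the detector relationship S = kP + D to the measurements described in the next section, the Python sketch below shows how subtracting the dark signal and ratioing the sample and blank readings cancels the sensitivity k, leaving the transmittance and the absorbance. It is a minimal illustration only; the detector readings are hypothetical values.

```python
import math

# Working backward from S = k*P + D: subtract the dark signal D from each
# measurement, then ratio the sample and blank readings so that k cancels.
# This anticipates the transmittance and absorbance definitions that follow.

def transmittance(S_sample, S_blank, S_dark):
    """T = P_T / P_0, with the dark current removed from both measurements."""
    return (S_sample - S_dark) / (S_blank - S_dark)

def absorbance(S_sample, S_blank, S_dark):
    return -math.log10(transmittance(S_sample, S_blank, S_dark))

S_dark, S_blank, S_sample = 0.02, 1.02, 0.52   # arbitrary detector readings
print(transmittance(S_sample, S_blank, S_dark))  # 0.50
print(absorbance(S_sample, S_blank, S_dark))     # ~0.30
```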
In absorption spectroscopy a beam of electromagnetic radiation passes through a sample. Much of the radiation passes through the sample without a loss in intensity. At selected wavelengths, however, the radiation’s intensity is attenuated. This process of attenuation is called absorption. Absorption Spectra There are two general requirements for an analyte’s absorption of electromagnetic radiation. First, there must be a mechanism by which the radiation’s electric field or magnetic field interacts with the analyte. For ultraviolet and visible radiation, absorption of a photon changes the energy of the analyte’s valence electrons. A bond’s vibrational energy is altered by the absorption of infrared radiation. Figure 10.1.3 provides a list of the types of atomic and molecular transitions associated with different types of electromagnetic radiation. The second requirement is that the photon’s energy, $h \nu$, must exactly equal the difference in energy, $\Delta E$, between two of the analyte’s quantized energy states. Figure 10.1.4 shows a simplified view of a photon’s absorption, which is useful because it emphasizes that the photon’s energy must match the difference in energy between a lower-energy state and a higher-energy state. What is missing, however, is information about what types of energy states are involved, which transitions between energy states are likely to occur, and the appearance of the resulting spectrum. We can use the energy level diagram in Figure 10.2.1 to explain an absorbance spectrum. The lines labeled E0 and E1 represent the analyte’s ground (lowest) electronic state and its first electronic excited state. Superimposed on each electronic energy level is a series of lines representing vibrational energy levels. Infrared Spectra for Molecules and Polyatomic Ions The energy of infrared radiation produces a change in a molecule’s or a polyatomic ion’s vibrational energy, but is not sufficient to effect a change in its electronic energy. As shown in Figure 10.2.1, vibrational energy levels are quantized; that is, a molecule or polyatomic ion has only certain, discrete vibrational energies. The energy for an allowed vibrational mode, $E_{\nu}$, is $E_{\nu}=\left(\nu+\frac{1}{2}\right) h \nu_{0} \nonumber$ where $\nu$ is the vibrational quantum number, which has values of 0, 1, 2, ..., and $\nu_0$ is the bond’s fundamental vibrational frequency. The value of $\nu_0$, which is determined by the bond’s strength and by the mass at each end of the bond, is a characteristic property of a bond. For example, a carbon-carbon single bond (C–C) absorbs infrared radiation at a lower energy than a carbon-carbon double bond (C=C) because a single bond is weaker than a double bond. At room temperature most molecules are in their ground vibrational state ($\nu = 0$). A transition from the ground vibrational state to the first vibrational excited state ($\nu = 1$) requires absorption of a photon with an energy of $h \nu_0$. Transitions in which $\Delta \nu = \pm 1$ give rise to the fundamental absorption lines. Weaker absorption lines, called overtones, result from transitions in which $\Delta \nu$ is ±2 or ±3. The number of possible normal vibrational modes for a linear molecule is 3N – 5, and for a non-linear molecule is 3N – 6, where N is the number of atoms in the molecule. Not surprisingly, infrared spectra often show a considerable number of absorption bands.
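The mode-counting rule in the preceding paragraph (3N – 5 modes for a linear molecule and 3N – 6 for a non-linear molecule) is easy to apply directly, as in the short Python sketch below; the molecules used here are common illustrative choices, not examples drawn from the text.

```python
# Counting normal vibrational modes: 3N - 5 for a linear molecule and
# 3N - 6 for a non-linear molecule, where N is the number of atoms.

def normal_modes(n_atoms, linear=False):
    return 3 * n_atoms - (5 if linear else 6)

print(normal_modes(2, linear=True))   # HCl (linear, N = 2): 1 mode
print(normal_modes(3, linear=True))   # CO2 (linear, N = 3): 4 modes
print(normal_modes(3))                # H2O (non-linear, N = 3): 3 modes
```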
Even a relatively simple molecule, such as ethanol (C2H6O), has $3 \times 9 - 6$, or 21 possible normal modes of vibration, although not all of these vibrational modes give rise to an absorption. The IR spectrum for ethanol is shown in Figure 10.2.2. Why does a non-linear molecule have 3N – 6 vibrational modes? Consider a molecule of methane, CH4. Each of methane’s five atoms can move in one of three directions (x, y, and z) for a total of $5 \times 3 = 15$ different ways in which the molecule’s atoms can move. A molecule can move in three ways: it can move from one place to another, which we call translational motion; it can rotate around an axis, which we call rotational motion; and its bonds can stretch and bend, which we call vibrational motion. Because the entire molecule can move in the x, y, and z directions, three of methane’s 15 different motions are translational. In addition, the molecule can rotate about its x, y, and z axes, accounting for three additional forms of motion. This leaves 15 – 3 – 3 = 9 vibrational modes. A linear molecule, such as CO2, has 3N – 5 vibrational modes because it can rotate around only two axes. UV/Vis Spectra for Molecules and Ions The valence electrons in organic molecules and polyatomic ions, such as $\text{CO}_3^{2-}$, occupy quantized sigma bonding ($\sigma$), pi bonding ($\pi$), and non-bonding (n) molecular orbitals (MOs). Unoccupied sigma antibonding ($\sigma^*$) and pi antibonding ($\pi^*$) molecular orbitals are slightly higher in energy. Because the difference in energy between the highest-energy occupied MOs and the lowest-energy unoccupied MOs corresponds to ultraviolet and visible radiation, absorption of a photon is possible. Four types of transitions between quantized energy levels account for most molecular UV/Vis spectra. Table 10.2.1 lists the approximate wavelength ranges for these transitions, as well as a partial list of bonds, functional groups, or molecules responsible for these transitions. Of these transitions, the most important are $n \rightarrow \pi^*$ and $\pi \rightarrow \pi^*$ because they involve important functional groups that are characteristic of many analytes and because the wavelengths are easily accessible. The bonds and functional groups that give rise to the absorption of ultraviolet and visible radiation are called chromophores. Table 10.2.1. Electronic Transitions Involving n, $\sigma$, and $\pi$ Molecular Orbitals (transition; wavelength range; examples):
• $\sigma \rightarrow \sigma^*$; <200 nm; C—C, C—H
• $n \rightarrow \sigma^*$; 160–260 nm; H2O, CH3OH, CH3Cl
• $\pi \rightarrow \pi^*$; 200–500 nm; C=C, C=O, C=N, C≡C
• $n \rightarrow \pi^*$; 250–600 nm; C=O, C=N, N=N, N=O
Many transition metal ions, such as Cu2+ and Co2+, form colorful solutions because the metal ion absorbs visible light. The transitions that give rise to this absorption involve valence electrons in the metal ion’s d-orbitals. For a free metal ion, the five d-orbitals are of equal energy. In the presence of a complexing ligand or solvent molecule, however, the d-orbitals split into two or more groups that differ in energy. For example, in an octahedral complex of $\text{Cu(H}_2\text{O)}_6^{2+}$ the six water molecules perturb the d-orbitals into the two groups shown in Figure 10.2.3. The resulting $d \rightarrow d$ transitions for transition metal ions are relatively weak.
A more important source of UV/Vis absorption for inorganic metal–ligand complexes is charge transfer, in which absorption of a photon produces an excited state in which there is transfer of an electron from the metal, M, to the ligand, L. $M-L+h \nu \rightarrow\left(M^{+}-L^{-}\right)^{*} \nonumber$ Charge-transfer absorption is important because it produces very large absorbances. One important example of a charge-transfer complex is that of o-phenanthroline with Fe2+, the UV/Vis spectrum for which is shown in Figure 10.2.4. Charge-transfer absorption in which an electron moves from the ligand to the metal also is possible. Why is a larger absorbance desirable? An analytical method is more sensitive if a smaller concentration of analyte gives a larger signal. Comparing the IR spectrum in Figure 10.2.2 to the UV/Vis spectrum in Figure 10.2.4 shows us that UV/Vis absorption bands are often significantly broader than those for IR absorption. We can use Figure 10.2.1 to explain why this is true. When a species absorbs UV/Vis radiation, the transition between electronic energy levels may also include a transition between vibrational energy levels. The result is a number of closely spaced absorption bands that merge together to form a single broad absorption band. UV/Vis Spectra for Atoms The energy of ultraviolet and visible electromagnetic radiation is sufficient to cause a change in an atom’s valence electron configuration. Sodium, for example, has a single valence electron in its 3s atomic orbital. As shown in Figure 10.2.5, unoccupied, higher energy atomic orbitals also exist. The valence shell energy level diagram in Figure 10.2.5 might strike you as odd because it shows that the 3p orbitals are split into two groups of slightly different energy. The reasons for this splitting are unimportant in the context of our treatment of atomic absorption. For further information about the reasons for this splitting, consult the chapter’s additional resources. Absorption of a photon is accompanied by the excitation of an electron from a lower-energy atomic orbital to an atomic orbital of higher energy. Not all possible transitions between atomic orbitals are allowed. For sodium the only allowed transitions are those in which there is a change of ±1 in the orbital quantum number (l); thus transitions from $s \rightarrow p$ orbitals are allowed, but transitions from $s \rightarrow s$ and from $s \rightarrow d$ orbitals are forbidden. The atomic absorption spectrum for Na is shown in Figure 10.2.6, and is typical of that found for most atoms. The most obvious feature of this spectrum is that it consists of a small number of discrete absorption lines that correspond to transitions between the ground state (the 3s atomic orbital) and the 3p and the 4p atomic orbitals. Absorption from excited states, such as the $3p \rightarrow 4s$ and the $3p \rightarrow 3d$ transitions included in Figure 10.2.5, is too weak to detect. Because an excited state’s lifetime is short—an excited state atom typically returns to a lower energy state in 10–7 to 10–8 seconds—an atom in the excited state is likely to return to the ground state before it has an opportunity to absorb a photon. Another feature of the atomic absorption spectrum in Figure 10.2.6 is the narrow width of the absorption lines, which is a consequence of the fixed difference in energy between the ground state and the excited state, and the lack of vibrational and rotational energy levels.
Natural line widths for atomic absorption, which are governed by the uncertainty principle, are approximately 10–5 nm. Other contributions to broadening increase this line width to approximately 10–3 nm. Transmittance and Absorbance As light passes through a sample, its power decreases as some of it is absorbed. This attenuation of radiation is described quantitatively by two separate, but related terms: transmittance and absorbance. As shown in Figure 10.2.7 a, transmittance is the ratio of the source radiation’s power as it exits the sample, PT, to that incident on the sample, P0. $T=\frac{P_{\mathrm{T}}}{P_{0}} \label{10.1}$ Multiplying the transmittance by 100 gives the percent transmittance, %T, which varies between 100% (no absorption) and 0% (complete absorption). All methods of detecting photons—including the human eye and modern photoelectric transducers—measure the transmittance of electromagnetic radiation. Equation \ref{10.1} does not distinguish between different mechanisms that prevent a photon emitted by the source from reaching the detector. In addition to absorption by the analyte, several additional phenomena contribute to the attenuation of radiation, including reflection and absorption by the sample’s container, absorption by other components in the sample’s matrix, and the scattering of radiation. To compensate for this loss of the radiation’s power, we use a method blank. As shown in Figure 10.2.7 b, we redefine P0 as the power exiting the method blank. An alternative method for expressing the attenuation of electromagnetic radiation is absorbance, A, which we define as $A=-\log T=-\log \frac{P_{\mathrm{T}}}{P_{0}} \label{10.2}$ Absorbance is the more common unit for expressing the attenuation of radiation because it is a linear function of the analyte’s concentration. We will show that this is true in the next section when we introduce Beer’s law. Example 10.2.1 A sample has a percent transmittance of 50.0%. What is its absorbance? Solution A percent transmittance of 50.0% is the same as a transmittance of 0.500. Substituting into Equation \ref{10.2} gives $A=-\log T=-\log (0.500)=0.301 \nonumber$ Exercise 10.2.1 What is the %T for a sample if its absorbance is 1.27? Answer To find the transmittance, T, we begin by noting that $A=1.27=-\log T \nonumber$ Solving for T $\begin{array}{c}{-1.27=\log T} \\ {10^{-1.27}=T}\end{array} \nonumber$ gives a transmittance of 0.054, or a %T of 5.4%. Equation \ref{10.1} has an important consequence for atomic absorption. As we learned from Figure 10.2.6, atomic absorption lines are very narrow. Even with a high quality monochromator, the effective bandwidth for a continuum source is $100-1000 \times$ greater than the width of an atomic absorption line. As a result, little radiation from a continuum source is absorbed when it passes through a sample of atoms; because $P_0 \approx P_\text{T}$ the measured absorbance effectively is zero. For this reason, atomic absorption requires that we use a line source instead of a continuum source. Absorbance and Concentration: Beer's Law When monochromatic electromagnetic radiation passes through an infinitesimally thin layer of sample of thickness dx, it experiences a decrease in its power of dP (Figure 10.2.8). This fractional decrease in power is proportional to the sample’s thickness and to the analyte’s concentration, C; thus $-\frac{d P}{P}=\alpha C d x \label{10.3}$ where P is the power incident on the thin layer of sample and $\alpha$ is a proportionality constant.
Integrating the left side of Equation \ref{10.3} over the sample’s full thickness $-\int_{P=P_0}^{P=P_\text{T}} \frac{d P}{P}=\alpha C \int_{x=0}^{x=b} d x \nonumber$ $\ln \frac{P_{0}}{P_\text{T}}=\alpha b C \nonumber$ converting from ln to log, and substituting into Equation \ref{10.2}, gives $A=a b C \label{10.4}$ where a is the analyte’s absorptivity with units of cm–1 conc–1. If we express the concentration using molarity, then we replace a with the molar absorptivity, $\varepsilon$, which has units of cm–1 M–1. $A=\varepsilon b C \label{10.5}$ The absorptivity and the molar absorptivity are proportional to the probability that the analyte absorbs a photon of a given energy. As a result, values for both a and $\varepsilon$ depend on the wavelength of the absorbed photon. Example 10.2.2 A $5.00 \times 10^{-4}$ M solution of analyte is placed in a sample cell that has a pathlength of 1.00 cm. At a wavelength of 490 nm, the solution’s absorbance is 0.338. What is the analyte’s molar absorptivity at this wavelength? Solution Solving Equation \ref{10.5} for $\varepsilon$ and making appropriate substitutions gives $\varepsilon=\frac{A}{b C}=\frac{0.338}{(1.00 \ \mathrm{cm})\left(5.00 \times 10^{-4} \ \mathrm{M}\right)}=676 \ \mathrm{cm}^{-1} \ \mathrm{M}^{-1} \nonumber$ Exercise 10.2.2 A solution of the analyte from Example 10.2.2 has an absorbance of 0.228 in a 1.00-cm sample cell. What is the analyte’s concentration? Answer Making appropriate substitutions into Beer’s law $A=0.228=\varepsilon b C=\left(676 \ \mathrm{M}^{-1} \ \mathrm{cm}^{-1}\right)(1 \ \mathrm{cm}) C \nonumber$ and solving for C gives a concentration of $3.37 \times 10^{-4}$ M. Equation \ref{10.4} and Equation \ref{10.5}, which establish the linear relationship between absorbance and concentration, are known as Beer’s law. Calibration curves based on Beer’s law are common in quantitative analyses. As is often the case, the formulation of a law is more complicated than its name suggests. This is the case, for example, with Beer’s law, which also is known as the Beer-Lambert law or the Beer-Lambert-Bouguer law. Pierre Bouguer, in 1729, and Johann Lambert, in 1760, noted that the transmittance of light decreases exponentially with an increase in the sample’s thickness. $T \propto e^{-b} \nonumber$ Later, in 1852, August Beer noted that the transmittance of light decreases exponentially as the concentration of the absorbing species increases. $T \propto e^{-C} \nonumber$ Together, and when written in terms of absorbance instead of transmittance, these two relationships make up what we know as Beer’s law. Beer's Law and Multicomponent Samples We can extend Beer’s law to a sample that contains several absorbing components. If there are no interactions between the components, then the individual absorbances, Ai, are additive. For a two-component mixture of analytes X and Y, the total absorbance, Atot, is $A_{tot}=A_{X}+A_{Y}=\varepsilon_{X} b C_{X}+\varepsilon_{Y} b C_{Y} \nonumber$ Generalizing, the absorbance for a mixture of n components, Amix, is $A_{m i x}=\sum_{i=1}^{n} A_{i}=\sum_{i=1}^{n} \varepsilon_{i} b C_{i} \label{10.6}$ Limitations to Beer's Law Beer’s law suggests that a plot of absorbance vs. concentration—we will call this a Beer’s law plot—is a straight line with a y-intercept of zero and a slope of ab or $\varepsilon b$. In some cases a Beer’s law plot deviates from this ideal behavior (see Figure 10.2.9), and such deviations from linearity are divided into three categories: fundamental, chemical, and instrumental.
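The following Python sketch is a minimal illustration of Beer's law and of the additivity of absorbances for a mixture (Equation 10.6). The molar absorptivity of 676 cm–1 M–1 comes from Example 10.2.2; the second component's absorptivity and both concentrations in the mixture calculation are hypothetical values chosen only for illustration.

```python
# Beer's law, A = epsilon * b * C, and the additive absorbance of a mixture.

def absorbance(epsilon, b_cm, C_molar):
    return epsilon * b_cm * C_molar

def concentration(A, epsilon, b_cm):
    return A / (epsilon * b_cm)

# Single analyte: epsilon = 676 cm^-1 M^-1 and b = 1.00 cm, as in Example 10.2.2.
print(concentration(0.338, 676, 1.00))          # ~5.0e-4 M

# Two-component mixture: the total absorbance is the sum of the contributions.
A_mix = absorbance(676, 1.00, 2.0e-4) + absorbance(120, 1.00, 5.0e-4)
print(A_mix)                                     # ~0.195
```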
Fundamental Limitations to Beer's Law Beer’s law is a limiting law that is valid only for low concentrations of analyte. There are two contributions to this fundamental limitation to Beer’s law. At higher concentrations the individual particles of analyte no longer are independent of each other. The resulting interaction between particles of analyte may change the analyte’s absorptivity. A second contribution is that an analyte’s absorptivity depends on the solution’s refractive index. Because a solution’s refractive index varies with the analyte’s concentration, values of a and $\varepsilon$ may change. For sufficiently low concentrations of analyte, the refractive index essentially is constant and a Beer’s law plot is linear. Chemical Limitations to Beer's Law A chemical deviation from Beer’s law may occur if the analyte is involved in an equilibrium reaction. Consider, for example, the weak acid, HA. To construct a Beer’s law plot we prepare a series of standard solutions—each of which contains a known total concentration of HA—and then measure each solution’s absorbance at the same wavelength. Because HA is a weak acid, it is in equilibrium with its conjugate weak base, A–. In the equations that follow, the conjugate weak base A– is written as A because it is easy to mistake the superscript that indicates its anionic charge for a minus sign. $\mathrm{HA}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{A}^{-}(a q) \nonumber$ If both HA and A absorb at the selected wavelength, then Beer’s law is $A=\varepsilon_{\mathrm{HA}} b C_{\mathrm{HA}}+\varepsilon_{\mathrm{A}} b C_{\mathrm{A}} \label{10.7}$ Because the weak acid’s total concentration, Ctotal, is $C_{\mathrm{total}}=C_{\mathrm{HA}}+C_{\mathrm{A}} \nonumber$ we can write the concentrations of HA and A as $C_{\mathrm{HA}}=\alpha_{\mathrm{HA}} C_{\mathrm{total}} \label{10.8}$ $C_{\text{A}} = (1 - \alpha_\text{HA})C_\text{total} \label{10.9}$ where $\alpha_\text{HA}$ is the fraction of weak acid present as HA. Substituting Equation \ref{10.8} and Equation \ref{10.9} into Equation \ref{10.7} and rearranging, gives $A=\left(\varepsilon_{\mathrm{HA}} \alpha_{\mathrm{HA}}+\varepsilon_{\mathrm{A}}-\varepsilon_{\mathrm{A}} \alpha_{\mathrm{HA}}\right) b C_{\mathrm{total}} \label{10.10}$ To obtain a linear Beer’s law plot, we must satisfy one of two conditions. If $\varepsilon_\text{HA}$ and $\varepsilon_{\text{A}}$ have the same value at the selected wavelength, then Equation \ref{10.10} simplifies to $A = \varepsilon_{\text{A}}bC_\text{total} = \varepsilon_\text{HA}bC_\text{total} \nonumber$ Alternatively, if $\alpha_\text{HA}$ has the same value for all standard solutions, then each term within the parentheses of Equation \ref{10.10} is constant—which we replace with k—and a linear calibration curve is obtained at any wavelength. $A=k b C_{\mathrm{total}} \nonumber$ Because HA is a weak acid, the value of $\alpha_\text{HA}$ varies with pH. To hold $\alpha_\text{HA}$ constant we buffer each standard solution to the same pH. Depending on the relative values of $\alpha_\text{HA}$ and $\alpha_{\text{A}}$, the calibration curve has a positive or a negative deviation from Beer’s law if we do not buffer the standards to the same pH. Instrumental Limitations to Beer's Law There are two principal instrumental limitations to Beer’s law. The first limitation is that Beer’s law assumes that radiation reaching the sample is of a single wavelength—that is, it assumes a purely monochromatic source of radiation.
As shown in Figure 10.1.10, even the best wavelength selector passes radiation with a small, but finite effective bandwidth. Polychromatic radiation always gives a negative deviation from Beer’s law, but the effect is smaller if the value of $\varepsilon$ essentially is constant over the wavelength range passed by the wavelength selector. For this reason, as shown in Figure 10.2.10, it is better to make absorbance measurements at the top of a broad absorption peak. In addition, the deviation from Beer’s law is less serious if the source’s effective bandwidth is less than one-tenth of the absorbing species’ natural bandwidth [(a) Strong, F. C., III Anal. Chem. 1984, 56, 16A–34A; (b) Gilbert, D. D. J. Chem. Educ. 1991, 68, A278–A281]. When measurements must be made on a slope, linearity is improved by using a narrower effective bandwidth. Stray radiation is the second contribution to instrumental deviations from Beer’s law. Stray radiation arises from imperfections in the wavelength selector that allow light to enter the instrument and to reach the detector without passing through the sample. Stray radiation adds an additional contribution, Pstray, to the radiant power that reaches the detector; thus $A=-\log \frac{P_{\mathrm{T}}+P_{\text { stray }}}{P_{0}+P_{\text { stray }}} \nonumber$ For a small concentration of analyte, Pstray is significantly smaller than P0 and PT, and the absorbance is unaffected by the stray radiation. For higher concentrations of analyte, less light passes through the sample and PT and Pstray become similar in magnitude. This results in an absorbance that is smaller than expected, and a negative deviation from Beer’s law.
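The stray-radiation expression above is easy to explore numerically. The Python sketch below compares the true absorbance to the absorbance observed when a small, fixed fraction of stray light reaches the detector; the 0.5% stray-light level is a hypothetical value chosen for illustration.

```python
import math

# Effect of stray light: A_obs = -log[(P_T + P_stray)/(P_0 + P_stray)] falls
# below the true absorbance at high concentration, a negative deviation
# from Beer's law.

def observed_absorbance(A_true, stray_fraction):
    P0, Pstray = 1.0, stray_fraction          # normalize P0 to 1
    PT = 10 ** (-A_true)                      # transmitted power for the true absorbance
    return -math.log10((PT + Pstray) / (P0 + Pstray))

for A_true in (0.1, 0.5, 1.0, 2.0, 3.0):
    print(A_true, round(observed_absorbance(A_true, 0.005), 3))
# At A_true = 3.0 the observed absorbance is only about 2.2.
```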
In Figure 10.1.9 we examined Nessler’s original method for matching the color of a sample to the color of a standard. Matching colors is a labor intensive process for the analyst and, not surprisingly, spectroscopic methods of analysis were slow to find favor. The 1930s and 1940s saw the introduction of photoelectric transducers for ultraviolet and visible radiation, and thermocouples for infrared radiation. As a result, modern instrumentation for absorption spectroscopy routinely became available in the 1940s—further progress has been rapid ever since. Instrumentation Frequently an analyst must select from among several instruments of different design, the one instrument best suited for a particular analysis. In this section we examine several different instruments for molecular absorption spectroscopy, with an emphasis on their advantages and limitations. Methods of sample introduction also are covered in this section. Instrument Designs for Molecular UV/Vis Absorption Filter Photometer. The simplest instrument for molecular UV/Vis absorption is a filter photometer (Figure 10.3.1 ), which uses an absorption or interference filter to isolate a band of radiation. The filter is placed between the source and the sample to prevent the sample from decomposing when exposed to higher energy radiation. A filter photometer has a single optical path between the source and detector, and is called a single-beam instrument. The instrument is calibrated to 0% T while using a shutter to block the source radiation from the detector. After opening the shutter, the instrument is calibrated to 100% T using an appropriate blank. The blank is then replaced with the sample and its transmittance measured. Because the source’s incident power and the sensitivity of the detector vary with wavelength, the photometer is recalibrated whenever the filter is changed. Photometers have the advantage of being relatively inexpensive, rugged, and easy to maintain. Another advantage of a photometer is its portability, making it easy to take into the field. Disadvantages of a photometer include the inability to record an absorption spectrum and the source’s relatively large effective bandwidth, which limits the calibration curve’s linearity. The percent transmittance varies between 0% and 100%. As we learned from Figure 10.2.7, we use a blank to determine P0, which corresponds to 100%T. Even in the absence of light the detector records a signal. Closing the shutter allows us to assign 0%T to this signal. Together, setting 0% T and 100%T calibrates the instrument. The amount of light that passes through a sample produces a signal that is greater than or equal to 0%T and smaller than or equal to 100%T. Figure 10.3.1 . Schematic diagram of a filter photometer. The analyst either inserts a removable filter or the filters are placed in a carousel, an example of which is shown in the photographic inset. The analyst selects a filter by rotating it into place. Single-Beam Spectrophotometer. An instrument that uses a monochromator for wavelength selection is called a spectrophotometer. The simplest spectrophotometer is a single-beam instrument equipped with a fixed-wavelength monochromator (Figure 10.3.2 ). Single-beam spectrophotometers are calibrated and used in the same manner as a photometer. One example of a single-beam spectrophotometer is Thermo Scientific’s Spectronic 20D+, which is shown in the photographic insert to Figure 10.3.2 . 
The Spectronic 20D+ has a wavelength range of 340–625 nm (950 nm when using a red-sensitive detector), and a fixed effective bandwidth of 20 nm. Battery-operated, hand-held single-beam spectrophotometers also are available, which are easy to transport into the field. Other single-beam spectrophotometers are available with effective bandwidths of 2–8 nm. Fixed-wavelength single-beam spectrophotometers are not practical for recording spectra because manually adjusting the wavelength and recalibrating the spectrophotometer is awkward and time-consuming. The accuracy of a single-beam spectrophotometer is limited by the stability of its source and detector over time.

Double-Beam Spectrophotometer. The limitations of a fixed-wavelength, single-beam spectrophotometer are minimized by using a double-beam spectrophotometer (Figure 10.3.3). A chopper controls the radiation's path, alternating it between the sample, the blank, and a shutter. The signal processor uses the chopper's speed of rotation to resolve the signal that reaches the detector into the transmission of the blank, P0, and the sample, PT. By including an opaque surface as a shutter, it also is possible to continuously adjust 0% T. The effective bandwidth of a double-beam spectrophotometer is controlled by adjusting the monochromator's entrance and exit slits. Effective bandwidths of 0.2–3.0 nm are common. A scanning monochromator allows for the automated recording of spectra. Double-beam instruments are more versatile than single-beam instruments, being useful for both quantitative and qualitative analyses, but also are more expensive and not particularly portable.

Diode Array Spectrometer. An instrument with a single detector can monitor only one wavelength at a time. If we replace a single photomultiplier with an array of photodiodes, we can use the resulting detector to record a full spectrum in as little as 0.1 s. In a diode array spectrometer the source radiation passes through the sample and is dispersed by a grating (Figure 10.3.4). The photodiode array detector is situated at the grating's focal plane, with each diode recording the radiant power over a narrow range of wavelengths. Because we replace a full monochromator with just a grating, a diode array spectrometer is small and compact.

One advantage of a diode array spectrometer is the speed of data acquisition, which allows us to collect multiple spectra for a single sample. Individual spectra are added and averaged to obtain the final spectrum. This signal averaging improves a spectrum's signal-to-noise ratio. If we add together n spectra, the sum of the signal at any point, x, increases as nSx, where Sx is the signal. The noise at any point, Nx, is a random event, which increases as $\sqrt{n} N_x$ when we add together n spectra. The signal-to-noise ratio after n scans, (S/N)n, is

$\left(\frac{S}{N}\right)_{n}=\frac{n S_{x}}{\sqrt{n} N_{x}}=\sqrt{n} \frac{S_{x}}{N_{x}} \nonumber$

where Sx/Nx is the signal-to-noise ratio for a single scan. The impact of signal averaging is shown in Figure 10.3.5. The first spectrum shows the signal after one scan, which consists of a single, noisy peak. Signal averaging using 4 scans and 16 scans decreases the noise and improves the signal-to-noise ratio. One disadvantage of a photodiode array is that the effective bandwidth per diode is roughly an order of magnitude larger than that for a high-quality monochromator.
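The $\sqrt{n}$ improvement in the signal-to-noise ratio is easy to verify numerically. The following sketch is an illustration only; the Gaussian peak, noise level, and number of scans are arbitrary assumptions chosen to mimic the situation in Figure 10.3.5, not data from an actual instrument.

```python
import numpy as np

rng = np.random.default_rng(1)

wavelength = np.linspace(400, 500, 501)                       # nm, arbitrary range
true_signal = 0.50 * np.exp(-((wavelength - 450) / 8) ** 2)   # idealized single peak
noise_sd = 0.05                                               # assumed rms noise per scan

def scan():
    """Simulate one noisy scan of the spectrum."""
    return true_signal + rng.normal(0.0, noise_sd, wavelength.size)

def snr(spectrum):
    """Peak signal divided by the rms noise estimated from a peak-free baseline."""
    baseline = spectrum[wavelength < 420]
    return spectrum.max() / baseline.std()

single = scan()
averaged = np.mean([scan() for _ in range(16)], axis=0)

print(f"S/N for 1 scan:   {snr(single):.1f}")
print(f"S/N for 16 scans: {snr(averaged):.1f}   (expect roughly a 4x improvement)")
```

Averaging 16 scans should improve the signal-to-noise ratio by about a factor of $\sqrt{16} = 4$, which is what the printed values show to within the statistical scatter of the simulation.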
For more details on signals and noise, see Introduction to Signals and Noise by Steven Petrovic, an on-line resource that is part of the Analytical Sciences Digital Library.

Sample Cells. The sample compartment provides a light-tight environment that limits stray radiation. Samples normally are in a liquid or solution state, and are placed in cells constructed with UV/Vis transparent materials, such as quartz, glass, and plastic (Figure 10.3.6). A quartz or fused-silica cell is required when working at a wavelength <300 nm where other materials show a significant absorption. The most common pathlength is 1 cm (10 mm), although cells with shorter (as little as 0.1 cm) and longer pathlengths (up to 10 cm) are available. Longer pathlength cells are useful when analyzing a very dilute solution or for gas samples. The highest quality cells allow the radiation to strike a flat surface at a 90° angle, minimizing the loss of radiation to reflection. A test tube often is used as a sample cell with simple, single-beam instruments, although differences in the cell's pathlength and optical properties add an additional source of error to the analysis.

If we need to monitor an analyte's concentration over time, it may not be possible to remove samples for analysis. This often is the case, for example, when monitoring an industrial production line or waste line, when monitoring a patient's blood, or when monitoring an environmental system, such as a stream. With a fiber-optic probe we can analyze samples in situ. An example of a remote sensing fiber-optic probe is shown in Figure 10.3.7. The probe consists of two bundles of fiber-optic cable. One bundle transmits radiation from the source to the probe's tip, which is designed to allow the sample to flow through the sample cell. Radiation from the source passes through the solution and is reflected back by a mirror. The second bundle of fiber-optic cable transmits the nonabsorbed radiation to the wavelength selector. Another design replaces the flow cell shown in Figure 10.3.7 with a membrane that contains a reagent that reacts with the analyte. When the analyte diffuses into the membrane it reacts with the reagent, producing a product that absorbs UV or visible radiation. The nonabsorbed radiation from the source is reflected or scattered back to the detector. Fiber-optic probes that show chemical selectivity are called optrodes [(a) Seitz, W. R. Anal. Chem. 1984, 56, 16A–34A; (b) Angel, S. M. Spectroscopy 1987, 2(2), 38–48].

Instrument Designs for Infrared Absorption

Filter Photometer. The simplest instrument for IR absorption spectroscopy is a filter photometer similar to that shown in Figure 10.3.1 for UV/Vis absorption. These instruments have the advantage of portability and typically are used as dedicated analyzers for gases such as HCN and CO.

Double-Beam Spectrophotometer. Infrared instruments that use a monochromator for wavelength selection rely on double-beam optics similar to those shown in Figure 10.3.3. Double-beam optics are preferred over single-beam optics because the sources and detectors for infrared radiation are less stable than those for UV/Vis radiation. In addition, it is easier to correct for the absorption of infrared radiation by atmospheric CO2 and H2O vapor when using double-beam optics. Resolutions of 1–3 cm–1 are typical for most instruments.

Fourier Transform Spectrometer. In a Fourier transform infrared spectrometer, or FT–IR, the monochromator is replaced with an interferometer (Figure 10.1.13).
Because an FT–IR includes only a single optical path, it is necessary to collect a separate spectrum to compensate for the absorbance of atmospheric CO2 and H2O vapor. This is done by collecting a background spectrum without the sample and storing the result in the instrument's computer memory. The background spectrum is removed from the sample's spectrum by taking the ratio of the two signals. In comparison to other instrument designs, an FT–IR provides for rapid data acquisition, which allows for an enhancement in the signal-to-noise ratio through signal averaging.

Sample Cells. Infrared spectroscopy routinely is used to analyze gas, liquid, and solid samples. Sample cells are made from materials, such as NaCl and KBr, that are transparent to infrared radiation. Gases are analyzed using a cell with a pathlength of approximately 10 cm. Longer pathlengths are obtained by using mirrors to pass the beam of radiation through the sample several times.

A liquid sample may be analyzed using a variety of different sample cells (Figure 10.3.8). For non-volatile liquids a suitable sample is prepared by placing a drop of the liquid between two NaCl plates, forming a thin film that typically is less than 0.01 mm thick. Volatile liquids are placed in a sealed cell to prevent their evaporation. The analysis of solution samples is limited by the solvent's IR absorbing properties, with CCl4, CS2, and CHCl3 being the most common solvents. Solutions are placed in cells that contain two NaCl windows separated by a Teflon spacer. By changing the Teflon spacer, pathlengths from 0.015 mm to 1.0 mm are obtained.

Transparent solid samples are analyzed by placing them directly in the IR beam. Most solid samples, however, are opaque, and are first dispersed in a more transparent medium before recording the IR spectrum. If a suitable solvent is available, then the solid is analyzed by preparing a solution and analyzing as described above. When a suitable solvent is not available, solid samples are analyzed by preparing a mull of the finely powdered sample with a suitable oil. Alternatively, the powdered sample is mixed with KBr and pressed into an optically transparent pellet.

The analysis of an aqueous sample is complicated by the solubility of the NaCl cell window in water. One approach to obtaining an infrared spectrum of an aqueous solution is to use attenuated total reflectance instead of transmission. Figure 10.3.9 shows a diagram of a typical attenuated total reflectance (ATR) FT–IR instrument. The ATR cell consists of a high refractive index material, such as ZnSe or diamond, sandwiched between a low refractive index substrate and a lower refractive index sample. Radiation from the source enters the ATR crystal where it undergoes a series of internal reflections before exiting the crystal. During each reflection the radiation penetrates into the sample to a depth of a few microns, which results in a selective attenuation of the radiation at those wavelengths where the sample absorbs. ATR spectra are similar, but not identical, to those obtained by measuring the transmission of radiation.

Solid samples also can be analyzed using an ATR sample cell. After placing the solid in the sample slot, a compression tip ensures that it is in contact with the ATR crystal. Examples of solids analyzed by ATR include polymers, fibers, fabrics, powders, and biological tissue samples. Another reflectance method is diffuse reflectance, in which radiation is reflected from a rough surface, such as a powder.
Powdered samples are mixed with a non-absorbing material, such as powdered KBr, and the reflected light is collected and analyzed. As with ATR, the resulting spectrum is similar to that obtained by conventional transmission methods. Further details about these and other methods for preparing solids for infrared analysis can be found in this chapter's additional resources.

Quantitative Applications

The determination of an analyte's concentration based on its absorption of ultraviolet or visible radiation is one of the most frequently encountered quantitative analytical methods. One reason for its popularity is that many organic and inorganic compounds have strong absorption bands in the UV/Vis region of the electromagnetic spectrum. In addition, if an analyte does not absorb UV/Vis radiation—or if its absorbance is too weak—we often can react it with another species that is strongly absorbing. For example, a dilute solution of Fe2+ does not absorb visible light. Reacting Fe2+ with o-phenanthroline, however, forms an orange–red complex of $\text{Fe(phen)}_3^{2+}$ that has a strong, broad absorbance band near 500 nm. An additional advantage to UV/Vis absorption is that in most cases it is relatively easy to adjust experimental and instrumental conditions so that Beer's law is obeyed.

A quantitative analysis based on the absorption of infrared radiation, although important, is encountered less frequently than one based on UV/Vis absorption. One reason is the greater tendency for instrumental deviations from Beer's law when using infrared radiation. Because an infrared absorption band is relatively narrow, any deviation due to the lack of monochromatic radiation is more pronounced. In addition, infrared sources are less intense than UV/Vis sources, which makes stray radiation more of a problem. Differences between the pathlengths for samples and for standards when using thin liquid films or KBr pellets are a problem, although an internal standard can correct for any difference in pathlength. Finally, establishing a 100% T (A = 0) baseline often is difficult because the optical properties of NaCl sample cells may change significantly with wavelength due to contamination and degradation. We can minimize this problem by measuring absorbance relative to a baseline established for the absorption band. Figure 10.3.10 shows how this is accomplished. Another approach is to use a cell with a fixed pathlength, such as that shown in Figure 10.3.8b.

Environmental Applications

The analysis of waters and wastewaters often relies on the absorption of ultraviolet and visible radiation. Many of these methods are outlined in Table 10.3.1; several are described here in more detail.
Table 10.3.1. Examples of the Molecular UV/Vis Analysis of Waters and Wastewaters

trace metals
  aluminum: react with Eriochrome cyanide R dye at pH 6; forms red to pink complex (535 nm)
  arsenic: reduce to AsH3 using Zn and react with silver diethyldithiocarbamate; forms red complex (535 nm)
  cadmium: extract into CHCl3 containing dithizone from a sample made basic with NaOH; forms pink to red complex (518 nm)
  chromium: oxidize to Cr(VI) and react with diphenylcarbazide; forms red-violet product (540 nm)
  copper: react with neocuproine in neutral to slightly acid solution and extract into CHCl3/CH3OH; forms yellow complex (457 nm)
  iron: reduce to Fe2+ and react with o-phenanthroline; forms orange-red complex (510 nm)
  lead: extract into CHCl3 containing dithizone from a sample made basic with an NH3/NH4+ buffer; forms cherry red complex (510 nm)
  manganese: oxidize to MnO4– with persulfate; forms purple solution (525 nm)
  mercury: extract into CHCl3 containing dithizone from an acidic sample; forms orange complex (492 nm)
  zinc: react with zincon at pH 9; forms blue complex (620 nm)

inorganic nonmetals
  ammonia: react with hypochlorite and phenol using a manganous salt catalyst; forms blue indophenol as product (630 nm)
  cyanide: react with chloramine-T to form CNCl and then with a pyridine-barbituric acid reagent; forms a red-blue dye (578 nm)
  fluoride: react with red Zr-SPADNS lake; formation of ZrF62– decreases color of the red lake (570 nm)
  chlorine (residual): react with leuco crystal violet; forms blue product (592 nm)
  nitrate: react with Cd to form NO2– and then react with sulfanilamide and N-(1-naphthyl)-ethylenediamine; forms red azo dye (543 nm)
  phosphate: react with ammonium molybdate and then reduce with SnCl2; forms molybdenum blue (690 nm)

organics
  phenol: react with 4-aminoantipyrine and K3Fe(CN)6; forms yellow antipyrine dye (460 nm)
  anionic surfactants: react with cationic methylene blue dye and extract into CHCl3; forms blue ion pair (652 nm)

Although the quantitative analysis of metals in waters and wastewaters is accomplished primarily by atomic absorption or atomic emission spectroscopy, many metals also can be analyzed following the formation of a colored metal–ligand complex. One advantage to these spectroscopic methods is that they easily are adapted to the analysis of samples in the field using a filter photometer. One ligand used for the analysis of several metals is diphenylthiocarbazone, also known as dithizone. Dithizone is not soluble in water, but when a solution of dithizone in CHCl3 is shaken with an aqueous solution that contains an appropriate metal ion, a colored metal–dithizonate complex forms that is soluble in CHCl3. The selectivity of dithizone is controlled by adjusting the sample's pH. For example, Cd2+ is extracted from solutions made strongly basic with NaOH, Pb2+ from solutions made basic with an NH3/NH4+ buffer, and Hg2+ from solutions that are slightly acidic. The structure of dithizone is shown below. See Chapter 7 for a discussion of extracting metal ions using dithizone.

When chlorine is added to water the portion available for disinfection is called the chlorine residual. There are two forms of chlorine residual. The free chlorine residual includes Cl2, HOCl, and OCl–. The combined chlorine residual, which forms from the reaction of NH3 with HOCl, consists of monochloramine, NH2Cl, dichloramine, NHCl2, and trichloramine, NCl3. Because the free chlorine residual is more efficient as a disinfectant, there is an interest in methods that can distinguish between the total chlorine residual's different forms.
One such method is the leuco crystal violet method. The free chlorine residual is determined by adding leuco crystal violet to the sample, which instantaneously oxidizes to give a blue-colored compound that is monitored at 592 nm. Completing the analysis in less than five minutes prevents a possible interference from the combined chlorine residual. The total chlorine residual (free + combined) is determined by reacting a separate sample with iodide, which reacts with both chlorine residuals to form HOI. When the reaction is complete, leuco crystal violet is added and oxidized by HOI, giving the same blue-colored product. The combined chlorine residual is determined by difference. In Chapter 9 we explored how the total chlorine residual can be determined by a redox titration; see Representative Method 9.4.1 for further details. The method described here allows us to divide the total chlorine residual into its component parts.

The concentration of fluoride in drinking water is determined indirectly by its ability to form a complex with zirconium. In the presence of the dye SPADNS, a solution of zirconium forms a red colored compound, called a lake, that absorbs at 570 nm. When fluoride is added, the formation of the stable $\text{ZrF}_6^{2-}$ complex causes a portion of the lake to dissociate, decreasing the absorbance. A plot of absorbance versus the concentration of fluoride, therefore, has a negative slope. SPADNS, the structure of which is shown below, is an abbreviation for the sodium salt of 2-(4-sulfophenylazo)-1,8-dihydroxy-3,6-naphthalenedisulfonic acid, which is a mouthful to say.

Spectroscopic methods also are used to determine organic constituents in water. For example, the combined concentrations of phenol and of ortho- and meta-substituted phenols are determined by using steam distillation to separate the phenols from nonvolatile impurities. The distillate reacts with 4-aminoantipyrine at pH 7.9 ± 0.1 in the presence of K3Fe(CN)6 to form a yellow colored antipyrine dye. After extracting the dye into CHCl3, its absorbance is monitored at 460 nm. A calibration curve is prepared using only the unsubstituted phenol, C6H5OH. Because the molar absorptivities of substituted phenols generally are less than that for phenol, the reported concentration represents the minimum concentration of phenolic compounds.

4-aminoantipyrine

Molecular absorption also is used for the analysis of environmentally significant airborne pollutants. In many cases the analysis is carried out by collecting the sample in water, converting the analyte to an aqueous form that can be analyzed by methods such as those described in Table 10.3.1. For example, the concentration of NO2 is determined by oxidizing NO2 to $\text{NO}_3^-$. The concentration of $\text{NO}_3^-$ is then determined by first reducing it to $\text{NO}_2^-$ with Cd, and then reacting $\text{NO}_2^-$ with sulfanilamide and N-(1-naphthyl)-ethylenediamine to form a red azo dye. Another important application is the analysis for SO2, which is determined by collecting the sample in an aqueous solution of $\text{HgCl}_4^{2-}$ where it reacts to form $\text{Hg(SO}_3)_2^{2-}$. Addition of p-rosaniline and formaldehyde produces a purple complex that is monitored at 569 nm. Infrared absorption is useful for the analysis of organic vapors, including HCN, SO2, nitrobenzene, methyl mercaptan, and vinyl chloride. Frequently, these analyses are accomplished using portable, dedicated infrared photometers.
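Returning to the SPADNS method for fluoride described above: because the absorbance decreases as the concentration of fluoride increases, its calibration curve has a negative slope, but the calculation is otherwise the same as for any external-standard calibration. The sketch below is an illustration only; the standard concentrations and absorbance readings are invented for this example and are not taken from the published method.

```python
import numpy as np

# hypothetical calibration standards for the SPADNS method
conc_F = np.array([0.00, 0.50, 1.00, 1.50, 2.00])        # mg F-/L (assumed)
absorb = np.array([0.600, 0.522, 0.445, 0.368, 0.290])   # invented absorbance readings

# fit A = slope * C + intercept; the slope should come out negative
slope, intercept = np.polyfit(conc_F, absorb, 1)

# invert the calibration equation to find an unknown's concentration
A_sample = 0.481                                          # invented sample absorbance
C_sample = (A_sample - intercept) / slope

print(f"slope = {slope:.4f} L/mg (negative, as expected)")
print(f"[F-] in sample = {C_sample:.2f} mg/L")
```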
Clinical Applications

The analysis of clinical samples often is complicated by the complexity of the sample's matrix, which may contribute a significant background absorption at the desired wavelength. The determination of serum barbiturates provides one example of how this problem is overcome. The barbiturates are first extracted from a sample of serum with CHCl3 and then extracted from the CHCl3 into 0.45 M NaOH (pH ≈ 13). The absorbance of the aqueous extract is measured at 260 nm, and includes contributions from the barbiturates as well as other components extracted from the serum sample. The pH of the sample is then lowered to approximately 10 by adding NH4Cl and the absorbance remeasured. Because the barbiturates do not absorb at this pH, we can use the absorbance at pH 10, ApH 10, to correct the absorbance at pH 13, ApH 13

$A_\text{barb} = A_\text{pH 13} - \frac {V_\text{samp} + V_{\text{NH}_4\text{Cl}}} {V_\text{samp}} \times A_\text{pH 10} \nonumber$

where Abarb is the absorbance due to the serum barbiturates and Vsamp and $V_{\text{NH}_4\text{Cl}}$ are the volumes of sample and NH4Cl, respectively. Table 10.3.2 provides a summary of several other methods for analyzing clinical samples.

Table 10.3.2. Examples of the Molecular UV/Vis Analysis of Clinical Samples
  total serum protein: react with NaOH and Cu2+; forms blue-violet complex (540 nm)
  serum cholesterol: react with Fe3+ in the presence of isopropanol, acetic acid, and H2SO4; forms blue-violet complex (540 nm)
  uric acid: react with phosphotungstic acid; forms tungsten blue (710 nm)
  serum barbiturates: extract into CHCl3 to isolate from interferents and then extract into 0.45 M NaOH (260 nm)
  glucose: react with o-toluidine at 100°C; forms blue-green complex (630 nm)
  protein-bound iodine: decompose protein to release iodide, which catalyzes a redox reaction between Ce3+ and As3+; forms yellow colored Ce4+ (420 nm)

Industrial Applications

UV/Vis molecular absorption is used for the analysis of a diverse array of industrial samples including pharmaceuticals, food, paint, glass, and metals. In many cases the methods are similar to those described in Table 10.3.1 and in Table 10.3.2. For example, the amount of iron in food is determined by bringing the iron into solution and analyzing using the o-phenanthroline method listed in Table 10.3.1. Many pharmaceutical compounds contain chromophores that make them suitable for analysis by UV/Vis absorption. Products analyzed in this fashion include antibiotics, hormones, vitamins, and analgesics. One example of the use of UV absorption is in determining the purity of aspirin tablets, for which the active ingredient is acetylsalicylic acid. Salicylic acid, which is produced by the hydrolysis of acetylsalicylic acid, is an undesirable impurity in aspirin tablets, and should not be present at more than 0.01% w/w. Samples are screened for unacceptable levels of salicylic acid by monitoring the absorbance at a wavelength of 312 nm. Acetylsalicylic acid absorbs at 280 nm, but absorbs poorly at 312 nm. Conditions for preparing the sample are chosen such that an absorbance of greater than 0.02 signifies an unacceptable level of salicylic acid.

Forensic Applications

UV/Vis molecular absorption routinely is used for the analysis of narcotics and for drug testing. One interesting forensic application is the determination of blood alcohol using the Breathalyzer test. In this test a 52.5-mL breath sample is bubbled through an acidified solution of K2Cr2O7, which oxidizes ethanol to acetic acid.
The concentration of ethanol in the breath sample is determined from the decrease in absorbance at 440 nm, where the dichromate ion absorbs. A blood alcohol content of 0.10%, which is above the legal limit, corresponds to 0.025 mg of ethanol in the breath sample.

Developing a Quantitative Method for a Single Component

To develop a quantitative analytical method, the conditions under which Beer's law is obeyed must be established. First, the most appropriate wavelength for the analysis is determined from an absorption spectrum. In most cases the best wavelength corresponds to an absorption maximum because it provides greater sensitivity and is less susceptible to instrumental limitations. Second, if the instrument has adjustable slits, then an appropriate slit width is chosen. The absorption spectrum also aids in selecting a slit width by choosing a width that is narrow enough to avoid instrumental limitations to Beer's law, but wide enough to increase the throughput of source radiation. Finally, a calibration curve is constructed to determine the range of concentrations for which Beer's law is valid. Additional considerations that are important in any quantitative method are the effect of potential interferents and establishing an appropriate blank.

Representative Method 10.3.1: Determination of Iron in Water and Wastewater

The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of iron in water and wastewater provides an instructive example of a typical procedure. The description here is based on Method 3500-Fe B as published in Standard Methods for the Examination of Water and Wastewater, 20th Ed., American Public Health Association: Washington, D. C., 1998.

Description of Method. Iron in the +2 oxidation state reacts with o-phenanthroline to form the orange-red $\text{Fe(phen)}_3^{2+}$ complex. The intensity of the complex's color is independent of the solution's acidity between a pH of 3 and 9. Because the complex forms more rapidly at lower pH levels, the reaction usually is carried out within a pH range of 3.0–3.5. Any iron present in the +3 oxidation state is reduced with hydroxylamine before adding o-phenanthroline. The most important interferents are strong oxidizing agents, polyphosphates, and metal ions such as Cu2+, Zn2+, Ni2+, and Cd2+. An interference from oxidizing agents is minimized by adding an excess of hydroxylamine, and an interference from polyphosphate is minimized by boiling the sample in the presence of acid. The absorbance of samples and standards is measured at a wavelength of 510 nm using a 1-cm cell (longer pathlength cells also may be used). Beer's law is obeyed for concentrations within the range of 0.2–4.0 mg Fe/L.

Procedure. For a sample that contains less than 2 mg Fe/L, directly transfer a 50-mL portion to a 125-mL Erlenmeyer flask. Samples that contain more than 2 mg Fe/L are diluted before acquiring the 50-mL portion. Add 2 mL of concentrated HCl and 1 mL of hydroxylamine to the sample. Bring the solution to a boil and continue boiling until the solution's volume is reduced to between 15 and 20 mL. After cooling to room temperature, transfer the solution to a 50-mL volumetric flask, add 10 mL of an ammonium acetate buffer, 2 mL of a 1000 ppm solution of o-phenanthroline, and dilute to volume.
Allow 10–15 minutes for color development before measuring the absorbance, using distilled water to set 100% T. Calibration standards, including a blank, are prepared by the same procedure using a stock solution that contains a known concentration of Fe2+.

Questions

1. Explain why strong oxidizing agents are interferents and why an excess of hydroxylamine prevents the interference.

A strong oxidizing agent will oxidize some Fe2+ to Fe3+. Because $\text{Fe(phen)}_3^{3+}$ does not absorb as strongly as $\text{Fe(phen)}_3^{2+}$, the absorbance is smaller than expected, which produces a negative determinate error. The excess hydroxylamine reacts with the oxidizing agents, removing them from the solution.

2. The color of the complex is stable between pH levels of 3 and 9. What are some possible complications at more acidic or at more basic pH's?

Because o-phenanthroline is a weak base, its conditional formation constant for $\text{Fe(phen)}_3^{2+}$ becomes smaller at more acidic pH levels, where o-phenanthroline is present in its protonated form. The result is a decrease in absorbance and a less sensitive analytical method. When the pH is greater than 9, competition between OH– and o-phenanthroline for Fe2+ also decreases the absorbance. In addition, if the pH is sufficiently basic there is a risk that the iron will precipitate as Fe(OH)2.

3. Cadmium is an interferent because it forms a precipitate with o-phenanthroline. What effect does the formation of precipitate have on the determination of iron?

Because o-phenanthroline is present in large excess (2000 μg of o-phenanthroline for 100 μg of Fe2+), it is not likely that the interference is due to an insufficient amount of o-phenanthroline being available to react with the Fe2+. The presence of a precipitate in the sample cell results in the scattering of radiation, which causes an apparent increase in absorbance. Because the measured absorbance increases, the reported concentration is too high. Although scattering is a problem here, it can serve as the basis of a useful analytical method. See Chapter 10.8 for further details.

4. Even high quality ammonium acetate contains a significant amount of iron. Why is this source of iron not a problem?

Because all samples and standards are prepared using the same volume of ammonium acetate buffer, the contribution of this source of iron is accounted for by the calibration curve's reagent blank.

Quantitative Analysis for a Single Sample

To determine the concentration of an analyte we measure its absorbance and apply Beer's law using any of the standardization methods described in Chapter 5. The most common methods are a normal calibration curve using external standards and the method of standard additions. A single-point standardization also is possible, although we must first verify that Beer's law holds for the concentration of analyte in the samples and the standard.

Example 10.3.1

The determination of iron in an industrial waste stream is carried out by the o-phenanthroline method described in Representative Method 10.3.1. Using the data in the following table, determine the mg Fe/L in the waste stream.
mg Fe/L    absorbance
0.00       0.000
1.00       0.183
2.00       0.364
3.00       0.546
4.00       0.727
sample     0.269

Solution

Linear regression of absorbance versus the concentration of Fe in the standards gives the calibration curve and calibration equation shown here

$A=0.0006+\left(0.1817 \ \mathrm{mg}^{-1} \mathrm{L}\right) \times(\mathrm{mg} \mathrm{Fe} / \mathrm{L}) \nonumber$

Substituting the sample's absorbance into the calibration equation gives the concentration of Fe in the waste stream as 1.48 mg Fe/L.

Exercise 10.3.1

The concentration of Cu2+ in a sample is determined by reacting it with the ligand cuprizone and measuring its absorbance at 606 nm in a 1.00-cm cell. When a 5.00-mL sample is treated with cuprizone and diluted to 10.00 mL, the resulting solution has an absorbance of 0.118. A second 5.00-mL sample is mixed with 1.00 mL of a 20.00 mg/L standard of Cu2+, treated with cuprizone and diluted to 10.00 mL, giving an absorbance of 0.162. Report the mg Cu2+/L in the sample.

Answer

For this standard addition we write equations that relate absorbance to the concentration of Cu2+ in the sample before the standard addition

$0.118=\varepsilon b \left[ C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}\right] \nonumber$

and after the standard addition

$0.162=\varepsilon b\left(C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}+\frac{20.00 \ \mathrm{mg} \ \mathrm{Cu}}{\mathrm{L}} \times \frac{1.00 \ \mathrm{mL}}{10.00 \ \mathrm{mL}}\right) \nonumber$

in each case accounting for the dilution of the original sample and for the standard. The value of $\varepsilon b$ is the same in both equations. Solving each equation for $\varepsilon b$ and equating

$\frac{0.162}{C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}+\frac{20.00 \ \mathrm{mg} \ \mathrm{Cu}}{\mathrm{L}} \times \frac{1.00 \ \mathrm{mL}}{10.00 \ \mathrm{mL}}}=\frac{0.118}{C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}} \nonumber$

leaves us with an equation in which CCu is the only variable. Solving for CCu gives its value as

$\frac{0.162}{0.500 \times C_{\mathrm{Cu}}+2.00 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L}}=\frac{0.118}{0.500 \times C_{\mathrm{Cu}}} \nonumber$

$0.0810 \times C_{\mathrm{Cu}}=0.0590 \times C_{\mathrm{Cu}}+0.236 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L} \nonumber$

$0.0220 \times C_{\mathrm{Cu}}=0.236 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L} \nonumber$

$C_{\mathrm{Cu}}=10.7 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L} \nonumber$
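Both standardizations reduce to a few lines of arithmetic. The sketch below is an illustration only, not part of the published methods; it reproduces Example 10.3.1's external-standard calibration with numpy and then solves Exercise 10.3.1's standard-addition equation after the volume corrections have already been applied.

```python
import numpy as np

# --- external standards (data from Example 10.3.1) ---
std_conc = np.array([0.00, 1.00, 2.00, 3.00, 4.00])      # mg Fe/L
std_abs  = np.array([0.000, 0.183, 0.364, 0.546, 0.727])

slope, intercept = np.polyfit(std_conc, std_abs, 1)      # A = slope*C + intercept
A_sample = 0.269
C_Fe = (A_sample - intercept) / slope
print(f"calibration: A = {intercept:.4f} + {slope:.4f} C;  C_Fe = {C_Fe:.2f} mg Fe/L")

# --- single standard addition (data from Exercise 10.3.1) ---
# 0.118 = eb * (0.500 * C_Cu)  and  0.162 = eb * (0.500 * C_Cu + 2.00);
# dividing one equation by the other eliminates eb and leaves C_Cu
A1, A2 = 0.118, 0.162
dilution, added = 0.500, 2.00                            # mg Cu/L added, after dilution
C_Cu = added * A1 / (dilution * (A2 - A1))
print(f"standard addition: C_Cu = {C_Cu:.1f} mg Cu/L")
```

Running the sketch returns the same results as the worked solutions: 1.48 mg Fe/L and 10.7 mg Cu/L.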
Quantitative Analysis of Mixtures

Suppose we need to determine the concentration of two analytes, X and Y, in a sample. If each analyte has a wavelength where the other analyte does not absorb, then we can proceed using the approach in Example 10.3.1. Unfortunately, UV/Vis absorption bands are so broad that frequently it is not possible to find suitable wavelengths. Because Beer's law is additive the mixture's absorbance, Amix, is

$\left(A_{m i x}\right)_{\lambda_{1}}=\left(\varepsilon_{x}\right)_{\lambda_{1}} b C_{X}+\left(\varepsilon_{Y}\right)_{\lambda_{1}} b C_{Y} \label{10.1}$

where $\lambda_1$ is the wavelength at which we measure the absorbance. Because Equation \ref{10.1} includes terms for the concentration of both X and Y, the absorbance at one wavelength does not provide enough information to determine either CX or CY. If we measure the absorbance at a second wavelength

$\left(A_{m i x}\right)_{\lambda_{2}}=\left(\varepsilon_{x}\right)_{\lambda_{2}} b C_{X}+\left(\varepsilon_{Y}\right)_{\lambda_{2}} b C_{Y} \label{10.2}$

then we can determine CX and CY by solving Equation \ref{10.1} and Equation \ref{10.2} simultaneously. Of course, we also must determine the value for $\varepsilon_X$ and $\varepsilon_Y$ at each wavelength. For a mixture of n components, we must measure the absorbance at n different wavelengths.

Example 10.3.2

The concentrations of Fe3+ and Cu2+ in a mixture are determined following their reaction with hexacyanoruthenate (II), $\text{Ru(CN)}_6^{4-}$, which forms a purple-blue complex with Fe3+ ($\lambda_\text{max}$ = 550 nm) and a pale-green complex with Cu2+ ($\lambda_\text{max}$ = 396 nm) [DiTusa, M. R.; Schlit, A. A. J. Chem. Educ. 1985, 62, 541–542]. The molar absorptivities (M–1 cm–1) for the metal complexes at the two wavelengths are summarized in the following table.

analyte   $\varepsilon_{550}$   $\varepsilon_{396}$
Fe3+      9970                  84
Cu2+      34                    856

When a sample that contains Fe3+ and Cu2+ is analyzed in a cell with a pathlength of 1.00 cm, the absorbance at 550 nm is 0.183 and the absorbance at 396 nm is 0.109. What are the molar concentrations of Fe3+ and Cu2+ in the sample?

Solution

Substituting known values into Equation \ref{10.1} and Equation \ref{10.2} gives

$\begin{aligned} A_{550} &=0.183=9970 C_{\mathrm{Fe}}+34 C_{\mathrm{Cu}} \\ A_{396} &=0.109=84 C_{\mathrm{Fe}}+856 C_{\mathrm{Cu}} \end{aligned} \nonumber$

To determine CFe and CCu we solve the first equation for CCu

$C_{\mathrm{Cu}}=\frac{0.183-9970 C_{\mathrm{Fe}}}{34} \nonumber$

and substitute the result into the second equation.

$\begin{aligned} 0.109 &=84 C_{\mathrm{Fe}}+856 \times \frac{0.183-9970 C_{\mathrm{Fe}}}{34} \\ &=4.607-\left(2.51 \times 10^{5}\right) C_{\mathrm{Fe}} \end{aligned} \nonumber$

Solving for CFe gives the concentration of Fe3+ as $1.8 \times 10^{-5}$ M. Substituting this concentration back into the equation for the mixture's absorbance at 396 nm gives the concentration of Cu2+ as $1.3 \times 10^{-4}$ M.

Another approach to solving Example 10.3.2 is to multiply the first equation by 856/34, giving

$4.607=251009 C_{\mathrm{Fe}}+856 C_\mathrm{Cu} \nonumber$

Subtracting the second equation from this equation

$\begin{aligned} 4.607 &=251009 C_{\mathrm{Fe}}+856 C_{\mathrm{Cu}} \\ -0.109 &=84 C_{\mathrm{Fe}}+856 C_{\mathrm{Cu}} \end{aligned} \nonumber$

gives

$4.498=250925 C_{\mathrm{Fe}} \nonumber$

and we find that CFe is $1.8 \times 10^{-5}$ M. Having determined CFe we can substitute back into one of the other equations to solve for CCu, which is $1.3 \times 10^{-4}$ M.
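Writing Equation \ref{10.1} and Equation \ref{10.2} in matrix form and letting a linear-algebra routine do the work is less error-prone than solving by substitution, particularly when there are more than two components. The sketch below is an illustration, not part of the original example; it simply repeats the calculation in Example 10.3.2 using numpy.

```python
import numpy as np

# molar absorptivities (M^-1 cm^-1): rows are the wavelengths (550 nm, 396 nm),
# columns are the analytes (Fe3+, Cu2+); the pathlength b = 1.00 cm
E = np.array([[9970.0, 34.0],
              [84.0, 856.0]])
A_mix = np.array([0.183, 0.109])       # measured absorbances of the mixture

# solve E @ C = A_mix for the concentration vector C = [C_Fe, C_Cu]
C_Fe, C_Cu = np.linalg.solve(E, A_mix)

print(f"[Fe3+] = {C_Fe:.1e} M")        # about 1.8e-05 M
print(f"[Cu2+] = {C_Cu:.1e} M")        # about 1.3e-04 M
```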
Exercise 10.3.2

The absorbance spectra for Cr3+ and Co2+ overlap significantly. To determine the concentration of these analytes in a mixture, its absorbance is measured at 400 nm and at 505 nm, yielding values of 0.336 and 0.187, respectively. The individual molar absorptivities (M–1 cm–1) for Cr3+ are 15.2 at 400 nm and 0.533 at 505 nm; the values for Co2+ are 5.60 at 400 nm and 5.07 at 505 nm.

Answer

Substituting into Equation \ref{10.1} and Equation \ref{10.2} gives

$A_{400} = 0.336 = 15.2C_\text{Cr} + 5.60C_\text{Co} \nonumber$

$A_{505} = 0.187 = 0.533C_\text{Cr} + 5.07C_\text{Co} \nonumber$

To determine CCr and CCo we solve the first equation for CCo

$C_{\mathrm{Co}}=\frac{0.336-15.2 C_{\mathrm{Cr}}}{5.60} \nonumber$

and substitute the result into the second equation.

$0.187=0.533 C_{\mathrm{Cr}}+5.07 \times \frac{0.336-15.2 C_{\mathrm{Cr}}}{5.60} \nonumber$

$0.187=0.3042-13.23 C_{\mathrm{Cr}} \nonumber$

Solving for CCr gives the concentration of Cr3+ as $8.86 \times 10^{-3}$ M. Substituting this concentration back into the equation for the mixture's absorbance at 400 nm gives the concentration of Co2+ as $3.60 \times 10^{-2}$ M.

To obtain results with good accuracy and precision the two wavelengths should be selected so that $\varepsilon_X > \varepsilon_Y$ at one wavelength and $\varepsilon_X < \varepsilon_Y$ at the other wavelength. It is easy to appreciate why this is true. Because the absorbance at each wavelength is dominated by one analyte, any uncertainty in the concentration of the other analyte has less of an impact. Figure 10.3.11 shows that the choice of wavelengths for Practice Exercise 10.3.2 is reasonable. When the choice of wavelengths is not obvious, one method for locating the optimum wavelengths is to plot $\varepsilon_X / \varepsilon_Y$ as a function of wavelength, and determine the wavelengths where $\varepsilon_X / \varepsilon_Y$ reaches maximum and minimum values [Mehra, M. C.; Rioux, J. J. Chem. Educ. 1982, 59, 688–689].

When the analytes' spectra overlap severely, such that $\varepsilon_X \approx \varepsilon_Y$ at all wavelengths, other computational methods may provide better accuracy and precision. In a multiwavelength linear regression analysis, for example, a mixture's absorbance is compared to that for a set of standard solutions at several wavelengths [Blanco, M.; Iturriaga, H.; Maspoch, S.; Tarin, P. J. Chem. Educ. 1989, 66, 178–180]. If ASX and ASY are the absorbance values for standard solutions of components X and Y at any wavelength, then

$A_{SX}=\varepsilon_{X} b C_{SX} \label{10.3}$

$A_{SY}=\varepsilon_{Y} b C_{SY} \label{10.4}$

where CSX and CSY are the known concentrations of X and Y in the standard solutions. Solving Equation \ref{10.3} and Equation \ref{10.4} for $\varepsilon_X$ and for $\varepsilon_Y$, substituting into Equation \ref{10.1}, and rearranging, gives

$\frac{A_{\operatorname{mix}}}{A_{S X}}=\frac{C_{X}}{C_{S X}}+\frac{C_{Y}}{C_{S Y}} \times \frac{A_{S Y}}{A_{S X}} \nonumber$

To determine CX and CY the mixture's absorbance and the absorbances of the standard solutions are measured at several wavelengths. Graphing Amix/ASX versus ASY/ASX gives a straight line with a slope of CY/CSY and a y-intercept of CX/CSX. This approach is particularly helpful when it is not possible to find wavelengths where $\varepsilon_X > \varepsilon_Y$ and $\varepsilon_X < \varepsilon_Y$.

The approach outlined here for a multiwavelength linear regression uses a single standard solution for each analyte. A more rigorous approach uses multiple standards for each analyte. The math behind the analysis of such data—which we call a multiple linear regression—is beyond the level of this text. For more details about multiple linear regression see Brereton, R. G. Chemometrics: Data Analysis for the Laboratory and Chemical Plant, Wiley: Chichester, England, 2003.

Example 10.3.3

Figure 10.3.11 shows visible absorbance spectra for a standard solution of 0.0250 M Cr3+, a standard solution of 0.0750 M Co2+, and a mixture that contains unknown concentrations of each ion. The data for these spectra are shown here.
λ (nm)   ACr    ACo    Amix
375      0.26   0.01   0.53
400      0.43   0.03   0.88
425      0.39   0.07   0.83
440      0.29   0.13   0.67
455      0.20   0.21   0.54
470      0.14   0.28   0.47
480      0.12   0.30   0.44
490      0.11   0.34   0.45
500      0.13   0.38   0.51
520      0.19   0.38   0.63
530      0.24   0.33   0.70
540      0.28   0.26   0.73
550      0.32   0.18   0.76
570      0.38   0.08   0.81
575      0.39   0.06   0.82
580      0.38   0.05   0.79
600      0.34   0.03   0.70
625      0.24   0.02   0.49

Use a multiwavelength regression analysis to determine the composition of the unknown.

Solution

First we need to calculate values for Amix/ASX and for ASY/ASX. Let's define X as Co2+ and Y as Cr3+. For example, at a wavelength of 375 nm Amix/ASX is 0.53/0.01, or 53, and ASY/ASX is 0.26/0.01, or 26. Completing the calculation for all wavelengths and graphing Amix/ASX versus ASY/ASX gives the calibration curve shown in Figure 10.3.12. Fitting a straight line to the data gives a regression model of

$\frac{A_{\operatorname{mix}}}{A_{S X}}=0.636+2.01 \times \frac{A_{S Y}}{A_{S X}} \nonumber$

Using the y-intercept, the concentration of Co2+ is

$\frac{C_{X}}{C_{S X}}=\frac{\left[\mathrm{Co}^{2+}\right]}{0.0750 \mathrm{M}}=0.636 \nonumber$

or [Co2+] = 0.048 M; using the slope, the concentration of Cr3+ is

$\frac{C_{Y}}{C_{S Y}}=\frac{\left[\mathrm{Cr}^{3+}\right]}{0.0250 \mathrm{M}}=2.01 \nonumber$

or [Cr3+] = 0.050 M.

Exercise 10.3.3

A mixture of $\text{MnO}_4^{-}$ and $\text{Cr}_2\text{O}_7^{2-}$, and standards of 0.10 mM KMnO4 and of 0.10 mM K2Cr2O7, give the results shown in the following table. Determine the composition of the mixture. The data for this problem is from Blanco, M. C.; Iturriaga, H.; Maspoch, S.; Tarin, P. J. Chem. Educ. 1989, 66, 178–180.

λ (nm)   AMn     ACr     Amix
266      0.042   0.410   0.766
288      0.082   0.283   0.571
320      0.168   0.158   0.422
350      0.125   0.318   0.672
360      0.036   0.181   0.366

Answer

Letting X represent $\text{MnO}_4^{-}$ and letting Y represent $\text{Cr}_2\text{O}_7^{2-}$, we plot the equation

$\frac{A_{\operatorname{mix}}}{A_{SX}}=\frac{C_{X}}{C_{SX}}+\frac{C_{Y}}{C_{S Y}} \times \frac{A_{S Y}}{A_{SX}} \nonumber$

placing Amix/ASX on the y-axis and ASY/ASX on the x-axis. For example, at a wavelength of 266 nm the value of Amix/ASX is 0.766/0.042, or 18.2, and the value of ASY/ASX is 0.410/0.042, or 9.76. Completing the calculations for all wavelengths and plotting the data gives the result shown here. Fitting a straight line to the data gives a regression model of

$\frac{A_{\text { mix }}}{A_{\text { SX }}}=0.8147+1.7839 \times \frac{A_{SY}}{A_{SX}} \nonumber$

Using the y-intercept, the concentration of $\text{MnO}_4^{-}$ is

$\frac{C_{X}}{C_{S X}}=0.8147=\frac{\left[\mathrm{MnO}_{4}^{-}\right]}{1.0 \times 10^{-4} \ \mathrm{M} \ \mathrm{MnO}_{4}^{-}} \nonumber$

or $8.15 \times 10^{-5}$ M $\text{MnO}_4^{-}$, and using the slope, the concentration of $\text{Cr}_2\text{O}_7^{2-}$ is

$\frac{C_{Y}}{C_{S Y}}=1.7839=\frac{\left[\mathrm{Cr}_{2} \mathrm{O}_{7}^{2-}\right]}{1.00 \times 10^{-4} \ \mathrm{M} \ \text{Cr}_{2} \mathrm{O}_{7}^{2-}} \nonumber$

or $1.78 \times 10^{-4}$ M $\text{Cr}_2\text{O}_7^{2-}$.
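The multiwavelength regression in Example 10.3.3 is nothing more than an ordinary straight-line fit of Amix/ASX against ASY/ASX; the only work is assembling the two ratios. The sketch below is an illustration of the calculation, not part of the original solution; it re-runs the example's data through numpy.

```python
import numpy as np

# Example 10.3.3 data: absorbances of the 0.0250 M Cr3+ standard, the
# 0.0750 M Co2+ standard, and the mixture at eighteen wavelengths
A_Cr = np.array([0.26, 0.43, 0.39, 0.29, 0.20, 0.14, 0.12, 0.11, 0.13,
                 0.19, 0.24, 0.28, 0.32, 0.38, 0.39, 0.38, 0.34, 0.24])
A_Co = np.array([0.01, 0.03, 0.07, 0.13, 0.21, 0.28, 0.30, 0.34, 0.38,
                 0.38, 0.33, 0.26, 0.18, 0.08, 0.06, 0.05, 0.03, 0.02])
A_mix = np.array([0.53, 0.88, 0.83, 0.67, 0.54, 0.47, 0.44, 0.45, 0.51,
                  0.63, 0.70, 0.73, 0.76, 0.81, 0.82, 0.79, 0.70, 0.49])

# define X as Co2+ and Y as Cr3+, then fit Amix/ASX = CX/CSX + (CY/CSY)(ASY/ASX)
x = A_Cr / A_Co
y = A_mix / A_Co
slope, intercept = np.polyfit(x, y, 1)

print(f"[Co2+] = {intercept * 0.0750:.3f} M")   # y-intercept times C_SX, about 0.048 M
print(f"[Cr3+] = {slope * 0.0250:.3f} M")       # slope times C_SY, about 0.050 M
```

The fit returns an intercept of 0.636 and a slope of 2.01, matching the regression model reported in the example.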
Qualitative Applications

As discussed in Chapter 10.2, ultraviolet, visible, and infrared absorption bands result from the absorption of electromagnetic radiation by specific valence electrons or bonds. The energy at which the absorption occurs, and the intensity of that absorption, is determined by the chemical environment of the absorbing moiety. For example, benzene has several ultraviolet absorption bands due to $\pi \rightarrow \pi^*$ transitions. The position and intensity of two of these bands, 203.5 nm ($\varepsilon$ = 7400 M–1 cm–1) and 254 nm ($\varepsilon$ = 204 M–1 cm–1), are sensitive to substitution. For benzoic acid, in which a carboxylic acid group replaces one of the aromatic hydrogens, the two bands shift to 230 nm ($\varepsilon$ = 11600 M–1 cm–1) and 273 nm ($\varepsilon$ = 970 M–1 cm–1). A variety of rules have been developed to aid in correlating UV/Vis absorption bands to chemical structure. Similar correlations are available for infrared absorption bands. For example, a carbonyl's C=O stretch is sensitive to adjacent functional groups, appearing at 1650 cm–1 for acids, 1700 cm–1 for ketones, and 1800 cm–1 for acid chlorides. The interpretation of UV/Vis and IR spectra receives adequate coverage elsewhere in the chemistry curriculum, notably in organic chemistry, and is not considered further in this text.

With the availability of computerized data acquisition and storage it is possible to build digital libraries of standard reference spectra. The identity of an unknown compound often can be determined by comparing its spectrum against a library of reference spectra, a process known as spectral searching. Comparisons are made using an algorithm that calculates the cumulative difference between the sample's spectrum and a reference spectrum. For example, one simple algorithm uses the following equation

$D = \sum_{i = 1}^n | (A_{sample})_i - (A_{reference})_i | \nonumber$

where D is the cumulative difference, Asample is the sample's absorbance at wavelength or wavenumber i, Areference is the absorbance of the reference compound at the same wavelength or wavenumber, and n is the number of digitized points in the spectra. The cumulative difference is calculated for each reference spectrum. The reference compound with the smallest value of D is the closest match to the unknown compound. The accuracy of spectral searching is limited by the number and type of compounds included in the library, and by the effect of the sample's matrix on the spectrum.

Another advantage of computerized data acquisition is the ability to subtract one spectrum from another. When coupled with spectral searching it is possible to determine the identity of several components in a sample without the need of a prior separation step by repeatedly searching and subtracting reference spectra. An example is shown in Figure 10.3.13, in which the composition of a two-component mixture is determined by successive searching and subtraction. Figure 10.3.13a shows the spectrum of the mixture. A search of the spectral library selects cocaine•HCl (Figure 10.3.13b) as a likely component of the mixture. Subtracting the reference spectrum for cocaine•HCl from the mixture's spectrum leaves a result (Figure 10.3.13c) that closely matches mannitol's reference spectrum (Figure 10.3.13d). Subtracting the reference spectrum for mannitol leaves a small residual signal (Figure 10.3.13e).
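The search metric D is simple to implement. The sketch below is a toy illustration with made-up spectra and placeholder compound names, not an actual spectral library; it scores an unknown spectrum against each reference, reports the best match, and then performs the subtraction step used to look for a second component.

```python
import numpy as np

def cumulative_difference(sample, reference):
    """D = sum of |A_sample - A_reference| over all digitized points."""
    return np.sum(np.abs(sample - reference))

# toy library of digitized spectra (absorbance at the same n points)
library = {
    "compound A": np.array([0.10, 0.45, 0.80, 0.40, 0.10]),
    "compound B": np.array([0.05, 0.20, 0.35, 0.60, 0.30]),
    "compound C": np.array([0.50, 0.30, 0.20, 0.10, 0.05]),
}

unknown = np.array([0.14, 0.48, 0.82, 0.45, 0.12])   # made-up mixture spectrum

# score every reference and pick the smallest cumulative difference
scores = {name: cumulative_difference(unknown, ref) for name, ref in library.items()}
best = min(scores, key=scores.get)
print("best match:", best, "with D =", round(scores[best], 3))

# subtract the best match and search the residual for a second component
residual = unknown - library[best]
```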
Characterization Applications

Molecular absorption, particularly in the UV/Vis range, has been used for a variety of different characterization studies, including determining the stoichiometry of metal–ligand complexes and determining equilibrium constants. Both of these examples are examined in this section.

Stoichiometry of a Metal–Ligand Complex

We can determine the stoichiometry of the metal–ligand complexation reaction

$\mathrm{M}+y \mathrm{L} \rightleftharpoons \mathrm{ML}_{y} \nonumber$

using one of three methods: the method of continuous variations, the mole-ratio method, and the slope-ratio method. Of these approaches, the method of continuous variations, also called Job's method, is the most popular. In this method a series of solutions is prepared such that the total moles of metal and of ligand, ntotal, in each solution is the same. If (nM)i and (nL)i are, respectively, the moles of metal and ligand in solution i, then

$n_{\text {total}}=\left(n_{\mathrm{M}}\right)_{i}+\left(n_{\mathrm{L}}\right)_{i} \nonumber$

The relative amount of ligand and metal in each solution is expressed as the mole fraction of ligand, (XL)i, and the mole fraction of metal, (XM)i,

$\left(X_{\mathrm{L}}\right)_{i}=\frac{\left(n_{\mathrm{L}}\right)_{i}}{n_{\mathrm{total}}} \nonumber$

$\left(X_{M}\right)_{i}=1-\frac{\left(n_\text{L}\right)_{i}}{n_{\text {total}}}=\frac{\left(n_\text{M}\right)_{i}}{n_{\text {total}}} \nonumber$

The concentration of the metal–ligand complex in any solution is determined by the limiting reagent, with the greatest concentration occurring when the metal and the ligand are mixed stoichiometrically. If we monitor the complexation reaction at a wavelength where only the metal–ligand complex absorbs, a graph of absorbance versus the mole fraction of ligand has two linear branches—one when the ligand is the limiting reagent and a second when the metal is the limiting reagent. The intersection of the two branches represents a stoichiometric mixing of the metal and the ligand. We use the mole fraction of ligand at the intersection to determine the value of y for the metal–ligand complex MLy.

$y=\frac{n_{\mathrm{L}}}{n_{\mathrm{M}}}=\frac{X_{\mathrm{L}}}{X_{\mathrm{M}}}=\frac{X_{\mathrm{L}}}{1-X_{\mathrm{L}}} \nonumber$

You also can plot the data as absorbance versus the mole fraction of metal. In this case, y is equal to (1 – XM)/XM.

Example 10.3.4

To determine the formula for the complex between Fe2+ and o-phenanthroline, a series of solutions is prepared in which the total concentration of metal and ligand is held constant at $3.15 \times 10^{-4}$ M. The absorbance of each solution is measured at a wavelength of 510 nm. Using the following data, determine the formula for the complex.

XL      absorbance
0.000   0.000
0.100   0.116
0.200   0.231
0.300   0.347
0.400   0.462
0.500   0.578
0.600   0.693
0.700   0.809
0.800   0.693
0.900   0.347
1.000   0.000

Solution

A plot of absorbance versus the mole fraction of ligand is shown in Figure 10.3.14. To find the maximum absorbance, we extrapolate the two linear portions of the plot. The two lines intersect at a mole fraction of ligand of 0.75. Solving for y gives

$y=\frac{X_{L}}{1-X_{L}}=\frac{0.75}{1-0.75}=3 \nonumber$

The formula for the metal–ligand complex is $\text{Fe(phen)}_3^{2+}$.
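Finding the intersection of the two linear branches in a continuous variations plot is itself a small calculation: fit a line to the points where the ligand is limiting, fit a second line to the points where the metal is limiting, and solve for the crossing point. The sketch below uses the data from Example 10.3.4; which points belong to each branch is an assumption made here for illustration, since in practice that choice comes from inspecting the plot.

```python
import numpy as np

X_L = np.array([0.000, 0.100, 0.200, 0.300, 0.400, 0.500,
                0.600, 0.700, 0.800, 0.900, 1.000])
A   = np.array([0.000, 0.116, 0.231, 0.347, 0.462, 0.578,
                0.693, 0.809, 0.693, 0.347, 0.000])

# branch 1: ligand is limiting (low X_L); branch 2: metal is limiting (high X_L)
m1, b1 = np.polyfit(X_L[:8], A[:8], 1)     # X_L = 0.000 to 0.700
m2, b2 = np.polyfit(X_L[8:], A[8:], 1)     # X_L = 0.800 to 1.000

X_intersect = (b2 - b1) / (m1 - m2)        # where the two extrapolated lines cross
y = X_intersect / (1 - X_intersect)        # ligand-to-metal ratio in ML_y

print(f"intersection at X_L = {X_intersect:.2f};  y = {y:.1f}")
```

Running the sketch gives an intersection at XL = 0.75 and y = 3, in agreement with the extrapolation in Figure 10.3.14.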
Exercise 10.3.4

Use the continuous variations data in the following table to determine the formula for the complex between Fe3+ and SCN–. The data for this problem is adapted from Meloun, M.; Havel, J.; Högfeldt, E. Computation of Solution Equilibria, Ellis Horwood: Chichester, England, 1988, p. 236.

XL       absorbance
0.0200   0.068
0.0870   0.262
0.1792   0.471
0.2951   0.670
0.3887   0.767
0.4964   0.807
0.5811   0.790
0.6860   0.701
0.7885   0.540
0.8923   0.324
0.9787   0.071

Answer

The figure below shows a continuous variations plot for the data in this exercise. Although the individual data points show substantial curvature—enough curvature that there is little point in trying to draw linear branches for excess metal and excess ligand—the maximum absorbance clearly occurs at XL ≈ 0.5. The complex's stoichiometry, therefore, is Fe(SCN)2+.

Several precautions are necessary when using the method of continuous variations. First, the metal and the ligand must form only one metal–ligand complex. To determine if this condition is true, plots of absorbance versus XL are constructed at several different wavelengths and for several different values of ntotal. If the maximum absorbance does not occur at the same value of XL for each set of conditions, then more than one metal–ligand complex is present. A second precaution is that the metal–ligand complex's absorbance must obey Beer's law. Third, if the metal–ligand complex's formation constant is relatively small, a plot of absorbance versus XL may show significant curvature. In this case it often is difficult to determine the stoichiometry by extrapolation. Finally, because the stability of a metal–ligand complex may be influenced by solution conditions, it is necessary to control carefully the composition of the solutions. When the ligand is a weak base, for example, each solution must be buffered to the same pH.

In the mole-ratio method the moles of one reactant, usually the metal, are held constant, while the moles of the other reactant are varied. The absorbance is monitored at a wavelength where the metal–ligand complex absorbs. A plot of absorbance as a function of the ligand-to-metal mole ratio, nL/nM, has two linear branches that intersect at a mole ratio corresponding to the complex's formula. Figure 10.3.15a shows a mole-ratio plot for the formation of a 1:1 complex in which the absorbance is monitored at a wavelength where only the complex absorbs. Figure 10.3.15b shows a mole-ratio plot for a 1:2 complex in which all three species—the metal, the ligand, and the complex—absorb at the selected wavelength. Unlike the method of continuous variations, the mole-ratio method can be used for complexation reactions that occur in a stepwise fashion if there is a difference in the molar absorptivities of the metal–ligand complexes, and if the formation constants are sufficiently different. A typical mole-ratio plot for the stepwise formation of ML and ML2 is shown in Figure 10.3.15c.

For both the method of continuous variations and the mole-ratio method, we determine the complex's stoichiometry by extrapolating absorbance data from conditions in which there is a linear relationship between absorbance and the relative amounts of metal and ligand. If a metal–ligand complex is very weak, a plot of absorbance versus XL or nL/nM becomes so curved that it is impossible to determine the stoichiometry by extrapolation. In this case the slope-ratio method is used.

In the slope-ratio method two sets of solutions are prepared. The first set of solutions contains a constant amount of metal and a variable amount of ligand, chosen such that the total concentration of metal, CM, is much larger than the total concentration of ligand, CL.
Under these conditions we may assume that essentially all the ligand reacts to form the metal–ligand complex. The concentration of the complex, which has the general form MxLy, is

$\left[\mathrm{M}_{x} \mathrm{L_y}\right]=\frac{C_{\mathrm{L}}}{y} \nonumber$

If we monitor the absorbance at a wavelength where only MxLy absorbs, then

$A=\varepsilon b\left[\mathrm{M}_{x} \mathrm{L}_{y}\right]=\frac{\varepsilon b C_{\mathrm{L}}}{y} \nonumber$

and a plot of absorbance versus CL is linear with a slope, sL, of

$s_{\mathrm{L}}=\frac{\varepsilon b}{y} \nonumber$

A second set of solutions is prepared with a fixed concentration of ligand that is much greater than a variable concentration of metal; thus

$\left[\mathrm{M}_{x} \mathrm{L}_{y}\right]=\frac{C_{\mathrm{M}}}{x} \nonumber$

$A=\varepsilon b\left[\mathrm{M}_{x} \mathrm{L}_{y}\right]=\frac{\varepsilon b C_{\mathrm{M}}}{x} \nonumber$

$s_{M}=\frac{\varepsilon b}{x} \nonumber$

A ratio of the slopes provides the relative values of x and y.

$\frac{s_{\text{M}}}{s_{\text{L}}}=\frac{\varepsilon b / x}{\varepsilon b / y}=\frac{y}{x} \nonumber$

An important assumption in the slope-ratio method is that the complexation reaction continues to completion in the presence of a sufficiently large excess of metal or ligand. The slope-ratio method also is limited to systems in which only a single complex forms and for which Beer's law is obeyed.

Determination of Equilibrium Constants

Another important application of molecular absorption spectroscopy is the determination of equilibrium constants. Let's consider, as a simple example, an acid–base reaction of the general form

$\operatorname{HIn}(a q)+ \ \mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons \ \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\operatorname{In}^{-}(a q) \nonumber$

where HIn and In– are the conjugate weak acid and weak base forms of an acid–base indicator. The equilibrium constant for this reaction is

$K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{In}^{-}\right]}{[\mathrm{HIn}]} \nonumber$

To determine the equilibrium constant's value, we prepare a solution in which the reaction is in a state of equilibrium and determine the equilibrium concentrations of H3O+, HIn, and In–. The concentration of H3O+ is easy to determine by measuring the solution's pH. To determine the concentrations of HIn and In– we can measure the solution's absorbance. If both HIn and In– absorb at the selected wavelength, then, from Beer's law, we know that

$A=\varepsilon_{\mathrm{HIn}} b[\mathrm{HIn}]+\varepsilon_{\mathrm{In}} b[\mathrm{In}^-] \label{10.5}$

where $\varepsilon_\text{HIn}$ and $\varepsilon_{\text{In}}$ are the molar absorptivities for HIn and In–. The indicator's total concentration, C, is given by a mass balance equation

$C=[\mathrm{HIn}]+ [\text{In}^-] \label{10.6}$

Solving Equation \ref{10.6} for [HIn] and substituting into Equation \ref{10.5} gives

$A=\varepsilon_{\mathrm{HIn}} b\left(C-\left[\mathrm{In}^{-}\right]\right)+\varepsilon_{\mathrm{In}} b\left[\mathrm{In}^{-}\right] \nonumber$

which we simplify to

$A=\varepsilon_{\mathrm{HIn}} bC- \varepsilon_{\mathrm{HIn}}b\left[\mathrm{In}^{-}\right]+\varepsilon_{\mathrm{In}} b\left[\mathrm{In}^{-}\right] \nonumber$

$A=A_{\mathrm{HIn}}+b\left[\operatorname{In}^{-}\right]\left(\varepsilon_{\mathrm{In}}-\varepsilon_{\mathrm{HIn}}\right) \label{10.7}$

where AHIn, which is equal to $\varepsilon_\text{HIn}bC$, is the absorbance when the pH is acidic enough that essentially all the indicator is present as HIn.
Solving Equation \ref{10.7} for the concentration of In– gives

$\left[\operatorname{In}^{-}\right]=\frac{A-A_{\mathrm{HIn}}}{b\left(\varepsilon_{\mathrm{In}}-\varepsilon_{\mathrm{HIn}}\right)} \label{10.8}$

Proceeding in the same fashion, we derive a similar equation for the concentration of HIn

$[\mathrm{HIn}]=\frac{A_{\mathrm{In}}-A}{b\left(\varepsilon_{\mathrm{In}}-\varepsilon_{\mathrm{HIn}}\right)} \label{10.9}$

where AIn, which is equal to $\varepsilon_{\text{In}}bC$, is the absorbance when the pH is basic enough that only In– contributes to the absorbance. Substituting Equation \ref{10.8} and Equation \ref{10.9} into the equilibrium constant expression for HIn gives

$K_a = \frac {[\text{H}_3\text{O}^+][\text{In}^-]} {[\text{HIn}]} = [\text{H}_3\text{O}^+] \times \frac {A - A_\text{HIn}} {A_{\text{In}} - A} \label{10.10}$

We can use Equation \ref{10.10} to determine Ka in one of two ways. The simplest approach is to prepare three solutions, each of which contains the same amount, C, of indicator. The pH of one solution is made sufficiently acidic such that [HIn] >> [In–]. The absorbance of this solution gives AHIn. The value of AIn is determined by adjusting the pH of the second solution such that [In–] >> [HIn]. Finally, the pH of the third solution is adjusted to an intermediate value, and the pH and absorbance, A, recorded. The value of Ka is calculated using Equation \ref{10.10}.

Example 10.3.5 The acidity constant for an acid–base indicator is determined by preparing three solutions, each of which has a total concentration of indicator equal to $5.00 \times 10^{-5}$ M. The first solution is made strongly acidic with HCl and has an absorbance of 0.250. The second solution is made strongly basic and has an absorbance of 1.40. The pH of the third solution is 2.91 and has an absorbance of 0.662. What is the value of Ka for the indicator?

Solution The value of Ka is determined by making appropriate substitutions into Equation \ref{10.10}, where [H3O+] is $1.23 \times 10^{-3}$; thus

$K_{\mathrm{a}}=\left(1.23 \times 10^{-3}\right) \times \frac{0.662-0.250}{1.40-0.662}=6.87 \times 10^{-4} \nonumber$

Exercise 10.3.5 To determine the Ka of a merocyanine dye, the absorbance of a solution of $3.5 \times 10^{-4}$ M dye was measured at a pH of 2.00, a pH of 6.00, and a pH of 12.00, yielding absorbances of 0.000, 0.225, and 0.680, respectively. What is the value of Ka for this dye? The data for this problem is adapted from Lu, H.; Rutan, S. C. Anal. Chem., 1996, 68, 1381–1386.

Answer The value of Ka is

$K_{\mathrm{a}}=\left(1.00 \times 10^{-6}\right) \times \frac{0.225-0.000}{0.680-0.225}=4.95 \times 10^{-7} \nonumber$

A second approach for determining Ka is to prepare a series of solutions, each of which contains the same amount of indicator. Two solutions are used to determine values for AHIn and AIn. Taking the log of both sides of Equation \ref{10.10} and rearranging leaves us with the following equation.

$\log \frac{A-A_{\mathrm{HIn}}}{A_{\mathrm{In}}-A}=\mathrm{pH}-\mathrm{p} K_{\mathrm{a}} \label{10.11}$

A plot of log[(A – AHIn)/(AIn – A)] versus pH is a straight line with a slope of +1 and a y-intercept of –pKa.

Exercise 10.3.6 To determine the Ka for the indicator bromothymol blue, the absorbance of each of a series of solutions that contain the same concentration of bromothymol blue is measured at pH levels of 3.35, 3.65, 3.94, 4.30, and 4.64, yielding absorbance values of 0.170, 0.287, 0.411, 0.562, and 0.670, respectively.
Acidifying the first solution to a pH of 2 changes its absorbance to 0.006, and adjusting the pH of the last solution to 12 changes its absorbance to 0.818. What is the value of Ka for bromothymol blue? The data for this problem is from Patterson, G. S. J. Chem. Educ., 1999, 76, 395–398.

Answer To determine Ka we use Equation \ref{10.11}, plotting log[(A – AHIn)/(AIn – A)] versus pH, as shown below. Fitting a straight line to the data gives a regression model of

$\log \frac{A-A_{\mathrm{HIn}}}{A_{\mathrm{In}}-A}=-3.80+0.962 \mathrm{pH} \nonumber$

The y-intercept is –pKa; thus, the pKa is 3.80 and the Ka is $1.58 \times 10^{-4}$. A short computational sketch of this regression is included at the end of this section.

In developing these approaches for determining Ka we considered a relatively simple system in which the absorbance of HIn and In– are easy to measure and for which it is easy to determine the concentration of H3O+. In addition to acid–base reactions, we can adapt these approaches to any reaction of the general form

$X(a q)+Y(a q)\rightleftharpoons Z(a q) \nonumber$

including metal–ligand complexation reactions and redox reactions, provided we can determine spectrophotometrically the concentration of the product, Z, and one of the reactants, either X or Y, and that we can determine the concentration of the other reactant by some other method. With appropriate modifications, a more complicated system in which we cannot determine the concentration of one or more of the reactants or products also is possible [Ramette, R. W. Chemical Equilibrium and Analysis, Addison-Wesley: Reading, MA, 1981, Chapter 13].

Evaluation of UV/Vis and IR Spectroscopy

Scale of Operations

Molecular UV/Vis absorption routinely is used for the analysis of trace analytes in macro and meso samples. Major and minor analytes are determined by diluting the sample before analysis, and concentrating a sample may allow for the analysis of ultratrace analytes. The scale of operations for infrared absorption is generally poorer than that for UV/Vis absorption.

Accuracy

Under normal conditions a relative error of 1–5% is easy to obtain with UV/Vis absorption. Accuracy usually is limited by the quality of the blank. Examples of the types of problems that are encountered include the presence of particulates in the sample that scatter radiation, and the presence of interferents that react with analytical reagents. In the latter case the interferent may react to form an absorbing species, which leads to a positive determinate error. Interferents also may prevent the analyte from reacting, which leads to a negative determinate error. With care, it is possible to improve the accuracy of an analysis by as much as an order of magnitude.

Precision

In absorption spectroscopy, precision is limited by indeterminate errors—primarily instrumental noise—which are introduced when we measure absorbance. Precision generally is worse for low absorbances, where P0 ≈ PT, and for high absorbances, where PT approaches 0. We might expect, therefore, that precision will vary with transmittance. We can derive an expression that relates precision to transmittance by applying the propagation of uncertainty as described in Chapter 4. To do so we rewrite Beer's law as

$C=-\frac{1}{\varepsilon b} \log T \label{10.12}$

Table 4.3.1 in Chapter 4 helps us complete the propagation of uncertainty for Equation \ref{10.12}; thus, the absolute uncertainty in the concentration, sC, is

$s_{C}=-\frac{0.4343}{\varepsilon b} \times \frac{s_{T}}{T} \label{10.13}$

where sT is the absolute uncertainty in the transmittance.
Dividing Equation \ref{10.13} by Equation \ref{10.12} gives the relative uncertainty in concentration, sC/C, as

$\frac{s_C}{C}=\frac{0.4343 s_{T}}{T \log T} \nonumber$

If we know the transmittance's absolute uncertainty, then we can determine the relative uncertainty in concentration for any measured transmittance. Determining the relative uncertainty in concentration is complicated because sT is a function of the transmittance. As shown in Table 10.3.3, three categories of indeterminate instrumental error are observed [Rothman, L. D.; Crouch, S. R.; Ingle, J. D. Jr. Anal. Chem. 1975, 47, 1226–1233]. A constant sT is observed for the uncertainty associated with reading %T on a meter's analog or digital scale. Typical values are ±0.2–0.3% (a k1 of ±0.002–0.003) for an analog scale and ±0.001% (a k1 of ±0.00001) for a digital scale.

Table 10.3.3. Effect of Indeterminate Errors on Relative Uncertainty in Concentration
$s_T = k_1$ (%T readout resolution; noise in thermal detectors): $\frac{s_{C}}{C}=\frac{0.4343 k_{1}}{T \log T}$
$s_T = k_2 \sqrt{T^2 + T}$ (noise in photon detectors): $\frac{s_{C}}{C}=\frac{0.4343 k_{2}}{\log T} \sqrt{1+\frac{1}{T}}$
$s_T = k_3 T$ (positioning of sample cell; fluctuations in source intensity): $\frac{s_{C}}{C}=\frac{0.4343 k_{3}}{\log T}$

A constant sT also is observed for the thermal transducers used in infrared spectrophotometers. The effect of a constant sT on the relative uncertainty in concentration is shown by curve A in Figure 10.3.16. Note that the relative uncertainty is very large for both high absorbances and low absorbances, reaching a minimum when the absorbance is 0.4343. This source of indeterminate error is important for infrared spectrophotometers and for inexpensive UV/Vis spectrophotometers. To obtain a relative uncertainty in concentration of ±1–2%, the absorbance is kept within the range 0.1–1.

Values of sT are a complex function of transmittance when indeterminate errors are dominated by the noise associated with photon detectors. Curve B in Figure 10.3.16 shows that the relative uncertainty in concentration is very large for low absorbances, but is smaller at higher absorbances. Although the relative uncertainty reaches a minimum when the absorbance is 0.963, there is little change in the relative uncertainty for absorbances between 0.5 and 2. This source of indeterminate error generally limits the precision of high quality UV/Vis spectrophotometers for mid-to-high absorbances.

Finally, the value of sT is directly proportional to transmittance for indeterminate errors that result from fluctuations in the source's intensity and from uncertainty in positioning the sample within the spectrometer. The latter is particularly important because the optical properties of a sample cell are not uniform. As a result, repositioning the sample cell may lead to a change in the intensity of transmitted radiation. As shown by curve C in Figure 10.3.16, the effect is important only at low absorbances. This source of indeterminate error usually is the limiting factor for high quality UV/Vis spectrophotometers when the absorbance is relatively small.

When the relative uncertainty in concentration is limited by the %T readout resolution, it is possible to improve the precision of the analysis by redefining 100% T and 0% T. Normally 100% T is established using a blank and 0% T is established while preventing the source's radiation from reaching the detector.
If the absorbance is too high, precision is improved by resetting 100% T using a standard solution of analyte whose concentration is less than that of the sample (Figure 10.3.17 a). For a sample whose absorbance is too low, precision is improved by redefining 0% T using a standard solution of the analyte whose concentration is greater than that of the sample (Figure 10.3.17 b). In this case a calibration curve is required because a linear relationship between absorbance and concentration no longer exists. Precision is further increased by combining these two methods (Figure 10.3.17 c). Again, a calibration curve is necessary since the relationship between absorbance and concentration is no longer linear.

Sensitivity

The sensitivity of a molecular absorption method, which is the slope of a Beer's law calibration curve, is the product of the analyte's absorptivity and the pathlength of the sample cell ($\varepsilon b$). You can improve a method's sensitivity by selecting a wavelength where absorbance is at a maximum or by increasing the pathlength. See Figure 10.2.10 for an example of how the choice of wavelength affects a calibration curve's sensitivity.

Selectivity

Selectivity rarely is a problem in molecular absorption spectrophotometry. In many cases it is possible to find a wavelength where only the analyte absorbs. When two or more species do contribute to the measured absorbance, a multicomponent analysis is still possible, as shown in Example 10.3.2 and Example 10.3.3.

Time, Cost, and Equipment

The analysis of a sample by molecular absorption spectroscopy is relatively rapid, although additional time is required if we need to convert a nonabsorbing analyte into an absorbing form. The cost of UV/Vis instrumentation ranges from several hundred dollars for a simple filter photometer to more than $50,000 for a computer-controlled, high-resolution, double-beam instrument equipped with variable slit widths and operating over an extended range of wavelengths. Fourier transform infrared spectrometers can be obtained for as little as $15,000–$20,000, although more expensive models are available.
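The regression in the answer to Exercise 10.3.6 is easy to reproduce numerically. The short sketch below is an illustration only—it is not part of the original exercise—and it assumes that Python with numpy is available; the variable names are arbitrary choices.

```python
# A sketch (assumes numpy is available) that repeats the regression in the
# answer to Exercise 10.3.6 using Equation 10.11; the variable names are
# illustrative choices, not part of the original procedure.
import numpy as np

pH = np.array([3.35, 3.65, 3.94, 4.30, 4.64])
A = np.array([0.170, 0.287, 0.411, 0.562, 0.670])
A_HIn = 0.006   # absorbance with essentially all of the indicator present as HIn (pH 2)
A_In = 0.818    # absorbance with essentially all of the indicator present as In- (pH 12)

y = np.log10((A - A_HIn) / (A_In - A))
slope, intercept = np.polyfit(pH, y, 1)

pKa = -intercept                      # from Equation 10.11 the y-intercept is -pKa
print(f"slope = {slope:.3f}")         # 0.962
print(f"pKa = {pKa:.2f}")             # 3.80
print(f"Ka = {10**-pKa:.2e}")         # about 1.58e-04
```

Running the sketch returns a slope of approximately 0.96 and a pKa of 3.80, matching the result quoted in the answer above.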
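It also is easy to evaluate the three expressions in Table 10.3.3 and confirm the behavior of curves A, B, and C in Figure 10.3.16. The sketch below is illustrative only; the values chosen for k1, k2, and k3 are arbitrary, and the use of numpy is an assumption.

```python
# A sketch (assumes numpy is available) that evaluates the three expressions
# in Table 10.3.3; the values of k1, k2, and k3 are arbitrary and are chosen
# only to show where each curve reaches its minimum relative uncertainty.
import numpy as np

A = np.linspace(0.05, 2.00, 391)     # absorbance values
T = 10.0 ** (-A)                     # corresponding transmittance
k1 = k2 = k3 = 0.003

rel_k1 = np.abs(0.4343 * k1 / (T * np.log10(T)))                  # curve A
rel_k2 = np.abs(0.4343 * k2 / np.log10(T)) * np.sqrt(1 + 1 / T)   # curve B
rel_k3 = np.abs(0.4343 * k3 / np.log10(T))                        # curve C

print(f"curve A minimum near A = {A[np.argmin(rel_k1)]:.2f}")     # about 0.43
print(f"curve B minimum near A = {A[np.argmin(rel_k2)]:.2f}")     # about 0.96
# curve C decreases steadily as the absorbance increases, so this source of
# uncertainty matters most at low absorbance, consistent with Figure 10.3.16
```

The minima near A = 0.4343 and A = 0.963 match the values given in the discussion of curves A and B.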
Gustav Kirchhoff and Robert Bunsen first used atomic absorption—along with atomic emission—in 1859 and 1860 as a means for identifying atoms in flames and hot gases. Although atomic emission continued to develop as an analytical technique, progress in atomic absorption languished for almost a century. Modern atomic absorption spectroscopy has its beginnings in 1955 as a result of the independent work of A. C. Walsh and C. T. J. Alkemade [(a) Walsh, A. Anal. Chem. 1991, 63, 933A–941A; (b) Koirtyohann, S. R. Anal. Chem. 1991, 63, 1024A–1031A; (c) Slavin, W. Anal. Chem. 1991, 63, 1033A–1038A]. Commercial instruments were in place by the early 1960s, and the importance of atomic absorption as an analytical technique soon was evident.

Instrumentation

Atomic absorption spectrophotometers use the same single-beam or double-beam optics described earlier for molecular absorption spectrophotometers (see Figure 10.3.2 and Figure 10.3.3). There is, however, an important additional need in atomic absorption spectroscopy: we first must convert the analyte into free atoms. In most cases the analyte is in solution form. If the sample is a solid, then we must bring the analyte into solution before the analysis. When analyzing a lake sediment for Cu, Zn, and Fe, for example, we bring the analytes into solution as Cu2+, Zn2+, and Fe3+ by extracting them with a suitable reagent. For this reason, only the introduction of solution samples is considered in this chapter.

What reagent we choose to use to bring an analyte into solution depends on our research goals. If we need to know the total amount of metal in the sediment, then we might try a microwave digestion using a mixture of concentrated acids, such as HNO3, HCl, and HF. This destroys the sediment's matrix and brings everything into solution. On the other hand, if our interest is biologically available metals, we might extract the sample under milder conditions using, for example, a dilute solution of HCl or CH3COOH at room temperature.

Atomization

The process of converting an analyte to a free gaseous atom is called atomization. Converting an aqueous analyte into a free atom requires that we strip away the solvent, volatilize the analyte, and, if necessary, dissociate the analyte into free atoms. Desolvating an aqueous solution of CuCl2, for example, leaves us with solid particulates of CuCl2. Converting the particulate CuCl2 to gas-phase atoms of Cu and Cl requires thermal energy.

$\mathrm{CuCl}_{2}(a q) \rightarrow \mathrm{CuCl}_{2}(s) \rightarrow \mathrm{Cu}(g)+2 \mathrm{Cl}(g) \nonumber$

There are two common atomization methods: flame atomization and electrothermal atomization, although a few elements are atomized using other methods.

Flame Atomizer

Figure 10.4.1 shows a typical flame atomization assembly with close-up views of several key components. In the unit shown here, the aqueous sample is drawn into the assembly by passing a high-pressure stream of compressed air past the end of a capillary tube immersed in the sample. When the sample exits the nebulizer it strikes a glass impact bead, which converts it into a fine aerosol mist within the spray chamber. The aerosol mist is swept through the spray chamber by the combustion gases—compressed air and acetylene in this case—to the burner head where the flame's thermal energy desolvates the aerosol mist to a dry aerosol of small, solid particulates. The flame's thermal energy then volatilizes the particles, producing a vapor that consists of molecular species, ionic species, and free atoms.

Burner.
The slot burner in Figure 10.4.1 a provides a long optical pathlength and a stable flame. Because absorbance is directly proportional to pathlength, a long pathlength provides greater sensitivity. A stable flame minimizes uncertainty due to fluctuations in the flame. The burner is mounted on an adjustable stage that allows the entire assembly to move horizontally and vertically. Horizontal adjustments ensure the flame is aligned with the instrument's optical path. Vertical adjustments change the height within the flame from which absorbance is monitored. This is important because two competing processes affect the concentration of free atoms in the flame. The more time an analyte spends in the flame the greater the atomization efficiency; thus, the production of free atoms increases with height. On the other hand, a longer residence time allows more opportunity for the free atoms to combine with oxygen to form a molecular oxide. As seen in Figure 10.4.2, for a metal that is easy to oxidize, such as Cr, the concentration of free atoms is greatest just above the burner head. For a metal, such as Ag, which is difficult to oxidize, the concentration of free atoms increases steadily with height.

Flame. The flame's temperature, which affects the efficiency of atomization, depends on the fuel–oxidant mixture, several examples of which are listed in Table 10.4.1. Of these, the air–acetylene and the nitrous oxide–acetylene flames are the most popular. Normally the fuel and oxidant are mixed in an approximately stoichiometric ratio; however, a fuel-rich mixture may be necessary for easily oxidized analytes.

Table 10.4.1. Fuels and Oxidants Used for Flame Combustion
fuel          oxidant         temperature range (°C)
natural gas   air             1700–1900
hydrogen      air             2000–2100
acetylene     air             2100–2400
acetylene     nitrous oxide   2600–2800
acetylene     oxygen          3050–3150

Figure 10.4.3 shows a cross-section through the flame, looking down the source radiation's optical path. The primary combustion zone usually is rich in gas combustion products that emit radiation, limiting its usefulness for atomic absorption. The interzonal region generally is rich in free atoms and provides the best location for measuring atomic absorption. The hottest part of the flame typically is 2–3 cm above the primary combustion zone. As atoms approach the flame's secondary combustion zone, the decrease in temperature allows for formation of stable molecular species.

Sample Introduction. The most common means for introducing a sample into a flame atomizer is a continuous aspiration in which the sample flows through the burner while we monitor absorbance. Continuous aspiration is sample intensive, typically requiring from 2–5 mL of sample.

Flame microsampling allows us to introduce a discrete sample of fixed volume, and is useful if we have a limited amount of sample or when the sample's matrix is incompatible with the flame atomizer. For example, continuously aspirating a sample that has a high concentration of dissolved solids—sea water, for example, comes to mind—may build up a solid deposit on the burner head that obstructs the flame and that lowers the absorbance. Flame microsampling is accomplished using a micropipet to place 50–250 μL of sample in a Teflon funnel connected to the nebulizer, or by dipping the nebulizer tubing into the sample for a short time. Dip sampling usually is accomplished with an automatic sampler.
The signal for flame microsampling is a transitory peak whose height or area is proportional to the amount of analyte that is injected. Advantages and Disadvantages of Flame Atomization. The principal advantage of flame atomization is the reproducibility with which the sample is introduced into the spectrophotometer; a significant disadvantage is that the efficiency of atomization is quite poor. There are two reasons for poor atomization efficiency. First, the majority of the aerosol droplets produced during nebulization are too large to be carried to the flame by the combustion gases. Consequently, as much as 95% of the sample never reaches the flame, which is the reason for the waste line shown at the bottom of the spray chamber in Figure 10.4.1 . A second reason for poor atomization efficiency is that the large volume of combustion gases significantly dilutes the sample. Together, these contributions to the efficiency of atomization reduce sensitivity because the analyte’s concentration in the flame may be a factor of $2.5 \times 10^{-6}$ less than that in solution [Ingle, J. D.; Crouch, S. R. Spectrochemical Analysis, Prentice-Hall: Englewood Cliffs, NJ, 1988; p. 275]. Electrothermal Atomizers A significant improvement in sensitivity is achieved by using the resistive heating of a graphite tube in place of a flame. A typical electrothermal atomizer, also known as a graphite furnace, consists of a cylindrical graphite tube approximately 1–3 cm in length and 3–8 mm in diameter. As shown in Figure 10.4.4 , the graphite tube is housed in an sealed assembly that has an optically transparent window at each end. A continuous stream of inert gas is passed through the furnace, which protects the graphite tube from oxidation and removes the gaseous products produced during atomization. A power supply is used to pass a current through the graphite tube, resulting in resistive heating. Samples of between 5–50 μL are injected into the graphite tube through a small hole at the top of the tube. Atomization is achieved in three stages. In the first stage the sample is dried to a solid residue using a current that raises the temperature of the graphite tube to about 110oC. In the second stage, which is called ashing, the temperature is increased to between 350–1200oC. At these temperatures organic material in the sample is converted to CO2 and H2O, and volatile inorganic materials are vaporized. These gases are removed by the inert gas flow. In the final stage the sample is atomized by rapidly increasing the temperature to between 2000–3000oC. The result is a transient absorbance peak whose height or area is proportional to the absolute amount of analyte injected into the graphite tube. Together, the three stages take approximately 45–90 s, with most of this time used for drying and ashing the sample. Electrothermal atomization provides a significant improvement in sensitivity by trapping the gaseous analyte in the small volume within the graphite tube. The analyte’s concentration in the resulting vapor phase is as much as $1000 \times$ greater than in a flame atomization [Parsons, M. L.; Major, S.; Forster, A. R. Appl. Spectrosc. 1983, 37, 411–418]. This improvement in sensitivity—and the resulting improvement in detection limits—is offset by a significant decrease in precision. Atomization efficiency is influenced strongly by the sample’s contact with the graphite tube, which is difficult to control reproducibly. 
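The three-stage temperature program described above maps naturally onto a simple data structure. The sketch below is purely illustrative—the temperatures are representative values drawn from the ranges given in the text, not settings for any particular instrument or analyte.

```python
# An illustrative sketch of a three-stage graphite furnace program; the
# temperatures below are representative values taken from the ranges given
# in the text, not an actual method for any specific analyte or instrument.
furnace_program = [
    # (stage,      temperature in deg C, purpose)
    ("drying",     110,  "evaporate the solvent to leave a solid residue"),
    ("ashing",     600,  "convert organic matter to CO2 and H2O; vaporize volatile inorganics"),
    ("atomizing",  2400, "produce free gaseous atoms for the absorbance measurement"),
]

for stage, temp, purpose in furnace_program:
    print(f"{stage:>10s}: {temp:>5d} C - {purpose}")
```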
Miscellaneous Atomization Methods

A few elements are atomized by using a chemical reaction to produce a volatile product. Elements such as As, Se, Sb, Bi, Ge, Sn, Te, and Pb, for example, form volatile hydrides when they react with NaBH4 in the presence of acid. An inert gas carries the volatile hydride to either a flame or to a heated quartz observation tube situated in the optical path. Mercury is determined by the cold-vapor method in which it is reduced to elemental mercury with SnCl2. The volatile Hg is carried by an inert gas to an unheated observation tube situated in the instrument's optical path.

Quantitative Applications

Atomic absorption is used widely for the analysis of trace metals in a variety of sample matrices. Using Zn as an example, there are standard atomic absorption methods for its determination in samples as diverse as water and wastewater, air, blood, urine, muscle tissue, hair, milk, breakfast cereals, shampoos, alloys, industrial plating baths, gasoline, oil, sediments, and rocks. Developing a quantitative atomic absorption method requires several considerations, including choosing a method of atomization, selecting the wavelength and slit width, preparing the sample for analysis, minimizing spectral and chemical interferences, and selecting a method of standardization. Each of these topics is considered in this section.

Developing a Quantitative Method

Flame or Electrothermal Atomization? The most important factor in choosing a method of atomization is the analyte's concentration. Because of its greater sensitivity, it takes less analyte to achieve a given absorbance when using electrothermal atomization. Table 10.4.2, which compares the amount of analyte needed to achieve an absorbance of 0.20 when using flame atomization and electrothermal atomization, is useful when selecting an atomization method. For example, flame atomization is the method of choice if our samples contain 1–10 mg Zn2+/L, but electrothermal atomization is the best choice for samples that contain 1–10 μg Zn2+/L.

Table 10.4.2. Concentration of Analyte (in mg/L) That Yields an Absorbance of 0.20
element   flame atomization   electrothermal atomization
Ag        1.5                 0.0035
Al        40                  0.015
As        40                  0.050
Ca        0.8                 0.003
Cd        0.6                 0.001
Co        2.5                 0.021
Cr        2.5                 0.0075
Cu        1.5                 0.012
Fe        2.5                 0.006
Hg        70                  0.52
Mg        0.15                0.00075
Mn        1                   0.003
Na        0.3                 0.00023
Ni        2                   0.024
Pb        5                   0.080
Pt        70                  0.29
Sn        50                  0.023
Zn        0.3                 0.00071
Source: Varian Cookbook, SpectraAA Software Version 4.00 Pro. As: 10 mg/L by hydride vaporization; Hg: 11.5 mg/L by cold-vapor; and Sn: 18 mg/L by hydride vaporization.

Selecting the Wavelength and Slit Width. The source for atomic absorption is a hollow cathode lamp that consists of a cathode and anode enclosed within a glass tube filled with a low pressure of an inert gas, such as Ne or Ar (Figure 10.4.5). Applying a potential across the electrodes ionizes the filler gas. The positively charged gas ions collide with the negatively charged cathode, sputtering atoms from the cathode's surface. Some of the sputtered atoms are in the excited state and emit radiation characteristic of the metal(s) from which the cathode is manufactured. By fashioning the cathode from the metallic analyte, a hollow cathode lamp provides emission lines that correspond to the analyte's absorption spectrum. Because atomic absorption lines are narrow, we need to use a line source instead of a continuum source (compare, for example, Figure 10.2.4 with Figure 10.2.6).
The effective bandwidth when using a continuum source is roughly $1000 \times$ larger than an atomic absorption line; thus, PT ≈ P0, %T ≈ 100, and A ≈ 0. Because a hollow cathode lamp is a line source, PT and P0 have different values, giving a %T < 100 and A > 0.

Each element in a hollow cathode lamp provides several atomic emission lines that we can use for atomic absorption. Usually the wavelength that provides the best sensitivity is the one we choose to use, although a less sensitive wavelength may be more appropriate for a sample that has a higher concentration of analyte. For the Cr hollow cathode lamp in Table 10.4.3, the best sensitivity is obtained using a wavelength of 357.9 nm.

Table 10.4.3. Atomic Emission Lines for a Cr Hollow Cathode Lamp
wavelength (nm)   slit width (nm)   mg Cr/L giving A = 0.20   P0 (relative)
357.9             0.2               2.5                       40
425.4             0.2               12                        85
429.0             0.5               20                        100
520.5             0.2               1500                      15
520.8             0.2               500                       20

Another consideration is the emission line's intensity. If several emission lines meet our requirements for sensitivity, we may wish to use the emission line with the largest relative P0 because there is less uncertainty in measuring P0 and PT. When analyzing a sample that is ≈10 mg Cr/L, for example, the first three wavelengths in Table 10.4.3 provide an appropriate sensitivity; the wavelengths of 425.4 nm and 429.0 nm, however, have a greater P0 and will provide less uncertainty in the measured absorbance.

The emission spectrum for a hollow cathode lamp includes, in addition to the analyte's emission lines, additional emission lines from impurities present in the metallic cathode and from the filler gas. These additional lines are a potential source of stray radiation that could result in an instrumental deviation from Beer's law. The monochromator's slit width is set as wide as possible to improve the throughput of radiation and as narrow as necessary to eliminate these sources of stray radiation.

Preparing the Sample. Flame and electrothermal atomization require that the analyte is in solution. Solid samples are brought into solution by dissolving in an appropriate solvent. If the sample is not soluble it is digested, either on a hot-plate or by microwave, using HNO3, H2SO4, or HClO4. Alternatively, we can extract the analyte using a Soxhlet extractor. Liquid samples are analyzed directly or the analytes are extracted if the matrix is incompatible with the method of atomization. A serum sample, for instance, is difficult to aspirate when using flame atomization and may produce an unacceptably high background absorbance when using electrothermal atomization. A liquid–liquid extraction using an organic solvent and a chelating agent frequently is used to concentrate analytes. Dilute solutions of Cd2+, Co2+, Cu2+, Fe3+, Pb2+, Ni2+, and Zn2+, for example, are concentrated by extracting with a solution of ammonium pyrrolidine dithiocarbamate in methyl isobutyl ketone.

Minimizing Spectral Interference. A spectral interference occurs when an analyte's absorption line overlaps with an interferent's absorption line or band. Because they are so narrow, the overlap of two atomic absorption lines seldom is a problem. On the other hand, a molecule's broad absorption band or the scattering of source radiation is a potentially serious spectral interference. An important consideration when using a flame as an atomization source is its effect on the measured absorbance. Among the products of combustion are molecular species that exhibit broad absorption bands and particulates that scatter radiation from the source.
If we fail to compensate for these spectral interferences, then the intensity of transmitted radiation is smaller than expected. The result is an apparent increase in the sample’s absorbance. Fortunately, absorption and scattering of radiation by the flame are corrected by analyzing a blank. Spectral interferences also occur when components of the sample’s matrix other than the analyte react to form molecular species, such as oxides and hydroxides. The resulting absorption and scattering constitutes the sample’s background and may present a significant problem, particularly at wavelengths below 300 nm where the scattering of radiation becomes more important. If we know the composition of the sample’s matrix, then we can prepare our samples using an identical matrix. In this case the background absorption is the same for both the samples and the standards. Alternatively, if the background is due to a known matrix component, then we can add that component in excess to all samples and standards so that the contribution of the naturally occurring interferent is insignificant. Finally, many interferences due to the sample’s matrix are eliminated by increasing the atomization temperature. For example, switching to a higher temperature flame helps prevents the formation of interfering oxides and hydroxides. If the identity of the matrix interference is unknown, or if it is not possible to adjust the flame or furnace conditions to eliminate the interference, then we must find another method to compensate for the background interference. Several methods have been developed to compensate for matrix interferences, and most atomic absorption spectrophotometers include one or more of these methods. One of the most common methods for background correction is to use a continuum source, such as a D2 lamp. Because a D2 lamp is a continuum source, absorbance of its radiation by the analyte’s narrow absorption line is negligible. Only the background, therefore, absorbs radiation from the D2 lamp. Both the analyte and the background, on the other hand, absorb the hollow cathode’s radiation. Subtracting the absorbance for the D2 lamp from that for the hollow cathode lamp gives a corrected absorbance that compensates for the background interference. Although this method of background correction is effective, it does assume that the background absorbance is constant over the range of wavelengths passed by the monochromator. If this is not true, then subtracting the two absorbances underestimates or overestimates the background. Other methods of background correction have been developed, including Zeeman effect background correction and Smith–Hieftje background correction, both of which are included in some commercially available atomic absorption spectrophotometers. Consult the chapter’s additional resources for additional information. Minimizing Chemical Interferences. The quantitative analysis of some elements is complicated by chemical interferences that occur during atomization. The most common chemical interferences are the formation of nonvolatile compounds that contain the analyte and ionization of the analyte. One example of the formation of a nonvolatile compound is the effect of $\text{PO}_4^{3-}$ or Al3+ on the flame atomic absorption analysis of Ca2+. In one study, for example, adding 100 ppm Al3+ to a solution of 5 ppm Ca2+ decreased calcium ion’s absorbance from 0.50 to 0.14, while adding 500 ppm $\text{PO}_4^{3-}$ to a similar solution of Ca2+ decreased the absorbance from 0.50 to 0.38. 
These interferences are attributed to the formation of nonvolatile particles of Ca3(PO4)2 and an Al–Ca–O oxide [Hosking, J. W.; Snell, N. B.; Sturman, B. T. J. Chem. Educ. 1977, 54, 128–130]. When using flame atomization, we can minimize the formation of nonvolatile compounds by increasing the flame's temperature, either by changing the fuel-to-oxidant ratio or by switching to a different combination of fuel and oxidant. Another approach is to add a releasing agent or a protecting agent to the sample. A releasing agent is a species that reacts preferentially with the interferent, releasing the analyte during atomization. For example, Sr2+ and La3+ serve as releasing agents for the analysis of Ca2+ in the presence of $\text{PO}_4^{3-}$ or Al3+. Adding 2000 ppm SrCl2 to the Ca2+/$\text{PO}_4^{3-}$ and to the Ca2+/Al3+ mixtures described in the previous paragraph increased the absorbance to 0.48. A protecting agent reacts with the analyte to form a stable volatile complex. Adding 1% w/w EDTA to the Ca2+/$\text{PO}_4^{3-}$ solution described in the previous paragraph increased the absorbance to 0.52.

An ionization interference occurs when thermal energy from the flame or the electrothermal atomizer is sufficient to ionize the analyte

$\mathrm{M}(g)\rightleftharpoons \mathrm{M}^{+}(g)+e^{-} \label{10.1}$

where M is the analyte. Because the absorption spectra for M and M+ are different, the position of the equilibrium in reaction \ref{10.1} affects the absorbance at wavelengths where M absorbs. To limit ionization we add a high concentration of an ionization suppressor, which is a species that ionizes more easily than the analyte. If the ionization suppressor's concentration is sufficient, then the increased concentration of electrons in the flame pushes reaction \ref{10.1} to the left, preventing the analyte's ionization. Potassium and cesium frequently are used as ionization suppressors because of their low ionization energies.

Standardizing the Method. Because Beer's law also applies to atomic absorption, we might expect atomic absorption calibration curves to be linear. In practice, however, most atomic absorption calibration curves are nonlinear or linear over a limited range of concentrations. Nonlinearity in atomic absorption is a consequence of instrumental limitations, including stray radiation from the hollow cathode lamp and the variation in molar absorptivity across the absorption line. Accurate quantitative work, therefore, requires a suitable means for computing the calibration curve from a set of standards.

When possible, a quantitative analysis is best conducted using external standards. Unfortunately, matrix interferences are a frequent problem, particularly when using electrothermal atomization. For this reason the method of standard additions often is used. One limitation to this method of standardization, however, is the requirement of a linear relationship between absorbance and concentration.

Most instruments include several different algorithms for computing the calibration curve. The instrument in my lab, for example, includes five algorithms. Three of the algorithms fit absorbance data using linear, quadratic, or cubic polynomial functions of the analyte's concentration. It also includes two algorithms that fit the concentrations of the standards to quadratic functions of the absorbance.
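A sketch of how two of these calibration strategies might look in practice is shown below. It is an illustration only: the standards are invented values with the slight curvature typical of atomic absorption calibration curves, and the use of numpy's polynomial fitting is an assumption, not a description of any instrument's software.

```python
# A sketch (assumes numpy) comparing two of the calibration approaches
# mentioned above; the standards are invented values with a slight downward
# curvature, as is typical for atomic absorption, and are not from the text.
import numpy as np

conc   = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])                 # mg/L
absorb = np.array([0.000, 0.090, 0.178, 0.260, 0.335, 0.405])

# fit absorbance as linear and as quadratic functions of concentration
for degree in (1, 2):
    coefs = np.polyfit(conc, absorb, degree)
    max_resid = np.max(np.abs(absorb - np.polyval(coefs, conc)))
    print(f"degree {degree} fit: maximum residual = {max_resid:.4f}")

# the text also mentions algorithms that fit concentration as a quadratic
# function of absorbance, which turns reporting a sample's concentration
# into a simple evaluation instead of a root-finding problem
inv_coefs = np.polyfit(absorb, conc, 2)
A_sample  = 0.300
print(f"sample concentration: {np.polyval(inv_coefs, A_sample):.2f} mg/L")
```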
Representative Method 10.4.1: Determination of Cu and Zn in Tissue Samples The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of Cu and Zn in biological tissues provides an instructive example of a typical procedure. The description here is based on Bhattacharya, S. K.; Goodwin, T. G.; Crawford, A. J. Anal. Lett. 1984, 17, 1567–1593, and Crawford, A. J.; Bhattacharya, S. K. Varian Instruments at Work, Number AA–46, April 1985. Description of Method. Copper and zinc are isolated from tissue samples by digesting the sample with HNO3 after first removing any fatty tissue. The concentration of copper and zinc in the supernatant are determined by atomic absorption using an air-acetylene flame. Procedure. Tissue samples are obtained by a muscle needle biopsy and dried for 24–30 h at 105oC to remove all traces of moisture. The fatty tissue in a dried sample is removed by extracting overnight with anhydrous ether. After removing the ether, the sample is dried to obtain the fat-free dry tissue weight (FFDT). The sample is digested at 68oC for 20–24 h using 3 mL of 0.75 M HNO3. After centrifuging at 2500 rpm for 10 minutes, the supernatant is transferred to a 5-mL volumetric flask. The digestion is repeated two more times, for 2–4 hours each, using 0.9-mL aliquots of 0.75 M HNO3. These supernatants are added to the 5-mL volumetric flask, which is diluted to volume with 0.75 M HNO3. The concentrations of Cu and Zn in the diluted supernatant are determined by flame atomic absorption spectroscopy using an air-acetylene flame and external standards. Copper is analyzed at a wavelength of 324.8 nm with a slit width of 0.5 nm, and zinc is analyzed at 213.9 nm with a slit width of 1.0 nm. Background correction using a D2 lamp is necessary for zinc. Results are reported as μg of Cu or Zn per gram of FFDT. Questions. 1. Describe the appropriate matrix for the external standards and for the blank? The matrix for the standards and the blank should match the matrix of the samples; thus, an appropriate matrix is 0.75 M HNO3. Any interferences from other components of the sample matrix are minimized by background correction. 2. Why is a background correction necessary for the analysis of Zn, but not for the analysis of Cu? Background correction compensates for background absorption and scattering due to interferents in the sample. Such interferences are most severe when using a wavelength less than 300 nm. This is the case for Zn, but not for Cu. 3. A Cu hollow cathode lamp has several emission lines, the properties of which are shown in the following table. Explain why this method uses the line at 324.8 nm. wavelength (nm) slit width (nm) mg Cu/L for A = 0.20 P0 (relative) 217.9 0.2 15 3 218.2 0.2 15 3 222.6 0.2 60 5 244.2 0.2 400 15 249.2 0.5 200 24 324.8 0.5 1.5 100 327.4 0.5 3 87 With 1.5 mg Cu/L giving an absorbance of 0.20, the emission line at 324.8 nm has the best sensitivity. In addition, it is the most intense emission line, which decreases the uncertainty in the measured absorbance. Example 10.4.1 To evaluate the method described in Representative Method 10.4.1, a series of external standard is prepared and analyzed, providing the results shown here [Crawford, A. J.; Bhattacharya, S. K. 
“Microanalysis of Copper and Zinc in Biopsy-Sized Tissue Specimens by Atomic Absorption Spectroscopy Using a Stoichiometric Air-Acetylene Flame,” Varian Instruments at Work, Number AA–46, April 1985].

µg Cu/mL   absorbance
0.000      0.000
0.100      0.006
0.200      0.013
0.300      0.020
0.400      0.026
0.500      0.033
0.600      0.039
0.700      0.046
1.00       0.066

A bovine liver standard reference material is used to evaluate the method's accuracy. After drying and extracting the sample, an 11.23-mg FFDT tissue sample gives an absorbance of 0.023. Report the amount of copper in the sample as μg Cu/g FFDT.

Solution Linear regression of absorbance versus the concentration of Cu in the standards gives the calibration curve shown below and the following calibration equation.

$A=-0.0002+0.0661 \times \frac{\mu \mathrm{g} \ \mathrm{Cu}}{\mathrm{mL}} \nonumber$

Substituting the sample's absorbance into the calibration equation gives the concentration of copper as 0.351 μg/mL. The concentration of copper in the tissue sample, therefore, is

$\frac { \frac{0.351 \ \mu \mathrm{g} \ \mathrm{Cu}}{\mathrm{mL}} \times 5.000 \ \mathrm{mL}} {0.01123 \text{ g sample}}=156 \ \mu \mathrm{g} \ \mathrm{Cu} / \mathrm{g} \ \mathrm{FFDT} \nonumber$

The short sketch at the end of this section repeats this calculation in code.

Evaluation of Atomic Absorption Spectroscopy

Scale of Operation

Atomic absorption spectroscopy is ideally suited for the analysis of trace and ultratrace analytes, particularly when using electrothermal atomization. For minor and major analytes, samples are diluted before the analysis. Most analyses use a macro or a meso sample. The small volume requirement for electrothermal atomization or for flame microsampling, however, makes practical the analysis of micro and ultramicro samples.

Accuracy

If spectral and chemical interferences are minimized, an accuracy of 0.5–5% is routinely attainable. When the calibration curve is nonlinear, accuracy is improved by using a pair of standards whose absorbances closely bracket the sample's absorbance and assuming that the change in absorbance is linear over this limited concentration range. Determinate errors for electrothermal atomization often are greater than those obtained with flame atomization due to more serious matrix interferences.

Precision

For an absorbance greater than 0.1–0.2, the relative standard deviation for atomic absorption is 0.3–1% for flame atomization and 1–5% for electrothermal atomization. The principal limitation is the uncertainty in the concentration of free analyte atoms that results from variations in the rate of aspiration, nebulization, and atomization for a flame atomizer, and the consistency of injecting samples for electrothermal atomization.

Sensitivity

The sensitivity of a flame atomic absorption analysis is influenced by the flame's composition and by the position in the flame from which we monitor the absorbance. Normally the sensitivity of an analysis is optimized by aspirating a standard solution of analyte and adjusting the fuel-to-oxidant ratio, the nebulizer flow rate, and the height of the burner to give the greatest absorbance. With electrothermal atomization, sensitivity is influenced by the drying and ashing stages that precede atomization. The temperature and time at each stage is optimized for each type of sample. Sensitivity also is influenced by the sample's matrix. We already noted, for example, that sensitivity is decreased by a chemical interference. An increase in sensitivity may be realized by adding a low molecular weight alcohol, ester, or ketone to the solution, or by using an organic solvent.
Selectivity Due to the narrow width of absorption lines, atomic absorption provides excellent selectivity. Atomic absorption is used for the analysis of over 60 elements at concentrations at or below the level of μg/L. Time, Cost, and Equipment The analysis time when using flame atomization is short, with sample throughputs of 250–350 determinations per hour when using a fully automated system. Electrothermal atomization requires substantially more time per analysis, with maximum sample throughputs of 20–30 determinations per hour. The cost of a new instrument ranges from between $10,000–$50,000 for flame atomization, and from $18,000–$70,000 for electrothermal atomization. The more expensive instruments in each price range include double-beam optics, automatic samplers, and can be programmed for multielemental analysis by allowing the wavelength and hollow cathode lamp to be changed automatically.
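As a closing illustration, the external standards calculation in Example 10.4.1 is easy to reproduce in a few lines of code. The sketch below assumes that Python with numpy is available and uses only the data given in the example.

```python
# A sketch (assumes numpy is available) that repeats the calculation in
# Example 10.4.1: fit the external standards, convert the sample's absorbance
# into a concentration, then account for the 5.00-mL dilution and tissue mass.
import numpy as np

std_conc = np.array([0.000, 0.100, 0.200, 0.300, 0.400,
                     0.500, 0.600, 0.700, 1.00])         # ug Cu/mL
std_abs  = np.array([0.000, 0.006, 0.013, 0.020, 0.026,
                     0.033, 0.039, 0.046, 0.066])

slope, intercept = np.polyfit(std_conc, std_abs, 1)
print(f"A = {intercept:.4f} + {slope:.4f} * (ug Cu/mL)")  # matches the example

A_sample  = 0.023
conc      = (A_sample - intercept) / slope                # ug Cu/mL in the flask
mass_g    = 0.01123                                       # 11.23 mg FFDT
volume_mL = 5.000

print(f"{conc * volume_mL / mass_g:.1f} ug Cu/g FFDT")
# about 156.5 ug Cu/g; the example reports 156 after rounding the
# intermediate concentration to 0.351 ug/mL
```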
An analyte in an excited state possesses an energy, E2, that is greater than its energy when it is in a lower energy state, E1. When the analyte returns to its lower energy state—a process we call relaxation—the excess energy, $\Delta E$

$\Delta E=E_{2}-E_{1} \nonumber$

is released. Figure 10.1.4 shows a simplified picture of this process. The amount of time an analyte, A, spends in its excited state—what we call the excited state's lifetime—is short, typically $10^{-5}$–$10^{-9}$ s for an electronic excited state and $10^{-15}$ s for a vibrational excited state. Relaxation of the analyte's excited state, A*, occurs through several mechanisms, including collisions with other species in the sample, photochemical reactions, and the emission of photons. In the first process, which we call vibrational relaxation or nonradiative relaxation, the excess energy is released as heat.

$A^{*} \longrightarrow A+\text { heat } \nonumber$

Relaxation by a photochemical reaction may involve simple decomposition

$A^{*} \longrightarrow X+Y \nonumber$

or a reaction between A* and another species

$A^{*}+Z \longrightarrow X+Y \nonumber$

In both cases the excess energy is used up in the chemical reaction or released as heat. In the third mechanism, the excess energy is released as a photon of electromagnetic radiation.

$A^{*} \longrightarrow A+h \nu \nonumber$

The release of a photon following thermal excitation is called emission and that following the absorption of a photon is called photoluminescence. In chemiluminescence and bioluminescence, excitation results from a chemical or a biochemical reaction, respectively. Spectroscopic methods based on photoluminescence are the subject of the next section and atomic emission is covered in Chapter 10.7.

10.06: Photoluminescent Spectroscopy

Photoluminescence is divided into two categories: fluorescence and phosphorescence. A pair of electrons that occupy the same electronic ground state have opposite spins and are in a singlet spin state (Figure 10.6.1 a). When an analyte absorbs an ultraviolet or a visible photon, one of its valence electrons moves from the ground state to an excited state with a conservation of the electron's spin (Figure 10.6.1 b). Emission of a photon from a singlet excited state to the singlet ground state—or between any two energy levels with the same spin—is called fluorescence. The probability of fluorescence is very high and the average lifetime of an electron in the excited state is only $10^{-5}$–$10^{-8}$ s. Fluorescence, therefore, rapidly decays once the source of excitation is removed.

In some cases an electron in a singlet excited state is transformed to a triplet excited state (Figure 10.6.1 c) in which its spin is no longer paired with the ground state. Emission between a triplet excited state and a singlet ground state—or between any two energy levels that differ in their respective spin states—is called phosphorescence. Because the average lifetime for phosphorescence ranges from $10^{-4}$–$10^{4}$ s, phosphorescence may continue for some time after we remove the excitation source.

The use of molecular fluorescence for qualitative analysis and for semi-quantitative analysis dates to the early to mid 1800s, with more accurate quantitative methods appearing in the 1920s. Instrumentation for fluorescence spectroscopy using a filter or a monochromator for wavelength selection appeared in, respectively, the 1930s and 1950s.
Although the discovery of phosphorescence preceded that of fluorescence by almost 200 years, qualitative and quantitative applications of molecular phosphorescence did not receive much attention until after the development of fluorescence instrumentation. As you might expect, the persistence of long-lived phosphorescence made it more noticeable. Fluorescence and Phosphorescence Spectra To appreciate the origin of fluorescence and phosphorescence we must consider what happens to a molecule following the absorption of a photon. Let’s assume the molecule initially occupies the lowest vibrational energy level of its electronic ground state, which is the singlet state labeled S0 in Figure 10.6.2 . Absorption of a photon excites the molecule to one of several vibrational energy levels in the first excited electronic state, S1, or the second electronic excited state, S2, both of which are singlet states. Relaxation to the ground state occurs by a number of mechanisms, some of which result in the emission of a photon and others that occur without the emission of a photon. These relaxation mechanisms are shown in Figure 10.6.2 . The most likely relaxation pathway from any excited state is the one with the shortest lifetime. Radiationless Deactivation When a molecule relaxes without emitting a photon we call the process radiationless deactivation. One example of radiationless deactivation is vibrational relaxation, in which a molecule in an excited vibrational energy level loses energy by moving to a lower vibrational energy level in the same electronic state. Vibrational relaxation is very rapid, with an average lifetime of <10–12 s. Because vibrational relaxation is so efficient, a molecule in one of its excited state’s higher vibrational energy levels quickly returns to the excited state’s lowest vibrational energy level. Another form of radiationless deactivation is an internal conversion in which a molecule in the ground vibrational level of an excited state passes directly into a higher vibrational energy level of a lower energy electronic state of the same spin state. By a combination of internal conversions and vibrational relaxations, a molecule in an excited electronic state may return to the ground electronic state without emitting a photon. A related form of radiationless deactivation is an external conversion in which excess energy is transferred to the solvent or to another component of the sample’s matrix. Let’s use Figure 10.6.2 to illustrate how a molecule can relax back to its ground state without emitting a photon. Suppose our molecule is in the highest vibrational energy level of the second electronic excited state. After a series of vibrational relaxations brings the molecule to the lowest vibrational energy level of S2, it undergoes an internal conversion into a higher vibrational energy level of the first excited electronic state. Vibrational relaxations bring the molecule to the lowest vibrational energy level of S1. Following an internal conversion into a higher vibrational energy level of the ground state, the molecule continues to undergo vibrational relaxation until it reaches the lowest vibrational energy level of S0. A final form of radiationless deactivation is an intersystem crossing in which a molecule in the ground vibrational energy level of an excited electronic state passes into one of the higher vibrational energy levels of a lower energy electronic state with a different spin state. 
For example, an intersystem crossing is shown in Figure 10.6.2 between the singlet excited state S1 and the triplet excited state T1.

Relaxation by Fluorescence

Fluorescence occurs when a molecule in an excited state's lowest vibrational energy level returns to a lower energy electronic state by emitting a photon. Because molecules return to their ground state by the fastest mechanism, fluorescence is observed only if it is a more efficient means of relaxation than a combination of internal conversions and vibrational relaxations. A quantitative expression of fluorescence efficiency is the fluorescent quantum yield, $\Phi_f$, which is the fraction of excited state molecules that return to the ground state by fluorescence. The fluorescent quantum yields range from 1, when every molecule in an excited state undergoes fluorescence, to 0, when fluorescence does not occur.

The intensity of fluorescence, If, is proportional to the amount of radiation absorbed by the sample, P0 – PT, and the fluorescent quantum yield

$I_{f}=k \Phi_{f}\left(P_{0}-P_{\mathrm{T}}\right) \label{10.1}$

where k is a constant that accounts for the efficiency of collecting and detecting the fluorescent emission. From Beer's law we know that

$\frac{P_{\mathrm{T}}}{P_{0}}=10^{-\varepsilon b C} \label{10.2}$

where C is the concentration of the fluorescing species. Solving Equation \ref{10.2} for PT and substituting into Equation \ref{10.1} gives, after simplifying

$I_{f}=k \Phi_{f} P_{0}\left(1-10^{-\varepsilon b C}\right) \label{10.3}$

When $\varepsilon bC$ < 0.01, which often is the case when the analyte's concentration is small, Equation \ref{10.3} simplifies to

$I_{f}=2.303 k \Phi_{f} \varepsilon b C P_{0}=k^{\prime} P_{0} \label{10.4}$

where k′ is a collection of constants. The intensity of fluorescence, therefore, increases with an increase in the quantum efficiency, the source's incident power, and the molar absorptivity and the concentration of the fluorescing species.

Fluorescence generally is observed when the molecule's lowest energy absorption is a $\pi \rightarrow \pi^*$ transition, although some $n \rightarrow \pi^*$ transitions show weak fluorescence. Many unsubstituted, nonheterocyclic aromatic compounds have a favorable fluorescence quantum yield, although substitutions on the aromatic ring can affect $\Phi_f$ significantly. For example, the presence of an electron-withdrawing group, such as –NO2, decreases $\Phi_f$, while adding an electron-donating group, such as –OH, increases $\Phi_f$. Fluorescence also increases for aromatic ring systems and for aromatic molecules with rigid planar structures. Figure 10.6.3 shows the fluorescence of quinine under a UV lamp.

A molecule's fluorescent quantum yield also is influenced by external variables, such as temperature and solvent. Increasing the temperature generally decreases $\Phi_f$ because more frequent collisions between the molecule and the solvent increase external conversion. A decrease in the solvent's viscosity decreases $\Phi_f$ for similar reasons. For an analyte with acidic or basic functional groups, a change in pH may change the analyte's structure and its fluorescent properties.

As shown in Figure 10.6.2, fluorescence may return the molecule to any of several vibrational energy levels in the ground electronic state. Fluorescence, therefore, occurs over a range of wavelengths.
Because the change in energy for fluorescent emission generally is less than that for absorption, a molecule's fluorescence spectrum is shifted to higher wavelengths than its absorption spectrum.

Relaxation by Phosphorescence

A molecule in a triplet electronic excited state's lowest vibrational energy level normally relaxes to the ground state by an intersystem crossing to a singlet state or by an external conversion. Phosphorescence occurs when the molecule relaxes by emitting a photon. As shown in Figure 10.6.2, phosphorescence occurs over a range of wavelengths, all of which are at lower energies than the molecule's absorption band. The intensity of phosphorescence, $I_p$, is given by an equation similar to Equation \ref{10.4} for fluorescence

\begin{align} I_{P} &=2.303 k \Phi_{P} \varepsilon b C P_{0} \\[4pt] &=k^{\prime} P_{0} \label{10.5}\end{align}

where $\Phi_p$ is the phosphorescent quantum yield.

Phosphorescence is most favorable for molecules with $n \rightarrow \pi^*$ transitions, which have a higher probability for an intersystem crossing than $\pi \rightarrow \pi^*$ transitions. For example, phosphorescence is observed with aromatic molecules that contain carbonyl groups or heteroatoms. Aromatic compounds that contain halide atoms also have a higher efficiency for phosphorescence. In general, an increase in phosphorescence corresponds to a decrease in fluorescence.

Because the average lifetime for phosphorescence can be quite long, ranging from $10^{-4}$–$10^{4}$ s, the phosphorescent quantum yield usually is quite small. An improvement in $\Phi_p$ is realized by decreasing the efficiency of external conversion. This is accomplished in several ways, including lowering the temperature, using a more viscous solvent, depositing the sample on a solid substrate, or trapping the molecule in solution. Figure 10.6.4 shows an example of phosphorescence.

Excitation Versus Emission Spectra

Photoluminescence spectra are recorded by measuring the intensity of emitted radiation as a function of either the excitation wavelength or the emission wavelength. An excitation spectrum is obtained by monitoring emission at a fixed wavelength while varying the excitation wavelength. When corrected for variations in the source's intensity and the detector's response, a sample's excitation spectrum is nearly identical to its absorbance spectrum. The excitation spectrum provides a convenient means for selecting the best excitation wavelength for a quantitative or qualitative analysis.

In an emission spectrum a fixed wavelength is used to excite the sample and the intensity of emitted radiation is monitored as a function of wavelength. Although a molecule has a single excitation spectrum, it has two emission spectra, one for fluorescence and one for phosphorescence. Figure 10.6.5 shows the UV absorption spectrum and the UV fluorescence emission spectrum for quinine.

Instrumentation

The basic instrumentation for monitoring fluorescence and phosphorescence—a source of radiation, a means of selecting a narrow band of radiation, and a detector—are the same as those for absorption spectroscopy. The unique demands of fluorescence and phosphorescence, however, require some modifications to the instrument designs seen earlier in Figure 10.3.1 (filter photometer), Figure 10.3.2 (single-beam spectrophotometer), Figure 10.3.3 (double-beam spectrophotometer), and Figure 10.3.4 (diode array spectrometer). The most important difference is that the detector cannot be placed directly across from the source.
Figure 10.6.6 shows why this is the case. If we place the detector along the source’s axis it receives both the transmitted source radiation, PT, and the fluorescent, If, or phosphorescent, Ip, radiation. Instead, we rotate the detector and place it at 90o to the source.
Instruments for Measuring Fluorescence
Figure 10.6.7 shows the basic design of an instrument for measuring fluorescence, which includes two wavelength selectors, one for selecting the source's excitation wavelength and one for selecting the analyte's emission wavelength. In a fluorimeter the excitation and emission wavelengths are selected using absorption or interference filters. The excitation source for a fluorimeter usually is a low-pressure Hg vapor lamp that provides intense emission lines distributed throughout the ultraviolet and visible region. When a monochromator is used to select the excitation and the emission wavelengths, the instrument is called a spectrofluorometer. With a monochromator the excitation source usually is a high-pressure Xe arc lamp, which has a continuous emission spectrum. Either instrumental design is appropriate for quantitative work, although only a spectrofluorometer can record an excitation or emission spectrum. A Hg vapor lamp has emission lines at 254, 312, 365, 405, 436, 546, 577, 691, and 773 nm. The sample cells for molecular fluorescence are similar to those for molecular absorption (see Figure 10.3.6). Remote sensing using a fiber optic probe (see Figure 10.3.7) is possible with either a fluorimeter or a spectrofluorometer. An analyte that is fluorescent is monitored directly. For an analyte that is not fluorescent, a suitable fluorescent probe molecule is incorporated into the tip of the fiber optic probe. The analyte’s reaction with the probe molecule leads to an increase or decrease in fluorescence.
Instruments for Measuring Phosphorescence
An instrument for molecular phosphorescence must discriminate between phosphorescence and fluorescence. Because the lifetime for fluorescence is shorter than that for phosphorescence, discrimination is achieved by incorporating a delay between exciting the sample and measuring the phosphorescent emission. Figure 10.6.8 shows how two out-of-phase choppers allow us to block fluorescent emission from reaching the detector when the sample is being excited and to prevent the source radiation from causing fluorescence when we are measuring the phosphorescent emission. Because phosphorescence is such a slow process, we must prevent the excited state from relaxing by external conversion. One way this is accomplished is by dissolving the sample in a suitable organic solvent, usually a mixture of ethanol, isopentane, and diethyl ether. The resulting solution is frozen at liquid-N2 temperatures to form an optically clear solid. The solid matrix minimizes external conversion due to collisions between the analyte and the solvent. External conversion also is minimized by immobilizing the sample on a solid substrate, making possible room temperature measurements. One approach is to place a drop of a solution that contains the analyte on a small disc of filter paper. After drying the sample under a heat lamp, the sample is placed in the spectrofluorometer for analysis. Other solid substrates include silica gel, alumina, sodium acetate, and sucrose. This approach is particularly useful for the analysis of thin layer chromatography plates.
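The delay-based discrimination described above works because fluorescence and phosphorescence decay on very different time scales. The sketch below assumes hypothetical lifetimes of 10 ns for fluorescence and 1 ms for phosphorescence and estimates how much of each signal survives a 1 µs delay between excitation and measurement; it illustrates the principle rather than modeling any particular instrument.

```python
import math

tau_f = 10e-9    # hypothetical fluorescence lifetime, s
tau_p = 1e-3     # hypothetical phosphorescence lifetime, s
delay = 1e-6     # delay between excitation and measurement, s

# Assuming simple exponential decay, the fraction of the initial
# emission intensity that remains after the delay is exp(-t/tau).
f_fluor = math.exp(-delay / tau_f)
f_phos  = math.exp(-delay / tau_p)

print(f"fluorescence remaining after {delay*1e6:.0f} us: {f_fluor:.2e}")
print(f"phosphorescence remaining after {delay*1e6:.0f} us: {f_phos:.4f}")
```

With these assumed lifetimes essentially none of the fluorescence survives the delay while more than 99.9% of the phosphorescence does, which is why a simple chopper arrangement is sufficient to separate the two signals.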
Quantitative Applications
Molecular fluorescence and, to a lesser extent, phosphorescence are used for the direct or indirect quantitative analysis of analytes in a variety of matrices. A direct quantitative analysis is possible when the analyte’s fluorescent or phosphorescent quantum yield is favorable. If the analyte is not fluorescent or phosphorescent, or if the quantum yield is unfavorable, then an indirect analysis may be feasible. One approach is to react the analyte with a reagent to form a product that is fluorescent or phosphorescent. Another approach is to measure a decrease in fluorescence or phosphorescence when the analyte is added to a solution that contains a fluorescent or phosphorescent probe molecule. A decrease in emission is observed when the reaction between the analyte and the probe molecule enhances radiationless deactivation or results in a nonemitting product. The application of fluorescence and phosphorescence to inorganic and organic analytes is considered in this section.
Inorganic Analytes
Except for a few metal ions, most notably $\text{UO}_2^{2+}$, most inorganic ions are not sufficiently fluorescent for a direct analysis. Many metal ions are determined indirectly by reacting with an organic ligand to form a fluorescent or, less commonly, a phosphorescent metal–ligand complex. One example is the reaction of Al3+ with the sodium salt of 2,4,3′-trihydroxyazobenzene-5′-sulfonic acid—also known as alizarin garnet R—which forms a fluorescent metal–ligand complex (Figure 10.6.9 ). The analysis is carried out using an excitation wavelength of 470 nm, with fluorescence monitored at 500 nm. Table 10.6.1 provides additional examples of chelating reagents that form fluorescent metal–ligand complexes with metal ions. A few inorganic nonmetals are determined by their ability to decrease, or quench, the fluorescence of another species. One example is the analysis for F– based on its ability to quench the fluorescence of the Al3+–alizarin garnet R complex.
Table 10.6.1 . Chelating Agents for the Fluorescent Analysis of Metal Ions
chelating agent: metal ions
8-hydroxyquinoline: Al3+, Be2+, Zn2+, Li+, Mg2+ (and others)
flavonol: Zr2+, Sn4+
benzoin: $\text{B}_4\text{O}_6^{2-}$, Zn2+
$2^{\prime},3^{\prime},4^{\prime},5,7-\text{pentahydroxyflavone}$: Be2+
2-(o-hydroxyphenyl) benzoxazole: Cd2+
Organic Analytes
As noted earlier, organic compounds that contain aromatic rings generally are fluorescent and aromatic heterocycles often are phosphorescent. Table 10.6.2 provides examples of several important biochemical, pharmaceutical, and environmental compounds that are analyzed quantitatively by fluorimetry or phosphorimetry. If an organic analyte is not naturally fluorescent or phosphorescent, it may be possible to incorporate it into a chemical reaction that produces a fluorescent or phosphorescent product. For example, the enzyme creatine phosphokinase is determined by using it to catalyze the formation of creatine from phosphocreatine. Reacting the creatine with ninhydrin produces a fluorescent product of unknown structure.
Table 10.6.2 . Examples of Naturally Photoluminescent Organic Analytes
class: compounds (F = fluorescence, P = phosphorescence)
aromatic amino acids: phenylalanine (F), tyrosine (F), tryptophan (F, P)
vitamins: vitamin A (F), vitamin B2 (F), vitamin B6 (F), vitamin B12 (F), vitamin E (F), folic acid (F)
catecholamines: dopamine (F), norepinephrine (F)
pharmaceuticals and drugs: quinine (F), salicylic acid (F, P), morphine (F), barbiturates (F), LSD (F), codeine (P), caffeine (P), sulfanilamide (P)
environmental pollutants: pyrene (F), benzo[a]pyrene (F), organothiophosphorous pesticides (F), carbamate insecticides (F), DDT (P)
Standardizing the Method
From Equation \ref{10.4} and Equation \ref{10.5} we know that the intensity of fluorescence or phosphorescence is a linear function of the analyte’s concentration provided that the sample’s absorbance of source radiation ($A = \varepsilon bC$) is less than approximately 0.01. Calibration curves often are linear over four to six orders of magnitude for fluorescence and over two to four orders of magnitude for phosphorescence. For higher concentrations of analyte the calibration curve becomes nonlinear because the assumptions that led to Equation \ref{10.4} and Equation \ref{10.5} no longer apply. Nonlinearity may be observed for smaller concentrations of analyte if fluorescent or phosphorescent contaminants are present. As discussed earlier, quantum efficiency is sensitive to temperature and sample matrix, both of which must be controlled when using external standards. In addition, emission intensity depends on the molar absorptivity of the photoluminescent species, which is sensitive to the sample matrix.
Representative Method 10.6.1: Determination of Quinine in Urine
The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of quinine in urine provides an instructive example of a typical procedure. The description here is based on Mule, S. J.; Hushin, P. L. Anal. Chem. 1971, 43, 708–711, and O’Reilly, J. E. J. Chem. Educ. 1975, 52, 610–612. Figure 10.6.3 shows the fluorescence of the quinine in tonic water.
Description of the Method
Quinine is an alkaloid used to treat malaria. It is a strongly fluorescent compound in dilute solutions of H2SO4 ($\Phi_f = 0.55$). Quinine’s excitation spectrum has absorption bands at 250 nm and 350 nm and its emission spectrum has a single emission band at 450 nm. Quinine is excreted rapidly from the body in urine and is determined by measuring its fluorescence following its extraction from the urine sample.
Procedure
Transfer a 2.00-mL sample of urine to a 15-mL test tube and use 3.7 M NaOH to adjust its pH to between 9 and 10. Add 4 mL of a 3:1 (v/v) mixture of chloroform and isopropanol and shake the contents of the test tube for one minute. Allow the organic and the aqueous (urine) layers to separate and transfer the organic phase to a clean test tube. Add 2.00 mL of 0.05 M H2SO4 to the organic phase and shake the contents for one minute. Allow the organic and the aqueous layers to separate and transfer the aqueous phase to the sample cell. Measure the fluorescent emission at 450 nm using an excitation wavelength of 350 nm. Determine the concentration of quinine in the urine sample using a set of external standards in 0.05 M H2SO4, prepared from a 100.0 ppm solution of quinine in 0.05 M H2SO4. Use distilled water as a blank.
Questions
1. Chloride ion quenches the intensity of quinine’s fluorescent emission. For example, in the presence of 100 ppm NaCl (61 ppm Cl–) quinine’s emission intensity is only 83% of its emission intensity in the absence of chloride. The presence of 1000 ppm NaCl (610 ppm Cl–) further reduces quinine’s fluorescent emission to less than 30% of its emission intensity in the absence of chloride. The concentration of chloride in urine typically ranges from 4600–6700 ppm Cl–. Explain how this procedure prevents an interference from chloride.
The procedure uses two extractions. In the first of these extractions, quinine is separated from urine by extracting it into a mixture of chloroform and isopropanol, leaving the chloride ion behind in the original sample.
2. Samples of urine may contain small amounts of other fluorescent compounds, which will interfere with the analysis if they are carried through the two extractions. Explain how you can modify the procedure to take this into account.
One approach is to prepare a blank that uses a sample of urine known to be free of quinine. Subtracting the blank’s fluorescent signal from the measured fluorescence from urine samples corrects for the interfering compounds.
3. The fluorescent emission for quinine at 450 nm can be induced using an excitation wavelength of either 250 nm or 350 nm. The fluorescent quantum efficiency is the same for either excitation wavelength. Quinine’s absorption spectrum shows that $\varepsilon_{250}$ is greater than $\varepsilon_{350}$. Given that quinine has a stronger absorbance at 250 nm, explain why its fluorescent emission intensity is greater when using 350 nm as the excitation wavelength.
From Equation \ref{10.4} we know that If is a function of the following terms: k, $\Phi_f$, P0, $\varepsilon$, b, and C. We know that $\Phi_f$, b, and C are the same for both excitation wavelengths and that $\varepsilon$ is larger for a wavelength of 250 nm; we can, therefore, ignore these terms. The greater emission intensity when using an excitation wavelength of 350 nm must be due to a larger value for P0 or k. In fact, P0 at 350 nm for a high-pressure Xe arc lamp is about 170% of that at 250 nm. In addition, the sensitivity of a typical photomultiplier detector (which contributes to the value of k) at 350 nm is about 140% of that at 250 nm.
Example 10.6.1
To evaluate the method described in Representative Method 10.6.1, a series of external standards is prepared and analyzed, providing the results shown in the following table. All fluorescent intensities are corrected using a blank prepared from a quinine-free sample of urine. The fluorescent intensities are normalized by setting If for the highest concentration standard to 100.
[quinine] (µg/mL) If
1.00 10.11
3.00 30.20
5.00 49.84
7.00 69.89
10.00 100.0
After ingesting 10.0 mg of quinine, a volunteer provides a urine sample 24 h later. Analysis of the urine sample gives a relative emission intensity of 28.16. Report the concentration of quinine in the sample in mg/L and the percent recovery for the ingested quinine.
Solution
Linear regression of the relative emission intensity versus the concentration of quinine in the standards gives the calibration curve shown below and the following calibration equation. $I_{f}=0.122+9.978 \times \frac{\mu \mathrm{g} \text { quinine }}{\mathrm{mL}} \nonumber$ Substituting the sample’s relative emission intensity into the calibration equation gives the concentration of quinine as 2.81 μg/mL.
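The regression in this solution is easy to reproduce. The following sketch fits the normalized emission intensities from the table above using numpy's least-squares polynomial fit and converts the sample's relative intensity into a concentration; the recovery calculation continues in the text that follows.

```python
import numpy as np

# Calibration data from the table above (external standards in 0.05 M H2SO4).
conc = np.array([1.00, 3.00, 5.00, 7.00, 10.00])    # ug quinine / mL
I_f  = np.array([10.11, 30.20, 49.84, 69.89, 100.0])

slope, intercept = np.polyfit(conc, I_f, 1)          # linear regression
print(f"I_f = {intercept:.3f} + {slope:.3f} x C")    # ~0.122 + 9.978 x C

I_sample = 28.16                                     # relative emission of the urine extract
C_sample = (I_sample - intercept) / slope
print(f"quinine in the extract: {C_sample:.2f} ug/mL")   # ~2.81 ug/mL
```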
Because the volume of urine taken, 2.00 mL, is the same as the volume of 0.05 M H2SO4 used to extract the quinine, the concentration of quinine in the urine also is 2.81 μg/mL. The recovery of the ingested quinine is $\frac{\frac{2.81 \ \mu \mathrm{g} \text { quinine }}{\mathrm{mL} \text { urine }} \times 2.00 \ \mathrm{mL} \text { urine } \times \frac{1 \mathrm{mg}}{1000 \ \mu \mathrm{g}}} {10.0 \ \mathrm{mg} \text { quinine ingested }} \times 100=0.0562 \% \nonumber$ It can take 10–11 days for the body to completely excrete quinine so it is not surprising that such a small amount of quinine is recovered from this sample of urine.
Evaluation of Photoluminescence Spectroscopy
Scale of Operation
Photoluminescence spectroscopy is used for the routine analysis of trace and ultratrace analytes in macro and meso samples. Detection limits for fluorescence spectroscopy are influenced by the analyte’s quantum yield. For an analyte with $\Phi_f > 0.5$, a picomolar detection limit is possible when using a high quality spectrofluorometer. For example, the detection limit for quinine sulfate, for which $\Phi_f$ is 0.55, generally is between 1 part per billion and 1 part per trillion. Detection limits for phosphorescence are somewhat higher, with typical values in the nanomolar range for low-temperature phosphorimetry and in the micromolar range for room-temperature phosphorimetry using a solid substrate.
Accuracy
The accuracy of a fluorescence method generally is between 1–5% when spectral and chemical interferences are insignificant. Accuracy is limited by the same types of problems that affect other optical spectroscopic methods. In addition, accuracy is affected by interferences that affect the fluorescent quantum yield. The accuracy of phosphorescence is somewhat greater than that for fluorescence.
Precision
The relative standard deviation for fluorescence usually is between 0.5–2% when the analyte’s concentration is well above its detection limit. Precision usually is limited by the stability of the excitation source. The precision for phosphorescence often is limited by reproducibility in preparing samples for analysis, with relative standard deviations of 5–10% being common.
Sensitivity
From Equation \ref{10.4} and Equation \ref{10.5} we know that the sensitivity of a fluorescent or a phosphorescent method is affected by a number of parameters. We already have considered the importance of quantum yield and the effect of temperature and solution composition on $\Phi_f$ and $\Phi_p$. Besides quantum yield, sensitivity is improved by using an excitation source that has a greater emission intensity, P0, at the desired wavelength, and by selecting an excitation wavelength for which the analyte has a greater molar absorptivity, $\varepsilon$. Another approach for improving sensitivity is to increase the volume from which emission is monitored. Figure 10.6.10 shows how rotating a monochromator’s slits from their usual vertical orientation to a horizontal orientation increases the sampling volume. The result can increase the emission from the sample by $5-30 \times$.
Selectivity
The selectivity of fluorescence and phosphorescence is superior to that of absorption spectrophotometry for two reasons: first, not every compound that absorbs radiation is fluorescent or phosphorescent; and, second, selectivity between an analyte and an interferent is possible if there is a difference in either their excitation or their emission spectra.
The total emission intensity is a linear sum of that from each fluorescent or phosphorescent species. The analysis of a sample that contains n analytes, therefore, is accomplished by measuring the total emission intensity at n wavelengths. Time, Cost, and Equipment As with other optical spectroscopic methods, fluorescent and phosphorescent methods provide a rapid means for analyzing samples and are capable of automation. Fluorimeters are relatively inexpensive, ranging from several hundred to several thousand dollars, and often are satisfactory for quantitative work. Spectrofluorometers are more expensive, with models often exceeding \$50,000.
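Treating the total emission as a linear sum means that an analysis of n analytes at n wavelengths reduces to solving a small system of linear equations. The sketch below works through a hypothetical two-analyte case with made-up sensitivity coefficients (k′-type constants); the numbers are illustrative only, not values from this chapter.

```python
import numpy as np

# Hypothetical sensitivities (emission per unit concentration) for two
# analytes, X and Y, measured at two emission wavelengths.
#              analyte X   analyte Y
K = np.array([[120.0,       15.0],    # wavelength 1
              [ 10.0,       95.0]])   # wavelength 2

# Total measured emission intensities at the two wavelengths.
I_total = np.array([47.3, 40.0])

# Because the total emission is a linear sum of the individual
# contributions, the concentrations follow from solving K @ C = I_total.
C = np.linalg.solve(K, I_total)
print(f"[X] = {C[0]:.3f}, [Y] = {C[1]:.3f} (arbitrary concentration units)")
```

The same approach extends to any number of analytes as long as each one's sensitivity at each wavelength is known from standards and the wavelengths are chosen so the system is well conditioned.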
The focus of this section is on the emission of ultraviolet and visible radiation following the thermal excitation of atoms. Atomic emission spectroscopy has a long history. Qualitative applications based on the color of flames were used in the smelting of ores as early as 1550 and were more fully developed around 1830 with the observation of atomic spectra generated by flame emission and spark emission [Dawson, J. B. J. Anal. At. Spectrosc. 1991, 6, 93–98]. Quantitative applications based on the atomic emission from electric sparks were developed by Lockyer in the early 1870s and quantitative applications based on flame emission were pioneered by Lundegardh in 1930. Atomic emission based on emission from a plasma was introduced in 1964. For an on-line introduction to much of the material in this section, see Atomic Emission Spectroscopy (AES) by Tomas Spudich and Alexander Scheeline, a resource that is part of the Analytical Sciences Digital Library.
Atomic Emission Spectra
Atomic emission occurs when a valence electron in a higher energy atomic orbital returns to a lower energy atomic orbital. Figure 10.7.1 shows a portion of the energy level diagram for sodium; its emission spectrum consists of a series of discrete lines at wavelengths that correspond to the difference in energy between two atomic orbitals. The intensity of an atomic emission line, Ie, is proportional to the number of atoms, $N^*$, that populate the excited state, $I_{e}=k N^* \label{10.1}$ where k is a constant that accounts for the efficiency of the transition. If a system of atoms is in thermal equilibrium, the population of excited state i is related to the total concentration of atoms, N, by the Boltzmann distribution. For many elements at temperatures of less than 5000 K the Boltzmann distribution is approximated as $N^* = N\left(\frac{g_{i}}{g_{0}}\right) e^{-E_i / k T} \label{10.2}$ where gi and g0 are statistical factors that account for the number of equivalent energy levels for the excited state and the ground state, Ei is the energy of the excited state relative to a ground state energy, E0, k is Boltzmann’s constant ($1.3807 \times 10^{-23}$ J/K), and T is the temperature in Kelvin. From Equation \ref{10.2} we expect that excited states with lower energies have larger populations and more intense emission lines. We also expect emission intensity to increase with temperature.
Equipment
An atomic emission spectrometer is similar in design to the instrumentation for atomic absorption. In fact, it is easy to adapt most flame atomic absorption spectrometers for atomic emission by turning off the hollow cathode lamp and monitoring the difference between the emission intensity when aspirating the sample and when aspirating a blank. Many atomic emission spectrometers, however, are dedicated instruments designed to take advantage of features unique to atomic emission, including the use of plasmas, arcs, sparks, and lasers as atomization and excitation sources, and an enhanced capability for multielemental analysis.
Atomization and Excitation
Atomic emission requires a means for converting into a free gaseous atom an analyte that is present in a solid, liquid, or solution sample. The same source of thermal energy used for atomization usually serves as the excitation source. The most common methods are flames and plasmas, both of which are useful for liquid or solution samples. Solid samples are analyzed by dissolving in a solvent and using a flame or plasma atomizer.
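Equation \ref{10.2} is straightforward to evaluate. The sketch below estimates the fraction of sodium atoms in the 3p excited state, taking Ei ≈ 3.37 × 10–19 J for the 589 nm transition and an assumed illustrative value of 2 for gi/g0, at a typical flame temperature and a typical plasma temperature; it also reproduces the roughly 4% sensitivity to a 10 K temperature change that is cited later in this section.

```python
import math

k_B = 1.3807e-23   # Boltzmann's constant, J/K
E_i = 3.37e-19     # energy of the Na 3p excited state (hc/lambda for 589 nm), J
g_ratio = 2        # assumed illustrative value for g_i / g_0

def excited_fraction(T):
    """Fraction N*/N from the Boltzmann distribution (Equation 10.2)."""
    return g_ratio * math.exp(-E_i / (k_B * T))

for T in (2500, 6000):                    # flame vs. plasma temperature, K
    print(f"T = {T} K: N*/N = {excited_fraction(T):.2e}")

# Sensitivity of the excited-state population to a 10 K change at 2500 K.
change = excited_fraction(2510) / excited_fraction(2500) - 1
print(f"A 10 K increase at 2500 K changes N*/N by {change:.1%}")
```

The output shows why a plasma, with its much higher temperature, gives a far larger excited-state population than a flame, and why even a small drift in source temperature is a significant source of uncertainty for emission measurements.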
Flame Sources Atomization and excitation in flame atomic emission is accomplished with the same nebulization and spray chamber assembly used in atomic absorption (Figure 10.4.1). The burner head consists of a single or multiple slots, or a Meker-style burner. Older atomic emission instruments often used a total consumption burner in which the sample is drawn through a capillary tube and injected directly into the flame. A Meker burner is similar to the more common Bunsen burner found in most laboratories; it is designed to allow for higher temperatures and for a larger diameter flame. Plasma Sources A plasma is a hot, partially ionized gas that contains an abundant concentration of cations and electrons. The plasma used in atomic emission is formed by ionizing a flowing stream of argon gas, producing argon ions and electrons. A plasma’s high temperature results from resistive heating as the electrons and argon ions move through the gas. Because a plasma operates at a much higher temperature than a flame, it provides for a better atomization efficiency and a higher population of excited states. A schematic diagram of the inductively coupled plasma source (ICP) is shown in Figure 10.7.2 . The ICP torch consists of three concentric quartz tubes, surrounded at the top by a radio-frequency induction coil. The sample is mixed with a stream of Ar using a nebulizer, and is carried to the plasma through the torch’s central capillary tube. Plasma formation is initiated by a spark from a Tesla coil. An alternating radio-frequency current in the induction coil creates a fluctuating magnetic field that induces the argon ions and the electrons to move in a circular path. The resulting collisions with the abundant unionized gas give rise to resistive heating, providing temperatures as high as 10000 K at the base of the plasma, and between 6000 and 8000 K at a height of 15–20 mm above the coil, where emission usually is measured. At these high temperatures the outer quartz tube must be thermally isolated from the plasma. This is accomplished by the tangential flow of argon shown in the schematic diagram. Multielemental Analysis Atomic emission spectroscopy is ideally suited for a multielemental analysis because all analytes in a sample are excited simultaneously. If the instrument includes a scanning monochromator, we can program it to move rapidly to an analyte’s desired wavelength, pause to record its emission intensity, and then move to the next analyte’s wavelength. This sequential analysis allows for a sampling rate of 3–4 analytes per minute. Another approach to a multielemental analysis is to use a multichannel instrument that allows us to monitor simultaneously many analytes. A simple design for a multichannel spectrometer, shown in Figure 10.7.3 , couples a monochromator with multiple detectors that are positioned in a semicircular array around the monochromator at positions that correspond to the wavelengths for the analytes. Quantitative Applications Atomic emission is used widely for the analysis of trace metals in a variety of sample matrices. The development of a quantitative atomic emission method requires several considerations, including choosing a source for atomization and excitation, selecting a wavelength and slit width, preparing the sample for analysis, minimizing spectral and chemical interferences, and selecting a method of standardization. 
Choice of Atomization and Excitation Source
Except for the alkali metals, detection limits when using an ICP are significantly better than those obtained with flame emission (Table 10.7.1 ). Plasmas also are subject to fewer spectral and chemical interferences. For these reasons a plasma emission source is usually the better choice.
Standardizing the Method
From Equation \ref{10.1} we know that emission intensity is proportional to the population of the analyte’s excited state, $N^*$. If the flame or plasma is in thermal equilibrium, then the excited state population is proportional to the analyte’s total population, N, through the Boltzmann distribution (Equation \ref{10.2}). A calibration curve for flame emission usually is linear over two to three orders of magnitude, with ionization limiting linearity when the analyte’s concentration is small and self-absorption limiting linearity at higher concentrations of analyte. When using a plasma, which suffers from fewer chemical interferences, the calibration curve often is linear over four to five orders of magnitude and is not affected significantly by changes in the matrix of the standards. Emission intensity is affected significantly by many parameters, including the temperature of the excitation source and the efficiency of atomization. An increase in temperature of 10 K, for example, produces a 4% increase in the fraction of Na atoms in the 3p excited state, an uncertainty in the signal that may limit the use of external standards. The method of internal standards is used when the variations in source parameters are difficult to control. To compensate for changes in the temperature of the excitation source, the internal standard is selected so that its emission line is close to the analyte’s emission line. In addition, the internal standard should be subject to the same chemical interferences to compensate for changes in atomization efficiency. To accurately correct for these errors the analyte and internal standard emission lines are monitored simultaneously.
Representative Method 10.7.1: Determination of Sodium in a Salt Substitute
The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of sodium in salt substitutes provides an instructive example of a typical procedure. The description here is based on Goodney, D. E. J. Chem. Educ. 1982, 59, 875–876.
Description of Method
Salt substitutes, which are used in place of table salt for individuals on low-sodium diets, replace NaCl with KCl. Depending on the brand, fumaric acid, calcium hydrogen phosphate, or potassium tartrate also are present. Although intended to be sodium-free, salt substitutes contain small amounts of NaCl as an impurity. Typically, the concentration of sodium in a salt substitute is about 100 μg/g. The exact concentration of sodium is determined by flame atomic emission. Because it is difficult to match the matrix of the standards to that of the sample, the analysis is accomplished by the method of standard additions.
Procedure
A sample is prepared by placing an approximately 10-g portion of the salt substitute in 10 mL of 3 M HCl and 100 mL of distilled water. After the sample has dissolved, it is transferred to a 250-mL volumetric flask and diluted to volume with distilled water.
A series of standard additions is prepared by placing 25-mL portions of the diluted sample into separate 50-mL volumetric flasks, spiking each with a known amount of an approximately 10 mg/L standard solution of Na+, and diluting to volume. After zeroing the instrument with an appropriate blank, the instrument is optimized at a wavelength of 589.0 nm while aspirating a standard solution of Na+. The emission intensity is measured for each of the standard addition samples and the concentration of sodium in the salt substitute is reported in μg/g. Questions 1. Potassium ionizes more easily than sodium. What problem might this present if you use external standards prepared from a stock solution of 10 mg Na/L instead of using a set of standard additions? Because potassium is present at a much higher concentration than is sodium, its ionization suppresses the ionization of sodium. Normally suppressing ionization is a good thing because it increases emission intensity. In this case, however, the difference between the standard's matrix and the sample’s matrix means that the sodium in a standard experiences more ionization than an equivalent amount of sodium in a sample. The result is a determinate error. 2. One way to avoid a determinate error when using external standards is to match the matrix of the standards to that of the sample. We could, for example, prepare external standards using reagent grade KCl to match the matrix to that of the sample. Why is this not a good idea for this analysis? Sodium is a common contaminant in many chemicals. Reagent grade KCl, for example, may contain 40–50 μg Na/g. This is a significant source of sodium, given that the salt substitute contains approximately 100 μg Na/g. 3. Suppose you decide to use an external standardization. Given the previous questions, is the result of your analysis likely to underestimate or to overestimate the amount of sodium in the salt substitute? The solid black line in Figure 10.7.6 shows the ideal calibration curve, assuming we match the standard’s matrix to the sample’s matrix, and that we do so without adding any additional sodium. If we prepare the external standards without adding KCl, the emission for each standard decreases due to increased ionization. This is shown by the lower of the two dashed red lines. Preparing the standards by adding reagent grade KCl increases the concentration of sodium due to its contamination. Because we underestimate the actual concentration of sodium in the standards, the resulting calibration curve is shown by the other dashed red line. In both cases, the sample’s emission results in our overestimating the concentration of sodium in the sample. 4. One problem with analyzing salt samples is their tendency to clog the aspirator and burner assembly. What effect does this have on the analysis? Clogging the aspirator and burner assembly decreases the rate of aspiration, which decreases the analyte’s concentration in the flame. The result is a decrease in the emission intensity and a negative determinate error. Example 10.7.1 To evaluate the method described in Representative Method 10.7.1, a series of standard additions is prepared using a 10.0077-g sample of a salt substitute. The results of a flame atomic emission analysis of the standards is shown here [Goodney, D. E. J. Chem. Educ. 1982, 59, 875–876]. added Na (µg/mL) Ie (arb. units) 0.000 1.79 0.420 2.63 1.051 3.54 2.102 4.94 3.153 6.18 What is the concentration of sodium, in μg/g, in the salt substitute. 
Solution Linear regression of emission intensity versus the concentration of added Na gives the standard additions calibration curve shown below, which has the following calibration equation. $I_{e}=1.97+1.37 \times \frac{\mu \mathrm{g} \ \mathrm{Na}}{\mathrm{mL}} \nonumber$ The concentration of sodium in the sample is the absolute value of the calibration curve’s x-intercept. Substituting zero for the emission intensity and solving for sodium’s concentration gives a result of 1.44 μgNa/mL. The concentration of sodium in the salt substitute is $\frac{\frac{1.44 \ \mu \mathrm{g} \ \mathrm{Na}}{\mathrm{mL}} \times \frac{50.00 \ \mathrm{mL}}{25.00 \ \mathrm{mL}} \times 250.0 \ \mathrm{mL}}{10.0077 \ \mathrm{g} \text { sample }}=71.9 \ \mu \mathrm{g} \ \mathrm{Na} / \mathrm{g}\nonumber$ Evaluation of Atomic Emission Spectroscopy Scale of Operation The scale of operations for atomic emission is ideal for the direct analysis of trace and ultratrace analytes in macro and meso samples. With appropriate dilutions, atomic emission can be applied to major and minor analytes. Accuracy When spectral and chemical interferences are insignificant, atomic emission can achieve quantitative results with accuracies of 1–5%. For flame emission, accuracy frequently is limited by chemical interferences. Because the higher temperature of a plasma source gives rise to more emission lines, accuracy when using plasma emission often is limited by stray radiation from overlapping emission lines. Precision For samples and standards in which the analyte’s concentration exceeds the detection limit by at least a factor of 50, the relative standard deviation for both flame and plasma emission is about 1–5%. Perhaps the most important factor that affect precision is the stability of the flame’s or the plasma’s temperature. For example, in a 2500 K flame a temperature fluctuation of $\pm 2.5$ K gives a relative standard deviation of 1% in emission intensity. Significant improvements in precision are realized when using internal standards. Sensitivity Sensitivity is influenced by the temperature of the excitation source and the composition of the sample matrix. Sensitivity is optimized by aspirating a standard solution of analyte and maximizing the emission by adjusting the flame’s composition and the height from which we monitor the emission. Chemical interferences, when present, decrease the sensitivity of the analysis. Because the sensitivity of plasma emission is less affected by the sample matrix, a calibration curve prepared using standards in a matrix of distilled water is possible even for samples that have more complex matrices. Selectivity The selectivity of atomic emission is similar to that of atomic absorption. Atomic emission has the further advantage of rapid sequential or simultaneous analysis of multiple analytes. Time, Cost, and Equipment Sample throughput with atomic emission is rapid when using an automated system that can analyze multiple analytes. For example, sampling rates of 3000 determinations per hour are possible using a multichannel ICP, and sampling rates of 300 determinations per hour when using a sequential ICP. Flame emission often is accomplished using an atomic absorption spectrometer, which typically costs between $10,000–$50,000. Sequential ICP’s range in price from $55,000–$150,000, while an ICP capable of simultaneous multielemental analysis costs between $80,000–$200,000. 
Combination ICP’s that are capable of both sequential and simultaneous analysis range in price from $150,000–$300,000. The cost of Ar, which is consumed in significant quantities, can not be overlooked when considering the expense of operating an ICP.
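As a final numerical note for this section, the standard additions result in Example 10.7.1 is easy to verify. The sketch below repeats the regression and the conversion to µg Na/g using only the values given in the example.

```python
import numpy as np

# Standard additions data from Example 10.7.1.
added_Na = np.array([0.000, 0.420, 1.051, 2.102, 3.153])   # ug Na / mL added
I_e      = np.array([1.79, 2.63, 3.54, 4.94, 6.18])        # emission (arb. units)

slope, intercept = np.polyfit(added_Na, I_e, 1)
print(f"I_e = {intercept:.2f} + {slope:.2f} x C")           # ~1.97 + 1.37 x C

# Concentration in the diluted sample is the absolute value of the
# calibration line's x-intercept.
C_diluted = abs(-intercept / slope)                         # ~1.44 ug Na / mL

# Account for the 25.00 mL -> 50.00 mL dilution, the 250.0 mL sample
# volume, and the 10.0077 g of salt substitute taken.
ug_Na_per_g = C_diluted * (50.00 / 25.00) * 250.0 / 10.0077
print(f"sodium in the salt substitute: {ug_Na_per_g:.1f} ug Na/g")
# ~72 ug Na/g, which matches the 71.9 ug Na/g in the example within rounding
```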
The blue color of the sky during the day and the red color of the sun at sunset are the result of light scattered by small particles of dust, molecules of water, and other gases in the atmosphere. The efficiency of a photon’s scattering depends on its wavelength. We see the sky as blue during the day because violet and blue light scatter to a greater extent than other, longer wavelengths of light. For the same reason, the sun appears red at sunset because red light is less efficiently scattered and is more likely to pass through the atmosphere than other wavelengths of light. The scattering of radiation has been studied since the late 1800s, with applications beginning soon thereafter. The earliest quantitative applications of scattering, which date from the early 1900s, used the elastic scattering of light by colloidal suspensions to determine the concentration of colloidal particles. Origin of Scattering If we send a focused, monochromatic beam of radiation with a wavelength $\lambda$ through a medium of particles with dimensions $< 1.5 \lambda$, the radiation scatters in all directions. For example, visible radiation of 500 nm is scattered by particles as large as 750 nm in the longest dimension. Two general categories of scattering are recognized. In elastic scattering, radiation is first absorbed by the particles and then emitted without undergoing a change in the radiation’s energy. When the radiation emerges with a change in energy, the scattering is inelastic. Only elastic scattering is considered in this text. Elastic scattering is divided into two types: Rayleigh, or small-particle scattering, and large-particle scattering. Rayleigh scattering occurs when the scattering particle’s largest dimension is less than 5% of the radiation’s wavelength. The intensity of the scattered radiation is proportional to its frequency to the fourth power, $\nu^4$—which accounts for the greater scattering of blue light than red light—and is distributed symmetrically (Figure 10.8.1 a). For larger particles, scattering increases in the forward direction and decreases in the backward direction as the result of constructive and destructive interferences (Figure 10.8.1 b). Turbidimetry and Nephelometry Turbidimetry and nephelometry are two techniques that rely on the elastic scattering of radiation by a suspension of colloidal particles. In turbidimetry the detector is placed in line with the source and the decrease in the radiation’s transmitted power is measured. In nephelometry the scattered radiation is measured at an angle of 90o to the source. The similarity of turbidimetry to absorbance spectroscopy and of nephelometry to fluorescence spectroscopy is evident in the instrumental designs shown in Figure 10.8.2 . In fact, we can use a UV/Vis spectrophotometer for turbidimetry and we can use a spectrofluorometer for nephelometry. Turbidimetry or Nephelometry? When developing a scattering method the choice between using turbidimetry or using nephelometry is determined by two factors. The most important consideration is the intensity of the scattered radiation relative to the intensity of the source’s radiation. If the solution contains a small concentration of scattering particles, then the intensity of the transmitted radiation, IT, is approximately the same as the intensity of the source’s radiation, I0. As we learned earlier in the section on molecular absorption, there is substantial uncertainty in determining a small difference between two intense signals. 
For this reason, nephelometry is a more appropriate choice for a sample that contains few scattering particles. Turbidimetry is a better choice when the sample contains a high concentration of scattering particles. A second consideration in choosing between turbidimetry and nephelometry is the size of the scattering particles. For nephelometry, the intensity of scattered radiation at 90o increases when the particles are small and Rayleigh scattering is in effect. For larger particles, as shown in Figure 10.8.1 , the intensity of scattering decreases at 90o. When using an ultraviolet or a visible source of radiation, the optimum particle size is 0.1–1 μm. The size of the scattering particles is less important for turbidimetry where the signal is the relative decrease in transmitted radiation. In fact, turbidimetric measurements are feasible even when the size of the scattering particles results in an increase in reflection and refraction, although a linear relationship between the signal and the concentration of scattering particles may no longer hold.
Determining Concentration by Turbidimetry
For turbidimetry the measured transmittance, T, is the ratio of the intensity of source radiation transmitted by the sample, IT, to the intensity of source radiation transmitted by a blank, I0. $T=\frac{I_{\mathrm{T}}}{I_{0}} \nonumber$ The relationship between transmittance and the concentration of the scattering particles is similar to that given by Beer’s law $-\log T=k b C \label{10.1}$ where C is the concentration of the scattering particles in mass per unit volume (w/v), b is the pathlength, and k is a constant that depends on several factors, including the size and shape of the scattering particles and the wavelength of the source radiation. The exact relationship is established by a calibration curve prepared using a series of standards that contain known concentrations of analyte. As with Beer’s law, Equation \ref{10.1} may show appreciable deviations from linearity.
Determining Concentration by Nephelometry
For nephelometry the relationship between the intensity of scattered radiation, IS, and the concentration of scattering particles is $I_{\mathrm{s}}=k I_{0} C \label{10.2}$ where k is an empirical constant for the system and I0 is the intensity of the source radiation. The value of k is determined from a calibration curve prepared using a series of standards that contain known concentrations of analyte.
Selecting a Wavelength for the Incident Radiation
The choice of wavelength is based primarily on the need to minimize potential interferences. For turbidimetry, where the incident radiation is transmitted through the sample, a monochromator or filter allows us to avoid wavelengths that are absorbed instead of scattered by the sample. For nephelometry, the absorption of incident radiation is not a problem unless it induces fluorescence from the sample. With a nonfluorescent sample there is no need for wavelength selection and a source of white light may be used as the incident radiation. For both techniques, other considerations in choosing a wavelength include the intensity of scattering, the transducer’s sensitivity (many common photon transducers are more sensitive to radiation at 400 nm than at 600 nm), and the source’s intensity.
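One practical point behind the choice of wavelength is how strongly Rayleigh scattering favors shorter wavelengths: because the scattered intensity scales as the fourth power of frequency, which is equivalent to 1/λ4, moving the incident radiation from 600 nm to 400 nm increases the scattering from small particles roughly five-fold. The short sketch below makes the point using only that proportionality.

```python
# Relative Rayleigh scattering intensity scales as 1/lambda^4 (i.e., nu^4).
def relative_rayleigh(wavelength_nm, reference_nm=600.0):
    """Scattering intensity relative to that at the reference wavelength."""
    return (reference_nm / wavelength_nm) ** 4

for wl in (400, 450, 500, 550, 600):
    print(f"{wl} nm scatters {relative_rayleigh(wl):.2f}x as strongly as 600 nm")
```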
Preparing the Sample for Analysis
Although Equation \ref{10.1} and Equation \ref{10.2} relate scattering to the concentration of the scattering particles, the intensity of scattered radiation also is influenced by the size and the shape of the scattering particles. Samples that contain the same number of scattering particles may show significantly different values for –logT or IS depending on the average diameter of the particles. For a quantitative analysis, therefore, it is necessary to maintain a uniform distribution of particle sizes throughout the sample and between samples and standards. Most turbidimetric and nephelometric methods rely on a precipitation reaction to form the scattering particles. As we learned in Chapter 8, a precipitate’s properties, including particle size, are determined by the conditions under which it forms. To maintain a reproducible distribution of particle sizes between samples and standards, it is necessary to control parameters such as the concentration of reagents, the order of adding reagents, the pH and temperature, the agitation or stirring rate, the ionic strength, and the time between the precipitate’s initial formation and the measurement of transmittance or scattering. In many cases a surface-active agent—such as glycerol, gelatin, or dextrin—is added to stabilize the precipitate in a colloidal state and to prevent the coagulation of the particles.
Applications
Turbidimetry and nephelometry are used to determine the clarity of water. The primary standard for measuring clarity is formazin, an easily prepared, stable polymer suspension (Figure 10.8.3 ) [Hach, C. C.; Bryant, M. “Turbidity Standards,” Technical Information Series, Booklet No. 12, Hach Company: Loveland, CO, 1995]. A stock standard of formazin is prepared by combining a 1 g/100 mL solution of hydrazine sulfate, N2H4•H2SO4, with a 10 g/100 mL solution of hexamethylenetetramine to produce a suspension of particles that is defined as 4000 nephelometric turbidity units (NTU). A set of external standards with NTUs between 0 and 40 is prepared by diluting the stock standard. This method is readily adapted to the analysis of the clarity of orange juice, beer, and maple syrup. A number of inorganic cations and anions are determined by precipitating them under well-defined conditions. The transmittance or scattering of light, as defined by Equation \ref{10.1} or Equation \ref{10.2}, is proportional to the concentration of the scattering particles, which, in turn, is related by the stoichiometry of the precipitation reaction to the analyte’s concentration. Several examples of analytes determined in this way are listed in Table 10.8.1 .
Table 10.8.1 . Examples of Analytes Determined by Turbidimetry or Nephelometry
analyte precipitant precipitate
Ag+ NaCl AgCl
Ca2+ Na2C2O4 CaC2O4
Cl– AgNO3 AgCl
CN– AgNO3 AgCN
$\text{CO}_3^{2-}$ BaCl2 BaCO3
F– CaCl2 CaF2
$\text{SO}_4^{2-}$ BaCl2 BaSO4
Representative Method 10.8.1: Turbidimetric Determination of Sulfate in Water
The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of sulfate in water provides an instructive example of a typical procedure. The description here is based on Method 4500–$\text{SO}_4^{2-}$–C in Standard Methods for the Analysis of Water and Wastewater, American Public Health Association: Washington, D. C. 20th Ed., 1998.
Description of Method
Adding BaCl2 to an acidified sample precipitates $\text{SO}_4^{2-}$ as BaSO4. The concentration of $\text{SO}_4^{2-}$ is determined either by turbidimetry or by nephelometry using an incident source of radiation of 420 nm. External standards that contain known concentrations of $\text{SO}_4^{2-}$ are used to standardize the method.
Procedure
Transfer a 100-mL sample to a 250-mL Erlenmeyer flask along with 20.00 mL of an appropriate buffer. For a sample that contains more than 10 mg $\text{SO}_4^{2-}$/L, the buffer’s composition is 30 g of MgCl2•6H2O, 5 g of CH3COONa•3H2O, 1.0 g of KNO3, and 20 mL of glacial CH3COOH per liter. The buffer for a sample that contains less than 10 mg $\text{SO}_4^{2-}$/L is the same except for the addition of 0.111 g of Na2SO4 per L. Place the sample and the buffer on a magnetic stirrer operated at the same speed for all samples and standards. Add a spoonful of 20–30 mesh BaCl2, using a measuring spoon with a capacity of 0.2–0.3 mL, to precipitate the $\text{SO}_4^{2-}$ as BaSO4. Begin timing when the BaCl2 is added and stir the suspension for 60 ± 2 s. When the stirring is complete, allow the solution to sit without stirring for 5.0 ± 0.5 min before measuring its transmittance or its scattering. Prepare a calibration curve over the range 0–40 mg $\text{SO}_4^{2-}$/L by diluting a stock standard that is 100-mg $\text{SO}_4^{2-}$/L. Treat each standard using the procedure described above for the sample. Prepare a calibration curve and use it to determine the amount of sulfate in the sample.
Questions
1. What is the purpose of the buffer?
If the precipitate’s particles are too small, IT is too small to measure reliably. Because rapid precipitation favors the formation of microcrystalline particles of BaSO4, we use conditions that favor the precipitate’s growth over the nucleation of new particles. The buffer’s high ionic strength and its acidity favor the precipitate’s growth and prevent the formation of microcrystalline BaSO4.
2. Why is it important to use the same stirring rate and time for the samples and standards?
How fast and how long we stir the sample after we add BaCl2 influences the size of the precipitate’s particles.
3. Many natural waters have a slight color due to the presence of humic and fulvic acids, and may contain suspended matter (Figure 10.8.4 ). Explain why these might interfere with the analysis for sulfate. For each interferent, suggest a way to minimize its effect on the analysis.
Suspended matter in a sample contributes to scattering and, therefore, results in a positive determinate error. We can eliminate this interference by filtering the sample prior to its analysis. A sample that is colored may absorb some of the source’s radiation, leading to a positive determinate error. We can compensate for this interference by taking a sample through the analysis without adding BaCl2. Because no precipitate forms, we use the transmittance of this sample blank to correct for the interference.
4. Why is Na2SO4 added to the buffer for samples that contain less than 10 mg $\text{SO}_4^{2-}$/L?
The uncertainty in a calibration curve is smallest near its center. If a sample has a high concentration of $\text{SO}_4^{2-}$, we can dilute it so that its concentration falls near the middle of the calibration curve.
For a sample with a small concentration of $\text{SO}_4^{2-}$, the buffer increases the concentration of sulfate by $\frac{0.111 \ \mathrm{g} \ \mathrm{Na}_{2} \mathrm{SO}_{4}}{\mathrm{L}} \times \frac{96.06 \ \mathrm{g} \ \mathrm{SO}_{4}^{2-}}{142.04 \ \mathrm{g} \ \mathrm{Na}_{2} \mathrm{SO}_{4}} \times \frac{1000 \ \mathrm{mg}}{\mathrm{g}} \times \frac{20.00 \ \mathrm{mL}}{250.0 \ \mathrm{mL}}=6.00 \ \mathrm{mg} \ \mathrm{SO}_{4}^{2-} / \mathrm{L} \nonumber$ After using the calibration curve to determine the amount of sulfate in the sample as analyzed, we subtract 6.00 mg $\text{SO}_4^{2-}$/L to determine the amount of sulfate in the original sample.
Example 10.8.1
To evaluate the method described in Representative Method 10.8.1, a series of external standards was prepared and analyzed, providing the results shown in the following table.
mg $\text{SO}_4^{2-}$/L transmittance
0.00 1.00
10.00 0.646
20.00 0.417
30.00 0.269
40.00 0.174
Analysis of a 100.0-mL sample of a surface water gives a transmittance of 0.538. What is the concentration of sulfate in the sample?
Solution
Linear regression of –logT versus concentration of $\text{SO}_4^{2-}$ gives the calibration curve shown below, which has the following calibration equation. $-\log T=-1.04 \times 10^{-5}+0.0190 \times \frac{\mathrm{mg} \ \mathrm{SO}_{4}^{2-}}{\mathrm{L}} \nonumber$ Substituting the sample’s transmittance into the calibration curve’s equation gives the concentration of sulfate in the sample as 14.2 mg $\text{SO}_4^{2-}$/L.
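The calibration in Example 10.8.1 is another place where a quick numerical check is useful. The sketch below converts the transmittances in the table to –logT, fits the calibration line, and back-calculates the sample's sulfate concentration.

```python
import numpy as np

# External standards from Example 10.8.1.
conc_SO4 = np.array([0.00, 10.00, 20.00, 30.00, 40.00])   # mg SO4^2- / L
T        = np.array([1.00, 0.646, 0.417, 0.269, 0.174])   # transmittance

neg_logT = -np.log10(T)
slope, intercept = np.polyfit(conc_SO4, neg_logT, 1)
print(f"-log T = {intercept:.2e} + {slope:.4f} x (mg SO4/L)")  # slope ~0.0190

T_sample = 0.538
C_sample = (-np.log10(T_sample) - intercept) / slope
print(f"sulfate in the sample: {C_sample:.1f} mg/L")           # ~14.2 mg/L
```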
1. Provide the missing information in the following table. wavelength (m) frequency (s–1) wavenumber (cm–1) energy (J) $4.50 \times 10^{-9}$ $1.33 \times 10^{15}$ 3215 $7.20 \times 10^{-19}$ 2. Provide the missing information in the following table. [analyte] (M) absorbance %T molar absorptivity (M–1 cm–1) pathlength (cm) $1.40 \times 10^{-4}$     1120 1.00 0.563   750 1.00 $2.56 \times 10^{-4}$ 0.225   456540 $1.55 \times 10^{-3}$ 0.167   1550 5.00 33.3   1.00 $4.35 \times 10^{-3}$   21.2 $1.20 \times 10^{-4}$   81.3   10.00 3. A solution’s transmittance is 35.0%. What is the transmittance if you dilute 25.0 mL of the solution to 50.0 mL? 4. A solution’s transmittance is 85.0% when measured in a cell with a pathlength of 1.00 cm. What is the %T if you increase the pathlength to 10.00 cm? 5. The accuracy of a spectrophotometer is evaluated by preparing a solution of 60.06 ppm K2Cr2O7 in 0.0050 M H2SO4, and measuring its absorbance at a wavelength of 350 nm in a cell with a pathlength of 1.00 cm. The expected absorbance is 0.640. What is the expected molar absorptivity of K2Cr2O7 at this wavelength? 6. A chemical deviation to Beer’s law may occur if the concentration of an absorbing species is affected by the position of an equilibrium reaction. Consider a weak acid, HA, for which Ka is $2 \times 10^{-5}$. Construct Beer’s law calibration curves of absorbance versus the total concentration of weak acid (Ctotal = [HA] + [A]), using values for Ctotal of $1.0 \times 10^{-5}$, $3.0 \times 10^{-5}$, $5.0 \times 10^{-5}$, $7.0 \times 10^{-5}$, $9.0 \times 10^{-5}$, $11 \times 10^{-5}$, and $13 \times 10^{-5}$ M for the following sets of conditions and comment on your results: (a) $\varepsilon_{HA} = \varepsilon_{A^-} = 2000$ M–1 cm–1; unbuffered solution. (b) $\varepsilon_{HA} = 2000$ M–1 cm–1; $\varepsilon_{A^-} = 500$ M–1 cm–1; unbuffered solution. (c) $\epsilon_{HA} = 2000$ M–1 cm–1; $\epsilon_{A^-} = 500$ M–1 cm–1; solution buffered to a pH of 4.5. Assume a constant pathlength of 1.00 cm for all samples. 7. One instrumental limitation to Beer’s law is the effect of polychromatic radiation. Consider a line source that emits radiation at two wavelengths, $\lambda^{\prime}$ and $\lambda^{\prime \prime}$. When treated separately, the absorbances at these wavelengths, A′ and A′′, are $A^{\prime}=-\log \frac{P_{\mathrm{T}}^{\prime}}{P_{0}^{\prime}}=\varepsilon^{\prime} b C \quad \quad A^{\prime \prime}=-\log \frac{P_{\mathrm{T}}^{\prime \prime}}{P_{0}^{\prime \prime}}=\varepsilon^{\prime \prime} b C \nonumber$ If both wavelengths are measured simultaneously the absorbance is $A=-\log \frac{\left(P_{\mathrm{T}}^{\prime}+P_{\mathrm{T}}^{\prime \prime}\right)}{\left(P_{0}^{\prime}+P_{0}^{\prime \prime}\right)} \nonumber$ (a) Show that if the molar absorptivities at $\lambda^{\prime}$ and $\lambda^{\prime \prime}$ are the same ($\varepsilon^{\prime} = \varepsilon^{\prime \prime} = \varepsilon$), then the absorbance is equivalent to $A=\varepsilon b C \nonumber$ (b) Construct Beer’s law calibration curves over the concentration range of zero to $1 \times 10^{-4}$ M using $\varepsilon^{\prime} = 1000$ M–1 cm–1 and $\varepsilon^{\prime \prime} = 1000$ M–1 cm–1, and $\varepsilon^{\prime} = 1000$ M–1 cm–1 and $\varepsilon^{\prime \prime} = 100$ M–1 cm–1. Assume a value of 1.00 cm for the pathlength and that $P_0^{\prime} = P_0^{\prime \prime} = 1$. Explain the difference between the two curves. 8. A second instrumental limitation to Beer’s law is stray radiation. 
The following data were obtained using a cell with a pathlength of 1.00 cm when stray light is insignificant (Pstray = 0). [analyte] (mM) absorbance 0.00 0.00 2.00 0.40 4.00 0.80 6.00 1.20 8.00 1.60 10.00 2.00 Calculate the absorbance of each solution when Pstray is 5% of P0, and plot Beer’s law calibration curves for both sets of data. Explain any differences between the two curves. (Hint: Assume P0 is 100). 9. In the process of performing a spectrophotometric determination of iron, an analyst prepares a calibration curve using a single-beam spectrophotometer similar to that shown in Figure 10.3.2. After preparing the calibration curve, the analyst drops and breaks the cuvette. The analyst acquires a new cuvette, measures the absorbance of the sample, and determines the %w/w Fe in the sample. Does the change in cuvette lead to a determinate error in the analysis? Explain. 10. The spectrophotometric methods for determining Mn in steel and for determining glucose use a chemical reaction to produce a colored species whose absorbance we can monitor. In the analysis of Mn in steel, colorless Mn2+ is oxidized to give the purple $\text{MnO}_4^{-}$ ion. To analyze for glucose, which is also colorless, we react it with a yellow colored solution of the $\text{Fe(CN)}_6^{3-}$, forming the colorless $\text{Fe(CN)}_6^{4-}$ ion. The directions for the analysis of Mn do not specify precise reaction conditions, and samples and standards are treated separately. The conditions for the analysis of glucose, however, require that the samples and standards are treated simultaneously at exactly the same temperature and for exactly the same length of time. Explain why these two experimental procedures are so different. 11. One method for the analysis of Fe3+, which is used with a variety of sample matrices, is to form the highly colored Fe3+–thioglycolic acid complex. The complex absorbs strongly at 535 nm. Standardizing the method is accomplished using external standards. A 10.00-ppm Fe3+ working standard is prepared by transferring a 10-mL aliquot of a 100.0 ppm stock solution of Fe3+ to a 100-mL volumetric flask and diluting to volume. Calibration standards of 1.00, 2.00, 3.00, 4.00, and 5.00 ppm are prepared by transferring appropriate amounts of the 10.0 ppm working solution into separate 50-mL volumetric flasks, each of which contains 5 mL of thioglycolic acid, 2 mL of 20% w/v ammonium citrate, and 5 mL of 0.22 M NH3. After diluting to volume and mixing, the absorbances of the external standards are measured against an appropriate blank. Samples are prepared for analysis by taking a portion known to contain approximately 0.1 g of Fe3+, dissolving it in a minimum amount of HNO3, and diluting to volume in a 1-L volumetric flask. A 1.00-mL aliquot of this solution is transferred to a 50-mL volumetric flask, along with 5 mL of thioglycolic acid, 2 mL of 20% w/v ammonium citrate, and 5 mL of 0.22 M NH3 and diluted to volume. The absorbance of this solution is used to determine the concentration of Fe3+ in the sample. (a) What is an appropriate blank for this procedure? (b) Ammonium citrate is added to prevent the precipitation of Al3+. What is the effect on the reported concentration of iron in the sample if there is a trace impurity of Fe3+ in the ammonium citrate? (c) Why does the procedure specify that the sample contain approximately 0.1 g of Fe3+?
(d) Unbeknownst to the analyst, the 100-mL volumetric flask used to prepare the 10.00 ppm working standard of Fe3+ has a volume that is significantly smaller than 100.0 mL. What effect will this have on the reported concentration of iron in the sample? 12. A spectrophotometric method for the analysis of iron has a linear calibration curve for standards of 0.00, 5.00, 10.00, 15.00, and 20.00 mg Fe/L. An iron ore sample that is 40–60% w/w is analyzed by this method. An approximately 0.5-g sample is taken, dissolved in a minimum of concentrated HCl, and diluted to 1 L in a volumetric flask using distilled water. A 5.00 mL aliquot is removed with a pipet. To what volume—10, 25, 50, 100, 250, 500, or 1000 mL—should it be diluted to minimize the uncertainty in the analysis? Explain. 13. Lozano-Calero and colleagues developed a method for the quantitative analysis of phosphorous in cola beverages based on the formation of the blue-colored phosphomolybdate complex, (NH4)3[PO4(MoO3)12] [Lozano-Calero, D.; Martín-Palomeque, P.; Madueño-Loriguillo, S. J. Chem. Educ. 1996, 73, 1173–1174]. The complex is formed by adding (NH4)6Mo7O24 to the sample in the presence of a reducing agent, such as ascorbic acid. The concentration of the complex is determined spectrophotometrically at a wavelength of 830 nm, using an external standards calibration curve. In a typical analysis, a set of standard solutions that contain known amounts of phosphorous is prepared by placing appropriate volumes of a 4.00 ppm solution of P2O5 in a 5-mL volumetric flask, adding 2 mL of an ascorbic acid reducing solution, and diluting to volume with distilled water. Cola beverages are prepared for analysis by pouring a sample into a beaker and allowing it to stand for 24 h to expel the dissolved CO2. A 2.50-mL sample of the degassed sample is transferred to a 50-mL volumetric flask and diluted to volume. A 250-μL aliquot of the diluted sample is then transferred to a 5-mL volumetric flask, treated with 2 mL of the ascorbic acid reducing solution, and diluted to volume with distilled water. (a) The authors note that this method can be applied only to noncol- ored cola beverages. Explain why this is true. (b) How might you modify this method so that you can apply it to any cola beverage? (c) Why is it necessary to remove the dissolved gases? (d) Suggest an appropriate blank for this method? (e) The author’s report a calibration curve of $A=-0.02+\left(0.72 \ \mathrm{ppm}^{-1}\right) \times C_{\mathrm{P}_{2} \mathrm{O}_{5}} \nonumber$ A sample of Crystal Pepsi, analyzed as described above, yields an absorbance of 0.565. What is the concentration of phosphorous, reported as ppm P, in the original sample of Crystal Pepsi? Crystal Pepsi was a colorless, caffeine-free soda produced by PepsiCo. It was available in the United States from 1992 to 1993. 14. EDTA forms colored complexes with a variety of metal ions that may serve as the basis for a quantitative spectrophotometric method of analysis. The molar absorptivities of the EDTA complexes of Cu2+, Co2+, and Ni2+ at three wavelengths are summarized in the following table (all values of $\varepsilon$ are in M–1 cm–1). metal ion $\varepsilon_{462.9}$ $\varepsilon_{732.0}$ $\varepsilon_{378.7}$ Co2+ 15.8 2.11 3.11 Cu2+ 2.32 95.2 7.73 Ni2+ 1.79 3.03 13.5 Using this information determine the following, assuming a pathlength, b, of 1.00 cm for all measurements: (a) The concentration of Cu2+ in a solution that has an absorbance of 0.338 at a wavelength of 732.0 nm. 
(b) The concentrations of Cu2+ and Co2+ in a solution that has an absorbance of 0.453 at a wavelength of 732.0 nm and 0.107 at a wavelength of 462.9 nm. (c) The concentrations of Cu2+, Co2+, and Ni2+ in a sample that has an absorbance of 0.423 at a wavelength of 732.0 nm, 0.184 at a wavelength of 462.9 nm, and 0.291 at a wavelength of 378.7 nm. 15. The concentration of phenol in a water sample is determined by using steam distillation to separate the phenol from non-volatile impurities, followed by reacting the phenol in the distillate with 4-aminoantipyrine and K3Fe(CN)6 at pH 7.9 to form a colored antipyrine dye. A phenol standard with a concentration of 4.00 ppm has an absorbance of 0.424 at a wavelength of 460 nm using a 1.00 cm cell. A water sample is steam distilled and a 50.00-mL aliquot of the distillate is placed in a 100-mL volumetric flask and diluted to volume with distilled water. The absorbance of this solution is 0.394. What is the concentration of phenol (in parts per million) in the water sample? 16. Saito describes a quantitative spectrophotometric procedure for iron based on a solid-phase extraction using bathophenanthroline in a poly(vinyl chloride) membrane [Saito, T. Anal. Chim. Acta 1992, 268, 351–355]. In the absence of Fe2+ the membrane is colorless, but when immersed in a solution of Fe2+ and I, the membrane develops a red color as a result of the formation of an Fe2+–bathophenanthroline complex. A calibration curve determined using a set of external standards with known concentrations of Fe2+ gave a standardization relationship of $A=\left(8.60 \times 10^{3} \ \mathrm{M}^{-1}\right) \times\left[\mathrm{Fe}^{2+}\right] \nonumber$ What is the concentration of iron, in mg Fe/L, for a sample with an absorbance of 0.100? 17. In the DPD colorimetric method for the free chlorine residual, which is reported as mg Cl2/L, the oxidizing power of free chlorine converts the colorless amine N,N-diethyl-p-phenylenediamine to a colored dye that absorbs strongly over the wavelength range of 440–580 nm. Analysis of a set of calibration standards gave the following results. mg Cl2/L absorbance 0.00 0.000 0.50 0.270 1.00 0.543 1.50 0.813 2.00 1.084 A sample from a public water supply is analyzed to determine the free chlorine residual, giving an absorbance of 0.113. What is the free chlorine residual for the sample in mg Cl2/L? 18. Lin and Brown described a quantitative method for methanol based on its effect on the visible spectrum of methylene blue [Lin, J.; Brown, C. W. Spectroscopy 1995, 10(5), 48–51]. In the absence of methanol, methylene blue has two prominent absorption bands at 610 nm and 663 nm, which correspond to the monomer and the dimer, respectively. In the presence of methanol, the intensity of the dimer’s absorption band decreases, while that for the monomer increases. For concentrations of methanol between 0 and 30% v/v, the ratio of the two absorbance, A663/A610, is a linear function of the amount of methanol. Use the following standardization data to determine the %v/v methanol in a sample if A610 is 0.75 and A663 is 1.07. %v/v methanol A663/A610 %v/v methanol A663/A610 0.0 1.21 20.0 1.62 5.0 1.29 25.0 1.74 10.0 1.42 30.0 1.84 15.0 1.52 19. The concentration of the barbiturate barbital in a blood sample is determined by extracting 3.00 mL of blood with 15 mL of CHCl3. The chloroform, which now contains the barbital, is extracted with 10.0 mL of 0.45 M NaOH (pH ≈ 13). 
A 3.00-mL sample of the aqueous extract is placed in a 1.00-cm cell and an absorbance of 0.115 is measured. The pH of the sample in the absorption cell is then adjusted to approximately 10 by adding 0.50 mL of 16% w/v NH4Cl, giving an absorbance of 0.023. When 3.00 mL of a standard barbital solution with a concentration of 3 mg/100 mL is taken through the same procedure, the absorbance at pH 13 is 0.295 and the absorbance at a pH of 10 is 0.002. Report the mg barbital/100 mL in the sample. 20. Jones and Thatcher developed a spectrophotometric method for analyzing analgesic tablets that contain aspirin, phenacetin, and caffeine [Jones, M.; Thatcher, R. L. Anal. Chem. 1951, 23, 957–960]. The sample is dissolved in CHCl3 and extracted with an aqueous solution of NaHCO3 to remove the aspirin. After the extraction is complete, the chloroform is transferred to a 250-mL volumetric flask and diluted to volume with CHCl3. A 2.00-mL portion of this solution is then diluted to volume in a 200-mL volumetric flask with CHCl3. The absorbance of the final solution is measured at wavelengths of 250 nm and 275 nm, at which the absorptivities, in ppm–1 cm–1, for caffeine and phenacetin are analyte $\varepsilon_{250}$ $\varepsilon_{275}$ caffeine 0.0131 0.0485 phenacetin 0.0702 0.0159 Aspirin is determined by neutralizing the NaHCO3 in the aqueous solution and extracting the aspirin into CHCl3. The combined extracts are diluted to 500 mL in a volumetric flask. A 20.00-mL portion of the solution is placed in a 100-mL volumetric flask and diluted to volume with CHCl3. The absorbance of this solution is measured at 277 nm, where the absorptivity of aspirin is 0.00682 ppm–1 cm–1. An analgesic tablet treated by this procedure is found to have absorbances of 0.466 at 250 nm, 0.164 at 275 nm, and 0.600 at 277 nm when using a cell with a 1.00 cm pathlength. Report the milligrams of aspirin, caffeine, and phenacetin in the analgesic tablet. 21. The concentration of SO2 in a sample of air is determined by the p-rosaniline method. The SO2 is collected in a 10.00-mL solution of $\text{HgCl}_4^{2-}$, where it reacts to form $\text{Hg(SO}_3 )_2$, by pulling air through the solution for 75 min at a rate of 1.6 L/min. After adding p-rosaniline and formaldehyde, the colored solution is diluted to 25 mL in a volumetric flask. The absorbance is measured at 569 nm in a 1-cm cell, yielding a value of 0.485. A standard sample is prepared by substituting a 1.00-mL sample of a standard solution that contains the equivalent of 15.00 ppm SO2 for the air sample. The absorbance of the standard is found to be 0.181. Report the concentration of SO2 in the air in mg SO2/L. The density of air is 1.18 g/liter. 22. Seaholtz and colleagues described a method for the quantitative analysis of CO in automobile exhaust based on the measurement of infrared radiation at 2170 cm–1 [Seaholtz, M. B.; Pence, L. E.; Moe, O. A. Jr. J. Chem. Educ. 1988, 65, 820–823]. A calibration curve is prepared by filling a 10-cm IR gas cell with a known pressure of CO and measuring the absorbance using an FT-IR, giving a calibration equation of $A=-1.1 \times 10^{-4}+\left(9.9 \times 10^{-4}\right) \times P_{\mathrm{CO}} \nonumber$ Samples are prepared by using a vacuum manifold to fill the gas cell. After measuring the total pressure, the absorbance at 2170 cm–1 is measured. Results are reported as %CO (PCO/Ptotal). The analysis of five exhaust samples from a 1973 coupe gives the following results. 
Ptotal (torr) absorbance 595 0.1146 354 0.0642 332 0.0591 233 0.0412 143 0.0254 Determine the %CO for each sample, and report the mean and the 95% confidence interval. 23. Figure 10.3.8 shows an example of a disposable IR sample card made using a thin sheet of polyethylene. To prepare an analyte for analysis, it is dissolved in a suitable solvent and a portion of the sample placed on the IR card. After the solvent evaporates, leaving the analyte behind as a thin film, the sample’s IR spectrum is obtained. Because the thickness of the polyethylene film is not uniform, the primary application of IR cards is for a qualitative analysis. Zhao and Malinowski reported how an internal standardization with KSCN can be used for a quantitative IR analysis of polystyrene [Zhao, Z.; Malinowski, E. R. Spectroscopy 1996, 11(7), 44–49]. Polystyrene is monitored at 1494 cm–1 and KSCN at 2064 cm–1. Standard solutions are prepared by placing weighed portions of polystyrene in a 10-mL volumetric flask and diluting to volume with a solution of 10 g/L KSCN in methyl isobutyl ketone. A typical set of results is shown here. g polystyrene 0.1609 0.3290 0.4842 0.6402 0.8006 A1494 0.0452 0.1138 0.1820 0.3275 0.3195 A2064 0.1948 0.2274 0.2525 0.3580 0.2703 When a 0.8006-g sample of a poly(styrene/maleic anhydride) copolymer is analyzed, the following results are obtained. replicate A1494 A2064 1 0.2729 0.3582 2 0.2074 0.2820 3 0.2785 0.3642 What is the %w/w polystyrene in the copolymer? Given that the reported %w/w polystyrene is 67%, is there any evidence for a determinate error at $\alpha$ = 0.05? 24. The following table lists molar absorptivities for the Arsenazo complexes of copper and barium [Grossman, O.; Turanov, A. N. Anal. Chim. Acta 1992, 257, 195–202]. Suggest appropriate wavelengths for analyzing mixtures of copper and barium using their Arsenzao complexes. wavelength (nm) $\varepsilon_\text{Cu}$ (M–1 cm–1) $\varepsilon_\text{Ba}$ (M–1 cm–1) 595 11900 7100 600 15500 7200 607 18300 7400 611 19300 6900 614 19300 7000 620 17800 7100 626 16300 8400 635 10900 9900 641 7500 10500 645 5300 10000 650 3500 8600 655 2200 6600 658 1900 6500 665 1500 3900 670 1500 2800 680 1800 1500 25. Blanco and colleagues report several applications of multiwavelength linear regression analysis for the simultaneous determination of two-component mixtures [Blanco, M.; Iturriaga, H.; Maspoch, S.; Tarin, P. J. Chem. Educ. 1989, 66, 178–180]. For each of the following, determine the molar concentration of each analyte in the mixture. (a) Titanium and vanadium are determined by forming complexes with H2O2. Results for a mixture of Ti(IV) and V(V) and for stan-dards of 63.1 ppm Ti(IV) and 96.4 ppm V(V) are listed in the following table. wavelength (nm) ATi(V) Std AV(V) Std Amix 390 0.895 0.326 0.651 430 0.884 0.497 0.743 450 0.694 0.528 0.665 470 0.481 0.512 0.547 510 0.173 0.374 0.314 (b) Copper and zinc are determined by forming colored complexes with 2-pyridyl-azo-resorcinol (PAR). The absorbances for PAR, a mixture of Cu2+ and Zn2+, and standards of 1.00 ppm Cu2+ and 1.00 ppm Zn2+ are listed in the following table. Note that you must correct the absorbances for the each metal for the contribution from PAR. wavelength (nm) APAR ACu Std AZn Std Amix 480 0.211 0.698 0.971 0.656 496 0.137 0.732 1.018 0.668 510 0.100 0.732 0.891 0.627 526 0.072 0.602 0.672 0.498 540 0.056 0.387 0.306 0.290 26. The stoichiometry of a metal–ligand complex, MLn, is determined by the method of continuous variations. 
A series of solutions is prepared in which the combined concentrations of M and L are held constant at $5.15 \times 10^{-4}$ M. The absorbances of these solutions are measured at a wavelength where only the metal–ligand complex absorbs. Using the following data, determine the formula of the metal–ligand complex. mole fraction of M mole fraction of L absorbance 1.0 0.0 0.001 0.9 0.1 0.126 0.8 0.2 0.260 0.7 0.3 0.389 0.6 0.4 0.515 0.5 0.5 0.642 0.4 0.6 0.775 0.3 0.7 0.771 0.2 0.8 0.513 0.1 0.9 0.253 0.0 1.0 0.000 27. The stoichiometry of a metal–ligand complex, MLn, is determined by the mole-ratio method. A series of solutions are prepared in which the metal’s concentration is held constant at $3.65 \times 10^{-4}$ M and the ligand’s concentration is varied from $1 \times 10^{-4}$ M to $1 \times 10^{-3}$ M. Using the following data, determine the stoichiometry of the metal-ligand complex. [ligand] (M) absorbance [ligand] (M) absorbance $1.0 \times 10^{-4}$ 0.122 $6.0 \times 10^{-4}$ 0.752 $2.0 \times 10^{-4}$ 0.251 $7.0 \times 10^{-4}$ 0.873 $3.0 \times 10^{-4}$ 0.376 $8.0 \times 10^{-4}$ 0.937 $4.0 \times 10^{-4}$ 0.496 $9.0 \times 10^{-4}$ 0.962 $5.0 \times 10^{-4}$ 0.625 $1.0 \times 10^{-3}$ 1.002 28. The stoichiometry of a metal–ligand complex, MLn, is determined by the slope-ratio method. Two sets of solutions are prepared. For the first set of solutions the metal’s concentration is held constant at 0.010 M and the ligand’s concentration is varied. The following data are obtained at a wavelength where only the metal–ligand complex absorbs. [ligand] (M) absorbance $1.0 \times 10^{-5}$ 0.012 $2.0 \times 10^{-5}$ 0.029 $3.0 \times 10^{-5}$ 0.042 $4.0 \times 10^{-5}$ 0.055 $5.0 \times 10^{-5}$ 0.069 For the second set of solutions the concentration of the ligand is held constant at 0.010 M, and the concentration of the metal is varied, yielding the following absorbances. [metal] (M) absorbance $1.0 \times 10^{-5}$ 0.040 $2.0 \times 10^{-5}$ 0.085 $3.0 \times 10^{-5}$ 0.125 $4.0 \times 10^{-5}$ 0.162 $5.0 \times 10^{-5}$ 0.206 Using this data, determine the stoichiometry of the metal-ligand complex. 29. Kawakami and Igarashi developed a spectrophotometric method for nitrite based on its reaction with 5, 10, 15, 20-tetrakis(4-aminophenyl) porphrine (TAPP). As part of their study they investigated the stoichiometry of the reaction between TAPP and $\text{NO}_2^-$. The following data are derived from a figure in their paper [Kawakami, T.; Igarashi, S. Anal. Chim. Acta 1996, 333, 175–180]. [TAPP] (M) [$\text{NO}_2^-$] (M) absorbance $8.0 \times 10^{-7}$ 0 0.227 $8.0 \times 10^{-7}$ $4.0 \times 10^{-8}$ 0.223 $8.0 \times 10^{-7}$ $8.0 \times 10^{-8}$ 0.211 $8.0 \times 10^{-7}$ $1.6 \times 10^{-7}$ 0.191 $8.0 \times 10^{-7}$ $3.2 \times 10^{-7}$ 0.152 $8.0 \times 10^{-7}$ $4.8 \times 10^{-7}$ 0.127 $8.0 \times 10^{-7}$ $6.4 \times 10^{-7}$ 0.107 $8.0 \times 10^{-7}$ $8.0 \times 10^{-7}$ 0.092 $8.0 \times 10^{-7}$ $1.6 \times 10^{-6}$ 0.058 $8.0 \times 10^{-7}$ $2.4 \times 10^{-6}$ 0.045 $8.0 \times 10^{-7}$ $3.2 \times 10^{-6}$ 0.037 $8.0 \times 10^{-7}$ $4.0 \times 10^{-6}$ 0.034 What is the stoichiometry of the reaction? 30. The equilibrium constant for an acid–base indicator is determined by preparing three solutions, each of which has a total indicator concentration of $1.35 \times 10^{-5}$ M. The pH of the first solution is adjusted until it is acidic enough to ensure that only the acid form of the indicator is present, yielding an absorbance of 0.673. 
The absorbance of the second solution, whose pH is adjusted to give only the base form of the indicator, is 0.118. The pH of the third solution is adjusted to 4.17 and has an absorbance of 0.439. What is the acidity constant for the acid–base indicator? 31. The acidity constant for an organic weak acid is determined by measuring its absorbance as a function of pH while maintaining a constant total concentration of the acid. Using the data in the following table, determine the acidity constant for the organic weak acid. pH absorbance pH absorbance 1.53 0.010 4.88 0.193 2.20 0.010 5.09 0.227 3.66 0.035 5.69 0.288 4.11 0.072 7.20 0.317 4.35 0.103 7.78 0.317 4.75 0.169 32. Suppose you need to prepare a set of calibration standards for the spectrophotometric analysis of an analyte that has a molar absorptivity of 1138 M–1 cm–1 at a wavelength of 625 nm. To maintain an acceptable precision for the analysis, the %T for the standards should be between 15% and 85%. (a) What is the concentration for the most concentrated and for the least concentrated standard you should prepare, assuming a 1.00-cm sample cell. (b) Explain how you will analyze samples with concentrations that are 10 μM, 0.1 mM, and 1.0 mM in the analyte. 33. When using a spectrophotometer whose precision is limited by the uncertainty of reading %T, the analysis of highly absorbing solutions can lead to an unacceptable level of indeterminate errors. Consider the analysis of a sample for which the molar absorptivity is $1.0 \times 10^4$ M–1 cm–1 and for which the pathlength is 1.00 cm. (a) What is the relative uncertainty in concentration for an analyte whose concentration is $2.0 \times 10^{-4}$ M if sT is ±0.002? (b) What is the relative uncertainty in the concentration if the spectrophotometer is calibrated using a blank that consists of a $1.0 \times 10^{-4}$ M solution of the analyte? 34. Hobbins reported the following calibration data for the flame atomic absorption analysis for phosphorous [Hobbins, W. B. “Direct Determination of Phosphorous in Aqueous Matricies by Atomic Absorption,” Varian Instruments at Work, Number AA-19, February 1982]. mg P/L absorbance 2130 0.048 4260 0.110 6400 0.173 8530 0.230 To determine the purity of a sample of Na2HPO4, a 2.469-g sample is dissolved and diluted to volume in a 100-mL volumetric flask. Analysis of the resulting solution gives an absorbance of 0.135. What is the purity of the Na2HPO4? 35. Bonert and Pohl reported results for the atomic absorption analysis of several metals in the caustic suspensions produced during the manufacture of soda by the ammonia-soda process [Bonert, K.; Pohl, B. “The Determination of Cd, Cr, Cu, Ni, and Pb in Concentrated CaCl2/NaCl solutions by AAS,” AA Instruments at Work (Varian) Number 98, November, 1990]. (a) The concentration of Cu is determined by acidifying a 200.0-mL sample of the caustic solution with 20 mL of concentrated HNO3, adding 1 mL of 27% w/v H2O2, and boiling for 30 min. The resulting solution is diluted to 500 mL in a volumetric flask, filtered, and analyzed by flame atomic absorption using matrix matched standards. The results for a typical analysis are shown in the following table. solution mg Cu/L absorbance blank 0.000 0.007 standard 1 0.200 0.014 standard 2 0.500 0.036 standard 3 1.000 0.072 standard 4 2.000 0.146 sample   0.027 Determine the concentration of Cu in the caustic suspension. 
(b) The determination of Cr is accomplished by acidifying a 200.0-mL sample of the caustic solution with 20 mL of concentrated HNO3, adding 0.2 g of Na2SO3 and boiling for 30 min. The Cr is isolated from the sample by adding 20 mL of NH3, producing a precipitate that includes the chromium as well as other oxides. The precipitate is isolated by filtration, washed, and transferred to a beaker. After acidifying with 10 mL of HNO3, the solution is evaporated to dryness. The residue is redissolved in a combination of HNO3 and HCl and evaporated to dryness. Finally, the residue is dissolved in 5 mL of HCl, filtered, diluted to volume in a 50-mL volumetric flask, and analyzed by atomic absorption using the method of standard additions. The atomic absorption results are summarized in the following table. sample mg Cradded/L absorbance blank   0.001 sample   0.045 standard addition 1 0.200 0.083 standard addition 2 0.500 0.118 standard addition 3 1.000 0.192 Report the concentration of Cr in the caustic suspension. 36. Quigley and Vernon report results for the determination of trace metals in seawater using a graphite furnace atomic absorption spectrophotometer and the method of standard additions [Quigley, M. N.; Vernon, F. J. Chem. Educ. 1996, 73, 671–673]. The trace metals are first separated from their complex, high-salt matrix by coprecipitating with Fe3+. In a typical analysis a 5.00-mL portion of 2000 ppm Fe3+ is added to 1.00 L of seawater. The pH is adjusted to 9 using NH4OH, and the precipitate of Fe(OH)3 allowed to stand overnight. After isolating and rinsing the precipitate, the Fe(OH)3 and coprecipitated metals are dissolved in 2 mL of concentrated HNO3 and diluted to volume in a 50-mL volumetric flask. To analyze for Mn2+, a 1.00-mL sample of this solution is diluted to 100 mL in a volumetric flask. The following samples are injected into the graphite furnace and analyzed. sample absorbance 2.5-µL sample + 2.5 µL of 0 ppb Mn2+ 0.223 2.5-µL sample + 2.5 µL of 2.5 ppb Mn2+ 0.294 2.5-µL sample + 2.5 µL of 5.0 ppb Mn2+ 0.361 Report the ppb Mn2+ in the sample of seawater. 37. The concentration of Na in plant materials are determined by flame atomic emission. The material to be analyzed is prepared by grinding, homogenizing, and drying at 103oC. A sample of approximately 4 g is transferred to a quartz crucible and heated on a hot plate to char the organic material. The sample is heated in a muffle furnace at 550oC for several hours. After cooling to room temperature the residue is dissolved by adding 2 mL of 1:1 HNO3 and evaporated to dryness. The residue is redissolved in 10 mL of 1:9 HNO3, filtered and diluted to 50 mL in a volumetric flask. The following data are obtained during a typical analysis for the concentration of Na in a 4.0264-g sample of oat bran. sample mg Na/L emission (arbitrary units) blank 0.00 0.0 standard 1 2.00 90.3 standard 2 4.00 181 standard 3 6.00 272 standard 4 8.00 363 standard 5 10.00 448 sample   238 Report the concentration of μg Na/g sample. 38. Yan and colleagues developed a method for the analysis of iron based its formation of a fluorescent metal–ligand complex with the ligand 5-(4-methylphenylazo)-8-aminoquinoline [Yan, G.; Shi, G.; Liu, Y. Anal. Chim. Acta 1992, 264, 121–124]. In the presence of the surfactant cetyltrimethyl ammonium bromide the analysis is carried out using an excitation wavelength of 316 nm with emission monitored at 528 nm. Standardization with external standards gives the following calibration curve. 
$I_{f}=-0.03+\left(1.594 \ \mathrm{mg}^{-1} \ \mathrm{L}\right) \times \frac{\mathrm{mg} \ \mathrm{Fe}^{3+}}{\mathrm{L}} \nonumber$ A 0.5113-g sample of dry dog food is ashed to remove organic materials, and the residue dissolved in a small amount of HCl and diluted to volume in a 50-mL volumetric flask. Analysis of the resulting solution gives a fluorescent emission intensity of 5.72. Determine the mg Fe/L in the sample of dog food. 39. A solution of $5.00 \times 10^{-5}$ M 1,3-dihydroxynaphthelene in 2 M NaOH has a fluorescence intensity of 4.85 at a wavelength of 459 nm. What is the concentration of 1,3-dihydroxynaphthelene in a solution that has a fluorescence intensity of 3.74 under identical conditions? 40. The following data is recorded for the phosphorescent intensity of several standard solutions of benzo[a]pyrene. [benzo[a]pyrene] (M) emission intensity 0 0.00 $1.00 \times 10^{-5}$ 0.98 $3.00 \times 10^{-5}$ 3.22 $6.00 \times 10^{-5}$ 6.25 $1.00 \times 10^{-4}$ 10.21 What is the concentration of benzo[a]pyrene in a sample that yields a phosphorescent emission intensity of 4.97? 41. The concentration of acetylsalicylic acid, C9H8O4, in aspirin tablets is determined by hydrolyzing it to the salicylate ion, $\text{C}_7 \text{H}_5 \text{O}_2^-$, and determining its concentration spectrofluorometrically. A stock standard solution is prepared by weighing 0.0774 g of salicylic acid, C7H6O2, into a 1-L volumetric flask and diluting to volume. A set of calibration standards is prepared by pipeting 0, 2.00, 4.00, 6.00, 8.00, and 10.00 mL of the stock solution into separate 100-mL volumetric flasks that contain 2.00 mL of 4 M NaOH and diluting to volume. Fluorescence is measured at an emission wavelength of 400 nm using an excitation wavelength of 310 nm with results shown in the following table. mL of stock solution emission intensity 0.00 0.00 2.00 3.02 4.00 5.98 6.00 9.18 8.00 12.13 10.00 14.96 Several aspirin tablets are ground to a fine powder in a mortar and pestle. A 0.1013-g portion of the powder is placed in a 1-L volumetric flask and diluted to volume with distilled water. A portion of this solution is filtered to remove insoluble binders and a 10.00-mL aliquot transferred to a 100-mL volumetric flask that contains 2.00 mL of 4 M NaOH. After diluting to volume the fluorescence of the resulting solution is 8.69. What is the %w/w acetylsalicylic acid in the aspirin tablets? 42. Selenium (IV) in natural waters is determined by complexing with ammonium pyrrolidine dithiocarbamate and extracting into CHCl3. This step serves to concentrate the Se(IV) and to separate it from Se(VI). The Se(IV) is then extracted back into an aqueous matrix using HNO3. After complexing with 2,3-diaminonaphthalene, the complex is extracted into cyclohexane. Fluorescence is measured at 520 nm following its excitation at 380 nm. Calibration is achieved by adding known amounts of Se(IV) to the water sample before beginning the analysis. Given the following results what is the concentration of Se(IV) in the sample. [Se(IV)] added (nM) emission intensity 0.00 323 2.00 597 4.00 862 6.00 1123 43. Fibrinogen is a protein that is produced by the liver and found in human plasma. Its concentration in plasma is clinically important. Many of the analytical methods used to determine the concentration of fibrinogen in plasma are based on light scattering following its precipitation. 
For example, da Silva and colleagues describe a method in which fibrino- gen precipitates in the presence of ammonium sulfate in a guanidine hydrochloride buffer [da Silva, M. P.; Fernandez-Romero, J. M.; Luque de Castro, M. D. Anal. Chim. Acta 1996, 327, 101–106]. Light scattering is measured nephelometrically at a wavelength of 340 nm. Analysis of a set of external calibration standards gives the following calibration equation $I_{\mathrm{s}}=-4.66+9907.63 C \nonumber$ where Is is the intensity of scattered light and C is the concentration of fibrinogen in g/L. A 9.00-mL sample of plasma is collected from a patient and mixed with 1.00 mL of an anticoagulating agent. A 1.00-mL aliquot of this solution is diluted to 250 mL in a volumetric flask and is found to have a scattering intensity of 44.70. What is the concentration of fibrinogen, in gram per liter, in the plasma sample?
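Most of the quantitative problems above reduce to the same two-step calculation: fit a linear calibration model to a set of external standards and then invert that model for the sample's signal. The short Python sketch below is offered only as an illustrative aid (it is not part of any problem's prescribed procedure); it uses the free chlorine residual data from Problem 17, and the variable names are arbitrary.

```python
import numpy as np

# External standards for the DPD free chlorine method (data from Problem 17)
conc = np.array([0.00, 0.50, 1.00, 1.50, 2.00])       # mg Cl2/L
absb = np.array([0.000, 0.270, 0.543, 0.813, 1.084])  # absorbance

# Unweighted linear least-squares fit: A = slope * C + intercept
slope, intercept = np.polyfit(conc, absb, 1)
print(f"calibration curve: A = {intercept:.4f} + {slope:.4f} (L/mg) * C")

# Invert the calibration curve for the sample's absorbance
A_sample = 0.113
C_sample = (A_sample - intercept) / slope
print(f"free chlorine residual = {C_sample:.3f} mg Cl2/L")
```

The same pattern, followed by multiplication by the appropriate dilution factors, handles the problems in which a sample is diluted before its signal is measured.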
The following set of experiments introduce students to the applications of spectroscopy. Experiments are grouped into five categories: UV/Vis spectroscopy, IR spectroscopy, atomic absorption and atomic emission, fluorescence and phosphorescence, and signal averaging. UV/Vis Spectroscopy • Abney, J. R.; Scalettar, B. A. “Saving Your Students’ Skin. Undergraduate Experiments That Probe UV Protection by Sunscreens and Sunglasses,” J. Chem. Educ. 1998, 75, 757–760. • Ainscough, E. W.; Brodie, A. M. “The Determination of Vanillin in Vanilla Extract,” J. Chem. Educ. 1990, 67, 1070–1071. • Allen, H. C.; Brauers, T.; Finlayson-Pitts, B. J. “Illustrating Deviations in the Beer-Lambert Law in an Instrumental Analysis Laboratory: Measuring Atmospheric Pollutants by Differential Optical Absorption Spectrometry,” J. Chem. Educ. 1997, 74, 1459–1462. • Blanco, M.; Iturriaga, H.; Maspoch, S.; Tarîn, P. “A Simple Method for Spectrophotometric Determination of Two-Components with Overlapped Spectra,” J. Chem. Educ. 1989, 66, 178–180. • Bonicamp, J. M.; Martin, K. L.; McBride, G. R.; Clark, R. W. “Beer’s Law is Not a Straight Line: Amplification of Errors by Transformation,” Chem. Educator 1999, 4, 81–88. • Bruneau, E.; Lavabre, D.; Levy, G.; Micheau, J. C. “Quantitative Analysis of Continuous-Variation Plots with a Comparison of Several Methods,” J. Chem. Educ. 1992, 69, 833–837. • Cappas, C.; Hoffman, N.; Jones, J.; Young, S. “Determination of Concentrations of Species Whose Absorption Bands Overlap Extensively,” J. Chem. Educ. 1991, 68, 300–303. • Crisp, P. T.; Eckert, J. M.; Gibson, N. A. “The Determination of Anionic Surfactants in Natural and Waste Waters,” J. Chem. Educ. 1983, 60, 236–238. • Dilbeck, C. W.; Ganske, J. A. “Detection of NOx in Automobile Exhaust: An Applied Experiment in Atmospheric/Environmental Chemistry for the General Chemistry Laboratory,” Chem. Educator 2008, 13, 1–5. • Domínguez, A., Fernández, A.; González, N.; Iglesias, E.; Montenegro, L. “Determination of Critical Micelle Concentration of Some Surfactants by Three Techniques,” J. Chem. Educ. 1997, 74, 1227– 1231. • Gilbert, D. D. “Determining Optimum Spectral Bandwidth,” J. Chem. Educ. 1991, 68, A278– A281. • Han, J.; Story, T.; Han, G. “A Spectrophotometric Method for Quantitative Determination of Bromine Using Tris(2-carboxyethyl)phophine,” J. Chem. Educ. 1999, 76, 976–977. • Higginbotham, C.; Pike, C. F.; Rice, J, K. “Spectroscopy in Sol-Gel Matricies,” J. Chem. Educ. 1998, 75, 461–464. • Hill, Z. D.; MacCarthy, P. “Novel Approach to Job’s Method,” J. Chem. Educ. 1986, 63, 162–167. • Ibañez, G. A.; Olivieri, A. C.; Escandar, G. M. “Determination of Equilibrium Constants of Metal Complexes from Spectrophotometric Measurements,” J. Chem. Educ. 1999, 76, 1277–1281. • Long, J. R.; Drago, R. S. “The Rigorous Evaluation of Spectrophotometric Data to Obtain an Equilibrium Constant,” J. Chem. Educ. 1982, 59, 1037–1039. • Lozano-Calero; D.; Martin-Palomeque, P. “Determination of Phosphorous in Cola Drinks,” J. Chem. Educ. 1996, 73, 1173–1174. • Maloney, K. M.; Quiazon, E. M.; Indralingam, R. “Measurement of Iron in Egg Yolk: An Instrumental Analysis Measurement Using Biochemical Principles,” J. Chem. Educ. 2008, 85, 399–400. • Mascotti, D. P.; Waner, M. J. “Complementary Spectroscopic Assays for Investigation Protein-Ligand Binding Activity: A Project for the Advanced Chemistry Laboratory,” J. Chem. Educ. 2010, 87, 735– 738. • McClain, R. L. 
“Construction of a Photometer as an Instructional Tool for Electronics and Instrumentation,” J. Chem. Educ. 2014, 91, 747–750. • McDevitt, V. L.; Rodriquez, A.; Williams, K. R. “Analysis of Soft Drinks: UV Spectrophotometry, Liquid Chromatography, and Capillary Electrophoresis,” J. Chem. Educ. 1998, 75, 625–629. • Mehra, M. C.; Rioux, J. “An Analytical Chemistry Experiment in Simultaneous Spectrophotometric Determination of Fe(III) and Cu(II) with Hexacyanoruthenate(II) Reagent,” J. Chem. Educ. 1982, 59, 688–689. • Mitchell-Koch, J. T.; Reid, K. R.; Meyerhoff, M. E. “Salicylate Detection by Complexation with Iron(III) and Optical Absorbance Spectroscopy,” J. Chem. Educ. 2008, 85, 1658–1659. • Msimanga, H. Z.; Wiese, J. “Determination of Acetaminophen in Analgesics by the Standard Addition Method: A Quantitative Analytical Chemistry Laboratory,” Chem. Educator 2005, 10, 1–7. • Örstan, A.; Wojcik, J. F. “Spectroscopic Determination of Protein-Ligand Binding Constants,” J. Chem. Educ. 1987, 64, 814–816. • Pandey, S.; Powell, J. R.; McHale, M. E. R.; Acree Jr., W. E. “Quantitative Determination of Cr(III) and Co(II) Using a Spectroscopic H-Point Standard Addition,” J. Chem. Educ. 1997, 74, 848–850. • Parody-Morreale, A.; Cámara-Artigas, A.; Sánchez-Ruiz, J. M. “Spectrophotometric Determination of the Binding Constants of Succinate and Chloride to Glutamic Oxalacetic Transaminase,” J. Chem. Educ. 1990, 67, 988–990. • Ravelo-Perez, L. M.; Hernández-Borges, J.; Rodríguez-Delgado, M. A.; Borges-Miquel, T. “Spectrophotometric Analysis of Lycopene in Tomatoes and Watermelons: A Practical Class,” Chem. Educator 2008, 13, 1–3. • Russell, D. D.; Potts, J.; Russell, R. M.; Olson, C.; Schimpf, M. “Spectroscopic and Potentiometric Investigation of a Diprotic Acid: An Experimental Approach to Understanding Alpha Functions,” Chem. Educator 1999, 4, 68–72. • Smith, E. T.; Matachek, J. R. “A Colorful Investigation of a Diprotic Acid: A General Chemistry Laboratory Exercise,” Chem. Educator 2002, 7, 359–363 • Tello-Solis, S. R. “Thermal Unfolding of Lysozyme Studied by UV Difference Spectroscopy,” Chem. Educator 2008, 13, 16–18. • Tucker, S.; Robinson, R.; Keane, C.; Boff, M.; Zenko, M.; Batish, S.; Street, Jr., K. W. “Colorimetric Determination of pH,” J. Chem. Educ. 1989, 66, 769–771. • Vitt, J. E. “Troubleshooting 101: An Instrumental Analysis Experiment,” J. Chem. Educ. 2008, 85, 1660–1662. • Williams, K. R.; Cole, S. R.; Boyette, S. E.; Schulman, S. G. “The Use of Dristan Nasal Spray as the Unknown for Simultaneous Spectrophotometric Analysis of a Mixture,” J. Chem. Educ. 1990, 67, 535. • Walmsley, F. “Aggregation in Dyes: A Spectrophotometric Study,” J. Chem. Educ. 1992, 69, 583. Wells, T. A. “Construction of a Simple Myoglobin-Based Optical Biosensor,” Chem. Educator 2007, 12, 1–3. Yarnelle, M. K.; West, K. J. “Modification of an Ultraviolet Spectrophotometric Determination of the Active Ingredients in APC Tablets,” J. Chem. Educ. 1989, 66, 601–602. IR Spectroscopy • Dragon, S.; Fitch, A. “Infrared Spectroscopy Determination of Lead Binding to Ethylenediaminetetraacetic Acid,” J. Chem. Educ. 1998, 75, 1018–1021. • Frohlich, H. “Using Infrared Spectroscopy Measurements to Study Intermolecular Hydrogen Bonding,” J. Chem. Educ. 1993, 70, A3–A6. • Garizi, N.; Macias, A.; Furch, T.; Fan, R.; Wagenknecht, P.; Singmaster, K. A. “Cigarette Smoke Analysis Using an Inexpensive Gas-Phase IR Cell,” J. Chem. Educ. 2001, 78, 1665–1666. • Indralingam, R.; Nepomuceno, A. I. 
“The Use of Disposable IR Cards for Quantitative Analysis Using an Internal Standard,” J. Chem. Educ. 2001, 78, 958–960. • Mathias, L. J.; Hankins, M. G.; Bertolucci, C. M.; Grubb, T. L.; Muthiah, J. “Quantitative Analysis by FTIR: Thin Films of Copolymers of Ethylene and Vinyl Acetate,” J. Chem. Educ. 1992, 69, A217– A219. • Schuttlefield, J. D.; Grassian, V. H. “ATR-FTIR Spectroscopy in the Undergraduate Chemistry Laboratory. Part I: Fundamentals and Examples,” J. Chem. Educ. 2008, 85, 279–281. • Schuttlefield, J. D.; Larsen, S. C.; Grassian, V. H. “ATR-FTIR Spectroscopy in the Undergraduate Chemistry Laboratory. Part II: A Physical Chemistry Laboratory Experiment on Surface Adsorption,” J. Chem. Educ. 2008, 85, 282–284. • Seasholtz, M. B.; Pence, L. E.; Moe Jr., O. A. “Determination of Carbon Monoxide in Automobile Exhaust by FTIR Spectroscopy,” J. Chem. Educ. 1988, 65, 820–823. Atomic Absorption and Atomic Emission Spectroscopy • Amarasiriwardena, D. “Teaching analytical atomic spectroscopy advances in an environmental chemistry class using a project-based laboratory approach: investigation of lead and arsenic distributions in a lead arsenate contaminated apple orchard,” Anal. Bioanal. Chem. 2007, 388, 307–314. • Bazzi, A.; Bazzi, J.; Deng, Y.’ Ayyash, M. “Flame Atomic Absorption Spectroscopic Determination of Iron in Breakfast Cereals: A Validated Experiment for the Analytical Chemistry Laboratory,” Chem. Educator 2014, 19, 283–286. • Buffen, B. P. “Removal of Heavy Metals from Water: An Environmentally Significant Atomic Absorption Spectrometry Experiment,” J. Chem. Educ. 1999, 76, 1678–1679. • Dockery, C. R.; Blew, M. J.; Goode, S. R. “Visualizing the Solute Vaporization Interference in Flame Atomic Absorption Spectroscopy,” J. Chem. Educ. 2008, 85, 854–858. • Donas, M. K.; Whissel, G.; Dumas, A.; Golden, K. “Analyzing Lead Content in Ancient Bronze Coins by Flame Atomic Absorption Spectroscopy,” J. Chem. Educ. 2009, 86, 343–346. • Finch, L. E.; Hillyer, M. M.; Leopold, M. C. “Quantitative Analysis of Heavy Metals in Children’s Toys and Jewelry: A Multi-Instrument, Multitechnique Exercise in Analytical Chemistry and Public Health,” J. Chem. Educ. 2015, 92, 849–854. • Garrison, N.; Cunningham, M.; Varys, D.; Schauer, D. J. “Discovering New Biosorbents with Atomic Absorption Spectroscopy: An Undergraduate Laboratory Experiment,” J. Chem. Educ. 2014, 91, 583–585. • Gilles de Pelichy, L. D.; Adams, C.; Smith, E. T. “Analysis of the Essential Nutrient Strontium in Marine Aquariums by Atomic Absorption Spectroscopy,” J. Chem. Educ. 1997, 74, 1192–1194. • Hoskins, L. C.; Reichardt, P. B.; Stolzberg, R. J. “Determination of the Extraction Constant for Zinc Pyrrolidinecarbodithioate,” J. Chem. Educ. 1981, 58, 580–581. • Kooser, A. S.; Jenkins, J. L.; Welch, L. E. “Inductively Coupled Plasma-Atomic Emission Spectroscopy: Two Laboratory Activities for the Undergraduate Instrumental Analysis Course,” J. Chem. Educ. 2003, 80, 86–88. • Kostecka, K. S. “Atomic Absorption Spectroscopy of Calcium in Foodstuffs in Non-Science-Major Courses,” J. Chem. Educ. 2000, 77, 1321–1323. • Kristian, K. E.; Friedbauer, S.; Kabashi, D.; Ferencz, K. M.; Barajas, J. C.; O’Brien, K. “A Simplified Digestion Protocol for the Analysis of Hg in Fish by Cold Vapor Atomic Absorption Spectroscopy,” J. Chem. Educ. 2015, 92, 698–702. • Lehman, T. A.; Everett, W. W. “Solubility of Lead Sulfate in Water and in Sodium Sulfate Solutions,” J. Chem. Educ. 1982, 59, 797. • Markow, P. G. 
“Determining the Lead Content of Paint Chips,” J. Chem. Educ. 1996, 73, 178–179. • Masina, M. R.; Nkosi, P. A.; Rasmussen, P. W.; Shelembe, J. S.; Tyobeka, T. E. “Determination of Metal Ions in Pineapple Juice and Effluent of a Fruit Canning Industry,” J. Chem. Educ. 1989, 66, 342–343. • Quigley, M. N. “Determination of Calcium in Analgesic Tablets using Atomic Absorption Spectrophotometry,” J. Chem. Educ. 1994, 71, 800. • Quigley, M. N.; Vernon, F. “Determination of Trace Metal Ion Concentrations in Seawater,” J. Chem. Educ. 1996, 73, 671–675. • Quigley, M. N.; Vernon, F. “A Matrix Modification Experiment for Use in Electrothermal Atomic Absorption Spectrophotometry,” J. Chem. Educ. 1996, 73, 980–981. • Palkendo, J. A.; Kovach, J.; Betts, T. A. “Determination of Wear Metals in Used Motor Oil by Flame Atomic Absorption Spectroscopy,” J. Chem. Educ. 2014, 91, 579–582. • Rheingold, A. L.; Hues, S.; Cohen, M. N. “Strontium and Zinc Content in Bones as an Indication of Diet,” J. Chem. Educ. 1983, 60, 233–234. • Rocha, F. R. P.; Nóbrega, J. A. “Effects of Solution Physical Properties on Copper and Chromium Signals in Flame Atomic Absorption Spectrometry,” J. Chem. Educ. 1996, 73, 982–984. Fluorescence and Phosphorescence Spectroscopy • Bigger, S. W.; Bigger, A. S.; Ghiggino, K. P. “FluSpec: A Simulated Experiment in Fluorescence Spectroscopy,” J. Chem. Educ. 2014, 91, 1081–1083. • Buccigross, J. M.; Bedell, C. M.; Suding-Moster, H. L. “Fluorescent Measurement of TNS Binding to Calmodulin,” J. Chem. Educ. 1996, 73, 275–278. • Henderleiter, J. A.; Hyslopo, R. M. “The Analysis of Riboflavin in Urine by Fluorescence,” J. Chem. Educ. 1996, 73, 563–564. • Koenig, M. H.; Yi, E. P.; Sandridge, M. J.; Mathew, A. S.; Demas, J. N. “Open-Box Approach to Measuring Fluorescence Quenching Using an iPad Screen and Digital SLR Camera,” J. Chem. Educ. 2015, 92, 310–316. • Lagoria, M. G.; Román, E. S. “How Does Light Scattering Affect Luminescence? Fluorescence Spectra and Quantum Yields in the Solid Form,” J. Chem. Educ. 2002, 79, 1362–1367. • Richardson, D. P.; Chang, R. “Lecture Demonstrations of Fluorescence and Phosphorescence,” Chem. Educator 2007, 12, 272–274. • Seixas de Melo, J. S.; Cabral, C.; Burrows, H. D. “Photochemistry and Photophysics in the Laboratory. Showing the Role of Radiationless and Radiative Decay of Excited States,” Chem. Educator 2007, 12, 1–6. • Sheffield, M. C.; Nahir, T. M. “Analysis of Selenium in Brazil Nuts by Microwave Digestion and Fluorescence Detection,” J. Chem. Educ. 2002, 79, 1345–1347. Signal Averaging • Blitz, J. P.; Klarup, D. G. “Signal-to-Noise Ratio, Signal Processing, and Spectral Information in the Instrumental Analysis Laboratory,” J. Chem. Educ. 2002, 79, 1358–1360. • Stolzberg, R. J. “Introduction to Signals and Noise in an Instrumental Method Course,” J. Chem. Educ. 1983, 60, 171–172. • Tardy, D. C. “Signal Averaging. A Signal-to-Noise Enhancement Experiment for the Advanced Chemistry Laboratory,” J. Chem. Educ. 1986, 63, 648–650. The following sources provide additional information on spectroscopy in the following areas: general spectroscopy, Beer’s law, instrumentation, Fourier transforms, IR spectroscopy, atomic asorption and emission, luminescence, and applications. General Spectroscopy • Ball, D. W. “Units! Units! Units!” Spectroscopy 1995, 10(8), 44–47. • A History of Analytical Chemistry, Laitinen, H. A.; Ewing, G. W, Eds. The Division of Analytical Chemistry of the American Chemical Society: Washington, D. C., 1977, p103–243. • Ingle, J. D.; Crouch, S. 
R. Spectrochemical Analysis, Prentice Hall, Englewood Cliffs, N. J.; 1988. • Macomber, R. S. “A Unifying Approach to Absorption Spectroscopy at the Undergraduate Level,” J. Chem. Educ. 1997, 74, 65–67. • Orchin, M.; Jaffe, H. H. Symmetry, Orbitals and Spectra, Wiley-Interscience: New York, 1971. • Thomas, N. C. “The Early History of Spectroscopy,” J. Chem. Educ. 1991, 68, 631–633. Beer’s Law • Lykos, P. “The Beer-Lambert Law Revisited: A Development without Calculus,” J. Chem. Educ. 1992, 69, 730–732. • Ricci, R. W.; Ditzler, M. A.; Nestor, L. P. “Discovering the Beer-Lambert Law,” J. Chem. Educ. 1994, 71, 983–985. Instrumentation • Altermose, I. R. “Evolution of Instrumentation for UV-Visible Spectrophotometry: Part I,” J. Chem. Educ. 1986, 63, A216–A223. • Altermose, I. R. “Evolution of Instrumentation for UV-Visible Spectrophotometry: Part II,” J. Chem. Educ. 1986, 63, A262–A266. • Grossman, W. E. L. “The Optical Characteristics and Production of Diffraction Gratings,” J. Chem. Educ. 1993, 70, 741–748. • Jones, D. G. “Photodiode Array Detectors in UV-Vis Spectroscopy: Part I,” Anal. Chem. 1985, 57,1057A–1073A. • Jones, D. G. “Photodiode Array Detectors in UV-Vis Spectroscopy: Part II,” Anal. Chem. 1985, 11, 1207A–1214A. • Palmer, C. “Diffraction Gratings,” Spectroscopy, 1995, 10(2), 14–15. Fourier Transforms • Bracewell, R. N. “The Fourier Transform,” Sci. American 1989, 260(6), 85–95. • Glasser, L. “Fourier Transforms for Chemists: Part I. Introduction to the Fourier Transform,” J. Chem. Educ. 1987, 64, A228–A233. • Glasser, L. “Fourier Transforms for Chemists: Part II. Fourier Transforms in Chemistry and Spectroscopy,” J. Chem. Educ. 1987, 64, A260–A266. • Glasser, L. “Fourier Transforms for Chemists: Part III. Fourier Transforms in Data Treatment,” J. Chem. Educ. 1987, 64, A306–A313. • Graff, D. K. “Fourier and Hadamard: Transforms in Spectroscopy,” J. Chem. Educ. 1995, 72, 304–309. • Griffiths, P. R. Chemical Fourier Transform Spectroscopy, Wiley-Interscience: New York, 1975. • Transform Techniques in Chemistry, Griffiths, P. R. Ed., Plenum Press: New York, 1978. • Perkins, W. E. “Fourier Transform Infrared Spectroscopy: Part I. Instrumentation,” J. Chem. Educ. 1986, 63, A5–A10. • Perkins, W. E. “Fourier Transform Infrared Spectroscopy: Part II. Advantages of FT-IR,” J. Chem. Educ. 1987, 64, A269–A271. • Perkins, W. E. “Fourier Transform Infrared Spectroscopy: Part III. Applications,” J. Chem. Educ. 1987, 64, A296–A305. • Strong III, F. C. “How the Fourier Transform Infrared Spectrophotometer Works,” J. Chem. Educ. 1979, 56, 681–684. IR Spectroscopy. • Optical Spectroscopy: Sampling Techniques Manual, Harrick Scientific Corporation: Ossining, N. Y., 1987. • Leyden, D. E.; Shreedhara Murthy, R. S. “Surface-Selective Sampling Techniques in Fourier Transform Infrared Spectroscopy,” Spectroscopy 1987, 2(2), 28–36. • Porro, T. J.; Pattacini, S. C. “Sample Handling for Mid-Infrared Spectroscopy, Part I: Solid and Liquid Sampling,” Spectroscopy 1993, 8(7), 40–47. • Porro, T. J.; Pattacini, S. C. “Sample Handling for Mid-Infrared Spectroscopy, Part II: Specialized Techniques,” Spectroscopy 1993, 8(8), 39–44. Atomic Absorption and Emission • Blades, M. W.; Weir, D. G. “Fundamental Studies of the Inductively Coupled Plasma,” Spectroscopy 1994, 9, 14–21. • Greenfield, S. “Invention of the Annular Inductively Coupled Plasma as a Spectroscopic Source,” J. Chem. Educ. 2000, 77, 584–591. • Hieftje, G. M. “Atomic Absorption Spectrometry - Has it Gone or Where is it Going?” J. Anal. At. Spectrom. 
1989, 4, 117–122. • Jarrell, R. F. “A Brief History of Atomic Emission Spectrochemical Analysis, 1666–1950,” J. Chem. Educ. 2000, 77, 573–576 • Koirtyohann, S. R. “A History of Atomic Absorption Spectrometry From an Academic Perspective,”Anal. Chem. 1991, 63, 1024A–1031A. • L’Vov, B. V. “Graphite Furnace Atomic Absorption Spectrometry,” Anal. Chem. 1991, 63, 924A–931A. • Slavin, W. “A Comparison of Atomic Spectroscopic Analytical Techniques,” Spectroscopy, 1991, 6, 16–21. • Van Loon, J. C. Analytical Atomic Absorption Spectroscopy, Academic Press: New York, 1980. • Walsh, A. “The Development of Atomic Absorption Methods of Elemental Analysis 1952–1962,” Anal. Chem. 1991, 63, 933A–941A. • Welz, B. Atomic Absorption Spectroscopy, VCH: Deerfield Beach, FL, 1985. Luminescence Spectroscopy • Guilbault, G. G. Practical Fluorescence, Decker: New York, 1990. • Schenk, G. “Historical Overview of Fluorescence Analysis to 1980,” Spectroscopy 1997, 12, 47–56. • Vo-Dinh, T. Room-Temperature Phosphorimetry for Chemical Analysis, Wiley-Interscience: New York, 1984. • Winefordner, J. D.; Schulman, S. G.; O’Haver, T. C. Luminescence Spectroscopy in Analytical Chemistry, Wiley-Interscience: New York, 1969. Applications • Trace Analysis and Spectroscopic Methods for Molecules, Christian, G. D.; Callis, J. B. Eds., Wiley-Interscience: New York, 1986. • Vandecasteele, C.; Block, C. B. Modern Methods for Trace Element Determination, Wiley: Chichester, England, 1994. • Skoog, D. A.; Holler, F. J.; Nieman, T. A. Principles of Instrumental Analysis, Saunders: Philadelphia, 1998. • Van Loon, J. C. Selected Methods of Trace Metal Analysis: Biological and Environmental Samples, Wiley- Interscience: New York, 1985. Gathered here are resources and experiments for analyzing multicomponent samples using mathematical techniques not covered in this textbook. • Aberasturi, F.; Jimenez, A. I.; Jimenez, F.; Arias, J. J. “UV-Visible First-Derivative Spectrophotometry Applied to an Analysis of a Vitamin Mixture,” J. Chem. Educ. 2001, 78, 793–795. • Afkhami, A.; Abbasi-Tarighat, M.; Bahram, M.; Abdollahi, H. “A new strategy for solving matrix effect in multivariate calibration standard addition data using combination of H-point curve isolation and H-point standard addition methods,” Anal. Chim. Acta 2008, 613, 144–151. • Brown, C. W.; Obremski, R. J. “Multicomponent Quantitative Analysis,” Appl. Spectrosc. Rev. 1984, 20, 373–418. • Charles, M. J.; Martin, N. W.; Msimanga, H. Z. “Simultaneous Determination of Aspirin, Salicylamide, and Caffeine in Pain Relievers by Target Factor Analysis,” J. Chem. Educ. 1997, 74, 1114–1117. • Dado, G.; Rosenthal, J. “Simultaneous Determination of Cobalt, Copper, and Nickel by Multivariate Linear Regression,” J. Chem. Educ. 1990, 67, 797–800. • DiTusa, M. R.; Schilt, A. A. “Selection of Wavelengths for Optimum Precision in Simultaneous Spectrophotometric Determinations,” J. Chem. Educ. 1985, 62, 541–542. • Gómez, D. G.; de la Peña, A. M.; Mansilla, A. E.; Olivieri, A. C. “Spectrophotometric Analysis of Mixtures by Classical Least-Squares Calibration: An Advanced Experiment Introducing MATLAB,” Chem. Educator 2003, 8, 187–191. • Harvey, D. T.; Bowman, A. “Factor Analysis of Multicomponent Samples,” J. Chem. Educ. 1990, 67, 470–472. • Lucio-Gutierrez, J. R.; Salazar-Cavazos, M. L.; de Torres, N. W. “Chemometrics in the Teaching Lab. Quantification of a Ternary Mixture of Common Pharmaceuticals by First- and Second-Derivative IR Spectroscopy,” Chem. Educator 2004, 9, 234–238. 
• Padney, S.; McHale, M. E. R.; Coym, K. S.; Acree Jr., W. E. “Bilinear Regression Analysis as a Means to Reduce Matrix Effects in Simultaneous Spectrophotometric Determination of Cr(III) and Co(II),” J. Chem. Educ. 1998, 75, 878–880. • Raymond, M.; Jochum, C.; Kowalski, B. R. “Optimal Multicomponent Analysis Using the Generalized Standard Addition Method,” J. Chem. Educ. 1983, 60, 1072–1073. • Ribone, M. E.; Pagani, A. P.; Olivieri, A. C.; Goicoechea, H. C. “Determination of the Active Principle in a Spectrophotometry and Principal Component Regression Analysis,” J. Chem. Educ. 2000, 77, 1330–1333. • Rojas, F. S.; Ojeda, C. B. “Recent developments in derivative ultraviolet/visible absorption spectrophotometry: 2004–2008,” Anal. Chim. Acta 2009, 635, 22–44.
Chapter Summary

The spectrophotometric methods of analysis covered in this chapter include those based on the absorption, emission, or scattering of electromagnetic radiation. When a molecule absorbs UV/Vis radiation it undergoes a change in its valence shell electron configuration. A change in vibrational energy results from the absorption of IR radiation. Experimentally we measure the fraction of radiation transmitted, T, by the sample. Instrumentation for measuring absorption requires a source of electromagnetic radiation, a means for selecting a wavelength, and a detector for measuring transmittance. Beer's law relates absorbance to both transmittance and to the concentration of the absorbing species ($A = - \log T = \varepsilon b C$).

In atomic absorption we measure the absorption of radiation by gas phase atoms. Samples are atomized using thermal energy from either a flame or a graphite furnace. Because the width of an atom's absorption band is so narrow, the continuum sources common for molecular absorption are not used. Instead, a hollow cathode lamp provides the necessary line source of radiation. Atomic absorption suffers from a number of spectral and chemical interferences. The absorption or scattering of radiation from the sample's matrix are important spectral interferences that are minimized by background correction. Chemical interferences include the formation of nonvolatile forms of the analyte and ionization of the analyte. The former interference is minimized by using a releasing agent or a protecting agent, and an ionization suppressor helps minimize the latter interference.

When a molecule absorbs radiation it moves from a lower energy state to a higher energy state. In returning to the lower energy state the molecule may emit radiation. This process is called photoluminescence. One form of photoluminescence is fluorescence, in which the analyte emits a photon without undergoing a change in its spin state. In phosphorescence, emission occurs with a change in the analyte's spin state. For low concentrations of analyte, both fluorescent and phosphorescent emission intensities are a linear function of the analyte's concentration. Thermally excited atoms also emit radiation, forming the basis for atomic emission spectroscopy. Thermal excitation is achieved using either a flame or a plasma.

Spectroscopic measurements also include the scattering of light by a particulate form of the analyte. In turbidimetry, the decrease in the radiation's transmission through the sample is measured and related to the analyte's concentration through an equation similar to Beer's law. In nephelometry we measure the intensity of scattered radiation, which varies linearly with the analyte's concentration.
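The absorbance, transmittance, and Beer's law relationships summarized here are easy to check numerically. The following short Python sketch is a minimal illustration added for convenience and is not part of the original chapter; the molar absorptivity in the example is an arbitrary, assumed value.

```python
import math

def absorbance_from_percent_T(percent_T):
    """Convert percent transmittance to absorbance: A = -log10(T)."""
    return -math.log10(percent_T / 100)

def beers_law_concentration(A, epsilon, b=1.00):
    """Solve Beer's law, A = epsilon * b * C, for the molar concentration C."""
    return A / (epsilon * b)

# Example: a solution with 35.0 %T in a 1.00-cm cell, assuming epsilon = 1000 M^-1 cm^-1
A = absorbance_from_percent_T(35.0)
C = beers_law_concentration(A, epsilon=1000, b=1.00)
print(f"A = {A:.3f}; C = {C:.2e} M")
```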
Key Terms

absorbance, absorbance spectrum, absorptivity, amplitude, atomization, attenuated total reflectance, background correction, Beer's law, chemiluminescence, chromophore, continuum source, dark current, double-beam, effective bandwidth, electromagnetic radiation, electromagnetic spectrum, emission, emission spectrum, excitation spectrum, external conversion, Fellgett's advantage, fiber-optic probe, filter, filter photometer, fluorescence, fluorescent quantum yield, fluorimeter, frequency, graphite furnace, interferogram, interferometer, internal conversion, intersystem crossing, ionization suppressor, Jacquinot's advantage, lifetime, line source, method of continuous variations, molar absorptivity, mole-ratio method, monochromatic, monochromator, nephelometry, nominal wavelength, phase angle, phosphorescence, phosphorescent quantum yield, photodiode array, photoluminescence, photon, plasma, polychromatic, protecting agent, radiationless deactivation, relaxation, releasing agent, resolution, self-absorption, signal averaging, signal processor, signal-to-noise ratio, single-beam, singlet excited state, slope-ratio method, spectral searching, spectrofluorometer, spectrophotometer, spectroscopy, stray radiation, transducer, transmittance, triplet excited state, turbidimetry, vibrational relaxation, wavelength, wavenumber
In Chapter 10 we examined several spectroscopic techniques that take advantage of the interaction between electromagnetic radiation and matter. In this chapter we turn our attention to electrochemical techniques in which the potential, current, or charge in an electrochemical cell serves as the analytical signal. Although there are only three fundamental electrochemical signals, there are many possible experimental designs—too many, in fact, to cover adequately in an introductory textbook. The simplest division of electrochemical techniques is between bulk techniques, in which we measure a property of the solution in the electrochemical cell, and interfacial techniques, in which the potential, current, or charge depends on the species present at the interface between an electrode and the solution in which it sits. The measurement of a solution's conductivity, which is proportional to the total concentration of dissolved ions, is one example of a bulk electrochemical technique. A determination of pH using a pH electrode is an example of an interfacial electrochemical technique. Only interfacial electrochemical methods receive further consideration in this chapter.

• 11.1: Overview of Electrochemistry. The focus of this chapter is on analytical techniques that use a measurement of potential, current, or charge to determine an analyte's concentration or to characterize an analyte's chemical reactivity. Collectively we call this area of analytical chemistry electrochemistry because it originated from the study of the movement of electrons in an oxidation–reduction reaction.
• 11.2: Potentiometric Methods. In potentiometry we measure the potential of an electrochemical cell under static conditions. Because no current—or only a negligible current—flows through the electrochemical cell, its composition remains unchanged. For this reason, potentiometry is a useful quantitative method of analysis.
• 11.3: Coulometric Methods. Coulometry is based on an exhaustive electrolysis of the analyte. By exhaustive we mean that the analyte is oxidized or reduced completely at the working electrode, or that it reacts completely with a reagent generated at the working electrode.
• 11.4: Voltammetric and Amperometric Methods. In voltammetry we apply a time-dependent potential to an electrochemical cell and measure the resulting current as a function of that potential. We call the resulting plot of current versus applied potential a voltammogram, and it is the electrochemical equivalent of a spectrum in spectroscopy, providing quantitative and qualitative information about the species involved in the oxidation or reduction reaction.
• 11.5: Problems. End-of-chapter problems to test your understanding of topics in this chapter.
• 11.6: Additional Resources. A compendium of resources to accompany topics in this chapter.
• 11.7: Chapter Summary and Key Terms. Summary of the chapter's main topics and a list of key terms introduced in this chapter.

11: Electrochemical Methods

The focus of this chapter is on analytical techniques that use a measurement of potential, current, or charge to determine an analyte's concentration or to characterize an analyte's chemical reactivity. Collectively we call this area of analytical chemistry electrochemistry because it originated from the study of the movement of electrons in an oxidation–reduction reaction. Despite the difference in instrumentation, all electrochemical techniques share several common features.
Before we consider individual examples in greater detail, let’s take a moment to consider some of these similarities. As you work through the chapter, this overview will help you focus on similarities between different electrochemical methods of analysis. You will find it easier to understand a new analytical method when you can see its relationship to other similar methods.
Five Important Concepts
To understand electrochemistry we need to appreciate five important and interrelated concepts: (1) the electrode’s potential determines the analyte’s form at the electrode’s surface; (2) the concentration of analyte at the electrode’s surface may not be the same as its concentration in bulk solution; (3) in addition to an oxidation–reduction reaction, the analyte may participate in other chemical reactions; (4) current is a measure of the rate of the analyte’s oxidation or reduction; and (5) we cannot control simultaneously current and potential. The material in this section—particularly the five important concepts—draws upon a vision for understanding electrochemistry outlined by Larry Faulkner in the article “Understanding Electrochemistry: Some Distinctive Concepts,” J. Chem. Educ. 1983, 60, 262–264. See also, Kissinger, P. T.; Bott, A. W. “Electrochemistry for the Non-Electrochemist,” Current Separations, 2002, 20:2, 51–53.
The Electrode's Potential Determines the Analyte's Form
In Chapter 6 we introduced the ladder diagram as a tool for predicting how a change in solution conditions affects the position of an equilibrium reaction. Figure 11.1.1 , for example, shows a ladder diagram for the Fe3+/Fe2+ and the Sn4+/Sn2+ equilibria. If we place an electrode in a solution of Fe3+ and Sn4+ and adjust its potential to +0.500 V, Fe3+ is reduced to Fe2+ but Sn4+ is not reduced to Sn2+.
Interfacial Concentrations May Not Equal Bulk Concentrations
In Chapter 6 we introduced the Nernst equation, which provides a mathematical relationship between the electrode’s potential and the concentrations of an analyte’s oxidized and reduced forms in solution. For example, the Nernst equation for Fe3+ and Fe2+ is $E=E_{\mathrm{Fe}^{3+} / \mathrm{Fe}^{2+}}^{\circ}-\frac{R T}{n F} \ln \frac{\left[\mathrm{Fe}^{2+}\right]}{\left[\mathrm{Fe}^{3+}\right]}=E_{\mathrm{Fe}^{3+} / \mathrm{Fe}^{2+}}^{\circ}-\frac{0.05916}{1} \log \frac{\left[\mathrm{Fe}^{2+}\right]}{\left[\mathrm{Fe}^{3+}\right]} \label{11.1}$ where E is the electrode’s potential and $E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ}$ is the standard-state reduction potential for the reaction $\text{Fe}^{3+}(aq) \rightleftharpoons \text{ Fe}^{2+}(aq) + e^-$. Because it is the potential of the electrode that determines the analyte’s form at the electrode’s surface, the concentration terms in Equation \ref{11.1} are those of Fe2+ and Fe3+ at the electrode's surface, not their concentrations in bulk solution. This distinction between a species’ surface concentration and its bulk concentration is important. Suppose we place an electrode in a solution of Fe3+ and fix its potential at +1.00 V. From the ladder diagram in Figure 11.1.1 , we know that Fe3+ is stable at this potential and, as shown in Figure 11.1.2 a, the concentration of Fe3+ is the same at all distances from the electrode’s surface. If we change the electrode’s potential to +0.500 V, the concentration of Fe3+ at the electrode’s surface decreases to approximately zero. As shown in Figure 11.1.2 b, the concentration of Fe3+ increases as we move away from the electrode’s surface until it equals the concentration of Fe3+ in bulk solution.
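To make the first two concepts concrete, the following short Python sketch (not part of the original text) rearranges Equation \ref{11.1} to estimate the ratio of Fe2+ to Fe3+ at the electrode's surface for the two potentials discussed above; it assumes 25 °C, ideal Nernstian behavior, and a standard-state potential of +0.771 V for the Fe3+/Fe2+ couple, the value used later in this chapter.

```python
# Minimal sketch (not from the text): surface ratio of Fe2+ to Fe3+ predicted
# by Equation 11.1 at 25 °C, assuming ideal Nernstian behavior and a
# standard-state potential of +0.771 V for the Fe3+/Fe2+ couple.

E_STD = 0.771          # standard-state reduction potential, V
SLOPE = 0.05916        # Nernst slope at 25 °C for n = 1, V per decade

def surface_ratio(E_applied):
    """Return [Fe2+]/[Fe3+] at the electrode's surface for an applied potential.

    Rearranging E = E_std - 0.05916*log([Fe2+]/[Fe3+]) gives
    log([Fe2+]/[Fe3+]) = (E_std - E)/0.05916.
    """
    return 10 ** ((E_STD - E_applied) / SLOPE)

for E in (1.00, 0.500):
    print(f"E = {E:+.3f} V -> [Fe2+]/[Fe3+] at surface = {surface_ratio(E):.1e}")
# At +1.00 V the ratio is ~1e-4, so Fe3+ is stable at the surface; at +0.500 V
# the ratio is ~4e+4, so the surface concentration of Fe3+ is essentially zero.
```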
The resulting concentration gradient causes additional Fe3+ from the bulk solution to diffuse to the electrode’s surface. We call the region of solution that contains this concentration gradient in Fe3+ the diffusion layer. We will have more to say about this in Chapter 11.4.
The Analyte May Participate in Other Reactions
Figure 11.1.1 and Figure 11.1.2 show how the electrode’s potential affects the concentration of Fe3+ and how the concentration of Fe3+ varies as a function of distance from the electrode’s surface. The reduction of Fe3+ to Fe2+, which is governed by Equation \ref{11.1}, may not be the only reaction that affects the concentration of Fe3+ in bulk solution or at the electrode’s surface. The adsorption of Fe3+ at the electrode’s surface or the formation of a metal–ligand complex in bulk solution, such as Fe(OH)2+, also affects the concentration of Fe3+.
Current is a Measure of Rate
The reduction of Fe3+ to Fe2+ consumes an electron, which is drawn from the electrode. The oxidation of another species, perhaps the solvent, at a second electrode is the source of this electron. Because the reduction of Fe3+ to Fe2+ consumes one electron, the flow of electrons between the electrodes—in other words, the current—is a measure of the rate at which Fe3+ is reduced. One important consequence of this observation is that the current is zero when the reaction $\text{Fe}^{3+}(aq) \rightleftharpoons \text{ Fe}^{2+}(aq) + e^-$ is at equilibrium. The rate of the reaction $\text{Fe}^{3+}(aq) \rightleftharpoons \text{ Fe}^{2+}(aq) + e^-$ is the change in the concentration of Fe3+ as a function of time.
We Cannot Control Simultaneously Both the Current and the Potential
If a solution of Fe3+ and Fe2+ is at equilibrium, the current is zero and the potential is given by Equation \ref{11.1}. If we change the potential away from its equilibrium position, current flows as the system moves toward its new equilibrium position. Although the initial current is quite large, it decreases over time, reaching zero when the reaction reaches equilibrium. The current, therefore, changes in response to the applied potential. Alternatively, we can pass a fixed current through the electrochemical cell, forcing the reduction of Fe3+ to Fe2+. Because the concentration of Fe3+ decreases and the concentration of Fe2+ increases, the potential, as given by Equation \ref{11.1}, also changes over time. In short, if we choose to control the potential, then we must accept the resulting current, and we must accept the resulting potential if we choose to control the current.
Controlling and Measuring Current and Potential
Electrochemical measurements are made in an electrochemical cell that consists of two or more electrodes and the electronic circuitry needed to control and measure the current and the potential. In this section we introduce the basic components of electrochemical instrumentation. The simplest electrochemical cell uses two electrodes. The potential of one electrode is sensitive to the analyte’s concentration, and is called the working electrode or the indicator electrode. The second electrode, which we call the counter electrode, completes the electrical circuit and provides a reference potential against which we measure the working electrode’s potential. Ideally the counter electrode’s potential remains constant so that we can assign to the working electrode any change in the overall cell potential.
If the counter electrode’s potential is not constant, then we replace it with two electrodes: a reference electrode whose potential remains constant and an auxiliary electrode that completes the electrical circuit. Because we cannot control simultaneously the current and the potential, there are only three basic experimental designs: (1) we can measure the potential when the current is zero, (2) we can measure the potential while we control the current, and (3) we can measure the current while we control the potential. Each of these experimental designs relies on Ohm’s law, which states that the current, i, passing through an electrical circuit of resistance, R, generates a potential, E. $E = i R\nonumber$ Each of these experimental designs uses a different type of instrument. To help us understand how we can control and measure current and potential, we will describe these instruments as if the analyst is operating them manually. To do so the analyst observes a change in the current or the potential and manually adjusts the instrument’s settings to maintain the desired experimental conditions. It is important to understand that modern electrochemical instruments provide an automated, electronic means for controlling and measuring current and potential, and that they do so by using very different electronic circuitry than that described here. This point bears repeating: It is important to understand that the experimental designs in Figure 11.1.3 , Figure 11.1.4 , and Figure 11.1.5 do not represent the electrochemical instruments you will find in today’s analytical labs. For further information about modern electrochemical instrumentation, see this chapter’s additional resources.
Potentiometers
To measure the potential of an electrochemical cell under a condition of zero current we use a potentiometer. Figure 11.1.3 shows a schematic diagram for a manual potentiometer that consists of a power supply, an electrochemical cell with a working electrode and a counter electrode, an ammeter to measure the current that passes through the electrochemical cell, an adjustable, slide-wire resistor, and a tap key for closing the circuit through the electrochemical cell. Using Ohm’s law, the current in the upper half of the circuit is $i_{\text {upper}}=\frac{E_{\mathrm{PS}}}{R_{a b}} \nonumber$ where EPS is the power supply’s potential, and Rab is the resistance between points a and b of the slide-wire resistor. In a similar manner, the current in the lower half of the circuit is $i_{\text {lower}}=\frac{E_{\text {cell}}}{R_{c b}} \nonumber$ where Ecell is the potential difference between the working electrode and the counter electrode, and Rcb is the resistance between the points c and b of the slide-wire resistor. When iupper = ilower = 0, no current flows through the ammeter and the potential of the electrochemical cell is $E_{\mathrm{cell}}=\frac{R_{c b}}{R_{a b}} \times E_{\mathrm{PS}} \label{11.2}$ To determine Ecell we briefly press the tap key and observe the current at the ammeter. If the current is not zero, then we adjust the slide-wire resistor and remeasure the current, continuing this process until the current is zero. When the current is zero, we use Equation \ref{11.2} to calculate Ecell. Using the tap key to briefly close the circuit through the electrochemical cell minimizes the current that passes through the cell and limits the change in the electrochemical cell’s composition.
For example, passing a current of $10^{-9}$ A through the electrochemical cell for 1 s changes the concentrations of species in the cell by approximately $10^{-14}$ moles. Modern potentiometers use operational amplifiers to create a high-impedance voltmeter that measures the potential while drawing a current of less than $10^{-9}$ A.
Galvanostats
A galvanostat, a schematic diagram of which is shown in Figure 11.1.4 , allows us to control the current that flows through an electrochemical cell. The current from the power supply through the working electrode is $i=\frac{E_{\mathrm{PS}}}{R+R_{\mathrm{cell}}} \nonumber$ where EPS is the potential of the power supply, R is the resistance of the resistor, and Rcell is the resistance of the electrochemical cell. If R >> Rcell, then the current between the auxiliary and working electrodes $i=\frac{E_{\mathrm{PS}}}{R} \approx \text{constant} \nonumber$ maintains a constant value. To monitor the working electrode’s potential, which changes as the composition of the electrochemical cell changes, we can include an optional reference electrode and a high-impedance potentiometer.
Potentiostats
A potentiostat, a schematic diagram of which is shown in Figure 11.1.5 , allows us to control the working electrode’s potential. The potential of the working electrode is measured relative to a constant-potential reference electrode that is connected to the working electrode through a high-impedance potentiometer. To set the working electrode’s potential we adjust the slide wire resistor that is connected to the auxiliary electrode. If the working electrode’s potential begins to drift, we adjust the slide wire resistor to return the potential to its initial value. The current flowing between the auxiliary electrode and the working electrode is measured with an ammeter. Modern potentiostats include waveform generators that allow us to apply a time-dependent potential profile, such as a series of potential pulses, to the working electrode.
Interfacial Electrochemical Techniques
Because interfacial electrochemistry is such a broad field, let’s use Figure 11.1.6 to organize techniques by the experimental conditions we choose to use (Do we control the potential or the current? How do we change the applied potential or applied current? Do we stir the solution?) and the analytical signal we decide to measure (Current? Potential?). At the first level, we divide interfacial electrochemical techniques into static techniques and dynamic techniques. In a static technique we do not allow current to pass through the electrochemical cell and, as a result, the concentrations of all species remain constant. Potentiometry, in which we measure the potential of an electrochemical cell under static conditions, is one of the most important quantitative electrochemical methods and is discussed in detail in Chapter 11.2. Dynamic techniques, in which we allow current to flow and force a change in the concentration of species in the electrochemical cell, comprise the largest group of interfacial electrochemical techniques. Coulometry, in which we measure current as a function of time, is covered in Chapter 11.3. Amperometry and voltammetry, in which we measure current as a function of a fixed or variable potential, are the subject of Chapter 11.4.
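Before turning to the individual techniques, here is a short Python sketch (not part of the original text) of two calculations from this overview: the null-balance relation of Equation \ref{11.2} and a Faraday's-law estimate of how little material a brief, nanoampere-level current converts. The power-supply potential and resistances are illustrative values, and a one-electron process is assumed.

```python
# Minimal sketch (not from the text) of two calculations from this overview.
# The resistances and power-supply potential are illustrative values.

F = 96485.0                 # Faraday's constant, C/mol

# (1) Null-balance potentiometer: with no current through the ammeter,
#     Equation 11.2 gives the cell potential from the slide-wire resistances.
E_ps = 1.500                # power-supply potential, V
R_ab, R_cb = 100.0, 43.0    # slide-wire resistances, ohms
E_cell = (R_cb / R_ab) * E_ps
print(f"E_cell at balance = {E_cell:.3f} V")            # 0.645 V

# (2) Why a brief tap barely changes the cell: a 1e-9 A current for 1 s
#     transfers only ~1e-14 mol of electrons (assuming one electron per
#     analyte), leaving the cell's composition essentially unchanged.
current, time_s, n_electrons = 1e-9, 1.0, 1
moles_converted = current * time_s / (n_electrons * F)
print(f"moles converted = {moles_converted:.1e} mol")   # ~1.0e-14 mol
```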
In potentiometry we measure the potential of an electrochemical cell under static conditions. Because no current—or only a negligible current—flows through the electrochemical cell, its composition remains unchanged. For this reason, potentiometry is a useful quantitative method of analysis. The first quantitative potentiometric applications appeared soon after the formulation, in 1889, of the Nernst equation, which relates an electrochemical cell’s potential to the concentration of electroactive species in the cell [Stork, J. T. Anal. Chem. 1993, 65, 344A–351A]. For an on-line introduction to much of the material in this section, see Analytical Electrochemistry: Potentiometry by Erin Gross, Richard S. Kelly, and Donald M. Cannon, Jr., a resource that is part of the Analytical Sciences Digital Library. Potentiometry initially was restricted to redox equilibria at metallic electrodes, which limited its application to a few ions. In 1906, Cremer discovered that the potential difference across a thin glass membrane is a function of pH when opposite sides of the membrane are in contact with solutions that have different concentrations of H3O+. This discovery led to the development of the glass pH electrode in 1909. Other types of membranes also yield useful potentials. For example, in 1937 Kolthoff and Sanders showed that a pellet of AgCl can be used to determine the concentration of Ag+. Electrodes based on membrane potentials are called ion-selective electrodes, and their continued development extends potentiometry to a diverse array of analytes.
Potentiometric Measurements
As shown in Figure 11.1.3, we use a potentiometer to determine the difference between the potentials of two electrodes. The potential of one electrode—the working or indicator electrode—responds to the analyte’s activity and the other electrode—the counter or reference electrode—has a known, fixed potential. In this section we introduce the conventions for describing potentiometric electrochemical cells, and the relationship between the measured potential and the analyte’s activity. In Chapter 6 we noted that a chemical reaction’s equilibrium position is a function of the activities of the reactants and products, not their concentrations. To be correct, we should write the Nernst equation in terms of activities. So why didn’t we use activities in Chapter 9 when we calculated redox titration curves? There are two reasons for that choice. First, concentrations are always easier to calculate than activities. Second, in a redox titration we determine the analyte’s concentration from the titration’s end point, not from the potential at the end point. The only reason for calculating a titration curve is to evaluate its feasibility and to help us select a useful indicator. In most cases, the error we introduce by assuming that concentration and activity are identical is too small to be a significant concern. In potentiometry we cannot ignore the difference between activity and concentration. Later in this section we will consider how we can design a potentiometric method so that we can ignore the difference between activity and concentration. See Chapter 6.9 to review our earlier discussion of activity and concentration.
Potentiometric Electrochemical Cells
A schematic diagram of a typical potentiometric electrochemical cell is shown in Figure 11.2.1 . The electrochemical cell consists of two half-cells, each of which contains an electrode immersed in a solution of ions whose activities determine the electrode’s potential.
A salt bridge that contains an inert electrolyte, such as KCl, connects the two half-cells. The ends of the salt bridge are fixed with porous frits, which allow the electrolyte’s ions to move freely between the half-cells and the salt bridge. This movement of ions in the salt bridge completes the electrical circuit. By convention, we identify the electrode on the left as the anode and assign to it the oxidation reaction; thus $\mathrm{Zn}(s) \rightleftharpoons \text{ Zn}^{2+}(a q)+2 e^{-} \nonumber$ The electrode on the right is the cathode, where the reduction reaction occurs. $\mathrm{Ag}^{+}(a q)+e^{-} \rightleftharpoons \mathrm{Ag}(s) \nonumber$ The potential of the electrochemical cell in Figure 11.2.1 is for the reaction $\mathrm{Zn}(s)+2 \mathrm{Ag}^{+}(a q) \rightleftharpoons 2 \mathrm{Ag}(s)+\mathrm{Zn}^{2+}(\mathrm{aq}) \nonumber$ We also define potentiometric electrochemical cells such that the cathode is the indicator electrode and the anode is the reference electrode. The reason for separating the electrodes is to prevent the oxidation reaction and the reduction reaction from occurring at the same electrode. For example, if we place a strip of Zn metal in a solution of AgNO3, the reduction of Ag+ to Ag occurs on the surface of the Zn at the same time as a portion of the Zn metal oxidizes to Zn2+. Because the transfer of electrons from Zn to Ag+ occurs at the electrode’s surface, the electrons do not pass through the potentiometer and we cannot measure a potential.
Shorthand Notation for Electrochemical Cells
Although Figure 11.2.1 provides a useful picture of an electrochemical cell, it is not a convenient way to represent it (Imagine having to draw a picture of each electrochemical cell you are using!). A more useful way to describe an electrochemical cell is a shorthand notation that uses symbols to identify different phases and that lists the composition of each phase. We use a vertical slash (|) to identify a boundary between two phases where a potential develops, and a comma (,) to separate species in the same phase or to identify a boundary between two phases where no potential develops. Shorthand cell notations begin with the anode and continue to the cathode. For example, we describe the electrochemical cell in Figure 11.2.1 using the following shorthand notation. $\text{Zn}(s) | \text{ZnCl}_2(aq, a_{\text{Zn}^{2+}} = 0.0167) || \text{AgNO}_3(aq, a_{\text{Ag}^+} = 0.100) | \text{Ag} (s) \nonumber$ The double vertical slash (||) represents the salt bridge, the contents of which we usually do not list. Note that a double vertical slash implies that there is a potential difference between the salt bridge and each half-cell.
Example 11.2.1
What are the anodic, the cathodic, and the overall reactions responsible for the potential of the electrochemical cell in Figure 11.2.2 ? Write the shorthand notation for the electrochemical cell.
Solution
The oxidation of Ag to Ag+ occurs at the anode, which is the left half-cell. Because the solution contains a source of Cl, the anodic reaction is $\mathrm{Ag}(s)+\mathrm{Cl}^{-}(aq) \rightleftharpoons\text{ AgCl}(s)+e^{-} \nonumber$ The cathodic reaction, which is the right half-cell, is the reduction of Fe3+ to Fe2+.
$\mathrm{Fe}^{3+}(a q)+e^{-}\rightleftharpoons \text{ Fe}^{2+}(a q) \nonumber$ The overall cell reaction, therefore, is $\mathrm{Ag}(s)+\text{ Fe}^{3+}(a q)+\text{ Cl}^{-}(a q) \rightleftharpoons \mathrm{AgCl}(s)+\text{ Fe}^{2+}(a q) \nonumber$ The electrochemical cell’s shorthand notation is $\text{Ag}(s) | \text{HCl} (aq, a_{\text{Cl}^{-}} = 0.100), \text{AgCl} (\text{sat’d}) || \text{FeCl}_2(aq, a_{\text{Fe}^{2+}} = 0.0100), \text{ Fe}^{3+}(aq,a_{\text{Fe}^{3+}} = 0.0500) | \text{Pt} (s) \nonumber$ Note that the Pt cathode is an inert electrode that carries electrons to the reduction half-reaction. The electrode itself does not undergo reduction.
Exercise 11.2.1
Write the reactions occurring at the anode and the cathode for the potentiometric electrochemical cell with the following shorthand notation. Pt(s) | H2(g), H+(aq) || Cu2+(aq) | Cu(s)
Answer
The oxidation of H2 to H+ occurs at the anode $\mathrm{H}_{2}(g)\rightleftharpoons2 \mathrm{H}^{+}(a q)+2 e^{-} \nonumber$ and the reduction of Cu2+ to Cu occurs at the cathode. $\mathrm{Cu}^{2+}(a q)+2 e^{-}\rightleftharpoons\mathrm{Cu}(s) \nonumber$ The overall cell reaction, therefore, is $\mathrm{Cu}^{2+}(a q)+\text{ H}_{2}(g)\rightleftharpoons2 \mathrm{H}^{+}(a q)+\mathrm{Cu}(s) \nonumber$
Potential and Activity—The Nernst Equation
The potential of a potentiometric electrochemical cell is $E_{\text {cell }}=E_{\text {cathode }}-E_{\text {anode }} \label{11.1}$ where Ecathode and Eanode are reduction potentials for the redox reactions at the cathode and the anode, respectively. Each reduction potential is given by the Nernst equation $E=E^{\circ}-\frac{R T}{n F} \ln Q \nonumber$ where Eo is the standard-state reduction potential, R is the gas constant, T is the temperature in Kelvins, n is the number of electrons in the redox reaction, F is Faraday’s constant, and Q is the reaction quotient. At a temperature of 298 K (25oC) the Nernst equation is $E=E^{\circ}-\frac{0.05916}{n} \log Q \label{11.2}$ where E is in volts. Using Equation \ref{11.2}, the potential of the anode and cathode in Figure 11.2.1 are $E_\text{anode} = E_{\text{Zn}^{2+}/\text{Zn}}^{\circ} - \frac {0.05916} {2} \log \frac{1} {a_{\text{Zn}^{2+}}} \nonumber$ $E_\text{cathode} = E_{\text{Ag}^{+}/\text{Ag}}^{\circ} - \frac {0.05916} {1} \log \frac{1} {a_{\text{Ag}^{+}}} \nonumber$ Even though an oxidation reaction is taking place at the anode, we define the anode's potential in terms of the corresponding reduction reaction and the standard-state reduction potential. See Chapter 6.4 for a review of using the Nernst equation in calculations. Substituting Ecathode and Eanode into Equation \ref{11.1}, along with the activities of Zn2+ and Ag+ and the standard-state reduction potentials, gives Ecell as $E_\text{cell} = \left( E_{\text{Ag}^{+}/\text{Ag}}^{\circ} - \frac {0.05916} {1} \log \frac{1} {a_{\text{Ag}^{+}}} \right) - \left( E_{\text{Zn}^{2+}/\text{Zn}}^{\circ} - \frac {0.05916} {2} \log \frac{1} {a_{\text{Zn}^{2+}}} \right) \nonumber$ $E_\text{cell} = \left( 0.7996 - \frac {0.05916} {1} \log \frac{1} {0.100} \right) - \left( -0.7618 - \frac {0.05916} {2} \log \frac{1} {0.0167} \right) = 1.555 \text{ V} \nonumber$ You will find values for the standard-state reduction potentials in Appendix 13.
Example 11.2.2
What is the potential of the electrochemical cell shown in Example 11.2.1 ?
Solution
Substituting Ecathode and Eanode into Equation \ref{11.1}, along with the concentrations of Fe3+, Fe2+, and Cl and the standard-state reduction potentials gives $E_\text{cell} = \left( E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} - \frac {0.05916} {1} \log \frac{a_{\text{Fe}^{2+}}} {a_{\text{Fe}^{3+}}} \right) - \left( E_{\text{AgCl/Ag}}^{\circ} - \frac {0.05916} {1} \log a_{\text{Cl}^-} \right) \nonumber$ $E_\text{cell} = \left( 0.771 - \frac {0.05916} {1} \log \frac{0.0100} {0.0500} \right) - \left( 0.2223 - \frac {0.05916} {1} \log (0.100) \right) = 0.531 \text{ V} \nonumber$
Exercise 11.2.2
What is the potential for the electrochemical cell in Exercise 11.2.1 if the activity of H+ in the anodic half-cell is 0.100, the fugacity of H2 in the anodic half-cell is 0.500, and the activity of Cu2+ in the cathodic half-cell is 0.0500? Fugacity, $f$, is the equivalent term for the activity of a gas.
Answer
Making appropriate substitutions into Equation \ref{11.1} and solving for Ecell gives its value as $E_\text{cell} = \left( E_{\text{Cu}^{2+}/\text{Cu}}^{\circ} - \frac {0.05916} {2} \log \frac{1} {a_{\text{Cu}^{2+}}} \right) - \left( E_{\text{H}^{+}/\text{H}_2}^{\circ} - \frac {0.05916} {2} \log \frac{f_{\text{H}_2}} {a_{\text{H}^+}^2} \right) \nonumber$ $E_\text{cell} = \left( 0.3419 - \frac {0.05916} {2} \log \frac{1} {0.0500} \right) - \left( 0.0000 - \frac {0.05916} {2} \log \frac{0.500} {(0.100)^2} \right) = 0.3537 \text{ V} \nonumber$
In potentiometry, we assign the reference electrode to the anodic half-cell and assign the indicator electrode to the cathodic half-cell. Thus, if the potential of the cell in Figure 11.2.1 is +1.50 V and the activity of Zn2+ is 0.0167, then we can solve the following equation for aAg+ $1.50 \text{ V} = \left( 0.7996 - \frac {0.05916} {1} \log \frac{1} {a_{\text{Ag}^+}} \right) - \left( -0.7618 - \frac {0.05916} {2} \log \frac{1} {0.0167} \right) \nonumber$ obtaining an activity of 0.0118.
Example 11.2.3
What is the activity of Fe3+ in an electrochemical cell similar to that in Example 11.2.1 if the activity of Cl in the left-hand cell is 1.0, the activity of Fe2+ in the right-hand cell is 0.015, and Ecell is +0.546 V?
Solution
Making appropriate substitutions into Equation \ref{11.1} $0.546 = \left( 0.771 - \frac {0.05916} {1} \log \frac{0.015} {a_{\text{Fe}^{3+}}} \right) - \left( 0.2223 - \frac {0.05916} {1} \log (1.0) \right) \nonumber$ and solving for aFe3+ gives its activity as 0.0135.
Exercise 11.2.3
What is the activity of Cu2+ in the electrochemical cell in Exercise 11.2.1 if the activity of H+ in the anodic half-cell is 1.00 with a fugacity of 1.00 for H2, and an Ecell of +0.257 V?
Answer
Making appropriate substitutions into Equation \ref{11.1} $0.257 \text{ V} = \left( 0.3419 - \frac {0.05916} {2} \log \frac{1} {a_{\text{Cu}^{2+}}} \right) - \left( 0.0000 - \frac {0.05916} {2} \log \frac{1.00} {(1.00)^2} \right) \nonumber$ and solving for aCu2+ gives its activity as $1.35 \times 10^{-3}$.
Despite the apparent ease of determining an analyte’s activity using the Nernst equation, there are several problems with this approach. One problem is that standard-state potentials are temperature-dependent and the values in reference tables usually are for a temperature of 25oC. We can overcome this problem by maintaining the electrochemical cell at 25oC or by measuring the standard-state potential at the desired temperature. Another problem is that a standard-state reduction potential may have a significant matrix effect.
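The calculations in the examples and exercises above all follow the same pattern, so they are easy to script. The following Python sketch (not part of the original text) reproduces the Zn/Ag cell calculation and the inverse calculation of an unknown aAg+ from a measured cell potential; it assumes 25 °C and uses the standard-state potentials quoted above.

```python
# Minimal sketch (not from the text): the Zn/Ag cell calculations worked above,
# assuming 25 °C and the standard-state potentials quoted in this section.
import math

S = 0.05916                       # Nernst slope at 25 °C, V per decade

def reduction_potential(E_std, n, a_ion):
    """E for a M^n+/M half-reaction: E = E_std - (S/n) log(1/a_ion)."""
    return E_std - (S / n) * math.log10(1.0 / a_ion)

E_AG, E_ZN = 0.7996, -0.7618      # standard-state potentials for Ag+/Ag and Zn2+/Zn, V
a_Ag, a_Zn = 0.100, 0.0167

# Forward problem: E_cell = E_cathode - E_anode
E_cell = reduction_potential(E_AG, 1, a_Ag) - reduction_potential(E_ZN, 2, a_Zn)
print(f"E_cell = {E_cell:.3f} V")          # 1.555 V, as in the text

# Inverse problem: measured E_cell = +1.50 V and a_Zn2+ = 0.0167; solve for a_Ag+
E_meas = 1.50
E_anode = reduction_potential(E_ZN, 2, a_Zn)
# E_meas = E_AG + S*log(a_Ag) - E_anode, so log(a_Ag) = (E_meas + E_anode - E_AG)/S
a_Ag_unknown = 10 ** ((E_meas + E_anode - E_AG) / S)
print(f"a_Ag+ = {a_Ag_unknown:.4f}")       # ~0.0118, as in the text
```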
For example, the standard-state reduction potential for the Fe3+/Fe2+ redox couple is +0.735 V in 1 M HClO4, +0.70 V in 1 M HCl, and +0.53 V in 10 M HCl. The difference in potential for equimolar solutions of HCl and HClO4 is the result of a difference in the activity coefficients for Fe3+ and Fe2+ in these two media. The shift toward a more negative potential with an increase in the concentration of HCl is the result of chloride’s ability to form a stronger complex with Fe3+ than with Fe2+. We can minimize this problem by replacing the standard-state potential with a matrix-dependent formal potential. Most tables of standard-state potentials, including those in Appendix 13, include selected formal potentials. Finally, a more serious problem is the presence of additional potentials in the electrochemical cell not included in Equation \ref{11.1}. In writing the shorthand notation for an electrochemical cell we use a double slash (||) to indicate the salt bridge, suggesting a potential exists at the interface between each end of the salt bridge and the solution in which it is immersed. The origin of this potential is discussed in the following section.
Junction Potentials
A junction potential develops at the interface between two ionic solutions if there is a difference in the concentration and mobility of the ions. Consider, for example, a porous membrane that separates a solution of 0.1 M HCl from a solution of 0.01 M HCl (Figure 11.2.3 a). Because the concentration of HCl on the membrane’s left side is greater than that on the right side of the membrane, H+ and Cl will diffuse in the direction of the arrows. The mobility of H+, however, is greater than that for Cl, as shown by the difference in the lengths of their respective arrows. Because of this difference in mobility, the solution on the right side of the membrane develops an excess concentration of H+ and a positive charge (Figure 11.2.3 b). Simultaneously, the solution on the membrane’s left side develops a negative charge because there is an excess concentration of Cl. We call this difference in potential across the membrane a junction potential and represent it as Ej. The magnitude of the junction potential depends upon the difference in the concentration of ions on the two sides of the interface, and may be as large as 30–40 mV. For example, a junction potential of 33.09 mV has been measured at the interface between solutions of 0.1 M HCl and 0.1 M NaCl [Sawyer, D. T.; Roberts, J. L., Jr. Experimental Electrochemistry for Chemists, Wiley-Interscience: New York, 1974, p. 22]. A salt bridge’s junction potential is minimized by using a salt, such as KCl, for which the mobilities of the cation and anion are approximately equal. We also can minimize the junction potential by incorporating a high concentration of the salt in the salt bridge. For this reason salt bridges frequently are constructed using solutions that are saturated with KCl. Nevertheless, a small junction potential, generally of unknown magnitude, is always present. When we measure the potential of an electrochemical cell, the junction potential also contributes to Ecell; thus, we rewrite Equation \ref{11.1} as $E_{\text {cell }}=E_{\text {cathode }}-E_{\text {anode }}+E_{j} \nonumber$ to include its contribution. If we do not know the junction potential’s actual value—which is the usual situation—then we cannot directly calculate the analyte’s concentration using the Nernst equation.
Quantitative analytical work is possible, however, if we use one of the standardization methods—external standards, the method of standard additions, or internal standards—discussed in Chapter 5.3.
Reference Electrodes
In a potentiometric electrochemical cell one of the two half-cells provides a fixed reference potential and the potential of the other half-cell responds to the analyte’s concentration. By convention, the reference electrode is the anode; thus, the short hand notation for a potentiometric electrochemical cell is reference electrode || indicator electrode and the cell potential is $E_{\mathrm{cell}}=E_{\mathrm{ind}}-E_{\mathrm{ref}}+E_{j} \nonumber$ The ideal reference electrode provides a stable, known potential so that we can attribute any change in Ecell to the analyte’s effect on the indicator electrode’s potential. In addition, it should be easy to make and to use the reference electrode. Three common reference electrodes are discussed in this section.
Standard Hydrogen Electrode
Although we rarely use the standard hydrogen electrode (SHE) for routine analytical work, it is the reference electrode used to establish standard-state potentials for other half-reactions. The SHE consists of a Pt electrode immersed in a solution in which the activity of hydrogen ion is 1.00 and in which the fugacity of H2(g) is 1.00 (Figure 11.2.4 ). A conventional salt bridge connects the SHE to the indicator half-cell. The short hand notation for the standard hydrogen electrode is $\text{Pt}(s), \text{ H}_{2}\left(g, f_{\mathrm{H}_{2}}=1.00\right) | \text{ H}^{+}\left(a q, a_{\mathrm{H}^{+}}=1.00\right) \| \nonumber$ and the standard-state potential for the reaction $\mathrm{H}^{+}(a q)+e^{-} \rightleftharpoons \frac{1}{2} \mathrm{H}_{2}(g) \nonumber$ is, by definition, 0.00 V at all temperatures. Despite its importance as the fundamental reference electrode against which we measure all other potentials, the SHE is rarely used because it is difficult to prepare and inconvenient to use.
Calomel Electrodes
A calomel reference electrode is based on the following redox couple between Hg2Cl2 and Hg (calomel is the common name for Hg2Cl2) $\mathrm{Hg}_{2} \mathrm{Cl}_{2}(s)+2 e^{-}\rightleftharpoons2 \mathrm{Hg}(l)+2 \mathrm{Cl}^{-}(a q) \nonumber$ for which the potential is $E=E_{\mathrm{Hg}_{2} \mathrm{Cl}_{2} / \mathrm{Hg}}^{\mathrm{o}}-\frac{0.05916}{2} \log \left(a_{\text{Cl}^-}\right)^{2}=+0.2682 \mathrm{V}-\frac{0.05916}{2} \log \left(a_{\text{Cl}^-}\right)^{2} \nonumber$ The potential of a calomel electrode, therefore, depends on the activity of Cl in equilibrium with Hg and Hg2Cl2. As shown in Figure 11.2.5 , in a saturated calomel electrode (SCE) the concentration of Cl is determined by the solubility of KCl. The electrode consists of an inner tube packed with a paste of Hg, Hg2Cl2, and KCl, situated within a second tube that contains a saturated solution of KCl. A small hole connects the two tubes and a porous wick serves as a salt bridge to the solution in which the SCE is immersed. A stopper in the outer tube provides an opening for adding additional saturated KCl. The short hand notation for this cell is $\mathrm{Hg}(l) | \mathrm{Hg}_{2} \mathrm{Cl}_{2}(s), \mathrm{KCl}(a q, \text { sat'd }) \| \nonumber$ Because the concentration of Cl is fixed by the solubility of KCl, the potential of an SCE remains constant even if we lose some of the inner solution to evaporation. A significant disadvantage of the SCE is that the solubility of KCl is sensitive to a change in temperature.
At higher temperatures the solubility of KCl increases and the electrode’s potential decreases. For example, the potential of the SCE is +0.2444 V at 25oC and +0.2376 V at 35oC. The potential of a calomel electrode that contains an unsaturated solution of KCl is less dependent on the temperature, but its potential changes if the concentration, and thus the activity of Cl, increases due to evaporation. For example, the potential of a calomel electrode is +0.280 V when the concentration of KCl is 1.00 M and +0.336 V when the concentration of KCl is 0.100 M. If the activity of Cl is 1.00, the potential is +0.2682 V.
Silver/Silver Chloride Electrodes
Another common reference electrode is the silver/silver chloride electrode, which is based on the reduction of AgCl to Ag. $\operatorname{AgCl}(s)+e^{-} \rightleftharpoons \mathrm{Ag}(s)+\mathrm{Cl}^{-}(a q) \nonumber$ As is the case for the calomel electrode, the activity of Cl determines the potential of the Ag/AgCl electrode; thus $E = E_\text{AgCl/Ag}^{\circ}-0.05916 \log a_{\text{Cl}^-} = 0.2223 \text{ V} - 0.05916 \log a_{\text{Cl}^-} \nonumber$ When prepared using a saturated solution of KCl, the electrode's potential is +0.197 V at 25oC. Another common Ag/AgCl electrode uses a solution of 3.5 M KCl and has a potential of +0.205 V at 25oC. As you might expect, the potential of a Ag/AgCl electrode using a saturated solution of KCl is more sensitive to a change in temperature than an electrode that uses an unsaturated solution of KCl. A typical Ag/AgCl electrode is shown in Figure 11.2.6 and consists of a silver wire, the end of which is coated with a thin film of AgCl, immersed in a solution that contains the desired concentration of KCl. A porous plug serves as the salt bridge. The electrode’s short hand notation is $\operatorname{Ag}(s) | \operatorname{Ag} \mathrm{Cl}(s), \mathrm{KCl}\left(a q, a_{\mathrm{Cl}^{-}}=x\right) \| \nonumber$
Converting Potentials Between Reference Electrodes
The standard state reduction potentials in most tables are reported relative to the standard hydrogen electrode’s potential of +0.00 V. Because we rarely use the SHE as a reference electrode, we need to convert an indicator electrode’s potential to its equivalent value when using a different reference electrode. As shown in the following example, this is easy to do.
Example 11.2.4
The potential for an Fe3+/Fe2+ half-cell is +0.750 V relative to the standard hydrogen electrode. What is its potential if we use a saturated calomel electrode or a saturated silver/silver chloride electrode?
Solution
When we use a standard hydrogen electrode the potential of the electrochemical cell is $E_\text{cell} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}} - E_\text{SHE} = 0.750 \text{ V} -0.000 \text{ V} = 0.750 \text{ V} \nonumber$ We can use the same equation to calculate the potential if we use a saturated calomel electrode $E_\text{cell} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}} - E_\text{SCE} = 0.750 \text{ V} -0.2444 \text{ V} = 0.506 \text{ V} \nonumber$ or a saturated silver/silver chloride electrode $E_\text{cell} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}} - E_\text{Ag/AgCl} = 0.750 \text{ V} -0.197 \text{ V} = 0.553 \text{ V} \nonumber$ Figure 11.2.7 provides a pictorial representation of the relationship between these different potentials.
Exercise 11.2.4
The potential of a $\text{UO}_2^+$/U4+ half-cell is –0.0190 V relative to a saturated calomel electrode. What is its potential when using a saturated silver/silver chloride electrode or a standard hydrogen electrode?
Answer
When using a saturated calomel electrode, the potential of the electrochemical cell is $E_\text{cell} = E_{\text{UO}_2^+/\text{U}^{4+}} - E_\text{SCE} \nonumber$ Substituting in known values $-0.0190 \text{ V} = E_{\text{UO}_2^+/\text{U}^{4+}} - 0.2444 \text{ V} \nonumber$ and solving for $E_{\text{UO}_2^+/\text{U}^{4+}}$ gives its value as +0.2254 V. The potential relative to the Ag/AgCl electrode is $E_\text{cell} = E_{\text{UO}_2^+/\text{U}^{4+}} - E_\text{Ag/AgCl} = 0.2254 \text{ V} - 0.197 \text{ V} = 0.028 \text{ V} \nonumber$ and the potential relative to the standard hydrogen electrode is $E_\text{cell} = E_{\text{UO}_2^+/\text{U}^{4+}} - E_\text{SHE} = 0.2254 \text{ V} - 0.000 \text{ V} = 0.2254 \text{ V} \nonumber$
Metallic Indicator Electrodes
In potentiometry, the potential of the indicator electrode is proportional to the analyte’s activity. Two classes of indicator electrodes are used to make potentiometric measurements: metallic electrodes, which are the subject of this section, and ion-selective electrodes, which are covered in the next section.
Electrodes of the First Kind
If we place a copper electrode in a solution that contains Cu2+, the electrode’s potential due to the reaction $\mathrm{Cu}^{2+}(a q)+2 e^{-} \rightleftharpoons \mathrm{Cu}(s) \nonumber$ is determined by the activity of Cu2+. $E=E_{\mathrm{Cu}^{2+} / \mathrm{Cu}}^{\mathrm{o}}-\frac{0.05916}{2} \log \frac{1}{a_{\mathrm{Cu}^{2+}}}=+0.3419 \mathrm{V}-\frac{0.05916}{2} \log \frac{1}{a_{\mathrm{Cu}^{2+}}} \nonumber$ If copper is the indicator electrode in a potentiometric electrochemical cell that also includes a saturated calomel reference electrode $\mathrm{SCE} \| \mathrm{Cu}^{2+}\left(a q, a_{\mathrm{Cu^{2+}}}=x\right) | \text{Cu}(s) \nonumber$ then we can use the cell potential to determine an unknown activity of Cu2+ in the indicator electrode’s half-cell $E_{\text{cell}}= E_{\text { ind }}-E_{\text {SCE }}+E_{j}= +0.3419 \mathrm{V}-\frac{0.05916}{2} \log \frac{1}{a_{\mathrm{Cu}^{2+}}}-0.2444 \mathrm{V}+E_{j} \nonumber$ An indicator electrode in which the metal is in contact with a solution containing its ion is called an electrode of the first kind. In general, if a metal, M, is in a solution of Mn+, the cell potential is $E_{\mathrm{cell}}=K-\frac{0.05916}{n} \log \frac{1}{a_{M^{n+}}}=K+\frac{0.05916}{n} \log a_{M^{n+}} \nonumber$ where K is a constant that includes the standard-state potential for the Mn+/M redox couple, the potential of the reference electrode, and the junction potential. Note that including Ej in the constant K means we do not need to know the junction potential’s actual value; however, the junction potential must remain constant if K is to maintain a constant value. For a variety of reasons—including the slow kinetics of electron transfer at the metal–solution interface, the formation of metal oxides on the electrode’s surface, and interfering reactions—electrodes of the first kind are limited to the following metals: Ag, Bi, Cd, Cu, Hg, Pb, Sn, Tl, and Zn. Many of these electrodes, such as Zn, cannot be used in acidic solutions because they are easily oxidized by H+. $\mathrm{Zn}(s)+2 \mathrm{H}^{+}(a q)\rightleftharpoons \text{ H}_{2}(g)+\mathrm{Zn}^{2+}(a q) \nonumber$
Electrodes of the Second Kind
The potential of an electrode of the first kind responds to the activity of Mn+. We also can use this electrode to determine the activity of another species if it is in equilibrium with Mn+.
For example, the potential of a Ag electrode in a solution of Ag+ is $E=0.7996 \mathrm{V}+0.05916 \log a_{\mathrm{Ag}^{+}} \label{11.3}$ If we saturate the indicator electrode’s half-cell with AgI, the solubility reaction $\operatorname{AgI}(s)\rightleftharpoons\operatorname{Ag}^{+}(a q)+\mathrm{I}^{-}(a q) \nonumber$ determines the concentration of Ag+; thus $a_{\mathrm{Ag}^{+}}=\frac{K_{\mathrm{sp}, \mathrm{AgI}}}{a_{\text{I}^-}} \label{11.4}$ where Ksp,AgI is the solubility product for AgI. Substituting Equation \ref{11.4} into Equation \ref{11.3} $E=0.7996 \text{ V}+0.05916 \log \frac{K_{\text{sp, AgI}}}{a_{\text{I}^-}} \nonumber$ shows that the potential of the silver electrode is a function of the activity of I. If we incorporate this electrode into a potentiometric electrochemical cell with a saturated calomel electrode $\mathrm{SCE} \| \mathrm{AgI}(s), \text{ I}^-\left(a q, a_{\text{I}^-}=x\right) | \mathrm{Ag}(\mathrm{s}) \nonumber$ then the cell potential is $E_{\mathrm{cell}}=K-0.05916 \log a_{\text{I}^-} \nonumber$ where K is a constant that includes the standard-state potential for the Ag+/Ag redox couple, the solubility product for AgI, the reference electrode’s potential, and the junction potential. If an electrode of the first kind responds to the activity of an ion in equilibrium with Mn+, we call it an electrode of the second kind. Two common electrodes of the second kind are the calomel and the silver/silver chloride reference electrodes. In an electrode of the second kind we link together a redox reaction and another reaction, such as a solubility reaction. You might wonder if we can link together more than two reactions. The short answer is yes. An electrode of the third kind, for example, links together a redox reaction and two other reactions. Such electrodes are less common and we will not consider them in this text.
Redox Electrodes
An electrode of the first kind or second kind develops a potential as the result of a redox reaction that involves the metallic electrode. An electrode also can serve as a source of electrons or as a sink for electrons in an unrelated redox reaction, in which case we call it a redox electrode. The Pt cathode in Figure 11.2.2 and Example 11.2.1 is a redox electrode because its potential is determined by the activity of Fe2+ and Fe3+ in the indicator half-cell. Note that a redox electrode’s potential often responds to the activity of more than one ion, which limits its usefulness for direct potentiometry.
Membrane Electrodes
If metals were the only useful materials for constructing indicator electrodes, then there would be few useful applications of potentiometry. In 1906, Cremer discovered that the potential difference across a thin glass membrane is a function of pH when opposite sides of the membrane are in contact with solutions that have different concentrations of H3O+. The existence of this membrane potential led to the development of a whole new class of indicator electrodes, which we call ion-selective electrodes (ISEs). In addition to the glass pH electrode, ion-selective electrodes are available for a wide range of ions. It also is possible to construct a membrane electrode for a neutral analyte by using a chemical reaction to generate an ion that is monitored with an ion-selective electrode. The development of new membrane electrodes continues to be an active area of research.
Membrane Potentials
Figure 11.2.8 shows a typical potentiometric electrochemical cell equipped with an ion-selective electrode.
The short hand notation for this cell is $\text { ref (sample) }\left\|[\mathrm{A}]_{\text { samp }}\left(a q, a_{\mathrm{A}}=x\right) |[\mathrm{A}]_{\text { int }}\left(a q, a_{\mathrm{A}}=y\right)\right\| \text { ref (internal) } \nonumber$ where the ion-selective membrane is represented by the vertical slash that separates the two solutions that contain analyte: the sample solution and the ion-selective electrode’s internal solution. The potential of this electrochemical cell includes the potential of each reference electrode, a junction potential, and the membrane’s potential $E_\text{cell} = E_\text{ref(int)} - E_\text{ref(samp)} + E_\text{mem} + E_j \label{11.5}$ where Emem is the potential across the membrane. The notations ref (sample) and ref (internal) represent a reference electrode immersed in the sample and a reference electrode immersed in the ISE’s internal solution, respectively. Because the junction potential and the potential of the two reference electrodes are constant, any change in Ecell reflects a change in the membrane’s potential. The analyte’s interaction with the membrane generates a membrane potential if there is a difference in its activity on the membrane’s two sides. Current is carried through the membrane by the movement of either the analyte or an ion already present in the membrane’s matrix. The membrane potential is given by the following Nernst-like equation $E_{\mathrm{mem}}=E_{\mathrm{asym}}-\frac{R T}{z F} \ln \frac{\left(a_{A}\right)_{\mathrm{int}}}{\left(a_{A}\right)_{\mathrm{samp}}} \label{11.6}$ where (aA)samp is the analyte’s activity in the sample, (aA)int is the analyte’s activity in the ion-selective electrode’s internal solution, and z is the analyte’s charge. Ideally, Emem is zero when (aA)int = (aA)samp. The term Easym, which is an asymmetry potential, accounts for the fact that Emem usually is not zero under these conditions. For now we simply note that a difference in the analyte’s activity results in a membrane potential. As we consider different types of ion-selective electrodes, we will explore more specifically the source of the membrane potential. Substituting Equation \ref{11.6} into Equation \ref{11.5}, assuming a temperature of 25oC, and rearranging gives $E_{\mathrm{cell}}=K+\frac{0.05916}{z} \log \left(a_{A}\right)_{\mathrm{samp}} \label{11.7}$ where K is a constant that includes the potentials of the two reference electrodes, the junction potentials, the asymmetry potential, and the analyte's activity in the internal solution. Equation \ref{11.7} is a general equation and applies to all types of ion-selective electrodes.
Selectivity of Membranes
A membrane potential results from a chemical interaction between the analyte and active sites on the membrane’s surface. Because the signal depends on a chemical process, most membranes are not selective toward a single analyte. Instead, the membrane potential is proportional to the concentration of each ion that interacts with the membrane’s active sites. We can rewrite Equation \ref{11.7} to include the contribution to the potential of an interferent, I $E_\text{cell} = K + \frac {0.05916} {z_A} \log \left\{ a_A + K_{A,I}(a_I)^{z_A/z_I} \right\} \nonumber$ where zA and zI are the charges of the analyte and the interferent, and KA,I is a selectivity coefficient that accounts for the relative response of the interferent.
The selectivity coefficient is defined as $K_{A,I} = \frac {(a_A)_e} {(a_I)_e^{z_A/z_I}} \label{11.8}$ where (aA)e and (aI)e are the activities of analyte and the interferent that yield identical cell potentials. When the selectivity coefficient is 1.00, the membrane responds equally to the analyte and the interferent. A membrane shows good selectivity for the analyte when KA,I is significantly less than 1.00. Selectivity coefficients for most commercially available ion-selective electrodes are provided by the manufacturer. If the selectivity coefficient is not known, it is easy to determine its value experimentally by preparing a series of solutions, each of which contains the same activity of interferent, (aI)add, but a different activity of analyte. As shown in Figure 11.2.9 , a plot of cell potential versus the log of the analyte’s activity has two distinct linear regions. When the analyte’s activity is significantly larger than KA,I $\times$ (aI)add, the potential is a linear function of log(aA), as given by Equation \ref{11.7}. If KA,I $\times$ (aI)add is significantly larger than the analyte’s activity, however, the cell’s potential remains constant. The activity of analyte and interferent at the intersection of these two linear regions is used to calculate KA,I.
Example 11.2.5
Sokalski and co-workers described a method for preparing ion-selective electrodes with significantly improved selectivities [Sokalski, T.; Ceresa, A.; Zwicki, T.; Pretsch, E. J. Am. Chem. Soc. 1997, 119, 11347–11348]. For example, a conventional Pb2+ ISE has a $\log K_{\text{Pb}^{2+}/\text{Mg}^{2+}}$ of –3.6. If the potential for a solution in which the activity of Pb2+ is $4.1 \times 10^{-12}$ is identical to that for a solution in which the activity of Mg2+ is 0.01025, what is the value of $\log K_{\text{Pb}^{2+}/\text{Mg}^{2+}}$ for their ISE?
Solution
Making appropriate substitutions into Equation \ref{11.8}, we find that $K_{\text{Pb}^{2+}/\text{Mg}^{2+}} = \frac {(a_{\text{Pb}^{2+}})_e} {(a_{\text{Mg}^{2+}})_e^{z_{\text{Pb}^{2+}}/z_{\text{Mg}^{2+}}}} = \frac {4.1 \times 10^{-12}} {(0.01025)^{+2/+2}} = 4.0 \times 10^{-10} \nonumber$ The value of $\log K_{\text{Pb}^{2+}/\text{Mg}^{2+}}$, therefore, is –9.40.
Exercise 11.2.5
An ion-selective electrode for $\text{NO}_2^-$ has logKA,I values of –3.1 for F, –4.1 for $\text{SO}_4^{2-}$, –1.2 for I, and –3.3 for $\text{NO}_3^-$. Which ion is the most serious interferent and for what activity of this interferent is the potential equivalent to a solution in which the activity of $\text{NO}_2^-$ is $2.75 \times 10^{-4}$?
Answer
The larger the value of KA,I the more serious the interference. Larger values for KA,I correspond to more positive (less negative) values for logKA,I; thus, I, with a KA,I of $6.3 \times 10^{-2}$, is the most serious of these interferents. To find the activity of I that gives a potential equivalent to a $\text{NO}_2^-$ activity of $2.75 \times 10^{-4}$, we note that $a_{\text{NO}_2^-}=K_{A, I} \times a_{\text{I}^-} \nonumber$ Making appropriate substitutions $2.75 \times 10^{-4}=\left(6.3 \times 10^{-2}\right) \times a_{\mathrm{I}^-} \nonumber$ and solving for $a_{\text{I}^-}$ gives its activity as $4.4 \times 10^{-3}$.
Glass Ion-Selective Electrodes
The first commercial glass electrodes were manufactured using Corning 015, a glass with a composition that is approximately 22% Na2O, 6% CaO, and 72% SiO2.
When immersed in an aqueous solution for several hours, the outer approximately 10 nm of the membrane’s surface becomes hydrated, resulting in the formation of negatively charged sites, —SiO. Sodium ions, Na+, serve as counter ions. Because H+ binds more strongly to —SiO than does Na+, they displace the sodium ions $\mathrm{H}^{+}+-\mathrm{SiO}^{-} \mathrm{Na}^{+}\rightleftharpoons-\mathrm{SiO}^{-} \mathrm{H}^{+}+\mathrm{Na}^{+} \nonumber$ explaining the membrane’s selectivity for H+. The transport of charge across the membrane is carried by the Na+ ions. The potential of a glass electrode using Corning 015 obeys the equation $E_{\mathrm{cell}}=K+0.05916 \log a_{\mathrm{H}^{+}} \label{11.9}$ over a pH range of approximately 0.5 to 9. At more basic pH values the glass membrane is more responsive to other cations, such as Na+ and K+.
Example 11.2.6
For a Corning 015 glass membrane, the selectivity coefficient KH+/Na+ is $\approx 10^{-11}$. What is the expected error if we measure the pH of a solution in which the activity of H+ is $2 \times 10^{-13}$ and the activity of Na+ is 0.05?
Solution
A solution in which the actual activity of H+, (aH+)act, is $2 \times 10^{-13}$ has a pH of 12.7. Because the electrode responds to both H+ and Na+, the apparent activity of H+, (aH+)app, is $(a_{\text{H}^+})_\text{app} = (a_{\text{H}^+})_\text{act} + (K_{\text{H}^+ / \text{Na}^+} \times a_{\text{Na}^+}) = 2 \times 10^{-13} + (10^{-11} \times 0.05) = 7 \times 10^{-13} \nonumber$ The apparent activity of H+ is equivalent to a pH of 12.2, an error of –0.5 pH units.
Replacing Na2O and CaO with Li2O and BaO extends the useful pH range of glass membrane electrodes to pH levels greater than 12. Glass membrane pH electrodes often are available in a combination form that includes both the indicator electrode and the reference electrode. The use of a single electrode greatly simplifies the measurement of pH. An example of a typical combination electrode is shown in Figure 11.2.10 . The observation that the Corning 015 glass membrane responds to ions other than H+ (see Example 11.2.6 ) led to the development of glass membranes with a greater selectivity for other cations. For example, a glass membrane with a composition of 11% Na2O, 18% Al2O3, and 71% SiO2 is used as an ion-selective electrode for Na+. Other glass ion-selective electrodes have been developed for the analysis of Li+, K+, Rb+, Cs+, $\text{NH}_4^+$, Ag+, and Tl+. Table 11.2.1 provides several examples.
Table 11.2.1. Representative Examples of Glass Membrane Ion-Selective Electrodes for Analytes Other Than H+
analyte: Na+ | membrane composition: 11% Na2O, 18% Al2O3, 71% SiO2 | selectivity coefficients: $K_{\mathrm{Na}^{+} / \mathrm{H}^{+}}=1000$, $K_{\mathrm{Na}^{+} / \mathrm{K}^{+}}=0.001$, $K_{\mathrm{Na}^{+} / \mathrm{Li}^{+}}=0.001$
analyte: Li+ | membrane composition: 15% Li2O, 25% Al2O3, 60% SiO2 | selectivity coefficients: $K_{\mathrm{Li}^{+} / \mathrm{Na}^{+}}=0.3$, $K_{\mathrm{Li}^{+} / \mathrm{K}^{+}}=0.001$
analyte: K+ | membrane composition: 27% Na2O, 5% Al2O3, 68% SiO2 | selectivity coefficient: $K_{\mathrm{K}^{+} / \mathrm{Na}^{+}}=0.05$
Selectivity coefficients are approximate; values found experimentally may vary substantially from the listed values. See Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977.
Because an ion-selective electrode’s glass membrane is very thin—it is only about 50 μm thick—it must be handled with care to avoid cracks or breakage. Glass electrodes usually are stored in a storage buffer recommended by the manufacturer, which ensures that the membrane’s outer surface remains hydrated.
If a glass electrode dries out, it is reconditioned by soaking for several hours in a solution that contains the analyte. The composition of a glass membrane will change over time, which affects the electrode’s performance. The average lifetime for a typical glass electrode is several years.
Solid-State Ion-Selective Electrodes
A solid-state ion-selective electrode has a membrane that consists of either a polycrystalline inorganic salt or a single crystal of an inorganic salt. We can fashion a polycrystalline solid-state ion-selective electrode by sealing a 1–2 mm thick pellet of Ag2S—or a mixture of Ag2S and a second silver salt or another metal sulfide—into the end of a nonconducting plastic cylinder, filling the cylinder with an internal solution that contains the analyte, and placing a reference electrode into the internal solution. Figure 11.2.11 shows a typical design. The NaCl in a salt shaker is an example of polycrystalline material because it consists of many small crystals of sodium chloride. The NaCl salt plates used in IR spectroscopy (see Chapter 10), on the other hand, are an example of a single crystal of sodium chloride. The membrane potential for a Ag2S pellet develops as the result of a difference in the extent of the solubility reaction $\mathrm{Ag}_{2} \mathrm{S}(s)\rightleftharpoons2 \mathrm{Ag}^{+}(a q)+\mathrm{S}^{2-}(a q) \nonumber$ on the membrane’s two sides, with charge carried across the membrane by Ag+ ions. When we use the electrode to monitor the activity of Ag+, the cell potential is $E_{\text {cell }}=K+0.05916 \log a_{\mathrm{Ag}^{+}} \nonumber$ The membrane also responds to the activity of $\text{S}^{2-}$, with a cell potential of $E_{\mathrm{cell}}=K-\frac{0.05916}{2} \log a_{\text{S}^{2-}} \nonumber$ If we combine an insoluble silver salt, such as AgCl, with the Ag2S, then the membrane potential also responds to the concentration of Cl–, with a cell potential of $E_{\text {cell }}=K-0.05916 \log a_{\mathrm{Cl}^{-}} \nonumber$ By mixing Ag2S with CdS, CuS, or PbS, we can make an ion-selective electrode that responds to the activity of Cd2+, Cu2+, or Pb2+. In this case the cell potential is $E_{\mathrm{cell}}=K+\frac{0.05916}{2} \log a_{M^{2+}} \nonumber$ where aM2+ is the activity of the metal ion. Table 11.2.2 provides examples of polycrystalline, Ag2S-based solid-state ion-selective electrodes. The selectivity of these ion-selective electrodes depends on the relative solubility of the compounds. A Cl ISE using a Ag2S/AgCl membrane is more selective for Br (KCl/Br = $10^2$) and for I (KCl/I = $10^6$) because AgBr and AgI are less soluble than AgCl. If the activity of Br is sufficiently high, AgCl at the membrane/solution interface is replaced by AgBr and the electrode’s response to Cl decreases substantially. Most of the polycrystalline ion-selective electrodes listed in Table 11.2.2 operate over an extended range of pH levels. The equilibrium between S2– and HS limits the analysis for S2– to a pH range of 13–14.
Representative Examples of Polycrystalline Solid-State Ion-Selective Electrodes analyte membrane composition selectivity coefficients Ag+ Ag2S $K_{\text{Ag}^+/\text{Cu}^{2+}} = 10^{-6}$ $K_{\text{Ag}^+/\text{Pb}^{2+}} = 10^{-10}$ Hg2+ interferes Cd2+ CdS/Ag2S $K_{\text{Cd}^{2+}/\text{Fe}^{2+}} = 200$ $K_{\text{Cd}^{2+}/\text{Pb}^{2+}} = 6$ Ag+, Hg2+, and Cu2+ must be absent Cu2+ CuS/Ag2S $K_{\text{Cu}^{2+}/\text{Fe}^{3+}} = 10$ $K_{\text{Cu}^{2+}/\text{Cu}^{+}} = 10^{-6}$ Ag+ and Hg2+ must be absent Pb2+ PbS/Ag2S $K_{\text{Pb}^{2+}/\text{Fe}^{3+}} = 1$ $K_{\text{Pb}^{2+}/\text{Cd}^{2+}} = 1$ Ag+, Hg2+, and Cu2+ must be absent Br– AgBr/Ag2S $K_{\text{Br}^-/\text{I}^{-}} = 5000$ $K_{\text{Br}^-/\text{Cl}^{-}} = 0.005$ $K_{\text{Br}^-/\text{OH}^{-}} = 10^{-5}$ S2– must be absent Cl– AgCl/Ag2S $K_{\text{Cl}^-/\text{I}^{-}} = 10^{6}$ $K_{\text{Cl}^-/\text{Br}^{-}} = 100$ $K_{\text{Cl}^-/\text{OH}^{-}} = 0.01$ S2– must be absent I– AgI/Ag2S $K_{\text{I}^-/\text{S}^{2-}} = 30$ $K_{\text{I}^-/\text{Br}^{-}} = 10^{-4}$ $K_{\text{I}^-/\text{Cl}^{-}} = 10^{-6}$ $K_{\text{I}^-/\text{OH}^{-}} = 10^{-7}$ SCN– AgSCN/Ag2S $K_{\text{SCN}^-/\text{I}^{-}} = 10^{3}$ $K_{\text{SCN}^-/\text{Br}^{-}} = 100$ $K_{\text{SCN}^-/\text{Cl}^{-}} = 0.1$ $K_{\text{SCN}^-/\text{OH}^{-}} = 0.01$ S2– must be absent S2– Ag2S Hg2+ must be absent Selectivity coefficients are approximate; values found experimentally may vary substantially from the listed values. See Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977. The membrane of a F– ion-selective electrode is fashioned from a single crystal of LaF3, which usually is doped with a small amount of EuF2 to enhance the membrane’s conductivity. Because EuF2 provides only two F– ions—compared to the three F– ions in LaF3—each EuF2 produces a vacancy in the crystal’s lattice. Fluoride ions pass through the membrane by moving into adjacent vacancies. As shown in Figure 11.2.11 , the LaF3 membrane is sealed into the end of a non-conducting plastic cylinder, which contains a standard solution of F–, typically 0.1 M NaF, and a Ag/AgCl reference electrode. The membrane potential for a F– ISE results from a difference in the solubility of LaF3 on opposite sides of the membrane, with the potential given by $E_{\mathrm{cell}}=K-0.05916 \log a_{\mathrm{F}^-} \nonumber$ One advantage of the F– ion-selective electrode is its freedom from interference. The only significant exception is OH– ($K_{\text{F}^-/\text{OH}^-} = 0.1$), which imposes a maximum pH limit for a successful analysis. Below a pH of 4 the predominant form of fluoride in solution is HF, which does not contribute to the membrane potential. For this reason, an analysis for fluoride is carried out at a pH greater than 4. Example 11.2.7 What is the maximum pH that we can tolerate if we need to analyze a solution in which the activity of F– is $1 \times 10^{-5}$ with an error of less than 1%? Solution In the presence of OH– the cell potential is $E_{\mathrm{cell}}=K-0.05916 \log \left\{a_{\mathrm{F}^-}+K_{\mathrm{F}^- / \mathrm{OH}^{-}} \times a_{\mathrm{OH}^-}\right\} \nonumber$ To achieve an error of less than 1%, the term $K_{\mathrm{F}^- / \mathrm{OH}^{-}} \times a_{\mathrm{OH}^-}$ must be less than 1% of aF–; thus $K_{\mathrm{F}^- / \mathrm{OH}^-} \times a_{\mathrm{OH}^{-}} \leq 0.01 \times a_{\mathrm{F}^-} \nonumber$ $0.10 \times a_{\mathrm{OH}^{-}} \leq 0.01 \times\left(1.0 \times 10^{-5}\right) \nonumber$ Solving for aOH– gives the maximum allowable activity for OH– as $1 \times 10^{-6}$, which corresponds to a pH of less than 8.
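The logic of Example 11.2.7 is easy to automate. The sketch below assumes a temperature of 25°C, so that pH + pOH = 14; the function name and the way the limit is packaged are illustrative assumptions, not part of the original procedure.

```python
import math

def max_pH(a_analyte, K_sel_OH, max_rel_error=0.01, pKw=14.0):
    """Highest pH at which the OH- term contributes less than max_rel_error
    of the analyte's activity (the reasoning of Example 11.2.7)."""
    a_OH_max = max_rel_error * a_analyte / K_sel_OH
    return pKw - (-math.log10(a_OH_max))

# F- ISE: a_F- = 1e-5, K_F-/OH- = 0.10, error < 1%  ->  pH must stay below about 8
print(round(max_pH(1e-5, 0.10, 0.01), 1))
```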
Exercise 11.2.6 Suppose you wish to use the nitrite-selective electrode in Exercise 11.2.5 to measure the activity of $\text{NO}_2^-$. If the activity of $\text{NO}_2^-$ is $2.2 \times 10^{-4}$, what is the maximum pH you can tolerate if the error due to OH– must be less than 10%? The selectivity coefficient for OH–, $K_{\text{NO}_2^-/\text{OH}^-}$, is 630. Do you expect the electrode to have a lower pH limit? Clearly explain your answer. Answer In the presence of OH– the cell potential is $E_{\mathrm{cell}}=K-0.05916 \log \left\{a_{\mathrm{NO}_{2}^-}+K_{\mathrm{NO}_{2}^- / \mathrm{OH}^{-}} \times a_{\mathrm{OH}^{-}}\right\} \nonumber$ To achieve an error of less than 10%, the term $K_{\mathrm{NO}_{2}^- / \mathrm{OH}^{-}} \times a_{\mathrm{OH}^{-}}$ must be less than 10% of $a_{\text{NO}_2^-}$; thus $K_{\mathrm{NO}_{2}^- / \mathrm{OH}^{-}} \times a_{\mathrm{OH}^-} \leq 0.10 \times a_{\mathrm{NO}_{2}^-} \nonumber$ $630 \times a_{\mathrm{OH}^{-}} \leq 0.10 \times\left(2.2 \times 10^{-4}\right) \nonumber$ Solving for aOH– gives its maximum allowable activity as $3.5 \times 10^{-8}$, which corresponds to a pH of less than 6.54. The electrode does have a lower pH limit. Nitrite is the conjugate weak base of HNO2, a species to which the ISE does not respond. As shown by the ladder diagram below, at a pH of 4.15 approximately 10% of nitrite is present as HNO2. A minimum pH of 4.5 is the usual recommendation when using a nitrite ISE. This corresponds to a $\text{NO}_2^- / \text{HNO}_2$ ratio of $\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{\left[\mathrm{NO}_{2}^{-}\right]}{\left[\mathrm{HNO}_{2}\right]} \nonumber$ $4.5=3.15+\log \frac{\left[\mathrm{NO}_{2}^{-}\right]}{\left[\mathrm{HNO}_{2}\right]} \nonumber$ $\frac{\left[\mathrm{NO}_{2}^{-}\right]}{\left[\mathrm{HNO}_{2}\right]} \approx 22 \nonumber$ Thus, at a pH of 4.5 approximately 96% of nitrite is present as $\text{NO}_2^-$. Unlike a glass membrane ion-selective electrode, a solid-state ISE does not need to be conditioned before it is used, and it may be stored dry. The surface of the electrode is subject to poisoning, as described above for a Cl– ISE in contact with an excessive concentration of Br–. If an electrode is poisoned, it can be returned to its original condition by sanding and polishing the crystalline membrane. Poisoning simply means that the surface has been chemically modified, such as AgBr forming on the surface of a AgCl membrane. Liquid-Based Ion-Selective Electrodes Another class of ion-selective electrodes uses a hydrophobic membrane that contains a liquid organic complexing agent that reacts selectively with the analyte. Three types of organic complexing agents have been used: cation exchangers, anion exchangers, and neutral ionophores. A membrane potential exists if the analyte’s activity is different on the two sides of the membrane. Current is carried through the membrane by the analyte. An ionophore is a ligand whose exterior is hydrophobic and whose interior is hydrophilic. The crown ether shown here is one example of a neutral ionophore. One example of a liquid-based ion-selective electrode is that for Ca2+, which uses a porous plastic membrane saturated with the cation exchanger di-(n-decyl) phosphate. As shown in Figure 11.2.12 , the membrane is placed at the end of a non-conducting cylindrical tube and is in contact with two reservoirs. The outer reservoir contains di-(n-decyl) phosphate in di-n-octylphenylphosphonate, which soaks into the porous membrane.
The inner reservoir contains a standard aqueous solution of Ca2+ and a Ag/AgCl reference electrode. Calcium ion-selective electrodes also are available in which the di-(n-decyl) phosphate is immobilized in a polyvinyl chloride (PVC) membrane that eliminates the need for the outer reservoir. The membrane potential for the Ca2+ ISE develops as the result of a difference in the extent of the complexation reaction $\mathrm{Ca}^{2+}(a q)+2\left(\mathrm{C}_{10} \mathrm{H}_{21} \mathrm{O}\right)_{2} \mathrm{PO}_{2}^{-}(mem) \rightleftharpoons \mathrm{Ca}\left[\left(\mathrm{C}_{10} \mathrm{H}_{21} \mathrm{O}\right)_{2} \mathrm{PO}_{2}\right]_2 (mem) \nonumber$ on the two sides of the membrane, where (mem) indicates a species that is present in the membrane. The cell potential for the Ca2+ ion-selective electrode is $E_{\mathrm{cell}}=K+\frac{0.05916}{2} \log a_{\mathrm{Ca}^{2+}} \nonumber$ The selectivity of this electrode for Ca2+ is very good, with only Zn2+ showing greater selectivity. Table 11.2.3 lists the properties of several liquid-based ion-selective electrodes. An electrode using a liquid reservoir can be stored in a dilute solution of analyte and needs no additional conditioning before use. The lifetime of an electrode with a PVC membrane, however, is proportional to its exposure to aqueous solutions. For this reason these electrodes are best stored by covering the membrane with a cap along with a small amount of wetted gauze to maintain a humid environment. Before using the electrode it is conditioned in a solution of analyte for 30–60 minutes. Table 11.2.3 . Representative Examples of Liquid-Based Ion-Selective Electrodes analyte membrane composition selectivity coefficients Ca2+ di-(n-decyl) phosphate in PVC $K_{\text{Ca}^{2+}/\text{Zn}^{2+}} = 1-5$ $K_{\text{Ca}^{2+}/\text{Al}^{3+}} = 0.90$ $K_{\text{Ca}^{2+}/\text{Mn}^{2+}} = 0.38$ $K_{\text{Ca}^{2+}/\text{Cu}^{2+}} = 0.070$ $K_{\text{Ca}^{2+}/\text{Mg}^{2+}} = 0.032$ K+ valinomycin in PVC $K_{\text{K}^{+}/\text{Rb}^{+}} = 1.9$ $K_{\text{K}^{+}/\text{Cs}^{+}} = 0.38$ $K_{\text{K}^{+}/\text{Li}^{+}} = 10^{-4}$ Li+ ETH 149 in PVC $K_{\text{Li}^{+}/\text{H}^{+}} = 1$ $K_{\text{Li}^{+}/\text{Na}^{+}} = 0.03$ $K_{\text{Li}^{+}/\text{K}^{+}} = 0.007$ $\text{NH}_4^+$ nonactin and monactin in PVC $K_{\text{NH}_4^{+}/\text{K}^{+}} = 0.12$ $K_{\text{NH}_4^{+}/\text{H}^{+}} = 0.016$ $K_{\text{NH}_4^{+}/\text{Li}^{+}} = 0.0042$ $K_{\text{NH}_4^{+}/\text{Na}^{+}} = 0.002$ $\text{ClO}_4^-$ $\text{Fe}(o\text{-phen})_3^{3+}$ in p-nitrocymene with porous membrane $K_{\text{ClO}_4^{-}/\text{OH}^{-}} = 1$ $K_{\text{ClO}_4^{-}/\text{I}^{-}} = 0.012$ $K_{\text{ClO}_4^{-}/\text{NO}_3^{-}} = 0.0015$ $K_{\text{ClO}_4^{-}/\text{Br}^{-}} = 5.6 \times 10^{-4}$ $K_{\text{ClO}_4^{-}/\text{Cl}^{-}} = 2.2 \times 10^{-4}$ $\text{NO}_3^-$ tetradodecyl ammonium nitrate in PVC $K_{\text{NO}_3^{-}/\text{Cl}^{-}} = 0.006$ $K_{\text{NO}_3^{-}/\text{F}^{-}} = 9 \times 10^{-4}$ Selectivity coefficients are approximate; values found experimentally may vary substantially from the listed values. See Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977. Gas-Sensing Electrodes A number of membrane electrodes respond to the concentration of a dissolved gas. The basic design of a gas-sensing electrode, as shown in Figure 11.2.13 , consists of a thin membrane that separates the sample from an inner solution that contains an ion-selective electrode.
The membrane is permeable to the gaseous analyte, but impermeable to nonvolatile components in the sample’s matrix. The gaseous analyte passes through the membrane where it reacts with the inner solution, producing a species whose concentration is monitored by the ion-selective electrode. For example, in a CO2 electrode, CO2 diffuses across the membrane where it reacts in the inner solution to produce H3O+. $\mathrm{CO}_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\text{ HCO}_{3}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q) \label{11.10}$ The change in the activity of H3O+ in the inner solution is monitored with a pH electrode, for which the cell potential is given by Equation \ref{11.9}. To find the relationship between the activity of H3O+ in the inner solution and the activity of CO2 in the inner solution we rearrange the equilibrium constant expression for reaction \ref{11.10}; thus $a_{\mathrm{H}_{3} \mathrm{O}^{+}}=K_{\mathrm{a}} \times \frac{a_{\mathrm{CO}_{2}}}{a_{\mathrm{HCO}_{3}^{-}}} \label{11.11}$ where Ka is the equilibrium constant. If the activity of $\text{HCO}_3^-$ in the internal solution is sufficiently large, then its activity is not affected by the small amount of CO2 that passes through the membrane. Substituting Equation \ref{11.11} into Equation \ref{11.9} gives $E_{\mathrm{cell}}=K^{\prime}+0.05916 \log a_{\mathrm{co}_{2}} \nonumber$ where K′ is a constant that includes the constant for the pH electrode, the equilibrium constant for reaction \ref{11.10} and the activity of $\text{HCO}_3^-$ in the inner solution. Table 11.2.4 lists the properties of several gas-sensing electrodes. The composition of the inner solution changes with use, and both the inner solution and the membrane must be replaced periodically. Gas-sensing electrodes are stored in a solution similar to the internal solution to minimize their exposure to atmospheric gases. Table 11.2.4 . Representative Examples of Gas-Sensing Electrodes analyte inner solution reaction in inner solution ion-selective electrode CO2 10 mM NaHCO3 10 mM NaCl $\mathrm{CO}_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l ) \rightleftharpoons \ \text{ HCO}_{3}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$ glass pH ISE HCN 10 mM KAg(CN)2 $\mathrm{HCN}(a q)+\mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons \ \mathrm{CN}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$ Ag2S solid-state ISE HF 1 M H3O+ $\mathrm{HF}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \ \mathrm{F}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$ F solid-state ISE H2S pH 5 citrate buffer $\mathrm{H}_{2} \mathrm{S}(a q)+\text{ H}_{2} \mathrm{O}(l )\rightleftharpoons \ \mathrm{HS}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$ Ag2S solid state ISE NH3 10 mM NH4Cl 0.1 M KNO3 $\mathrm{NH}_{3}(a q)+\text{ H}_{2} \mathrm{O}(l)\rightleftharpoons \ \mathrm{NH}_{4}^{+}(a q)+\text{ OH}^{-}(a q)$ glass pH ISE NO2 20 mM NaNO2 0.1 M KNO3 $2 \mathrm{NO}_{2}(a q)+3 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \ {\mathrm{NO}_{3}^{-}(a q)+\text{ NO}_{2}^{-}(a q)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q)}$ glass pH ISE SO2 1 mM NaHSO3 pH 5 $\mathrm{SO}_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons \ \mathrm{HSO}_{3}^{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q)$ glass pH ISE Source: Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977. Potentiometric Biosensors The approach for developing gas-sensing electrodes can be modified to create potentiometric electrodes that respond to a biochemically important species. 
The most common class of potentiometric biosensors is the enzyme electrode, in which we trap or immobilize an enzyme at the surface of a potentiometric electrode. The analyte’s reaction with the enzyme produces a product whose concentration is monitored by the potentiometric electrode. Potentiometric biosensors also have been designed around other biologically active species, including antibodies, bacterial particles, tissues, and hormone receptors. One example of an enzyme electrode is the urea electrode, which is based on the catalytic hydrolysis of urea by urease $\mathrm{CO}\left(\mathrm{NH}_{2}\right)_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons 2 \mathrm{NH}_{4}^{+}(a q)+\text{ CO}_{3}^{2-}(a q) \nonumber$ Figure 11.2.14 shows one version of the urea electrode, which modifies a gas-sensing NH3 electrode by adding a dialysis membrane that traps a pH 7.0 buffered solution of urease between the dialysis membrane and the gas permeable membrane [(a) Papastathopoulos, D. S.; Rechnitz, G. A. Anal. Chim. Acta 1975, 79, 17–26; (b) Riechel, T. L. J. Chem. Educ. 1984, 61, 640–642]. An NH3 electrode, as shown in Table 11.2.4 , uses a gas-permeable membrane and a glass pH electrode. The NH3 diffuses across the membrane where it changes the pH of the internal solution. When immersed in the sample, urea diffuses through the dialysis membrane where it reacts with the enzyme urease to form the ammonium ion, $\text{NH}_4^+$, which is in equilibrium with NH3. $\mathrm{NH}_{4}^{+}(a q)+\mathrm{H}_{2} \mathrm{O}(l ) \rightleftharpoons \text{ H}_{3} \mathrm{O}^{+}(a q)+\text{ NH}_{3}(a q) \nonumber$ The NH3, in turn, diffuses through the gas permeable membrane where a pH electrode measures the resulting change in pH. The electrode’s response to the concentration of urea is $E_{\text {cell }}=K-0.05916 \log a_{\text {urea }} \label{11.12}$ Another version of the urea electrode (Figure 11.2.15 ) immobilizes the enzyme urease in a polymer membrane formed directly on the tip of a glass pH electrode [Tor, R.; Freeman, A. Anal. Chem. 1986, 58, 1042–1046]. In this case the response of the electrode is $\mathrm{pH}=K a_{\mathrm{urea}} \label{11.13}$ Few potentiometric biosensors are available commercially. As shown in Figure 11.2.14 and Figure 11.2.15 , however, it is possible to convert an ion-selective electrode or a gas-sensing electrode into a biosensor. Several representative examples are described in Table 11.2.5 , and additional examples can be found in this chapter’s additional resources. Table 11.2.5 . Representative Examples of Potentiometric Biosensors analyte biologically active phase substance determined $5^{\prime}$-AMP AMP-deaminase (E) NH3 L-arginine arginase and urease (E) NH3 asparagine asparaginase (E) $\text{NH}_4^+$ L-cysteine Proteus morganii (B) H2S L-glutamate yellow squash (T) CO2 L-glutamine Sarcina flava (B) NH3 oxalate oxalate decarboxylase (E) CO2 penicillin penicillinase (E) H3O+ L-phenylalanine L-amino acid oxidase/horseradish peroxidase (E) I– sugars bacteria from dental plaque (B) H3O+ urea urease (E) NH3 or H3O+ Source: Compiled from Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977 and Lunte, C. E.; Heineman, W. R. “Electrochemical techniques in Bioanalysis,” in Steckham, E. ed. Topics in Current Chemistry, Vol. 143, Springer-Verlag: Berlin, 1988, p.8. Abbreviations for biologically active phase: E = enzyme; B = bacterial particle; T = tissue.
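Equation \ref{11.12} has the same logarithmic form as the response of the other ion-selective electrodes in this section, so a urea biosensor is calibrated and used in much the same way. The sketch below works through a two-point calibration; every potential and activity in it is invented for illustration and does not come from the cited urea electrodes.

```python
import math

# Hypothetical two-point calibration of a urea electrode that follows
# Equation 11.12, E_cell = K - 0.05916 log(a_urea).
E1, a1 = 0.2466, 1.0e-4      # invented reading for the first urea standard
E2, a2 = 0.1874, 1.0e-3      # invented reading for the second urea standard

slope = (E2 - E1) / (math.log10(a2) - math.log10(a1))   # close to -0.0592 V per decade
K = E1 - slope * math.log10(a1)

E_sample = 0.2170                                        # invented sample reading
a_urea = 10 ** ((E_sample - K) / slope)
print(f"slope = {slope:.4f} V per decade, a_urea = {a_urea:.2e}")
```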
Quantitative Applications The potentiometric determination of an analyte’s concentration is one of the most common quantitative analytical techniques. Perhaps the most frequent analytical measurement is the determination of a solution’s pH, a measurement we will consider in more detail later in this section. Other areas where potentiometry is important are clinical chemistry, environmental chemistry, and potentiometric titrations. Before we consider representative applications, however, we need to examine more closely the relationship between cell potential and the analyte’s concentration and methods for standardizing potentiometric measurements. Activity and Concentration The Nernst equation relates the cell potential to the analyte’s activity. For example, the Nernst equation for a metallic electrode of the first kind is $E_{\mathrm{cell}}=K+\frac{0.05916}{n} \log a_{M^{n+}} \label{11.14}$ where aMn+ is the metal ion’s activity. When we use a potentiometric electrode, however, our goal is to determine the analyte’s concentration. As we learned in Chapter 6, an ion’s activity is the product of its concentration, [Mn+], and a matrix-dependent activity coefficient, $\gamma_{M^{n+}}$. $a_{M^{n+}}=\left[M^{n+}\right] \gamma_{M^{n+}} \label{11.15}$ Substituting Equation \ref{11.15} into Equation \ref{11.14} and rearranging gives $E_{\mathrm{cell}}=K+\frac{0.05916}{n} \log \gamma_{M^{n+}}+\frac{0.05916}{n} \log \left[M^{n+}\right] \label{11.16}$ We can solve Equation \ref{11.16} for the metal ion’s concentration if we know the value for its activity coefficient. Unfortunately, if we do not know the exact ionic composition of the sample’s matrix—which is the usual situation—then we cannot calculate the value of $\gamma_{M^{n+}}$. There is a solution to this dilemma. If we design our system so that the standards and the samples have an identical matrix, then the value of $\gamma_{M^{n+}}$ remains constant and Equation \ref{11.16} simplifies to $E_{\mathrm{cell}}=K^{\prime}+\frac{0.05916}{n} \log \left[M^{n+}\right] \nonumber$ where $K^{\prime}$ includes the activity coefficient. Quantitative Analysis Using External Standards Before we can determine the concentration of analyte in a sample, we must standardize the electrode. If the electrode’s response obeys the Nernst equation, then we can determine the constant K using a single external standard. Because a small deviation from the ideal slope of ±RT/nF or ±RT/zF is not unexpected, we usually use two or more external standards. To review the use of external standards, see Chapter 5.3. In the absence of interferents, a calibration curve of Ecell versus log aA, where A is the analyte, is a straight line. A plot of Ecell versus log[A], however, may show curvature at higher concentrations of analyte as a result of a matrix-dependent change in the analyte’s activity coefficient. To maintain a consistent matrix we add a high concentration of an inert electrolyte to all samples and standards. If the concentration of added electrolyte is sufficient, then the difference between the sample’s matrix and the matrix of the standards will not affect the ionic strength and the activity coefficient essentially remains constant. The inert electrolyte added to the sample and the standards is called a total ionic strength adjustment buffer (TISAB). Example 11.2.8 The concentration of Ca2+ in a water sample is determined using the method of external standards.
The ionic strength of the samples and the standards is maintained at a nearly constant level by making each solution 0.5 M in KNO3. The measured cell potentials for the external standards are shown in the following table. [Ca2+] (M) Ecell (V) $1.00 \times 10^{-5}$ –0.125 $5.00 \times 10^{-5}$ –0.103 $1.00 \times 10^{-4}$ –0.093 $5.00 \times 10^{-4}$ –0.072 $1.00 \times 10^{-3}$ –0.063 $5.00 \times 10^{-3}$ –0.043 $1.00 \times 10^{-2}$ –0.033 What is the concentration of Ca2+ in a water sample if its cell potential is found to be –0.084 V? Solution Linear regression gives the calibration curve in Figure 11.2.16 , with an equation of $E_{\mathrm{cell}}=0.027+0.0303 \log \left[\mathrm{Ca}^{2+}\right] \nonumber$ Substituting the sample’s cell potential gives the concentration of Ca2+ as $2.17 \times 10^{-4}$ M. Note that the slope of the calibration curve, which is 0.0303, is slightly larger than its ideal value of 0.05916/2 = 0.02958; this is not unusual and is one reason for using multiple standards. One reason that it is not unusual to find that the experimental slope deviates from its ideal value of 0.05916/n is that this ideal value assumes that the temperature is 25°C. Quantitative Analysis Using the Method of Standard Additions Another approach to calibrating a potentiometric electrode is the method of standard additions. First, we transfer a sample with a volume of Vsamp and an analyte concentration of Csamp into a beaker and measure the potential, (Ecell)samp. Next, we make a standard addition by adding to the sample a small volume, Vstd, of a standard that contains a known concentration of analyte, Cstd, and measure the potential, (Ecell)std. If Vstd is significantly smaller than Vsamp, then we can safely ignore the change in the sample’s matrix and assume that the analyte’s activity coefficient is constant. Example 11.2.9 demonstrates how we can use a one-point standard addition to determine the concentration of analyte in a sample. To review the method of standard additions, see Chapter 5.3. Example 11.2.9 The concentration of Ca2+ in a sample of sea water is determined using a Ca ion-selective electrode and a one-point standard addition. A 10.00-mL sample is transferred to a 100-mL volumetric flask and diluted to volume. A 50.00-mL aliquot of the sample is placed in a beaker with the Ca ISE and a reference electrode, and the potential is measured as –0.05290 V. After adding a 1.00-mL aliquot of a $5.00 \times 10^{-2}$ M standard solution of Ca2+ the potential is –0.04417 V. What is the concentration of Ca2+ in the sample of sea water? Solution To begin, we write the Nernst equation before and after adding the standard addition. The cell potential for the sample is $\left(E_{\mathrm{cell}}\right)_{\mathrm{samp}}=K+\frac{0.05916}{2} \log C_{\mathrm{samp}} \nonumber$ and that following the standard addition is $\left(E_{\mathrm{cell}}\right)_{\mathrm{std}}=K+\frac{0.05916}{2} \log \left\{ \frac {V_\text{samp}} {V_\text{tot}}C_\text{samp} + \frac {V_\text{std}} {V_\text{tot}}C_\text{std} \right\} \nonumber$ where Vtot is the total volume (Vsamp + Vstd) after the standard addition. 
Subtracting the first equation from the second equation gives $\Delta E = \left(E_{\mathrm{cell}}\right)_{\mathrm{std}} - \left(E_{\mathrm{cell}}\right)_{\mathrm{samp}} = \frac{0.05916}{2} \log \left\{ \frac {V_\text{samp}} {V_\text{tot}}C_\text{samp} + \frac {V_\text{std}} {V_\text{tot}}C_\text{std} \right\} - \frac{0.05916}{2}\log C_\text{samp} \nonumber$ Rearranging this equation leaves us with $\frac{2 \Delta E}{0.05916} = \log \left\{ \frac {V_\text{samp}} {V_\text{tot}} + \frac {V_\text{std}C_\text{std}} {V_\text{tot}C_\text{samp}} \right\} \nonumber$ Substituting known values for $\Delta E$, Vsamp, Vstd, Vtot and Cstd, $\begin{array}{l}{\frac{2 \times\{-0.04417-(-0.05290)\}}{0.05916}=} \\ {\log \left\{\frac{50.00 \text{ mL}}{51.00 \text{ mL}}+\frac{(1.00 \text{ mL})\left(5.00 \times 10^{-2} \mathrm{M}\right)}{(51.00 \text{ mL}) C_{\mathrm{samp}}}\right\}} \\ {0.2951=\log \left\{0.9804+\frac{9.804 \times 10^{-4}}{C_{\mathrm{samp}}}\right\}}\end{array} \nonumber$ and taking the inverse log of both sides gives $1.973=0.9804+\frac{9.804 \times 10^{-4}}{C_{\text {samp }}} \nonumber$ Finally, solving for Csamp gives the concentration of Ca2+ as $9.88 \times 10^{-4}$ M. Because we diluted the original sample of seawater by a factor of 10, the concentration of Ca2+ in the seawater sample is $9.88 \times 10^{-3}$ M. Free Ions Versus Complexed Ions Most potentiometric electrodes are selective toward the free, uncomplexed form of the analyte, and do not respond to any of the analyte’s complexed forms. This selectivity provides potentiometric electrodes with a significant advantage over other quantitative methods of analysis if we need to determine the concentration of free ions. For example, calcium is present in urine both as free Ca2+ ions and as protein-bound Ca2+ ions. If we analyze a urine sample using atomic absorption spectroscopy, the signal is proportional to the total concentration of Ca2+ because both free and bound calcium are atomized. Analyzing urine with a Ca2+ ISE, however, gives a signal that is a function of only free Ca2+ ions because the protein-bound Ca2+ cannot interact with the electrode’s membrane. The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of F– in toothpaste provides an instructive example of a typical procedure. The description here is based on Kennedy, J. H. Analytical Chemistry—Practice, Harcourt Brace Jovanovich: San Diego, 1984, p. 117–118. Representative Method 11.2.1: Determination of Fluoride in Toothpaste Description of the Method The concentration of fluoride in toothpastes that contain soluble F– is determined with a F– ion-selective electrode using a calibration curve prepared with external standards. Although the F– ISE is very selective (only OH–, with a $K_{\text{F}^-/\text{OH}^-}$ of 0.1, is a significant interferent), Fe3+ and Al3+ interfere with the analysis because they form soluble fluoride complexes that do not interact with the ion-selective electrode’s membrane. This interference is minimized by reacting any Fe3+ and Al3+ with a suitable complexing agent. Procedure Prepare 1 L of a standard solution of 1.00% w/v SnF2 and transfer it to a plastic bottle for storage. Using this solution, prepare 100 mL each of standards that contain 0.32%, 0.36%, 0.40%, 0.44% and 0.48% w/v SnF2, adding 400 mg of malic acid to each solution as a stabilizer.
Transfer the standards to plastic bottles for storage. Prepare a total ionic strength adjustment buffer (TISAB) by mixing 500 mL of water, 57 mL of glacial acetic acid, 58 g of NaCl, and 4 g of disodium DCTA (trans-1,2-diaminocyclohexanetetraacetic acid) in a 1-L beaker, stirring until dissolved. Cool the beaker in a water bath and add 5 M NaOH until the pH is between 5 and 5.5. Transfer the contents of the beaker to a 1-L volumetric flask and dilute to volume. Prepare each external standard by placing approximately 1 g of a fluoride-free toothpaste, 30 mL of distilled water, and 1.00 mL of standard into a 50-mL plastic beaker and mix vigorously for two min with a stir bar. Quantitatively transfer the resulting suspension to a 100-mL volumetric flask along with 50 mL of TISAB and dilute to volume with distilled water. Store the entire external standard in a 250-mL plastic beaker until you are ready to measure the potential. Prepare toothpaste samples by obtaining an approximately 1-g portion and treating in the same manner as the standards. Measure the cell potential for the external standards and the samples using a F– ion-selective electrode and an appropriate reference electrode. When measuring the potential, stir the solution and allow two to three minutes to reach a stable potential. Report the concentration of F– in the toothpaste as %w/w SnF2. Questions 1. The total ionic strength adjustment buffer serves several purposes in this procedure. Identify these purposes. The composition of the TISAB has three purposes: (a) The high concentration of NaCl (the final solutions are approximately 1 M NaCl) ensures that the ionic strength of each external standard and each sample is essentially identical. Because the activity coefficient for fluoride is the same in all solutions, we can write the Nernst equation in terms of fluoride’s concentration instead of its activity. (b) The combination of glacial acetic acid and NaOH creates an acetic acid/acetate buffer of pH 5–5.5. As shown in Figure 11.2.17 , the pH of this buffer is high enough to ensure that the predominant form of fluoride is F– instead of HF. This pH also is sufficiently acidic that it avoids an interference from OH– (see Example 11.2.7). (c) DCTA is added as a complexing agent for Fe3+ or Al3+, preventing the formation of $\text{FeF}_6^{3-}$ or $\text{AlF}_6^{3-}$. 2. Why is a fluoride-free toothpaste added to the standard solutions? Adding a fluoride-free toothpaste protects against any unaccounted for matrix effects that might influence the ion-selective electrode’s response. This assumes, of course, that the matrices of the two toothpastes are otherwise similar. 3. The procedure specifies that the standards and the sample should be stored in plastic containers. Why is it a bad idea to store the solutions in glass containers? The fluoride ion is capable of reacting with glass to form SiF4. 4. Suppose your calibration curve has a slope of –57.98 mV for each 10-fold change in the concentration of F–. The ideal slope from the Nernst equation is –59.16 mV per 10-fold change in concentration. What effect does this have on the quantitative analysis for fluoride in toothpaste? No effect at all! This is why we prepare a calibration curve using multiple standards. Measurement of pH With the availability of inexpensive glass pH electrodes and pH meters, the determination of pH is one of the most common quantitative analytical measurements.
The potentiometric determination of pH, however, is not without complications, several of which we discuss in this section. One complication is confusion over the meaning of pH [Kristensen, H. B.; Saloman, A.; Kokholm, G. Anal. Chem. 1991, 63, 885A–891A]. The conventional definition of pH in most general chemistry textbooks is $\mathrm{pH}=-\log \left[\mathrm{H}^{+}\right] \label{11.17}$ As we now know, pH actually is a measure of the activity of H+. $\mathrm{pH}=-\log a_{\mathrm{H}^{+}} \label{11.18}$ Try this experiment—find several general chemistry textbooks and look up pH in each textbook’s index. Turn to the appropriate pages and see how it is defined. Next, look up activity or activity coefficient in each textbook’s index and see if these terms are indexed. Equation \ref{11.17} only approximates the true pH. If we calculate the pH of 0.1 M HCl using Equation \ref{11.17}, we obtain a value of 1.00; the solution’s actual pH, as defined by Equation \ref{11.18}, is 1.1 [Hawkes, S. J. J. Chem. Educ. 1994, 71, 747–749]. The activity and the concentration of H+ are not the same in 0.1 M HCl because the activity coefficient for H+ is not 1.00 in this matrix. Figure 11.2.18 shows a more colorful demonstration of the difference between activity and concentration. A second complication in measuring pH is the uncertainty in the relationship between potential and activity. For a glass membrane electrode, the cell potential, (Ecell)samp, for a sample of unknown pH is $(E_{\text{cell}})_\text {samp} = K-\frac{R T}{F} \ln \frac{1}{a_{\mathrm{H}^{+}}}=K-\frac{2.303 R T}{F} \mathrm{pH}_{\mathrm{samp}} \label{11.19}$ where K includes the potential of the reference electrode, the asymmetry potential of the glass membrane, and any junction potentials in the electrochemical cell. All the contributions to K are subject to uncertainty, and may change from day-to-day, as well as from electrode-to-electrode. For this reason, before using a pH electrode we calibrate it using a standard buffer of known pH. The cell potential for the standard, (Ecell)std, is $\left(E_{\text {ccll}}\right)_{\text {std}}=K-\frac{2.303 R T}{F} \mathrm{p} \mathrm{H}_{\mathrm{std}} \label{11.20}$ where pHstd is the standard’s pH. Subtracting Equation \ref{11.20} from Equation \ref{11.19} and solving for pHsamp gives $\text{pH}_\text{samp} = \text{pH}_\text{std} - \frac{\left\{\left(E_{\text {cell}}\right)_{\text {samp}}-\left(E_{\text {cell}}\right)_{\text {std}}\right\} F}{2.303 R T} \label{11.21}$ which is the operational definition of pH adopted by the International Union of Pure and Applied Chemistry [Covington, A. K.; Bates, R. B.; Durst, R. A. Pure & Appl. Chem. 1985, 57, 531–542]. Calibrating a pH electrode presents a third complication because we need a standard with an accurately known activity for H+. Table 11.2.6 provides pH values for several primary standard buffer solutions accepted by the National Institute of Standards and Technology. Table 11.2.6 . 
pH Values for Selected NIST Primary Standard Buffers temp (oC) saturated (at 25oC) KHC4H4O7 (tartrate) 0.05 m KH2C6H5O7 (citrate) 0.05 m KHC8H4O4 (phthlate) 0.025 m KH2PO4, 0.025 m NaHPO4 0.008695 m KH2PO4, 0.03043 m Na2HPO4 0.01 m Na4B4O7 0.025 m NaHCO3, 0.025 m Na2CO3 0 3.863 4.003 6.984 7.534 9.464 10.317 5 3.840 3.999 6.951 7.500 9.395 10.245 10 3.820 3.998 6.923 7.472 9.332 10.179 15 3.802 3.999 6.900 7.448 9.276 10.118 20 3.788 4.002 6.881 7.429 9.225 10.062 25 3.557 3.776 4.008 6.865 7.413 9.180 10.012 30 3.552 3.766 4.015 6.854 7.400 9.139 9.966 35 3.549 3.759 4.024 6.844 7.389 9.012 9.925 40 3.547 3.753 4.035 6.838 7.380 9.068 9.889 45 3.547 3.750 4.047 6.834 7.373 9.038 9.856 50 3.549 3.749 4.060 6.833 7.367 9.011 9.828 Source: Values taken from Bates, R. G. Determination of pH: Theory and Practice, 2nd ed. Wiley: New York, 1973. See also Buck, R. P., et. al.“Measurement of pH. Definition, Standards, and Procedures,” Pure. Appl. Chem. 2002, 74, 2169–2200. All concentrations are molal (m). To standardize a pH electrode using two buffers, choose one near a pH of 7 and one that is more acidic or basic depending on your sample’s expected pH. Rinse your pH electrode in deionized water, blot it dry with a laboratory wipe, and place it in the buffer with the pH closest to 7. Swirl the pH electrode and allow it to equilibrate until you obtain a stable reading. Adjust the “Standardize” or “Calibrate” knob until the meter displays the correct pH. Rinse and dry the electrode, and place it in the second buffer. After the electrode equilibrates, adjust the “Slope” or “Temperature” knob until the meter displays the correct pH. Some pH meters can compensate for a change in temperature. To use this feature, place a temperature probe in the sample and connect it to the pH meter. Adjust the “Temperature” knob to the solution’s temperature and calibrate the pH meter using the “Calibrate” and “Slope” controls. As you are using the pH electrode, the pH meter compensates for any change in the sample’s temperature by adjusting the slope of the calibration curve using a Nernstian response of 2.303RT/F. Clinical Applications Because of their selectivity for analytes in complex matricies, ion-selective electrodes are important sensors for clinical samples. The most common analytes are electrolytes, such as Na+, K+, Ca2+, H+, and Cl, and dissolved gases such as CO2. For extracellular fluids, such as blood and urine, the analysis can be made in vitro. An in situ analysis, however, requires a much smaller electrode that we can insert directly into a cell. Liquid-based membrane microelectrodes with tip diameters smaller than 1 μm are constructed by heating and drawing out a hard-glass capillary tube with an initial diameter of approximately 1–2 mm (Figure 11.2.19 ). The microelectrode’s tip is made hydrophobic by dipping into a solution of dichlorodimethyl silane, and an inner solution appropriate for the analyte and a Ag/AgCl wire reference electrode are placed within the microelectrode. The microelectrode is dipped into a solution of the liquid complexing agent, which through capillary action draws a small volume of the liquid complexing agent into the tip. Potentiometric microelectrodes have been developed for a number of clinically important analytes, including H+, K+, Na+, Ca2+, Cl, and I [Bakker, E.; Pretsch, E. Trends Anal. Chem. 2008, 27, 612–618]. 
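Whether the electrode is a benchtop combination electrode or a clinical microelectrode, converting its reading to pH comes back to the operational definition in Equation \ref{11.21}. The sketch below is a minimal illustration of that conversion; it assumes a temperature of 25°C and uses invented meter readings for the buffer and the sample.

```python
R, F, T = 8.314, 96485.0, 298.15      # J/(mol K), C/mol, K (25 degrees C assumed)

def pH_sample(E_samp, E_std, pH_std):
    """Operational definition of pH (Equation 11.21) for a single-point calibration."""
    return pH_std - (E_samp - E_std) * F / (2.303 * R * T)

# Invented readings: a pH 7.00 buffer gives +0.0210 V and the sample gives +0.0862 V
print(round(pH_sample(0.0862, 0.0210, 7.00), 2))   # about 5.90
```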
Environmental Applications Although ion-selective electrodes are used in environmental analysis, their application is not as widespread as in clinical analysis. Although standard potentiometric methods are available for the analysis of CN, F, NH3, and $\text{NO}_3^-$ in water and wastewater, other analytical methods generally provide better detection limits. One potential advantage of an ion-selective electrode is the ability to incorporate it into a flow cell for the continuous monitoring of wastewater streams. Potentiometric Titrations One method for determining the equivalence point of an acid–base titration is to use a pH electrode to monitor the change in pH during the titration. A potentiometric determination of the equivalence point is possible for acid–base, complexation, redox, and precipitation titrations, as well as for titrations in aqueous and nonaqueous solvents. Acid–base, complexation, and precipitation potentiometric titrations usually are monitored with an ion-selective electrode that responds the analyte, although an electrode that responds to the titrant or a reaction product also can be used. A redox electrode, such as a Pt wire, and a reference electrode are used for potentiometric redox titrations. More details about potentiometric titrations are found in Chapter 9. Evaluation Scale of Operation The working range for most ion-selective electrodes is from a maximum concentration of 0.1–1 M to a minimum concentration of $10^{-5}-10^{-11}$ M [(a) Bakker, E.; Pretsch, E. Anal. Chem. 2002, 74, 420A–426A; (b) Bakker, E.; Pretsch, E. Trends Anal. Chem. 2005, 24, 199–207]. This broad working range extends from major analytes to ultratrace analytes, and is significantly greater than many other analytical techniques. To use a conventional ion-selective electrode we need a minimum sample volume of several mL (a macro sample). Microelectrodes, such as the one shown in Figure 11.2.19 , are used with an ultramicro sample, although care is needed to ensure that the sample is representative of the original sample. Accuracy The accuracy of a potentiometric analysis is limited by the error in measuring Ecell. Several factors contribute to this measurement error, including the contribution to the potential from interfering ions, the finite current that passes through the cell while we measure the potential, differences between the analyte’s activity coefficient in the samples and the standard solutions, and junction potentials. We can limit the effect of an interfering ion by including a separation step before the potentiometric analysis. Modern high impedance potentiometers minimize the amount of current that passes through the electrochemical cell. Finally, we can minimize the errors due to activity coefficients and junction potentials by matching the matrix of the standards to that of the sample. Even in the best circumstances, however, a difference of approximately ±1 mV for samples with equal concentrations of analyte is not unusual. We can evaluate the effect of uncertainty on the accuracy of a potentiometric measurement by using a propagation of uncertainty. For a membrane ion-selective electrode the general expression for potential is $E_{\mathrm{cell}}=K+\frac{R T}{z F} \ln \left[ A\right] \nonumber$ where z is the analyte’s, A, charge. 
From Table 4.3.1 in Chapter 4, the uncertainty in the cell potential, $\Delta E_\text{cell}$, is $\Delta E_{\text {cell}}=\frac{R T}{z F} \times \frac{\Delta [A]}{[A]} \nonumber$ Rearranging and multiplying through by 100 gives the percent relative error in concentration as $\% \text { relative error }=\frac{\Delta[A]}{[A]} \times 100=\frac{\Delta E_{\mathrm{cell}}}{R T / z F} \times 100 \label{11.22}$ The relative error in concentration, therefore, is a function of the measurement error for the electrode’s potential, $\Delta E_\text{cell}$, and the analyte’s charge. Table 11.2.7 provides representative values for ions with charges of ±1 and ±2 at a temperature of 25°C. Accuracies of 1–5% for monovalent ions and 2–10% for divalent ions are typical. Although Equation \ref{11.22} applies to membrane electrodes, we can use it for a metallic electrode by replacing z with n. Table 11.2.7 . Relationship Between the Uncertainty in Measuring Ecell and the Relative Error in the Analyte's Concentration $\Delta E_\text{cell} (\pm \text{ mV})$ % relative error when $z = \pm 1$ % relative error when $z = \pm 2$ 0.1 $\pm 0.4$ $\pm 0.8$ 0.5 $\pm 1.9$ $\pm 3.9$ 1.0 $\pm 3.9$ $\pm 7.8$ 1.5 $\pm 5.8$ $\pm 11.7$ 2.0 $\pm 7.8$ $\pm 15.6$ Precision Precision in potentiometry is limited by variations in temperature and the sensitivity of the potentiometer. Under most conditions—and when using a simple, general-purpose potentiometer—we can measure the potential with a repeatability of ±0.1 mV. Using Table 11.2.7 , this corresponds to an uncertainty of ±0.4% for monovalent analytes and ±0.8% for divalent analytes. The reproducibility of potentiometric measurements is about a factor of ten poorer. Sensitivity The sensitivity of a potentiometric analysis is determined by the term RT/nF or RT/zF in the Nernst equation. Sensitivity is best for smaller values of n or z. Selectivity As described earlier, most ion-selective electrodes respond to more than one analyte; the selectivity for the analyte, however, often is significantly greater than the sensitivity for the interfering ions. The manufacturer of an ion-selective electrode usually provides an ISE’s selectivity coefficients, which allows us to determine whether a potentiometric analysis is feasible for a given sample. Time, Cost, and Equipment In comparison to other techniques, potentiometry provides a rapid, relatively low-cost means for analyzing samples. The limiting factor when analyzing a large number of samples is the need to rinse the electrode between samples. The use of inexpensive, disposable ion-selective electrodes can increase a lab’s sample throughput. Figure 11.2.20 shows one example of a disposable ISE for Ag+ [Tymecki, L.; Zwierkowska, E.; Głąb, S.; Koncki, R. Sens. Actuators B 2003, 96, 482–488]. Commercial instruments for measuring pH or potential are available in a variety of price ranges, and include portable models for use in the field.
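The entries in Table 11.2.7 follow directly from Equation \ref{11.22}. The short sketch below reproduces them at an assumed temperature of 25°C; it is an illustration of the propagation-of-uncertainty result, not an additional method.

```python
R, F, T = 8.314, 96485.0, 298.15   # J/(mol K), C/mol, K (25 degrees C assumed)

def percent_relative_error(delta_E_mV, z):
    """Equation 11.22 with the potential's uncertainty entered in millivolts."""
    return (delta_E_mV / 1000) / (R * T / (abs(z) * F)) * 100

for dE in (0.1, 0.5, 1.0, 1.5, 2.0):
    print(f"{dE:3.1f} mV: z = +-1 -> {percent_relative_error(dE, 1):4.1f}%,"
          f"  z = +-2 -> {percent_relative_error(dE, 2):5.1f}%")
```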
In a potentiometric method of analysis we determine an analyte’s concentration by measuring the potential of an electrochemical cell under static conditions in which no current flows and the concentrations of species in the electrochemical cell remain fixed. Dynamic techniques, in which current passes through the electrochemical cell and concentrations change, also are important electrochemical methods of analysis. In this section we consider coulometry. Voltammetry and amperometry are covered in Chapter 11.4. Coulometry is based on an exhaustive electrolysis of the analyte. By exhaustive we mean that the analyte is oxidized or reduced completely at the working electrode, or that it reacts completely with a reagent generated at the working electrode. There are two forms of coulometry: controlled-potential coulometry, in which we apply a constant potential to the electrochemical cell, and controlled-current coulometry, in which we pass a constant current through the electrochemical cell. During an electrolysis, the total charge, Q, in coulombs, that passes through the electrochemical cell is proportional to the absolute amount of analyte by Faraday’s law $Q=n F N_{A} \label{11.1}$ where n is the number of electrons per mole of analyte, F is Faraday’s constant (96 487 C mol–1), and NA is the moles of analyte. A coulomb is equivalent to an A•sec; thus, for a constant current, i, the total charge is $Q=i t_{e} \label{11.2}$ where te is the electrolysis time. If the current varies with time, as it does in controlled-potential coulometry, then the total charge is $Q=\int_{0}^{t_e} i(t) d t \label{11.3}$ In coulometry, we monitor current as a function of time and use either Equation \ref{11.2} or Equation \ref{11.3} to calculate Q. Knowing the total charge, we then use Equation \ref{11.1} to determine the moles of analyte. To obtain an accurate value for NA, all the current must oxidize or reduce the analyte; that is, coulometry requires 100% current efficiency or an accurate measurement of the current efficiency using a standard. Current efficiency is the percentage of current that actually leads to the analyte’s oxidation or reduction. Controlled-Potential Coulometry The easiest way to ensure 100% current efficiency is to hold the working electrode at a constant potential where the analyte is oxidized or reduced completely and where no potential interfering species are oxidized or reduced. As electrolysis progresses, the analyte’s concentration and the current decrease. The resulting current-versus-time profile for controlled-potential coulometry is shown in Figure 11.3.1 . Integrating the area under the curve (Equation \ref{11.3}) from t = 0 to t = te gives the total charge. In this section we consider the experimental parameters and instrumentation needed to develop a controlled-potential coulometric method of analysis. Selecting a Constant Potential To understand how an appropriate potential for the working electrode is selected, let’s develop a constant-potential coulometric method for Cu2+ based on its reduction to copper metal at a Pt working electrode. $\mathrm{Cu}^{2+}(a q)+2 e^{-} \rightleftharpoons \mathrm{Cu}(s) \label{11.4}$ Figure 11.3.2 shows a ladder diagram for an aqueous solution of Cu2+. From the ladder diagram we know that reaction \ref{11.4} is favored when the working electrode’s potential is more negative than +0.342 V versus the standard hydrogen electrode. 
To ensure a 100% current efficiency, however, the potential must be sufficiently more positive than +0.000 V so that the reduction of H3O+ to H2 does not contribute significantly to the total current flowing through the electrochemical cell. We can use the Nernst equation for reaction \ref{11.4} to estimate the minimum potential for quantitatively reducing Cu2+. $E=E_{\mathrm{Cu}^{2+} / \mathrm{Cu}}^{\mathrm{o}}-\frac{0.05916}{2} \log \frac{1}{\left[\mathrm{Cu}^{2+}\right]} \label{11.5}$ So why are we using the concentration of Cu2+ in Equation \ref{11.5} instead of its activity? In potentiometry we use activity because we use Ecell to determine the analyte’s concentration. Here we use the Nernst equation to help us select an appropriate potential. Once we identify a potential, we can adjust its value as needed to ensure a quantitative reduction of Cu2+. In addition, in coulometry the analyte’s concentration is given by the total charge, not the applied potential. If we define a quantitative electrolysis as one in which we reduce 99.99% of Cu2+ to Cu, then the concentration of Cu2+ at te is $\left[\mathrm{Cu}^{2+}\right]_{t_{e}}=0.0001 \times\left[\mathrm{Cu}^{2+}\right]_{0} \label{11.6}$ where [Cu2+]0 is the initial concentration of Cu2+ in the sample. Substituting Equation \ref{11.6} into Equation \ref{11.5} allows us to calculate the desired potential. $E=E_{\mathrm{Cu}^{2+} / \mathrm{Cu}}^{\circ}-\frac{0.05916}{2} \log \frac{1}{0.0001 \times\left[\mathrm{Cu}^{2+}\right]} \nonumber$ If the initial concentration of Cu2+ is $1.00 \times 10^{-4}$ M, for example, then the working electrode’s potential must be more negative than +0.105 V to quantitatively reduce Cu2+ to Cu. Note that at this potential H3O+ is not reduced to H2, maintaining 100% current efficiency. Many controlled-potential coulometric methods for Cu2+ use a potential that is negative relative to the standard hydrogen electrode—see, for example, Rechnitz, G. A. Controlled-Potential Analysis, Macmillan: New York, 1963, p.49. Based on the ladder diagram in Figure 11.3.2 you might expect that applying a potential <0.000 V will partially reduce H3O+ to H2, resulting in a current efficiency that is less than 100%. The reason we can use such a negative potential is that the reaction rate for the reduction of H3O+ to H2 is very slow at a Pt electrode. This results in a significant overpotential—the need to apply a potential more positive or a more negative than that predicted by thermodynamics—which shifts Eo for the H3O+/H2 redox couple to a more negative value. Minimizing Electrolysis Time In controlled-potential coulometry, as shown in Figure 11.3.1 , the current decreases over time. As a result, the rate of electrolysis—recall from Chapter 11.1 that current is a measure of rate—becomes slower and an exhaustive electrolysis of the analyte may require a long time. Because time is an important consideration when designing an analytical method, we need to consider the factors that affect the analysis time. We can approximate the current’s change as a function of time in Figure 11.3.1 as an exponential decay; thus, the current at time t is $i_{t}=i_{0} e^{-k t} \label{11.7}$ where i0 is the current at t = 0 and k is a rate constant that is directly proportional to the area of the working electrode and the rate of stirring, and that is inversely proportional to the volume of solution. 
For an exhaustive electrolysis in which we oxidize or reduce 99.99% of the analyte, the current at the end of the analysis, te, is $i_{t_{e}} \leq 0.0001 \times i_{0} \label{11.8}$ Substituting Equation \ref{11.8} into Equation \ref{11.7} and solving for te gives the minimum time for an exhaustive electrolysis as $t_{e}=-\frac{1}{k} \times \ln (0.0001)=\frac{9.21}{k} \nonumber$ From this equation we see that a larger value for k reduces the analysis time. For this reason we usually carry out a controlled-potential coulometric analysis in a small volume electrochemical cell, using an electrode with a large surface area, and with a high stirring rate. A quantitative electrolysis typically requires approximately 30–60 min, although shorter or longer times are possible. Instrumentation A three-electrode potentiostat is used to set the potential in controlled-potential coulometry (see Figure 11.1.5). The working electrodes is usually one of two types: a cylindrical Pt electrode manufactured from platinum-gauze (Figure 11.3.3 ), or a Hg pool electrode. The large overpotential for the reduction of H3O+ at Hg makes it the electrode of choice for an analyte that requires a negative potential. For example, a potential more negative than –1 V versus the SHE is feasible at a Hg electrode—but not at a Pt electrode—even in a very acidic solution. Because mercury is easy to oxidize, it is less useful if we need to maintain a potential that is positive with respect to the SHE. Platinum is the working electrode of choice when we need to apply a positive potential. The auxiliary electrode, which often is a Pt wire, is separated by a salt bridge from the analytical solution. This is necessary to prevent the electrolysis products generated at the auxiliary electrode from reacting with the analyte and interfering in the analysis. A saturated calomel or Ag/AgCl electrode serves as the reference electrode. The other essential need for controlled-potential coulometry is a means for determining the total charge. One method is to monitor the current as a function of time and determine the area under the curve, as shown in Figure 11.3.1 . Modern instruments use electronic integration to monitor charge as a function of time. The total charge at the end of the electrolysis is read directly from a digital readout. Electrogravimetry If the product of controlled-potential coulometry forms a deposit on the working electrode, then we can use the change in the electrode’s mass as the analytical signal. For example, if we apply a potential that reduces Cu2+ to Cu at a Pt working electrode, the difference in the electrode’s mass before and after electrolysis is a direct measurement of the amount of copper in the sample. As we learned in Chapter 8, we call an analytical technique that uses mass as a signal a gravimetric technique; thus, we call this electrogravimetry. Controlled-Current Coulometry A second approach to coulometry is to use a constant current in place of a constant potential, which results in the current-versus-time profile shown in Figure 11.3.4 . Controlled-current coulometry has two advantages over controlled-potential coulometry. First, the analysis time is shorter because the current does not decrease over time. A typical analysis time for controlled-current coulometry is less than 10 min, compared to approximately 30–60 min for controlled-potential coulometry. 
Second, because the total charge simply is the product of current and time (Equation \ref{11.2}), there is no need to integrate the current-time curve in Figure 11.3.4 . Using a constant current presents us with two important experimental problems. First, during electrolysis the analyte’s concentration—and, therefore, the current that results from its oxidation or reduction—decreases continuously. To maintain a constant current we must allow the potential to change until another oxidation reaction or reduction reaction occurs at the working electrode. Unless we design the system carefully, this secondary reaction results in a current efficiency that is less than 100%. The second problem is that we need a method to determine when the analyte's electrolysis is complete. As shown in Figure 11.3.1 , in a controlled-potential coulometric analysis we know that electrolysis is complete when the current reaches zero, or when it reaches a constant background or residual current. In a controlled-current coulometric analysis, however, current continues to flow even when the analyte’s electrolysis is complete. A suitable method for determining the reaction’s endpoint, te, is needed. Maintaining Current Efficiency To illustrate why a change in the working electrode’s potential may result in a current efficiency of less than 100%, let’s consider the coulometric analysis for Fe2+ based on its oxidation to Fe3+ at a Pt working electrode in 1 M H2SO4. $\mathrm{Fe}^{2+}(a q) \rightleftharpoons \text{ Fe}^{3+}(a q)+e^{-} \nonumber$ Figure 11.3.5 shows the ladder diagram for this system. At the beginning of the analysis, the potential of the working electrode remains nearly constant at a level near its initial value. As the concentration of Fe2+ decreases and the concentration of Fe3+ increases, the working electrode’s potential shifts toward more positive values until the oxidation of H2O begins. $2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \text{ O}_{2}(g)+4 \mathrm{H}^{+}(a q)+4 e^{-} \nonumber$ Because a portion of the total current comes from the oxidation of H2O, the current efficiency for the analysis is less than 100% and we cannot use Equation \ref{11.1} to determine the amount of Fe2+ in the sample. Although we cannot prevent the potential from drifting until another species undergoes oxidation, we can maintain a 100% current efficiency if the product of that secondary oxidation reaction both rapidly and quantitatively reacts with the remaining Fe2+. To accomplish this we add an excess of Ce3+ to the analytical solution. As shown in Figure 11.3.6 , when the potential of the working electrode shifts to a more positive potential, Ce3+ begins to oxidize to Ce4+ $\mathrm{Ce}^{3+}(a q) \rightleftharpoons \text{ Ce}^{4+}(a q)+e^{-} \label{11.9}$ The Ce4+ that forms at the working electrode rapidly mixes with the solution where it reacts with any available Fe2+. $\mathrm{Ce}^{4+}(a q)+\text{ Fe}^{2+}(a q) \rightleftharpoons \text{ Ce}^{3+}(a q)+\text{ Fe}^{3+}(a q) \label{11.10}$ Combining reaction \ref{11.9} and reaction \ref{11.10} shows that the net reaction is the oxidation of Fe2+ to Fe3+ $\mathrm{Fe}^{2+}(a q) \rightleftharpoons \text{ Fe}^{3+}(a q)+e^{-} \nonumber$ which maintains a current efficiency of 100%. A species used to maintain 100% current efficiency is called a mediator. Endpoint Determination Adding a mediator solves the problem of maintaining 100% current efficiency, but it does not solve the problem of determining when the analyte's electrolysis is complete. 
Using the analysis for Fe2+ in Figure 11.3.6 , when the oxidation of Fe2+ is complete current continues to flow from the oxidation of Ce3+, and, eventually, the oxidation of H2O. What we need is a signal that tells us when no more Fe2+ is present in the solution. For our purposes, it is convenient to treat a controlled-current coulometric analysis as a reaction between the analyte, Fe2+, and the mediator, Ce3+, as shown by reaction \ref{11.10}. This reaction is identical to a redox titration; thus, we can use the end points for a redox titration—visual indicators and potentiometric or conductometric measurements—to signal the end of a controlled-current coulometric analysis. For example, ferroin provides a useful visual endpoint for the Ce3+ mediated coulometric analysis for Fe2+, changing color from red to blue when the electrolysis of Fe2+ is complete. Reaction \ref{11.10} is the same reaction we used in Chapter 9 to develop our understanding of redox titrimetry. Instrumentation Controlled-current coulometry normally is carried out using a two-electrode galvanostat, which consists of a working electrode and a counter electrode. The working electrode—often a simple Pt electrode—also is called the generator electrode since it is where the mediator reacts to generate the species that reacts with the analyte. If necessary, the counter electrode is isolated from the analytical solution by a salt bridge or a porous frit to prevent its electrolysis products from reacting with the analyte. Alternatively, we can generate the oxidizing agent or the reducing agent externally, and allow it to flow into the analytical solution. Figure 11.3.7 shows one simple method for accomplishing this. A solution that contains the mediator flows into a small-volume electrochemical cell with the products exiting through separate tubes. Depending upon the analyte, the oxidizing agent or the reducing reagent is delivered to the analytical solution. For example, we can generate Ce4+ using an aqueous solution of Ce3+, directing the Ce4+ that forms at the anode to our sample. Figure 11.1.4 shows an example of a manual galvanostat. Although a modern galvanostat uses very different circuitry, you can use Figure 11.1.4 and the accompanying discussion to understand how we can use the working electrode and the counter electrode to control the current. Figure 11.1.4 includes an optional reference electrode, but its presence or absence is not important if we are not interested in monitoring the working electrode’s potential. There are two other crucial needs for controlled-current coulometry: an accurate clock for measuring the electrolysis time, te, and a switch for starting and stopping the electrolysis. An analog clock can record time to the nearest ±0.01 s, but the need to stop and start the electrolysis as we approach the endpoint may result in an overall uncertainty of ±0.1 s. A digital clock allows for a more accurate measurement of time, with an overall uncertainty of ±1 ms. The switch must control both the current and the clock so that we can make an accurate determination of the electrolysis time. Coulometric Titrations A controlled-current coulometric method sometimes is called a coulometric titration because of its similarity to a conventional titration. For example, in the controlled-current coulometric analysis for Fe2+ using a Ce3+ mediator, the oxidation of Fe2+ by Ce4+ (reaction \ref{11.10}) is identical to the reaction in a redox titration. 
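To make the titration analogy concrete, the short Python sketch below converts a constant current and a measured electrolysis time into the moles of Fe2+ titrated, assuming 100% current efficiency. This is a minimal sketch only; the numerical values for the current and the end point time are hypothetical and are chosen purely for illustration.

```python
# Hypothetical numbers for a constant-current coulometric titration of Fe2+
# using electrogenerated Ce4+ as the titrant (100% current efficiency assumed).

F = 96487          # Faraday's constant, C/mol e-
i = 20.0e-3        # constant applied current, A (hypothetical value)
t_e = 184.2        # electrolysis time needed to reach the end point, s (hypothetical)
n = 1              # electrons per Fe2+ (Fe2+ -> Fe3+ + e-)

Q = i * t_e              # total charge passed, C (charge = current x time)
mol_Fe = Q / (n * F)     # moles of Fe2+ titrated (Faraday's law)

print(f"charge passed: {Q:.3f} C")
print(f"moles of Fe2+ titrated: {mol_Fe:.3e} mol")
```

Here the constant current plays the role of the titrant's concentration and the electrolysis time plays the role of the titrant's volume, which is exactly the correspondence developed in the next paragraph.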
There are other similarities between controlled-current coulometry and titrimetry. If we combine Equation \ref{11.1} and Equation \ref{11.2} and solve for the moles of analyte, NA, we obtain the following equation. $N_{A}=\frac{i}{n F} \times t_{e} \label{11.11}$ Compare Equation \ref{11.11} to the relationship between the moles of analyte, NA, and the moles of titrant, NT, in a titration $N_{A}=N_{T}=M_{T} \times V_{T} \nonumber$ where MT and VT are the titrant’s molarity and the volume of titrant at the end point. In constant-current coulometry, the current source is equivalent to the titrant and the value of that current is analogous to the titrant’s molarity. Electrolysis time is analogous to the volume of titrant, and te is equivalent to a titration’s end point. Finally, the switch for starting and stopping the electrolysis serves the same function as a buret’s stopcock. For simplicity, we assumed above that the stoichiometry between the analyte and titrant is 1:1. The assumption, however, is not important and does not affect our observation of the similarity between controlled-current coulometry and a titration.

Quantitative Applications

Coulometry is used for the quantitative analysis of both inorganic and organic analytes. Examples of controlled-potential and controlled-current coulometric methods are discussed in the following two sections.

Controlled-Potential Coulometry

The majority of controlled-potential coulometric analyses involve the determination of inorganic cations and anions, including trace metals and halide ions. Table 11.3.1 summarizes several of these methods.

Table 11.3.1. Representative Controlled-Potential Coulometric Analyses for Inorganic Ions
analyte | electrolytic reaction | electrode
antimony | $\text{Sb}(\text{III}) + 3 e^{-} \rightleftharpoons \text{Sb}$ | Pt
arsenic | $\text{As}(\text{III}) \rightleftharpoons \text{As(V)} + 2 e^{-}$ | Pt
cadmium | $\text{Cd(II)} + 2 e^{-} \rightleftharpoons \text{Cd}$ | Pt or Hg
cobalt | $\text{Co(II)} + 2 e^{-} \rightleftharpoons \text{Co}$ | Pt or Hg
copper | $\text{Cu(II)} + 2 e^{-} \rightleftharpoons \text{Cu}$ | Pt or Hg
halides (X–) | $\text{Ag} + \text{X}^- \rightleftharpoons \text{AgX} + e^-$ | Ag
iron | $\text{Fe(II)} \rightleftharpoons \text{Fe(III)} + e^-$ | Pt
lead | $\text{Pb(II)} + 2 e^{-} \rightleftharpoons \text{Pb}$ | Pt or Hg
nickel | $\text{Ni(II)} + 2 e^{-} \rightleftharpoons \text{Ni}$ | Pt or Hg
plutonium | $\text{Pu(III)} \rightleftharpoons \text{Pu(IV)} + e^-$ | Pt
silver | $\text{Ag(I)} + e^{-} \rightleftharpoons \text{Ag}$ | Pt
tin | $\text{Sn(II)} + 2 e^{-} \rightleftharpoons \text{Sn}$ | Pt
uranium | $\text{U(VI)} + 2 e^{-} \rightleftharpoons \text{U(IV)}$ | Pt or Hg
zinc | $\text{Zn(II)} + 2 e^{-} \rightleftharpoons \text{Zn}$ | Pt or Hg
Source: Rechnitz, G. A. Controlled-Potential Analysis, Macmillan: New York, 1963.
Electrolytic reactions are written in terms of the change in the analyte’s oxidation state. The actual species in solution depends on the analyte.

The ability to control selectivity by adjusting the working electrode’s potential makes controlled-potential coulometry particularly useful for the analysis of alloys. For example, we can determine the composition of an alloy that contains Ag, Bi, Cd, and Sb by dissolving the sample and placing it in a matrix of 0.2 M H2SO4 along with a Pt working electrode and a Pt counter electrode. If we apply a constant potential of +0.40 V versus the SCE, Ag(I) deposits on the electrode as Ag and the other metal ions remain in solution.
When electrolysis is complete, we use the total charge to determine the amount of silver in the alloy. Next, we shift the working electrode’s potential to –0.08 V versus the SCE, depositing Bi on the working electrode. When the coulometric analysis for bismuth is complete, we determine antimony by shifting the working electrode’s potential to –0.33 V versus the SCE, depositing Sb. Finally, we determine cadmium following its electrodeposition on the working electrode at a potential of –0.80 V versus the SCE. We also can use controlled-potential coulometry for the quantitative analysis of organic compounds, although the number of applications is significantly less than that for inorganic analytes. One example is the six-electron reduction of a nitro group, –NO2, to a primary amine, –NH2, at a mercury electrode. A solution of picric acid—also known as 2,4,6-trinitrophenol, or TNP, a close relative of TNT—is analyzed by reducing it to triaminophenol. Another example is the successive reduction of trichloroacetate to dichloroacetate, and of dichloroacetate to monochloroacetate $\text{Cl}_3\text{CCOO}^-(aq) + \text{H}_3\text{O}^+(aq) + 2 e^- \rightleftharpoons \text{Cl}_2\text{HCCOO}^-(aq) + \text{Cl}^-(aq) + \text{H}_2\text{O}(l) \nonumber$ $\text{Cl}_2\text{HCCOO}^-(aq) + \text{ H}_3\text{O}^+(aq) + 2 e^- \rightleftharpoons \text{ ClH}_2\text{CCOO}^-(aq) + \text{ Cl}^-(aq) + \text{H}_2\text{O}(l) \nonumber$ We can analyze a mixture of trichloroacetate and dichloroacetate by selecting an initial potential where only the more easily reduced trichloroacetate reacts. When its electrolysis is complete, we can reduce dichloroacetate by adjusting the potential to a more negative value. The total charge for the first electrolysis gives the amount of trichloroacetate, and the difference in total charge between the first electrolysis and the second electrolysis gives the amount of dichloroacetate.

Controlled-Current Coulometry (Coulometric Titrations)

The use of a mediator makes a coulometric titration a more versatile analytical technique than controlled-potential coulometry. For example, the direct oxidation or reduction of a protein at a working electrode is difficult if the protein’s active redox site lies deep within its structure. A coulometric titration of the protein is possible, however, if we use the oxidation or reduction of a mediator to produce a solution species that reacts with the protein. Table 11.3.2 summarizes several controlled-current coulometric methods based on a redox reaction using a mediator.
Table 11.3.2. Representative Examples of Coulometric Redox Titrations
mediator | electrochemically generated reagent and reaction | representative application
Ag+ | $\mathrm{Ag}^{+} \rightleftharpoons \textbf{Ag}^\textbf{2+}+e^{-}$ | $\mathbf{H}_{2} \mathbf{C}_{2} \mathbf{O}_{4}(a q)+2 \mathrm{Ag}^{2+}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons 2\text{CO}_2(g) + 2\text{Ag}^+(aq) + 2\text{H}_3\text{O}^+(aq)$
Br– | $2\mathrm{Br}^{-} \rightleftharpoons \textbf{Br}_\textbf{2}+2 e^{-}$ | $\textbf{H}_\textbf{2} \textbf{S}(a q)+\text{ Br}_{2}(aq)+2 \mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons \text{S}(s) + 2\text{Br}^-(aq) + 2\text{H}_3\text{O}^+(aq)$
Ce3+ | $\mathrm{Ce}^{3+} \rightleftharpoons \textbf{Ce}^\textbf{4+}+e^{-}$ | $\textbf{Fe}(\mathbf{CN})_\textbf{6}^\textbf{4–}(a q)+\text{ Ce}^{4+}(a q) \rightleftharpoons \mathrm{Fe}(\mathrm{CN})_{6}^{3-}(a q)+\text{ Ce}^{3+}(a q)$
Cl– | $2\mathrm{Cl}^{-} \rightleftharpoons \textbf{Cl}_\textbf{2}+2 e^{-}$ | $\textbf{Tl(I)}(a q)+\text{ Cl}_{2}(a q) \rightleftharpoons \mathrm{Tl}(\mathrm{III})(a q)+2 \mathrm{Cl}^{-}(a q)$
Fe3+ | $\mathrm{Fe}^{3+} +e^{-} \rightleftharpoons \textbf{Fe}^\textbf{2+}$ | $\mathbf{Cr}_\textbf{2} \mathbf{O}_\textbf{7}^\mathbf{2-}(a q)+6 \mathrm{Fe}^{2+}(a q)+14 \mathrm{H}_{3} \mathrm{O}^{+}(a q) \rightleftharpoons 2\text{Cr}^{3+}(aq) + 6\text{Fe}^{3+}(aq) + 21\text{H}_2\text{O}(l)$
I– | $3\mathrm{I}^{-} \rightleftharpoons \textbf{I}_\textbf{3}^\textbf{–}+2 e^{-}$ | $2 \mathbf{S}_\mathbf{2} \mathbf{O}_\mathbf{3}^\mathbf{2-}(a q)+\mathrm{I}_{3}^{-}(a q) \rightleftharpoons \text{S}_{4} \mathrm{O}_{6}^{2-}(a q)+3 \mathrm{I}^{-}(a q)$
Mn2+ | $\mathrm{Mn}^{2+} \rightleftharpoons \textbf{Mn}^\textbf{3+}+e^{-}$ | $\textbf{As(III)}(a q)+2 \text{Mn}^{3+}(aq) \rightleftharpoons \text{As(V)}(a q)+2 \text{Mn}^{2+}(a q)$
Note: The electrochemically generated reagent and the analyte are shown in bold.

For an analyte that is not easy to oxidize or reduce, we can complete a coulometric titration by coupling a mediator’s oxidation or reduction to an acid–base, precipitation, or complexation reaction that involves the analyte. For example, if we use H2O as a mediator, we can generate H3O+ at the anode $6 \mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons 4 \mathrm{H}_{3} \text{O}^{+}(a q)+\text{ O}_{2}(g)+4 e^{-} \nonumber$ and generate OH– at the cathode. $2 \mathrm{H}_{2} \mathrm{O}(l)+2 e^{-} \rightleftharpoons 2 \mathrm{OH}^{-}(a q)+\text{ H}_{2}(g) \nonumber$ If we carry out the oxidation or reduction of H2O using the generator cell in Figure 11.3.7, then we can selectively dispense H3O+ or OH– into a solution that contains the analyte. The resulting reaction is identical to that in an acid–base titration. Coulometric acid–base titrations have been used for the analysis of strong and weak acids and bases, in both aqueous and non-aqueous matrices. Table 11.3.3 summarizes several examples of coulometric titrations that involve acid–base, complexation, and precipitation reactions.
Table 11.3.3. Representative Coulometric Titrations Using Acid–Base, Complexation, and Precipitation Reactions
type of reaction | mediator | electrochemically generated reagent and reaction | representative application
acid–base | H2O | $6 \mathrm{H}_{2} \mathrm{O} \rightleftharpoons 4 \textbf{H}_\mathbf{3} \textbf{O}^\mathbf{+}+\text{ O}_{2}+4 e^{-}$ | $\textbf{OH}^\mathbf{-}(a q)+\text{ H}_{3} \mathrm{O}^{+}(a q) \rightleftharpoons 2 \mathrm{H}_{2} \mathrm{O}(l)$
acid–base | H2O | $2 \mathrm{H}_{2} \mathrm{O}+2 e^{-}\rightleftharpoons 2 \textbf{OH}^\mathbf{-}+\text{ H}_{2}$ | $\textbf{H}_\mathbf{3} \textbf{O}^\mathbf{+}(a q)+\text{ OH}^{-}(a q) \rightleftharpoons 2 \mathrm{H}_{2} \mathrm{O}(l)$
complexation | HgNH3Y2– (Y = EDTA) | $\mathrm{HgNH}_{3} \mathrm{Y}^{2-}+\text{ NH}_{4}^{+} + 2 e^{-} \rightleftharpoons \textbf{HY}^\mathbf{3-}+\text{ Hg}+2 \mathrm{NH}_{3}$ | $\mathbf{Ca}^\mathbf{2+}(a q)+ \text{ HY}^{3-}(a q)+ \text{ H}_{2} \text{O}(l)\rightleftharpoons \text{CaY}^{2-}(a q)+ \text{ H}_{3} \text{O}^{+}(a q)$
precipitation | Ag | $\mathrm{Ag} \rightleftharpoons \textbf{ Ag}^\mathbf{+}+e^{-}$ | $\mathbf{I}^\mathbf{-}(a q)+\text{ Ag}^{+}(a q) \rightleftharpoons \mathrm{AgI}(s)$
precipitation | Hg | $2 \mathrm{Hg} \rightleftharpoons \textbf{Hg}_\mathbf{2}^\mathbf{2+}+2 e^{-}$ | $2 \textbf{Cl}^\mathbf{-}(a q)+\text{ Hg}_{2}^{2+}(a q) \rightleftharpoons \text{ Hg}_{2} \mathrm{Cl}_{2}(s)$
precipitation | $\text{Fe(CN)}_6^{3-}$ | $\mathrm{Fe}(\mathrm{CN})_{6}^{3-}+e^{-}\rightleftharpoons \textbf{ Fe(CN)}_\mathbf{6}^\mathbf{4-}$ | $3 \mathbf{Zn}^\mathbf{2+}(a q)+ 2\text{K}^{+}(a q) +2 \text{Fe(CN)}_{6}^{4-}(a q) \rightleftharpoons \text{K}_{2} \text{Zn}_{3}\left[\text{Fe(CN)}_{6}\right]_{2}(s)$
Note: The electrochemically generated reagent and the analyte are shown in bold.

In comparison to a conventional titration, a coulometric titration has two important advantages. The first advantage is that electrochemically generating a titrant allows us to use a reagent that is unstable. Although we cannot prepare and store a solution of a highly reactive reagent, such as Ag2+ or Mn3+, we can generate them electrochemically and use them in a coulometric titration. Second, because it is relatively easy to measure a small quantity of charge, we can use a coulometric titration to determine an analyte whose concentration is too small for a conventional titration.

Quantitative Calculations

The absolute amount of analyte in a coulometric analysis is determined using Faraday’s law (Equation \ref{11.1}) and the total charge given by Equation \ref{11.2} or by Equation \ref{11.3}. The following example shows the calculations for a typical coulometric analysis.

Example 11.3.1

To determine the purity of a sample of Na2S2O3, a sample is titrated coulometrically using $\text{I}^-$ as a mediator and $\text{I}_3^-$ as the titrant. A sample weighing 0.1342 g is transferred to a 100-mL volumetric flask and diluted to volume with distilled water. A 10.00-mL portion is transferred to an electrochemical cell along with 25 mL of 1 M KI, 75 mL of a pH 7.0 phosphate buffer, and several drops of a starch indicator solution. Electrolysis at a constant current of 36.45 mA requires 221.8 s to reach the starch indicator endpoint. Determine the sample’s purity.
Solution

As shown in Table 11.3.2, the coulometric titration of $\text{S}_2 \text{O}_3^{2-}$ with $\text{I}_3^-$ is $2 \mathrm{S}_{2} \mathrm{O}_{3}^{2-}(a q)+\text{ I}_{3}^{-}(a q)\rightleftharpoons \text{ S}_{4} \mathrm{O}_{6}^{2-}(a q)+3 \mathrm{I}^{-}(a q) \nonumber$ The oxidation of $\text{S}_2 \text{O}_3^{2-}$ to $\text{S}_4 \text{O}_6^{2-}$ requires one electron per $\text{S}_2 \text{O}_3^{2-}$ (n = 1). Combining Equation \ref{11.1} and Equation \ref{11.2}, and solving for the moles and grams of Na2S2O3 gives $N_{A} =\frac{i t_{e}}{n F}=\frac{(0.03645 \text{ A})(221.8 \text{ s})}{\left(\frac{1 \text{ mol } e^{-}}{\text{mol Na}_{2} \mathrm{S}_{2} \mathrm{O}_{3}}\right)\left(\frac{96487 \text{ C}}{\text{mol } e^{-}}\right)} =8.379 \times 10^{-5} \text{ mol Na}_{2} \mathrm{S}_{2} \mathrm{O}_{3} \nonumber$ This is the amount of Na2S2O3 in a 10.00-mL portion of a 100-mL sample; thus, there are 0.1325 grams of Na2S2O3 in the original sample. The sample’s purity, therefore, is $\frac{0.1325 \text{ g} \text{ Na}_{2} \mathrm{S}_{2} \mathrm{O}_{3}}{0.1342 \text{ g} \text { sample }} \times 100=98.73 \% \text{ w} / \text{w } \mathrm{Na}_{2} \mathrm{S}_{2} \mathrm{O}_{3} \nonumber$ Note that for Equation \ref{11.1} and Equation \ref{11.2} it does not matter whether $\text{S}_2 \text{O}_3^{2-}$ is oxidized at the working electrode or is oxidized by $\text{I}_3^-$.

Exercise 11.3.1

To analyze a brass alloy, a 0.442-g sample is dissolved in acid and diluted to volume in a 500-mL volumetric flask. Electrolysis of a 10.00-mL sample at –0.3 V versus a SCE reduces Cu2+ to Cu, requiring a total charge of 16.11 C. Adjusting the potential to –0.6 V versus a SCE and completing the electrolysis requires 0.442 C to reduce Pb2+ to Pb. Report the %w/w Cu and Pb in the alloy.

Answer

The reduction of Cu2+ to Cu requires two electrons per mole of Cu (n = 2). Using Equation \ref{11.1}, we calculate the moles and the grams of Cu in the portion of sample being analyzed. $N_{Cu}=\frac{Q}{n F}=\frac{16.11 \text{ C}}{\frac{2 \text{ mol } e^{-}}{\mathrm{mol} \text{ Cu}} \times \frac{96487 \text{ C}}{\text{ mol } e^{-}}}=8.348 \times 10^{-5} \text{ mol Cu} \nonumber$ $8.348 \times 10^{-5} \text{ mol Cu} \times \frac{63.55 \text{ g Cu} }{\text{mol Cu}}=5.305 \times 10^{-3} \text{ g Cu} \nonumber$ This is the Cu from a 10.00 mL portion of a 500.0 mL sample; thus, the %w/w copper in the original sample of brass is $\frac{5.305 \times 10^{-3} \text{ g Cu} \times \frac{500.0 \text{ mL}}{10.00 \text{ mL}}}{0.442 \text{ g sample} } \times 100=60.0 \% \text{ w/w Cu} \nonumber$ For lead, we follow the same process; thus $N_{\mathrm{Pb}}=\frac{Q}{n F}=\frac{0.442 \text{ C}}{\frac{2 \text{ mol } e^-}{\text{mol Pb}} \times \frac{96487 \text{ C}}{\text{mol } e^{-}}}=2.29 \times 10^{-6} \text{ mol Pb} \nonumber$ $2.29 \times 10^{-6} \text{ mol Pb}\times \frac{207.2 \text{ g Pb} }{\text{mol Pb} }=4.75 \times 10^{-4} \text{ g Pb} \nonumber$ $\frac{4.75 \times 10^{-4} \text{ g Pb} \times \frac{500.0 \text{ mL}}{10.00 \text{ mL}}}{0.442 \text{ g sample}} \times 100=5.37 \% \text{ w/w Pb} \nonumber$

Representative Method 11.3.1: Determination of Dichromate by a Coulometric Redox Titration

The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of $\text{Cr}_2 \text{O}_7^{2-}$ provides an instructive example of a typical procedure.
The description here is based on Bassett, J.; Denney, R. C.; Jeffery, G. H.; Mendham, J. Vogel’s Textbook of Quantitative Inorganic Analysis, Longman: London, 1978, p. 559–560.

Description of the Method

The concentration of $\text{Cr}_2 \text{O}_7^{2-}$ in a sample is determined by a coulometric redox titration using Fe3+ as a mediator and electrogenerated Fe2+ as the titrant. The endpoint of the titration is determined potentiometrically.

Procedure

The electrochemical cell consists of a Pt working electrode and a Pt counter electrode placed in separate cells connected by a porous glass disk. Fill the counter electrode’s cell with 0.2 M Na2SO4, keeping the level above that of the solution in the working electrode’s cell. Connect a platinum electrode and a tungsten electrode to a potentiometer so that you can measure the working electrode’s potential during the analysis. Prepare a mediator solution of approximately 0.3 M NH4Fe(SO4)2. Add 5.00 mL of sample, 2 mL of 9 M H2SO4, and 10–25 mL of the mediator solution to the working electrode’s cell, and add distilled water as needed to cover the electrodes. Bubble pure N2 through the solution for 15 min to remove any O2 that is present. Maintain the flow of N2 during the electrolysis, turning it off momentarily when measuring the potential. Stir the solution using a magnetic stir bar. Adjust the current to 15–50 mA and begin the titration. Periodically stop the titration and measure the potential. Construct a titration curve of potential versus time and determine the time needed to reach the equivalence point.

Questions

1. Is the platinum working electrode the cathode or the anode?

Reduction of Fe3+ to Fe2+ occurs at the working electrode, making it the cathode in this electrochemical cell.

2. Why is it necessary to remove dissolved oxygen by bubbling N2 through the solution?

Any dissolved O2 will oxidize Fe2+ back to Fe3+, as shown by the following reaction. $4\text{Fe}^{2+}(aq) + \text{ O}_2 + \text{ 4H}_3\text{O}^+(aq) \rightleftharpoons 4\text{Fe}^{3+}(aq) + 6\text{H}_2\text{O}(l) \nonumber$ To maintain current efficiency, all the Fe2+ must react with $\text{Cr}_2 \text{O}_7^{2-}$. The reaction of Fe2+ with O2 means that more of the Fe3+ mediator is needed, increasing the time to reach the titration’s endpoint. As a result, we report the presence of too much $\text{Cr}_2 \text{O}_7^{2-}$.

3. What is the effect on the analysis if the NH4Fe(SO4)2 is contaminated with trace amounts of Fe2+? How can you compensate for this source of Fe2+?

There are two sources of Fe2+: that generated from the mediator and that present as an impurity. Because the total amount of Fe2+ that reacts with $\text{Cr}_2 \text{O}_7^{2-}$ remains unchanged, less Fe2+ is needed from the mediator. This decreases the time needed to reach the titration’s end point. Because the apparent current efficiency is greater than 100%, the reported concentration of $\text{Cr}_2 \text{O}_7^{2-}$ is too small. We can remove trace amounts of Fe2+ from the mediator’s solution by adding H2O2 and heating at 50–70 °C until the evolution of O2 ceases, converting the Fe2+ to Fe3+. Alternatively, we can complete a blank titration to correct for any impurities of Fe2+ in the mediator.

4. Why is the level of solution in the counter electrode’s cell maintained above the solution level in the working electrode’s cell?

This prevents the solution that contains the analyte from entering the counter electrode’s cell.
The oxidation of H2O at the counter electrode produces O2, which can react with the Fe2+ generated at the working electrode or the Cr3+ resulting from the reaction of Fe2+ and $\text{Cr}_2 \text{O}_7^{2-}$. In either case, the result is a positive determinate error.

Characterization Applications

One useful application of coulometry is determining the number of electrons involved in a redox reaction. To make the determination, we complete a controlled-potential coulometric analysis using a known amount of a pure compound. The total charge at the end of the electrolysis is used to determine the value of n using Faraday’s law (Equation \ref{11.1}).

Example 11.3.2

A 0.3619-g sample of tetrachloropicolinic acid, C6HNO2Cl4, is dissolved in distilled water, transferred to a 1000-mL volumetric flask, and diluted to volume. An exhaustive controlled-potential electrolysis of a 10.00-mL portion of this solution at a spongy silver cathode requires 5.374 C of charge. What is the value of n for this reduction reaction?

Solution

The 10.00-mL portion of sample contains 3.619 mg, or $1.39 \times 10^{-5}$ mol of tetrachloropicolinic acid. Solving Equation \ref{11.1} for n and making appropriate substitutions gives $n=\frac{Q}{F N_{A}}=\frac{5.374 \text{ C}}{\left(96487 \text{ C/mol } e^{-}\right)\left(1.39 \times 10^{-5} \text{ mol } \mathrm{C}_{6} \mathrm{HNO}_{2} \mathrm{Cl}_{4}\right)} = 4.01 \text{ mol e}^-/\text{mol } \mathrm{C}_{6} \mathrm{HNO}_{2} \mathrm{Cl}_{4} \nonumber$ Thus, reducing a molecule of tetrachloropicolinic acid requires four electrons. The overall reaction results in the selective formation of 3,6-dichloropicolinic acid.

Evaluation

Scale of Operation

A coulometric method of analysis can analyze a small absolute amount of an analyte. In controlled-current coulometry, for example, the moles of analyte consumed during an exhaustive electrolysis are given by Equation \ref{11.11}. An electrolysis using a constant current of 100 μA for 100 s, for example, consumes only $1 \times 10^{-7}$ mol of analyte if n = 1. For an analyte with a molecular weight of 100 g/mol, $1 \times 10^{-7}$ mol of analyte corresponds to only 10 μg. The concentration of analyte in the electrochemical cell, however, must be sufficient to allow an accurate determination of the endpoint. When using a visual end point, the smallest concentration of analyte that can be determined by a coulometric titration is approximately $10^{-4}$ M. As is the case for a conventional titration, a coulometric titration using a visual end point is limited to major and minor analytes. A coulometric titration to a preset potentiometric endpoint is feasible even if the analyte’s concentration is as small as $10^{-7}$ M, extending the analysis to trace analytes [Curran, D. J. “Constant-Current Coulometry,” in Kissinger, P. T.; Heineman, W. R., eds., Laboratory Techniques in Electroanalytical Chemistry, Marcel Dekker Inc.: New York, 1984, pp. 539–568].

Accuracy

In controlled-current coulometry, accuracy is determined by the accuracy with which we can measure current and time, and by the accuracy with which we can identify the end point. The maximum measurement errors for current and time are about ±0.01% and ±0.1%, respectively. The maximum end point error for a coulometric titration is at least as good as that for a conventional titration, and is often better when using small quantities of reagents. Together, these measurement errors suggest that an accuracy of 0.1%–0.3% is feasible.
The limiting factor in many analyses, therefore, is current efficiency. A current efficiency of more than 99.5% is fairly routine, and it often exceeds 99.9%. In controlled-potential coulometry, accuracy is determined by current efficiency and by the determination of charge. If the sample is free of interferents that are easier to oxidize or reduce than the analyte, a current efficiency of greater than 99.9% is routine. When an interferent is present, it can often be eliminated by applying a potential where the exhaustive electrolysis of the interferents is possible without the simultaneous electrolysis of the analyte. Once the interferent is removed the potential is switched to a level where electrolysis of the analyte is feasible. The limiting factor in the accuracy of many controlled-potential coulometric methods of analysis is the determination of charge. With electronic integrators the total charge is determined with an accuracy of better than 0.5%. If we cannot obtain an acceptable current efficiency, an electrogravimetric analysis is possible if the analyte—and only the analyte—forms a solid deposit on the working electrode. In this case the working electrode is weighed before beginning the electrolysis and reweighed when the electrolysis is complete. The difference in the electrode’s weight gives the analyte’s mass. Precision Precision is determined by the uncertainties in measuring current, time, and the endpoint in controlled-current coulometry or the charge in controlled-potential coulometry. Precisions of ±0.1–0.3% are obtained routinely in coulometric titrations, and precisions of ±0.5% are typical for controlled-potential coulometry. Sensitivity For a coulometric method of analysis, the calibration sensitivity is equivalent to nF in Equation \ref{11.1}. In general, a coulometric method is more sensitive if the analyte’s oxidation or reduction involves a larger value of n. Selectivity Selectivity in controlled-potential and controlled-current coulometry is improved by adjusting solution conditions and by selecting the electrolysis potential. In controlled-potential coulometry, the potential is fixed by the potentiostat, and in controlled-current coulometry the potential is determined by the redox reaction with the mediator. In either case, the ability to control the electrolysis potential affords some measure of selectivity. By adjusting pH or by adding a complexing agent, it is possible to shift the potential at which an analyte or interferent undergoes oxidation or reduction. For example, the standard-state reduction potential for Zn2+ is –0.762 V versus the SHE. If we add a solution of NH3, forming $\text{Zn(NH}_3\text{)}_4^{2+}$, the standard state potential shifts to –1.04 V. This provides an additional means for controlling selectivity when an analyte and an interferent undergo electrolysis at similar potentials. Time, Cost, and Equipment Controlled-potential coulometry is a relatively time consuming analysis, with a typical analysis requiring 30–60 min. Coulometric titrations, on the other hand, require only a few minutes, and are easy to adapt to an automated analysis. Commercial instrumentation for both controlled-potential and controlled-current coulometry is available, and is relatively inexpensive. Low cost potentiostats and constant-current sources are available for approximately \$1000.
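As a compact numerical check of the scale-of-operation estimate given earlier in this evaluation, the following Python sketch repeats the calculation from the text for a constant current of 100 μA applied for 100 s with n = 1 and a nominal molar mass of 100 g/mol. Only the variable names and the print statements are additions.

```python
# Reproduces the scale-of-operation estimate from the text:
# a constant current of 100 uA applied for 100 s with n = 1.

F = 96487            # Faraday's constant, C/mol e-
i = 100e-6           # constant current, A
t = 100.0            # electrolysis time, s
n = 1                # electrons per mole of analyte
molar_mass = 100.0   # g/mol, the nominal value used in the text

Q = i * t                            # total charge, C
moles = Q / (n * F)                  # moles of analyte consumed
mass_ug = moles * molar_mass * 1e6   # mass consumed, micrograms

print(f"charge: {Q:.2e} C")
print(f"moles consumed: {moles:.2e} mol")   # about 1e-7 mol
print(f"mass consumed: {mass_ug:.1f} ug")   # about 10 ug
```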
In voltammetry we apply a time-dependent potential to an electrochemical cell and measure the resulting current as a function of that potential. We call the resulting plot of current versus applied potential a voltammogram, and it is the electrochemical equivalent of a spectrum in spectroscopy, providing quantitative and qualitative information about the species involved in the oxidation or reduction reaction [Maloy, J. T. J. Chem. Educ. 1983, 60, 285–289]. The earliest voltammetric technique is polarography, developed by Jaroslav Heyrovsky in the early 1920s—an achievement for which he was awarded the Nobel Prize in Chemistry in 1959. Since then, many different forms of voltammetry have been developed, a few of which are highlighted in Figure 11.1.6. Before examining these techniques and their applications in more detail, we must first consider the basic experimental design for voltammetry and the factors influencing the shape of the resulting voltammogram. For an on-line introduction to much of the material in this section, see Analytical Electrochemistry: The Basic Concepts by Richard S. Kelly, a resource that is part of the Analytical Sciences Digital Library.

Voltammetric Measurements

Although early voltammetric methods used only two electrodes, a modern voltammeter makes use of a three-electrode potentiostat, such as that shown in Figure 11.1.5. In voltammetry we apply a time-dependent potential excitation signal to the working electrode—changing its potential relative to the fixed potential of the reference electrode—and measure the current that flows between the working electrode and the auxiliary electrode. The auxiliary electrode generally is a platinum wire and the reference electrode usually is a SCE or a Ag/AgCl electrode. Figure 11.1.5 shows an example of a manual three-electrode potentiostat. Although a modern potentiostat uses very different circuitry, you can use Figure 11.1.5 and the accompanying discussion to understand how we can control the potential of the working electrode and measure the resulting current. For the working electrode we can choose among several different materials, including mercury, platinum, gold, silver, and carbon. The earliest voltammetric techniques used a mercury working electrode. Because mercury is a liquid, the working electrode usually is a drop suspended from the end of a capillary tube. In the hanging mercury drop electrode, or HMDE, we extrude the drop of Hg by rotating a micrometer screw that pushes the mercury from a reservoir through a narrow capillary tube (Figure 11.4.1 a). In the dropping mercury electrode, or DME, mercury drops form at the end of the capillary tube as a result of gravity (Figure 11.4.1 b). Unlike the HMDE, the mercury drop of a DME grows continuously—as mercury flows from the reservoir under the influence of gravity—and has a finite lifetime of several seconds. At the end of its lifetime the mercury drop is dislodged, either manually or on its own, and is replaced by a new drop. The static mercury drop electrode, or SMDE, uses a solenoid-driven plunger to control the flow of mercury (Figure 11.4.1 c). Activation of the solenoid momentarily lifts the plunger, allowing mercury to flow through the capillary, forming a single, hanging Hg drop. Repeated activation of the solenoid produces a series of Hg drops. In this way the SMDE may be used as either a HMDE or a DME. There is one additional type of mercury electrode: the mercury film electrode.
A solid electrode—typically carbon, platinum, or gold—is placed in a solution of Hg2+ and held at a potential where the reduction of Hg2+ to Hg is favorable, depositing a thin film of mercury on the solid electrode’s surface. Mercury has several advantages as a working electrode. Perhaps its most important advantage is its high overpotential for the reduction of H3O+ to H2, which makes accessible potentials as negative as –1 V versus the SCE in acidic solutions and –2 V versus the SCE in basic solutions (Figure 11.4.2 ). A species such as Zn2+, which is difficult to reduce at other electrodes without simultaneously reducing H3O+, is easy to reduce at a mercury working electrode. Other advantages include the ability of metals to dissolve in mercury—which results in the formation of an amalgam—and the ability to renew the surface of the electrode by extruding a new drop. One limitation to mercury as a working electrode is the ease with which it is oxidized. Depending on the solvent, a mercury electrode can not be used at potentials more positive than approximately –0.3 V to +0.4 V versus the SCE. Solid electrodes constructed using platinum, gold, silver, or carbon may be used over a range of potentials, including potentials that are negative and positive with respect to the SCE (Figure 11.4.2 ). For example, the potential window for a Pt electrode extends from approximately +1.2 V to –0.2 V versus the SCE in acidic solutions, and from +0.7 V to –1 V versus the SCE in basic solutions. A solid electrode can replace a mercury electrode for many voltammetric analyses that require negative potentials, and is the electrode of choice at more positive potentials. Except for the carbon paste electrode, a solid electrode is fashioned into a disk and sealed into the end of an inert support with an electrical lead (Figure 11.4.3 ). The carbon paste electrode is made by filling the cavity at the end of the inert support with a paste that consists of carbon particles and a viscous oil. Solid electrodes are not without problems, the most important of which is the ease with which the electrode’s surface is altered by the adsorption of a solution species or by the formation of an oxide layer. For this reason a solid electrode needs frequent reconditioning, either by applying an appropriate potential or by polishing. A typical arrangement for a voltammetric electrochemical cell is shown in Figure 11.4.4 . In addition to the working electrode, the reference electrode, and the auxiliary electrode, the cell also includes a N2-purge line for removing dissolved O2, and an optional stir bar. Electrochemical cells are available in a variety of sizes, allowing the analysis of solution volumes ranging from more than 100 mL to as small as 50 μL. Current In Voltammetry When we oxidize an analyte at the working electrode, the resulting electrons pass through the potentiostat to the auxiliary electrode, reducing the solvent or some other component of the solution matrix. If we reduce the analyte at the working electrode, the current flows from the auxiliary electrode to the cathode. In either case, the current from the redox reactions at the working electrode and the auxiliary electrodes is called a faradaic current. In this section we consider the factors affecting the magnitude of the faradaic current, as well as the sources of any non-faradaic currents. Sign Conventions Because the reaction of interest occurs at the working electrode, we describe the faradaic current using this reaction. 
A faradaic current due to the analyte’s reduction is a cathodic current, and its sign is positive. An anodic current results from the analyte’s oxidation at the working electrode, and its sign is negative.

Influence of Applied Potential on the Faradaic Current

As an example, let’s consider the faradaic current when we reduce $\text{Fe(CN)}_6^{3-}$ to $\text{Fe(CN)}_6^{4-}$ at the working electrode. The relationship between the concentrations of $\text{Fe(CN)}_6^{3-}$, the concentration of $\text{Fe(CN)}_6^{4-}$, and the potential is given by the Nernst equation $E=+0.356 \text{ V}-0.05916 \log \frac{\left[\mathrm{Fe}(\mathrm{CN})_{6}^{4-}\right]_{x=0}}{\left[\mathrm{Fe}(\mathrm{CN})_{6}^{3-}\right]_{x=0}} \nonumber$ where +0.356 V is the standard-state potential for the $\text{Fe(CN)}_6^{3-}$/$\text{Fe(CN)}_6^{4-}$ redox couple, and x = 0 indicates that the concentrations of $\text{Fe(CN)}_6^{3-}$ and $\text{Fe(CN)}_6^{4-}$ are those at the surface of the working electrode. We use surface concentrations instead of bulk concentrations because the equilibrium position for the redox reaction $\mathrm{Fe}(\mathrm{CN})_{6}^{3-}(a q)+e^{-}\rightleftharpoons\mathrm{Fe}(\mathrm{CN})_{6}^{4-}(a q) \nonumber$ is established at the electrode’s surface. Let’s assume we have a solution for which the initial concentration of $\text{Fe(CN)}_6^{3-}$ is 1.0 mM and that $\text{Fe(CN)}_6^{4-}$ is absent. Figure 11.4.5 shows the ladder diagram for this solution. If we apply a potential of +0.530 V to the working electrode, the concentrations of $\text{Fe(CN)}_6^{3-}$ and $\text{Fe(CN)}_6^{4-}$ at the surface of the electrode are unaffected, and no faradaic current is observed. If we switch the potential to +0.356 V some of the $\text{Fe(CN)}_6^{3-}$ at the electrode’s surface is reduced to $\text{Fe(CN)}_6^{4-}$ until we reach a condition where $\left[\mathrm{Fe}(\mathrm{CN})_{6}^{3-}\right]_{x=0}=\left[\mathrm{Fe}(\mathrm{CN})_{6}^{4-}\right]_{x=0}=0.50 \text{ mM} \nonumber$ This is the first of the five important principles of electrochemistry outlined in Chapter 11.1: the electrode’s potential determines the analyte’s form at the electrode’s surface. If this is all that happens after we apply the potential, then there would be a brief surge of faradaic current that quickly returns to zero, which is not the most interesting of results. Although the concentrations of $\text{Fe(CN)}_6^{3-}$ and $\text{Fe(CN)}_6^{4-}$ at the electrode surface are 0.50 mM, their concentrations in bulk solution remain unchanged. This is the second of the five important principles of electrochemistry outlined in Chapter 11.1: the analyte’s concentration at the electrode may not be the same as its concentration in bulk solution. Because of this difference in concentration, there is a concentration gradient between the solution at the electrode’s surface and the bulk solution. This concentration gradient creates a driving force that transports $\text{Fe(CN)}_6^{4-}$ away from the electrode and that transports $\text{Fe(CN)}_6^{3-}$ to the electrode (Figure 11.4.6). As the $\text{Fe(CN)}_6^{3-}$ arrives at the electrode it, too, is reduced to $\text{Fe(CN)}_6^{4-}$. A faradaic current continues to flow until there is no difference between the concentrations of $\text{Fe(CN)}_6^{3-}$ and $\text{Fe(CN)}_6^{4-}$ at the electrode and their concentrations in bulk solution.
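The following Python sketch restates this idea numerically, using the Nernst equation to calculate the surface concentrations of Fe(CN)6^3– and Fe(CN)6^4– that correspond to a given applied potential. The standard-state potential, the 1.0 mM total concentration, and the applied potentials of +0.530 V and +0.356 V come from the example above; the third potential and the function name are arbitrary choices added for illustration, and the calculation assumes the total surface concentration remains 1.0 mM, as in the example.

```python
# Surface concentrations fixed by the applied potential through the Nernst
# equation for Fe(CN)6^3- + e- <=> Fe(CN)6^4- (E_std = +0.356 V, n = 1, 25 C).

E_STD = 0.356      # standard-state potential, V
C_TOTAL = 1.0e-3   # total iron concentration at the electrode surface, M
n = 1              # electrons transferred

def surface_concentrations(E_applied):
    """Return ([Fe(CN)6^3-]_x=0, [Fe(CN)6^4-]_x=0) for an applied potential."""
    # Nernst: E = E_std - (0.05916/n) * log10([Fe(CN)6^4-] / [Fe(CN)6^3-]),
    # rearranged here for the concentration ratio at the electrode surface.
    ratio = 10 ** ((E_STD - E_applied) * n / 0.05916)   # [reduced]/[oxidized]
    ox = C_TOTAL / (1 + ratio)
    red = C_TOTAL - ox
    return ox, red

for E in (0.530, 0.356, 0.182):   # the last value is an added illustration
    ox, red = surface_concentrations(E)
    print(f"E = {E:+.3f} V: [Fe(CN)6^3-] = {ox:.2e} M, [Fe(CN)6^4-] = {red:.2e} M")
```

At +0.530 V the surface composition is essentially unchanged, at +0.356 V the two forms are present in equal amounts, and at a sufficiently negative potential nearly all of the Fe(CN)6^3– at the surface is reduced, which is the condition assumed in the discussion of mass transport that follows.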
Although the potential at the working electrode determines if a faradaic current flows, the magnitude of the current is determined by the rate of the resulting oxidation or reduction reaction. Two factors contribute to the rate of the electrochemical reaction: the rate at which the reactants and products are transported to and from the electrode—what we call mass transport—and the rate at which electrons pass between the electrode and the reactants and products in solution. This is the fourth of the five important principles of electrochemistry outlined in Chapter 11.1: current is a measure of rate.

Influence of Mass Transport on the Faradaic Current

There are three modes of mass transport that affect the rate at which reactants and products move toward or away from the electrode surface: diffusion, migration, and convection. Diffusion occurs whenever the concentration of an ion or a molecule at the surface of the electrode is different from that in bulk solution. If we apply a potential sufficient to completely reduce $\text{Fe(CN)}_6^{3-}$ at the electrode surface, the result is a concentration gradient similar to that shown in Figure 11.4.7. The region of solution over which diffusion occurs is the diffusion layer. In the absence of other modes of mass transport, the width of the diffusion layer, $\delta$, increases with time as the $\text{Fe(CN)}_6^{3-}$ must diffuse from an increasingly greater distance. Convection occurs when we mix the solution, which carries reactants toward the electrode and removes products from the electrode. The most common form of convection is stirring the solution with a stir bar; other methods include rotating the electrode and incorporating the electrode into a flow-cell. The final mode of mass transport is migration, which occurs when a charged particle in solution is attracted to or repelled from an electrode that carries a surface charge. If the electrode carries a positive charge, for example, an anion will move toward the electrode and a cation will move toward the bulk solution. Unlike diffusion and convection, migration affects only the mass transport of charged particles. The movement of material to and from the electrode surface is a complex function of all three modes of mass transport. In the limit where diffusion is the only significant form of mass transport, the current in a voltammetric cell is equal to $i=\frac{n F A D\left(C_{\text {bulk }}-C_{x=0}\right)}{\delta} \label{11.1}$ where n is the number of electrons in the redox reaction, F is Faraday’s constant, A is the area of the electrode, D is the diffusion coefficient for the species reacting at the electrode, Cbulk and Cx = 0 are its concentrations in bulk solution and at the electrode surface, and $\delta$ is the thickness of the diffusion layer. For Equation \ref{11.1} to be valid, convection and migration must not interfere with the formation of a diffusion layer. We can eliminate migration by adding a high concentration of an inert supporting electrolyte. Because ions of similar charge equally are attracted to or repelled from the surface of the electrode, each has an equal probability of undergoing migration. A large excess of an inert electrolyte ensures that few reactants or products experience migration. Although it is easy to eliminate convection by not stirring the solution, there are experimental designs where we cannot avoid convection, either because we must stir the solution or because we are using an electrochemical flow cell.
Fortunately, as shown in Figure 11.4.8 , the dynamics of a fluid moving past an electrode results in a small diffusion layer—typically 1–10 μm in thickness—in which the rate of mass transport by convection drops to zero. Effect of Electron Transfer Kinetics on the Faradaic Current The rate of mass transport is one factor that influences the current in voltammetry. The ease with which electrons move between the electrode and the species that reacts at the electrode also affects the current. When electron transfer kinetics are fast, the redox reaction is at equilibrium. Under these conditions the redox reaction is electrochemically reversible and the Nernst equation applies. If the electron transfer kinetics are sufficiently slow, the concentration of reactants and products at the electrode surface—and thus the magnitude of the faradaic current—are not what is predicted by the Nernst equation. In this case the system is electrochemically irreversible. Charging Currents In addition to the faradaic current from a redox reaction, the current in an electrochemical cell includes other, nonfaradaic sources. Suppose the charge on an electrode is zero and we suddenly change its potential so that the electrode’s surface acquires a positive charge. Cations near the electrode’s surface will respond to this positive charge by migrating away from the electrode; anions, on the other hand, will migrate toward the electrode. This migration of ions occurs until the electrode’s positive surface charge and the negative charge of the solution near the electrode are equal. Because the movement of ions and the movement of electrons are indistinguishable, the result is a small, short-lived nonfaradaic current that we call the charging current. Every time we change the electrode’s potential, a transient charging current flows. The migration of ions in response to the electrode’s surface charge leads to the formation of a structured electrode-solution interface that we call the electrical double layer, or EDL. When we change an electrode’s potential, the charging current is the result of a restructuring of the EDL. The exact structure of the electrical double layer is not important in the context of this text, but you can consult this chapter’s additional resources for additional information. Residual Current Even in the absence of analyte, a small, measurable current flows through an electrochemical cell. This residual current has two components: a faradaic current due to the oxidation or reduction of trace impurities and a nonfaradaic charging current. Methods for discriminating between the analyte’s faradaic current and the residual current are discussed later in this chapter. Shape of Voltammograms The shape of a voltammogram is determined by several experimental factors, the most important of which are how we measure the current and whether convection is included as a means of mass transport. As shown in Figure 11.4.9 , despite an abundance of different voltammetric techniques, several of which are discussed in this chapter, there are only three common shapes for voltammograms. For the voltammogram in Figure 11.4.9 a, the current increases from a background residual current to a limiting current, il. Because the faradaic current is inversely proportional to $\delta$ (Equation \ref{11.1}), a limiting current occurs only if the thickness of the diffusion layer remains constant because we are stirring the solution (see Figure 11.4.8 ). 
In the absence of convection the diffusion layer increases with time (see Figure 11.4.7 ). As shown in Figure 11.4.9 b, the resulting voltammogram has a peak current instead of a limiting current. For the voltammograms in Figure 11.4.9 a and Figure 11.4.9 b, we measure the current as a function of the applied potential. We also can monitor the change in current, $\Delta i$, following a change in potential. The resulting voltammogram, shown in Figure 11.4.9 c, also has a peak current. Quantitative and Qualitative Aspects of Voltammetry Earlier we described a voltammogram as the electrochemical equivalent of a spectrum in spectroscopy. In this section we consider how we can extract quantitative and qualitative information from a voltammogram. For simplicity we will limit our treatment to voltammograms similar to Figure 11.4.9 a. Determining Concentration Let’s assume that the redox reaction at the working electrode is $O+n e^{-} \rightleftharpoons R \label{11.2}$ where O is the analyte’s oxidized form and R is its reduced form. Let’s also assume that only O initially is present in bulk solution and that we are stirring the solution. When we apply a potential that results in the reduction of O to R, the current depends on the rate at which O diffuses through the fixed diffusion layer shown in Figure 11.4.7 . Using Equation \ref{11.1}, the current, i, is $i=K_{O}\left([O]_{\text {bulk }}-[O]_{x=0}\right) \label{11.3}$ where KO is a constant equal to $n F A D_O / \delta$. When we reach the limiting current, il, the concentration of O at the electrode surface is zero and Equation \ref{11.3} simplifies to $i_{l}=K_{O}[O]_{\mathrm{bulk}} \label{11.4}$ Equation \ref{11.4} shows us that the limiting current is a linear function of the concentration of O in bulk solution. To determine the value of KO we can use any of the standardization methods covered in Chapter 5. Equations similar to Equation \ref{11.4} can be developed for the other two types of voltammograms shown in Figure 11.4.9 . Determining the Standard-State Potential To extract the standard-state potential from a voltammogram, we need to rewrite the Nernst equation for reaction \ref{11.2} $E=E_{O / R}^{\circ}-\frac{0.05916}{n} \log \frac{[R]_{x=0}}{[O]_{x=0}} \label{11.5}$ in terms of current instead of the concentrations of O and R. We will do this in several steps. First, we substitute Equation \ref{11.4} into Equation \ref{11.3} and rearrange to give $[O]_{x=0}=\frac{i_{l}-i}{K_{O}} \label{11.6}$ Next, we derive a similar equation for [R]x = 0, by noting that $i=K_{R}\left([R]_{x=0}-[R]_{\mathrm{bulk}}\right) \nonumber$ Because the concentration of [R]bulk is zero—remember our assumption that the initial solution contains only O—we can simplify this equation $i=K_{R}[R]_{x=0} \nonumber$ and solve for [R]x = 0. $[R]_{x=0}=\frac{i}{K_{R}} \label{11.7}$ Now we are ready to finish our derivation. Substituting Equation \ref{11.7} and Equation \ref{11.6} into Equation \ref{11.5} and rearranging leaves us with $E=E_{O / R}^{\circ}-\frac{0.05916}{n} \log \frac{K_{O}}{K_{R}}-\frac{0.05916}{n} \log \frac{i}{i_{l} - i} \label{11.8}$ When the current, i, is half of the limiting current, il, $i=0.5 \times i_{l} \nonumber$ we can simplify Equation \ref{11.8} to $E_{1 / 2}=E_{O / R}^{\circ}-\frac{0.05916}{n} \log \frac{K_{O}}{K_{R}} \label{11.9}$ where E1/2 is the half-wave potential (Figure 11.4.10 ). If KO is approximately equal to KR, which often is the case, then the half-wave potential is equal to the standard-state potential. 
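The working curve implied by Equation \ref{11.8} is easy to exploit numerically: a plot of E versus log[i/(il - i)] is a straight line with a slope of -0.05916/n and an intercept of E1/2. The Python sketch below fits that line to a small set of hypothetical (E, i) points generated for an idealized, electrochemically reversible one-electron wave with E1/2 = +0.356 V and il = 10.0 μA; the data and variable names are illustrative and are not taken from a real voltammogram.

```python
import math

# Extract E1/2 and n from a hydrodynamic voltammogram using Equation 11.8:
# E = E1/2 - (0.05916/n) * log10( i / (i_l - i) ).
# The (E, i) points below are hypothetical values for an idealized reversible
# one-electron wave with E1/2 = +0.356 V and a limiting current of 10.0 uA.

E_volts = [0.415, 0.392, 0.374, 0.356, 0.338, 0.320, 0.297]
i_uA    = [ 0.91,  1.99,  3.33,  5.00,  6.67,  8.01,  9.09]
i_limit = 10.0   # limiting current, uA (the plateau of the voltammogram)

# x = log10(i / (i_l - i)); E versus x is a line with slope -0.05916/n
# and intercept E1/2 for an electrochemically reversible reaction.
x = [math.log10(i / (i_limit - i)) for i in i_uA]
y = E_volts

x_mean = sum(x) / len(x)
y_mean = sum(y) / len(y)
num = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
den = sum((xi - x_mean) ** 2 for xi in x)
slope = num / den
intercept = y_mean - slope * x_mean

n_electrons = -0.05916 / slope
print(f"E1/2 = {intercept:.3f} V (expected +0.356 V)")
print(f"n = {n_electrons:.2f} electrons (expected 1)")
```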
Note that Equation \ref{11.9} is valid only if the redox reaction is electrochemically reversible. Voltammetric Techniques In voltammetry there are three important experimental parameters under our control: how we change the potential applied to the working electrode, when we choose to measure the current, and whether we choose to stir the solution. Not surprisingly, there are many different voltammetric techniques. In this section we consider several important examples. Polarography The first important voltammetric technique to be developed—polarography—uses the dropping mercury electrode shown in Figure 11.4.1 b as the working electrode. As shown in Figure 11.4.11 , the current is measured while applying a linear potential ramp. Although polarography takes place in an unstirred solution, we obtain a limiting current instead of a peak current. When a Hg drop separates from the glass capillary and falls to the bottom of the electrochemical cell, it mixes the solution. Each new Hg drop, therefore, grows into a solution whose composition is identical to the bulk solution. The oscillations in the current are a result of the Hg drop’s growth, which leads to a time-dependent change in the area of the working electrode. The limiting current—which also is called the diffusion current—is measured using either the maximum current, imax, or from the average current, iavg. The relationship between the analyte’s concentration, CA, and the limiting current is given by the Ilkovic equations $i_{\max }=706 n D^{1 / 2} m^{2 / 3} t^{1 / 6} C_{A}=K_{\max } C_{A} \nonumber$ $i_{avg}=607 n D^{1 / 2} m^{2 / 3} t^{1 / 6} C_{A}=K_{\mathrm{avg}} C_{A} \nonumber$ where n is the number of electrons in the redox reaction, D is the analyte’s diffusion coefficient, m is the flow rate of Hg, t is the drop’s lifetime and Kmax and Kavg are constants. The half-wave potential, E1/2, provides qualitative information about the redox reaction. Normal polarography has been replaced by various forms of pulse polarography, several examples of which are shown in Figure 11.4.12 [Osteryoung, J. J. Chem. Educ. 1983, 60, 296–298]. Normal pulse polarography (Figure 11.4.12 a), for example, uses a series of potential pulses characterized by a cycle of time $\tau$, a pulse-time of tp, a pulse potential of $\Delta E_\text{p}$, and a change in potential per cycle of $\Delta E_\text{s}$. Typical experimental conditions for normal pulse polarography are $\tau \approx 1 \text{ s}$, tp ≈ 50 ms, and $\Delta E_\text{s} \approx 2 \text{ mV}$. The initial value of $\Delta E_\text{p} \approx 2 \text{ mV}$, and it increases by ≈ 2 mV with each pulse. The current is sampled at the end of each potential pulse for approximately 17 ms before returning the potential to its initial value. The shape of the resulting voltammogram is similar to Figure 11.4.11 , but without the current oscillations. Because we apply the potential for only a small portion of the drop’s lifetime, there is less time for the analyte to undergo oxidation or reduction and a smaller diffusion layer. As a result, the faradaic current in normal pulse polarography is greater than in the polarography, resulting in better sensitivity and smaller detection limits. In differential pulse polarography (Figure 11.4.12 b) the current is measured twice per cycle: for approximately 17 ms before applying the pulse and for approximately 17 ms at the end of the cycle. The difference in the two currents gives rise to the peak-shaped voltammogram. 
Typical experimental conditions for differential pulse polarography are $\tau \approx 1 \text{ s}$, tp ≈ 50 ms, $\Delta E_\text{p}$ ≈ 50 mV, and $\Delta E_\text{s}$ ≈ 2 mV. The voltammogram for differential pulse polarography is approximately the first derivative of the voltammogram for normal pulse polarography. To see why this is the case, note that the change in current over a fixed change in potential, $\Delta i / \Delta E$, approximates the slope of the voltammogram for normal pulse polarography. You may recall that the first derivative of a function returns the slope of the function at each point. The first derivative of a sigmoidal function is a peak-shaped function. Other forms of pulse polarography include staircase polarography (Figure 11.4.12 c) and square-wave polarography (Figure 11.4.12 d). One advantage of square-wave polarography is that we can make $\tau$ very small—perhaps as small as 5 ms, compared to 1 s for other forms of pulse polarography—which significantly decreases analysis time. For example, suppose we need to scan a potential range of 400 mV. If we use normal pulse polarography with a $\Delta E_\text{s}$ of 2 mV/cycle and a $\tau$ of 1 s/cycle, then we need 200 s to complete the scan. If we use square-wave polarography with a $\Delta E_\text{s}$ of 2 mV/cycle and a $\tau$ of 5 ms/cycle, we can complete the scan in 1 s. At this rate, we can acquire a complete voltammogram using a single drop of Hg! Polarography is used extensively for the analysis of metal ions and inorganic anions, such as $\text{IO}_3^-$ and $\text{NO}_3^-$. We also can use polarography to study organic compounds with easily reducible or oxidizable functional groups, such as carbonyls, carboxylic acids, and carbon-carbon double bonds. Hydrodynamic Voltammetry In polarography we obtain a limiting current because each drop of mercury mixes the solution as it falls to the bottom of the electrochemical cell. If we replace the DME with a solid electrode (see Figure 11.4.3 ), we can still obtain a limiting current if we mechanically stir the solution during the analysis, using either a stir bar or by rotating the electrode. We call this approach hydrodynamic voltammetry. Hydrodynamic voltammetry uses the same potential profiles as in polarography, such as a linear scan (Figure 11.4.11 ) or a differential pulse (Figure 11.4.12 b). The resulting voltammograms are identical to those for polarography, except for the lack of current oscillations from the growth of the mercury drops. Because hydrodynamic voltammetry is not limited to Hg electrodes, it is useful for analytes that undergo oxidation or reduction at more positive potentials. Stripping Voltammetry Another important voltammetric technique is stripping voltammetry, which consists of three related techniques: anodic stripping voltammetry, cathodic stripping voltammetry, and adsorptive stripping voltammetry. Because anodic stripping voltammetry is the more widely used of these techniques, we will consider it in greatest detail. Anodic stripping voltammetry consists of two steps (Figure 11.4.13 ). The first step is a controlled potential electrolysis in which we hold the working electrode—usually a hanging mercury drop or a mercury film electrode—at a cathodic potential sufficient to deposit the metal ion on the electrode. For example, when analyzing Cu2+ the deposition reaction is $\mathrm{Cu}^{2+}+2 e^{-} \rightleftharpoons \mathrm{Cu}(\mathrm{Hg}) \nonumber$ where Cu(Hg) indicates that the copper is amalgamated with the mercury. 
This step serves as a means of concentrating the analyte by transferring it from the larger volume of the solution to the smaller volume of the electrode. During most of the electrolysis we stir the solution to increase the rate of deposition. Near the end of the deposition time we stop the stirring—eliminating convection as a mode of mass transport—and allow the solution to become quiescent. Deposition times of 1–30 min are typical, with analytes at lower concentrations requiring longer times. In the second step, we scan the potential anodically—that is, toward a more positive potential. When the working electrode’s potential is sufficiently positive, the analyte is stripped from the electrode, returning to solution in its oxidized form. $\mathrm{Cu}(\mathrm{Hg})\rightleftharpoons \text{ Cu}^{2+}+2 e^{-} \nonumber$ Monitoring the current during the stripping step gives a peak-shaped voltammogram, as shown in Figure 11.4.13 . The peak current is proportional to the analyte’s concentration in the solution. Because we are concentrating the analyte in the electrode, detection limits are much smaller than those for other electrochemical techniques. An improvement of three orders of magnitude—the equivalent of parts per billion instead of parts per million—is routine. Anodic stripping voltammetry is very sensitive to experimental conditions, which we must carefully control to obtain results that are accurate and precise. Key variables include the area of the mercury film or the size of the hanging Hg drop, the deposition time, the rest time, the rate of stirring, and the scan rate during the stripping step. Anodic stripping voltammetry is particularly useful for metals that form amalgams with mercury, several examples of which are listed in Table 11.4.1 . Table 11.4.1 . Representative Examples of Analytes Determined by Stripping Voltammetry. Anodic stripping voltammetry: Bi3+, Cd2+, Cu2+, Ga3+, In3+, Pb2+, Tl+, Sn2+, Zn2+. Cathodic stripping voltammetry: Br–, Cl–, I–, mercaptans (RSH), S2–, SCN–. Adsorptive stripping voltammetry: bilirubin, codeine, cocaine, digitoxin, dopamine, heme, monensin, testosterone. Source: Compiled from Peterson, W. M.; Wong, R. V. Am. Lab. November 1981, 116–128; Wang, J. Am. Lab. May 1985, 41–50. The experimental design for cathodic stripping voltammetry is similar to anodic stripping voltammetry with two exceptions. First, the deposition step involves the oxidation of the Hg electrode to $\text{Hg}_2^{2+}$, which then reacts with the analyte to form an insoluble film at the surface of the electrode. For example, when Cl– is the analyte the deposition step is $2 \mathrm{Hg}(l)+2 \mathrm{Cl}^{-}(a q) \rightleftharpoons \text{ Hg}_{2} \mathrm{Cl}_{2}(s)+2 e^{-} \nonumber$ Second, stripping is accomplished by scanning cathodically toward a more negative potential, reducing $\text{Hg}_2^{2+}$ back to Hg and returning the analyte to solution. $\mathrm{Hg}_{2} \mathrm{Cl}_{2}(s)+2 e^{-}\rightleftharpoons 2 \mathrm{Hg}( l)+2 \mathrm{Cl}^{-}(a q) \nonumber$ Table 11.4.1 lists several analytes analyzed successfully by cathodic stripping voltammetry. In adsorptive stripping voltammetry, the deposition step occurs without electrolysis. Instead, the analyte adsorbs to the electrode’s surface. During deposition we maintain the electrode at a potential that enhances adsorption. For example, we can adsorb a neutral molecule on a Hg drop if we apply a potential of –0.4 V versus the SCE, a potential where the surface charge of mercury is approximately zero.
When deposition is complete, we scan the potential in an anodic or a cathodic direction, depending on whether we are oxidizing or reducing the analyte. Examples of compounds that have been analyzed by adsorptive stripping voltammetry also are listed in Table 11.4.1 . Cyclic Voltammetry In the voltammetric techniques considered to this point we scan the potential in one direction, either to more positive potentials or to more negative potentials. In cyclic voltammetry we complete a scan in both directions. Figure 11.4.14 a shows a typical potential-excitation signal. In this example, we first scan the potential to more positive values, resulting in the following oxidation reaction for the species R. $R \rightleftharpoons O+n e^{-} \nonumber$ When the potential reaches a predetermined switching potential, we reverse the direction of the scan toward more negative potentials. Because we generated the species O on the forward scan, during the reverse scan it is reduced back to R. $O+n e^{-} \rightleftharpoons R \nonumber$ Cyclic voltammetry is carried out in an unstirred solution, which, as shown in Figure 11.4.14 b, results in peak currents instead of limiting currents. The voltammogram has separate peaks for the oxidation reaction and for the reduction reaction, each characterized by a peak potential and a peak current. The peak current in cyclic voltammetry is given by the Randles-Sevcik equation $i_{p}=\left(2.69 \times 10^{5}\right) n^{3 / 2} A D^{1 / 2} \nu^{1 / 2} C_{A} \nonumber$ where n is the number of electrons in the redox reaction, A is the area of the working electrode, D is the diffusion coefficient for the electroactive species, $\nu$ is the scan rate, and CA is the concentration of the electroactive species. For a well-behaved system, the anodic and the cathodic peak currents are equal, and the ratio ip,a/ip,c is 1.00. The half-wave potential, E1/2, is midway between the anodic and cathodic peak potentials. $E_{1 / 2}=\frac{E_{p, a}+E_{p, c}}{2} \nonumber$ Scanning the potential in both directions provides an opportunity to explore the electrochemical behavior of species generated at the electrode. This is a distinct advantage of cyclic voltammetry over other voltammetric techniques. Figure 11.4.15 shows the cyclic voltammogram for the same redox couple at both a faster and a slower scan rate. At the faster scan rate in Figure 11.4.15 a, we see two peaks. At the slower scan rate in Figure 11.4.15 b, however, the peak on the reverse scan disappears. One explanation for this is that the product of the oxidation of R on the forward scan has sufficient time to participate in a chemical reaction whose products are not electroactive. Amperometry The final voltammetric technique we will consider is amperometry, in which we apply a constant potential to the working electrode and measure current as a function of time. Because we do not vary the potential, amperometry does not result in a voltammogram. One important application of amperometry is in the construction of chemical sensors. One of the first amperometric sensors was developed in 1956 by L. C. Clark to measure dissolved O2 in blood. Figure 11.4.16 shows the sensor’s design, which is similar to a potentiometric membrane electrode. A thin, gas-permeable membrane is stretched across the end of the sensor and is separated from the working electrode and the counter electrode by a thin layer of KCl solution. The working electrode is a Pt disk cathode, and a Ag ring anode serves as the counter electrode.
Although several gases can diffuse across the membrane, including O2, N2, and CO2, only oxygen undergoes reduction at the cathode $\mathrm{O}_{2}(g)+4 \mathrm{H}_{3} \mathrm{O}^{+}(a q)+4 e^{-}\rightleftharpoons 6 \mathrm{H}_{2} \mathrm{O}(l) \nonumber$ with its concentration at the electrode’s surface quickly reaching zero. The concentration of O2 at the membrane’s inner surface is fixed by its diffusion through the membrane, which creates a diffusion profile similar to that in Figure 11.4.8 . The result is a steady-state current that is proportional to the concentration of dissolved oxygen. Because the electrode consumes oxygen, the sample is stirred to prevent the depletion of O2 at the membrane’s outer surface. The oxidation of the Ag anode is the other half-reaction. $\mathrm{Ag}(s)+\text{ Cl}^{-}(a q)\rightleftharpoons \mathrm{AgCl}(s)+e^{-} \nonumber$ Another example of an amperometric sensor is a glucose sensor. In this sensor the single membrane in Figure 11.4.16 is replaced with three membranes. The outermost membrane of polycarbonate is permeable to glucose and O2. The second membrane contains an immobilized preparation of glucose oxidase that catalyzes the oxidation of glucose to gluconolactone and hydrogen peroxide. $\beta-\mathrm{D}-\text {glucose }(a q)+\text{ O}_{2}(a q)\rightleftharpoons \text {gluconolactone }(a q)+\text{ H}_{2} \mathrm{O}_{2}(a q) \nonumber$ The hydrogen peroxide diffuses through the innermost membrane of cellulose acetate where it undergoes oxidation at a Pt anode. $\mathrm{H}_{2} \mathrm{O}_{2}(a q)+2 \mathrm{OH}^{-}(a q) \rightleftharpoons \text{ O}_{2}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)+2 e^{-} \nonumber$ Figure 11.4.17 summarizes the reactions that take place in this amperometric sensor. FAD is the oxidized form of flavin adenine dinucleotide—the active site of the enzyme glucose oxidase—and FADH2 is the active site’s reduced form. Note that O2 serves as a mediator, carrying electrons to the electrode. By changing the enzyme and mediator, it is easy to extend the amperometric sensor in Figure 11.4.17 to the analysis of other analytes. For example, a CO2 sensor has been developed using an amperometric O2 sensor with a two-layer membrane, one of which contains an immobilized preparation of autotrophic bacteria [Karube, I.; Nomura, Y.; Arikawa, Y. Trends in Anal. Chem. 1995, 14, 295–299]. As CO2 diffuses through the membranes it is converted to O2 by the bacteria, increasing the concentration of O2 at the Pt cathode. Quantitative Applications Voltammetry has been used for the quantitative analysis of a wide variety of samples, including environmental samples, clinical samples, pharmaceutical formulations, steels, gasoline, and oil. Selecting the Voltammetric Technique The choice of which voltammetric technique to use depends on the sample’s characteristics, including the analyte’s expected concentration and the sample’s location. For example, amperometry is ideally suited for detecting analytes in flow systems, including the in vivo analysis of a patient’s blood, or as a selective sensor for the rapid analysis of a single analyte. The portability of amperometric sensors, which are similar to potentiometric sensors, also makes them ideal for field studies. Although cyclic voltammetry can be used to determine an analyte’s concentration, other methods described in this chapter are better suited for quantitative work. Pulse polarography and stripping voltammetry frequently are interchangeable.
The choice of which technique to use often depends on the analyte’s concentration and the desired accuracy and precision. Detection limits for normal pulse polarography generally are on the order of $10^{-6}$ M to $10^{-7}$ M, and those for differential pulse polarography, staircase, and square wave polarography are between $10^{-7}$ M and $10^{-9}$ M. Because we concentrate the analyte in stripping voltammetry, the detection limit for many analytes is as little as $10^{-10}$ M to $10^{-12}$ M. On the other hand, the current in stripping voltammetry is much more sensitive than the current in pulse polarography to changes in experimental conditions, which may lead to poorer precision and accuracy. We also can use pulse polarography to analyze a wider range of inorganic and organic analytes because there is no need to first deposit the analyte at the electrode surface. Stripping voltammetry also suffers from occasional interferences when two metals, such as Cu and Zn, combine to form an intermetallic compound in the mercury amalgam. The deposition potential for Zn2+ is sufficiently negative that any Cu2+ in the sample also deposits into the mercury drop or film, leading to the formation of intermetallic compounds such as CuZn and CuZn2. During the stripping step, zinc in the intermetallic compounds strips at potentials near that of copper, decreasing the current for zinc at its usual potential and increasing the apparent current for copper. It is possible to overcome this problem by adding an element that forms a stronger intermetallic compound with the interfering metal. Thus, adding Ga3+ minimizes the interference of Cu when analyzing for Zn by forming an intermetallic compound of Cu and Ga. Correcting the Residual Current In any quantitative analysis we must correct the analyte’s signal for signals that arise from other sources. The total current, itot, in voltammetry consists of two parts: the current from the analyte’s oxidation or reduction, iA, and a background or residual current, ir. $i_{t o t}=i_{A}+i_{r} \nonumber$ The residual current, in turn, has two sources. One source is a faradaic current from the oxidation or reduction of trace interferents in the sample, iint. The other source is the charging current, ich, that accompanies a change in the working electrode’s potential. $i_{r}=i_{\mathrm{int}}+i_{c h} \nonumber$ We can minimize the faradaic current due to impurities by carefully preparing the sample. For example, one important impurity is dissolved O2, which undergoes a two-step reduction: first to H2O2 at a potential of –0.1 V versus the SCE, and then to H2O at a potential of –0.9 V versus the SCE. Removing dissolved O2 by bubbling an inert gas such as N2 through the sample eliminates this interference. After removing the dissolved O2, maintaining a blanket of N2 over the top of the solution prevents O2 from reentering the solution. The cell in Figure 11.4.4 shows a typical N2 purge line. There are two methods to compensate for the residual current. One method is to measure the total current at potentials where the analyte’s faradaic current is zero and extrapolate it to other potentials. This is the method shown in Figure 11.4.9 . One advantage of extrapolating is that we do not need to acquire additional data. An important disadvantage is that an extrapolation assumes that any change in the residual current with potential is predictable, which may not be the case. A second, and more rigorous, approach is to obtain a voltammogram for an appropriate blank and then subtract the blank’s residual current from the sample’s total current, as illustrated by the short sketch that follows.
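The sketch below illustrates this blank correction, assuming the sample and blank voltammograms are available as arrays of potential and current; the use of Python with NumPy and the file names are illustrative only and are not part of the procedure described above.

```python
import numpy as np

# Hypothetical data files; in practice the potential (V) and current (µA)
# columns come from the instrument's exported voltammograms.
E_sample, i_tot = np.loadtxt("sample_voltammogram.csv", delimiter=",", unpack=True)
E_blank, i_r = np.loadtxt("blank_voltammogram.csv", delimiter=",", unpack=True)

# np.interp expects increasing x values, so sort the blank by potential before
# interpolating its residual current onto the sample's potential axis.
order = np.argsort(E_blank)
i_r_interp = np.interp(E_sample, E_blank[order], i_r[order])

# Because i_tot = i_A + i_r, the analyte's faradaic current is the difference.
i_A = i_tot - i_r_interp
```

Any standardization method is then applied to the corrected current, i_A, rather than to the raw measured current.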
Analysis for Single Components The analysis of a sample with a single analyte is straightforward using any of the standardization methods discussed in Chapter 5. Example 11.4.1 The concentration of As(III) in water is determined by differential pulse polarography in 1 M HCl. The initial potential is set to –0.1 V versus the SCE and is scanned toward more negative potentials at a rate of 5 mV/s. Reduction of As(III) to As(0) occurs at a potential of approximately –0.44 V versus the SCE. The peak currents for a set of standard solutions, corrected for the residual current, are shown in the following table. [As(III)] (µM) ip (µA) 1.00 0.298 3.00 0.947 6.00 1.83 9.00 2.72 What is the concentration of As(III) in a sample of water if its peak current is 1.37 μA? Solution Linear regression gives the calibration curve shown in Figure 11.4.18 , with an equation of $i_{p}=0.0176+0.301 \times[\mathrm{As}(\mathrm{III})] \nonumber$ where ip is the peak current in μA and [As(III)] is the concentration in μM. Substituting the sample’s peak current into the regression equation gives the concentration of As(III) as 4.49 μM. Exercise 11.4.1 The concentration of copper in a sample of sea water is determined by anodic stripping voltammetry using the method of standard additions. The analysis of a 50.0-mL sample gives a peak current of 0.886 μA. After adding a 5.00-μL spike of 10.0 mg/L Cu2+, the peak current increases to 2.52 μA. Calculate the μg/L copper in the sample of sea water. Answer For anodic stripping voltammetry, the peak current, ip, is a linear function of the analyte’s concentration $i_{p}=K \times C_{\mathrm{Cu}} \nonumber$ where K is a constant that accounts for experimental parameters such as the electrode’s area, the diffusion coefficient for Cu2+, the deposition time, and the rate of stirring. For the analysis of the sample before the standard addition we know that the current is $i_{p}=0.886 \ \mu \mathrm{A}=K \times C_{\mathrm{Cu}} \nonumber$ and after the standard addition the current is $i_{p}=2.52 \ \mu \mathrm{A}=K\left\{C_{\mathrm{Cu}} \times \frac{50.00 \ \mathrm{mL}}{50.005 \ \mathrm{mL}}+\frac{10.00 \mathrm{mg} \mathrm{Cu}}{\mathrm{L}} \times \frac{0.005 \ \mathrm{mL}}{50.005 \ \mathrm{mL}}\right\} \nonumber$ where 50.005 mL is the total volume after we add the 5.00 μL spike. Solving each equation for K and combining leaves us with the following equation. $\frac{0.886 \ \mu \mathrm{A}}{C_{\mathrm{Cu}}}=K=\frac{2.52 \ \mu \mathrm{A}}{C_{\mathrm{Cu}} \times \frac{50.00 \ \mathrm{mL}}{50.005 \ \mathrm{mL}}+\frac{10.00 \ \mathrm{mg} \text{ Cu}}{\mathrm{L}} \times \frac{0.005 \ \mathrm{mL}}{50.005 \ \mathrm{mL}}} \nonumber$ Solving this equation for CCu gives its value as $5.42 \times 10^{-4}$ mg Cu2+/L, or 0.542 μg Cu2+/L. Multicomponent Analysis Voltammetry is a particularly attractive technique for the analysis of samples that contain two or more analytes. Provided that the analytes behave independently, the voltammogram of a multicomponent mixture is a summation of each analyte’s individual voltammograms. As shown in Figure 11.4.19 , if the separation between the half-wave potentials or between the peak potentials is sufficient, we can determine the presence of each analyte as if it were the only analyte in the sample. The minimum separation between the half-wave potentials or peak potentials for two analytes depends on several factors, including the type of electrode and the potential-excitation signal.
For normal polarography the separation is at least ±0.2–0.3 V, and differential pulse voltammetry requires a minimum separation of ±0.04–0.05 V. If the voltammograms for two analytes are not sufficiently separated, a simultaneous analysis may be possible. This approach is outlined in the following example. Example 11.4.2 The differential pulse polarographic analysis of a mixture of indium and cadmium in 0.1 M HCl is complicated by the overlap of their respective voltammograms [Lanza, P. J. Chem. Educ. 1990, 67, 704–705]. The peak potential for indium is at –0.557 V and that for cadmium is at –0.597 V. When a 0.800-ppm indium standard is analyzed, $\Delta i_p$ (in arbitrary units) is 200.5 at –0.557 V and 87.5 at –0.597 V relative to a saturated Ag/AgCl reference electrode. A standard solution of 0.793 ppm cadmium has a $\Delta i_p$ of 58.5 at –0.557 V and 128.5 at –0.597 V. What is the concentration of indium and cadmium in a sample if $\Delta i_p$ is 167.0 at a potential of –0.557 V and 99.5 at a potential of –0.597 V? Solution The change in current, $\Delta i_p$, in differential pulse polarography is a linear function of the analyte’s concentration $\Delta i_{p}=k_{A} C_{A} \nonumber$ where kA is a constant that depends on the analyte and the applied potential, and CA is the analyte’s concentration. To determine the concentrations of indium and cadmium in the sample we must first find the value of kA for each analyte at each potential. For simplicity we will identify the potential of –0.557 V as E1, and that for –0.597 V as E2. The values of kA are $\begin{aligned} k_{\mathrm{In}, E_{1}} &=\frac{200.5}{0.800 \ \mathrm{ppm}}=250.6 \ \mathrm{ppm}^{-1} \\ k_{\mathrm{In}, E_{2}} &=\frac{87.5}{0.800 \ \mathrm{ppm}}=109.4 \ \mathrm{ppm}^{-1} \\ k_{\mathrm{Cd}, E_{1}} &=\frac{58.5}{0.793 \ \mathrm{ppm}}=73.8 \ \mathrm{ppm}^{-1} \\ k_{\mathrm{Cd}, E_{2}} &=\frac{128.5}{0.793 \ \mathrm{ppm}}=162.0 \ \mathrm{ppm}^{-1} \end{aligned} \nonumber$ Next, we write simultaneous equations for the current at the two potentials. $\begin{array}{l}{\Delta i_{E_{1}}=167.0=250.6 \ \mathrm{ppm}^{-1} \times C_{\mathrm{In}}+73.8 \ \mathrm{ppm}^{-1} \times C_{\mathrm{Cd}}} \\ {\Delta i_{E_{2}}=99.5=109.4 \ \mathrm{ppm}^{-1} \times C_{\mathrm{In}}+162.0 \ \mathrm{ppm}^{-1} \times C_{\mathrm{Cd}}}\end{array} \nonumber$ Solving the simultaneous equations, which is left as an exercise (a short numerical check appears at the end of this section), gives the concentration of indium as 0.606 ppm and the concentration of cadmium as 0.205 ppm. Environmental Samples Voltammetry is one of several important analytical techniques for the analysis of trace metals in environmental samples, including groundwater, lakes, rivers and streams, seawater, rain, and snow. Detection limits at the parts-per-billion level are routine for many trace metals using differential pulse polarography, with anodic stripping voltammetry providing parts-per-trillion detection limits for some trace metals. One interesting environmental application of anodic stripping voltammetry is the determination of a trace metal’s chemical form within a water sample. Speciation is important because a trace metal’s bioavailability, toxicity, and ease of transport through the environment often depend on its chemical form. For example, a trace metal that is strongly bound to colloidal particles generally is not toxic because it is not available to aquatic lifeforms.
Unfortunately, anodic stripping voltammetry cannot distinguish a trace metal’s exact chemical form because closely related species, such as Pb2+ and PbCl+, produce a single stripping peak. Instead, trace metals are divided into “operationally defined” categories that have environmental significance. Operationally defined means that an analyte is divided into categories by the specific methods used to isolate it from the sample. There are many examples of operational definitions in the environmental literature. The distribution of trace metals in soils and sediments, for example, often is defined in terms of the reagents used to extract them; thus, you might find an operational definition for Zn2+ in a lake sediment as that extracted using 1.0 M sodium acetate, or that extracted using 1.0 M HCl. Although there are many speciation schemes in the environmental literature, we will consider one proposed by Batley and Florence [see (a) Batley, G. E.; Florence, T. M. Anal. Lett. 1976, 9, 379–388; (b) Batley, G. E.; Florence, T. M. Talanta 1977, 24, 151–158; (c) Batley, G. E.; Florence, T. M. Anal. Chem. 1980, 52, 1962–1963; (d) Florence, T. M.; Batley, G. E. CRC Crit. Rev. Anal. Chem. 1980, 9, 219–296]. This scheme, which is outlined in Table 11.4.2 , combines anodic stripping voltammetry with ion-exchange and UV irradiation, dividing soluble trace metals into seven groups. In the first step, anodic stripping voltammetry in a pH 4.8 acetic acid buffer differentiates between labile metals and nonlabile metals. Only labile metals—those present as hydrated ions, weakly bound complexes, or weakly adsorbed on colloidal surfaces—deposit at the electrode and give rise to a signal. Total metal concentrations are determined by ASV after digesting the sample in 2 M HNO3 for 5 min, which converts all metals into an ASV-labile form. Table 11.4.2 . Operational Speciation of Soluble Trace Metals. In this scheme, ASV divides the soluble metals into labile metals (Groups I–III) and nonlabile or bound metals (Groups IV–VII). Ion-exchange divides the labile metals into those removed by the resin (Group I) and those not removed (Groups II and III), and divides the nonlabile metals into those removed (Groups IV and V) and those not removed (Groups VI and VII). UV irradiation then divides Groups II and III into metals released (Group II) and not released (Group III), Groups IV and V into released (Group IV) and not released (Group V), and Groups VI and VII into released (Group VI) and not released (Group VII). Group I: free metal ions; weaker labile organic complexes and inorganic complexes. Group II: stronger labile organic complexes; labile metals adsorbed on organic solids. Group III: stronger labile inorganic complexes; labile metals adsorbed on inorganic solids. Group IV: weaker nonlabile organic complexes. Group V: weaker nonlabile inorganic complexes. Group VI: stronger nonlabile organic complexes; nonlabile metals adsorbed on organic solids. Group VII: stronger nonlabile inorganic complexes; nonlabile metals adsorbed on inorganic solids. Operational definitions of speciation from (a) Batley, G. E.; Florence, T. M. Anal. Lett. 1976, 9, 379–388; (b) Batley, G. E.; Florence, T. M. Talanta 1977, 24, 151–158; (c) Batley, G. E.; Florence, T. M. Anal. Chem. 1980, 52, 1962–1963; (d) Florence, T. M.; Batley, G. E. CRC Crit. Rev. Anal. Chem. 1980, 9, 219–296. A Chelex-100 ion-exchange resin further differentiates between strongly bound metals—usually metals bound to inorganic and organic solids, but also those tightly bound to chelating ligands—and more loosely bound metals. Finally, UV radiation differentiates between metals bound to organic phases and inorganic phases. The analysis of seawater samples, for example, suggests that cadmium, copper, and lead are present primarily as labile organic complexes or as labile adsorbates on organic colloids (Group II in Table 11.4.2 ).
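Because each category in this scheme is defined operationally, the quantities of interest follow directly from differences between measurements. The minimal sketch below illustrates the first such difference, the split between labile and bound metal; it assumes Python, and the function name and example values are illustrative only, not data from the studies cited above.

```python
def labile_and_bound(total, labile):
    """Split a soluble trace metal into its ASV-labile and bound (nonlabile)
    fractions. `total` is the concentration measured by ASV after digesting
    the sample in 2 M HNO3, and `labile` is the concentration measured
    directly by ASV in the pH 4.8 acetate buffer; both share the same units
    (for example, ppb)."""
    bound = total - labile
    return labile, bound

# Hypothetical example: 0.50 ppb total metal, of which 0.30 ppb is ASV-labile,
# giving a bound fraction of approximately 0.20 ppb.
print(labile_and_bound(0.50, 0.30))
```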
Differential pulse polarography and stripping voltammetry are used to determine trace metals in airborne particulates, incinerator fly ash, rocks, minerals, and sediments. The trace metals, of course, are first brought into solution using a digestion or an extraction. Amperometric sensors also are used to analyze environmental samples. For example, the dissolved O2 sensor described earlier is used to determine the level of dissolved oxygen and the biochemical oxygen demand, or BOD, of waters and wastewaters. The latter test—which is a measure of the amount of oxygen required by aquatic bacteria as they decompose organic matter—is important when evaluating the efficiency of a wastewater treatment plant and for monitoring organic pollution in natural waters. A high BOD suggests that the water has a high concentration of organic matter. Decomposition of this organic matter may seriously deplete the level of dissolved oxygen in the water, adversely affecting aquatic life. Other amperometric sensors are available to monitor anionic surfactants in water, and CO2, H2SO4, and NH3 in atmospheric gases. Clinical Samples Differential pulse polarography and stripping voltammetry are used to determine the concentration of trace metals in a variety of clinical samples, including blood, urine, and tissue. The determination of lead in blood is of considerable interest due to concerns about lead poisoning. Because the concentration of lead in blood is so small, anodic stripping voltammetry frequently is the more appropriate technique. The analysis is complicated, however, by the presence of proteins that may adsorb to the mercury electrode, inhibiting either the deposition or stripping of lead. In addition, proteins may prevent the electrodeposition of lead through the formation of stable, nonlabile complexes. Digesting and ashing the blood sample minimizes this problem. Differential pulse polarography is useful for the routine quantitative analysis of drugs in biological fluids, at concentrations of less than $10^{-6}$ M [Brooks, M. A. “Application of Electrochemistry to Pharmaceutical Analysis,” Chapter 21 in Kissinger, P. T.; Heinemann, W. R., eds. Laboratory Techniques in Electroanalytical Chemistry, Marcel Dekker, Inc.: New York, 1984, pp 539–568.]. Amperometric sensors using enzyme catalysts also have many clinical uses, several examples of which are shown in Table 11.4.3 . Table 11.4.3 . Representative Amperometric Biosensors (analyte: enzyme; species detected): choline (choline oxidase; H2O2); ethanol (alcohol oxidase; H2O2); formaldehyde (formaldehyde dehydrogenase; NADH); glucose (glucose oxidase; H2O2); glutamine (glutaminase, glutamine oxidase; H2O2); glycerol (glycerol dehydrogenase; NADH, O2); lactate (lactate oxidase; H2O2); phenol (polyphenol oxidase; quinone); inorganic phosphorus (nucleoside phosphorylase; O2). Source: Cammann, K.; Lemke, U.; Rohen, A.; Sander, J.; Wilken, H.; Winter, B. Angew. Chem. Int. Ed. Engl. 1991, 30, 516–539. Miscellaneous Samples In addition to environmental samples and clinical samples, differential pulse polarography and stripping voltammetry are used for the analysis of trace metals in other samples, including food, steels and other alloys, gasoline, gunpowder residues, and pharmaceuticals. Voltammetry is an important technique for the quantitative analysis of organics, particularly in the pharmaceutical industry where it is used to determine the concentration of drugs and vitamins in formulations.
For example, voltammetric methods are available for the quantitative analysis of vitamin A, niacinamide, and riboflavin. When the compound of interest is not electroactive, it often can be derivatized to an electroactive form. One example is the differential pulse polarographic determination of sulfanilamide, which is converted into an electroactive azo dye by coupling with sulfamic acid and 1-naphthol. Representative Method 11.4.1: Determination of Chlorpromazine in a Pharmaceutical Product The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of chlorpromazine in a pharmaceutical product provides an instructive example of a typical procedure. The description here is based on a method from Pungor, E. A Practical Guide to Instrumental Analysis, CRC Press: Boca Raton, FL, 1995, pp. 34–37. Description of Method Chlorpromazine, also known by its trade name Thorazine, is an antipsychotic drug used in the treatment of schizophrenia. The amount of chlorpromazine in a pharmaceutical product is determined voltammetrically at a graphite working electrode in an unstirred solution, with calibration by the method of standard additions. Procedure Add 10.00 mL of an electrolyte solution consisting of 0.01 M HCl and 0.1 M KCl to the electrochemical cell. Place a graphite working electrode, a Pt auxiliary electrode, and an SCE reference electrode in the cell, and record the voltammogram from 0.2 V to 2.0 V at a scan rate of 50 mV/s. Weigh out an appropriate amount of the pharmaceutical product and dissolve it in a small amount of the electrolyte. Transfer the solution to a 100-mL volumetric flask and dilute to volume with the electrolyte. Filter a small amount of the diluted solution and transfer 1.00 mL of the filtrate to the voltammetric cell. Mix the contents of the voltammetric cell and allow the solution to sit for 10 s before recording the voltammogram. Return the potential to 0.2 V, add 1.00 mL of a chlorpromazine standard and record the voltammogram. Report the %w/w chlorpromazine in the formulation. Questions 1. Is chlorpromazine undergoing oxidation or reduction at the graphite working electrode? Because we are scanning toward more positive potentials, we are oxidizing chlorpromazine. 2. Why does this procedure use a graphite electrode instead of a Hg electrode? As shown in Figure 11.4.2 , the potential window for a Hg electrode extends from approximately –0.3 V to between –1 V and –2 V, depending on the pH. Because we are scanning the potential from 0.2 V to 2.0 V, we cannot use a Hg electrode. 3. Many voltammetric procedures require that we first remove dissolved O2 by bubbling N2 through the solution. Why is this not necessary for this analysis? Dissolved O2 is a problem when we scan toward more negative potentials, because its reduction may produce a significant cathodic current. In this procedure we are scanning toward more positive potentials and generating anodic currents; thus, dissolved O2 is not an interferent and does not need to be removed. 4. What is the purpose of recording a voltammogram in the absence of chlorpromazine? This voltammogram serves as a blank, which provides a measurement of the residual current due to the electrolyte.
Because the potential window for a graphite working electrode (see Figure 11.4.2 ) does not extend to 2.0 V, there is a measurable anodic residual current due to the solvent’s oxidation. Having measured this residual current, we can subtract it from the total current in the presence of chlorpromazine. 5. Based on the description of this procedure, what is the shape of the resulting voltammogram? You may wish to review the three common shapes shown in Figure 11.4.9 . Because the solution is unstirred, the voltammogram will have a peak current similar to that shown in Figure 11.4.9 b. Characterization Applications In the previous section we learned how to use voltammetry to determine an analyte’s concentration in a variety of different samples. We also can use voltammetry to characterize an analyte’s properties, including verifying its electrochemical reversibility, determining the number of electrons transferred during its oxidation or reduction, and determining its equilibrium constant in a coupled chemical reaction. Electrochemical Reversibility and Determination of n Earlier in this chapter we derived a relationship between E1/2 and the standard-state potential for a redox couple (Equation \ref{11.9}), noting that a redox reaction must be electrochemically reversible. How can we tell if a redox reaction is reversible by looking at its voltammogram? For a reversible redox reaction, Equation \ref{11.8}, which we repeat here, describes the relationship between potential and current for a voltammetric experiment with a limiting current. $E=E_{O / R}^{\circ}-\frac{0.05916}{n} \log \frac{K_{O}}{K_{R}}-\frac{0.05916}{n} \log \frac{i}{i_{l} - i} \nonumber$ If a reaction is electrochemically reversible, a plot of E versus log(i/(il – i)) is a straight line with a slope of –0.05916/n. In addition, the slope should yield an integer value for n. Example 11.4.3 The following data were obtained from a linear scan hydrodynamic voltammogram of a reversible reduction reaction. E (V vs. SCE) current (μA) –0.358 0.37 –0.372 0.95 –0.382 1.71 –0.400 3.48 –0.410 4.20 –0.435 4.97 The limiting current is 5.15 μA. Show that the reduction reaction is reversible, and determine values for n and for E1/2. Solution Figure 11.4.20 shows a plot of E versus log(i/(il – i)). Because the result is a straight line, we know the reaction is electrochemically reversible under the conditions of the experiment. A linear regression analysis gives the equation for the straight line as $E=-0.391 \mathrm{V}-0.0300 \log \frac{i}{i_{l}-i} \nonumber$ From Equation \ref{11.8}, the slope is equivalent to –0.05916/n; solving for n gives a value of 1.97, or 2 electrons. From Equation \ref{11.8} and Equation \ref{11.9}, we know that E1/2 is the y-intercept for a plot of E versus log(i/(il – i)); thus, E1/2 for the data in this example is –0.391 V versus the SCE. We also can use cyclic voltammetry to evaluate electrochemical reversibility by looking at the difference between the peak potentials for the anodic and the cathodic scans. For an electrochemically reversible reaction, the following equation holds true. $\Delta E_{p}=E_{p, a}-E_{p, c}=\frac{0.05916 \ \mathrm{V}}{n} \nonumber$ As an example, for a two-electron reduction we expect a $\Delta E_p$ of approximately 29.6 mV. For an electrochemically irreversible reaction the value of $\Delta E_p$ is larger than expected.
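The regression in Example 11.4.3 is easy to reproduce numerically. The short sketch below, which assumes Python with NumPy and is an illustration rather than part of the method itself, recovers the slope, the y-intercept, and n from the tabulated currents.

```python
import numpy as np

# Data from Example 11.4.3: potential (V vs. SCE), current (µA), and the
# limiting current (µA).
E = np.array([-0.358, -0.372, -0.382, -0.400, -0.410, -0.435])
i = np.array([0.37, 0.95, 1.71, 3.48, 4.20, 4.97])
i_l = 5.15

x = np.log10(i / (i_l - i))             # log(i/(il - i))
slope, intercept = np.polyfit(x, E, 1)   # fit E = intercept + slope * x

n = -0.05916 / slope                     # slope = -0.05916/n
E_half = intercept                       # E1/2 is the y-intercept

print(round(slope, 4), round(intercept, 3), round(n, 2))
# approximately -0.030, -0.391, and 2, in agreement with the example
```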
Determining Equilibrium Constants for Coupled Chemical Reactions Another important application of voltammetry is determining the equilibrium constant for a solution reaction that is coupled to a redox reaction. The presence of the solution reaction affects the ease of electron transfer in the redox reaction, shifting E1/2 to a more negative or to a more positive potential. Consider, for example, the reduction of O to R $O+n e^{-} \rightleftharpoons R \nonumber$ the voltammogram for which is shown in Figure 11.4.21 . If we introduce a ligand, L, that forms a strong complex with O, then we also must consider the reaction $O+p L\rightleftharpoons O L_{p} \nonumber$ In the presence of the ligand, the overall redox reaction is $O L_{p}+n e^{-} \rightleftharpoons R+p L \nonumber$ Because of its stability, the reduction of the OLp complex is less favorable than the reduction of O. As shown in Figure 11.4.21 , the resulting voltammogram shifts to a potential that is more negative than that for O. Furthermore, the shift in the voltammogram increases as we increase the ligand’s concentration. We can use this shift in the value of E1/2 to determine both the stoichiometry and the formation constant for a metal-ligand complex. To derive a relationship between the relevant variables we begin with two equations: the Nernst equation for the reduction of O $E=E_{O / R}^{\circ}-\frac{0.05916}{n} \log \frac{[R]_{x=0}}{[O]_{x=0}} \label{11.10}$ and the stability constant, $\beta_p$, for the metal-ligand complex at the electrode surface. $\beta_{p} = \frac{\left[O L_p\right]_{x = 0}}{[O]_{x = 0}[L]_{x = 0}^p} \label{11.11}$ In the absence of ligand the half-wave potential occurs when $[R]_{x=0}$ and $[O]_{x=0}$ are equal; thus, from the Nernst equation we have $\left(E_{1 / 2}\right)_{n c}=E_{O / R}^{\circ} \label{11.12}$ where the subscript “nc” signifies that the complex is not present. When ligand is present we must account for its effect on the concentration of O. Solving Equation \ref{11.11} for $[O]_{x=0}$ and substituting into Equation \ref{11.10} gives $E=E_{O/R}^{\circ}-\frac{0.05916}{n} \log \frac{[R]_{x=0}[L]_{x=0}^{p} \beta_{p}}{\left[O L_{p}\right]_{x=0}} \label{11.13}$ If the formation constant is sufficiently large, such that essentially all O is present as the complex OLp, then $[R]_{x=0}$ and $[OL_p]_{x=0}$ are equal at the half-wave potential, and Equation \ref{11.13} simplifies to $\left(E_{1 / 2}\right)_{c} = E_{O/R}^{\circ} - \frac{0.05916}{n} \log{} [L]_{x=0}^{p} \beta_{p} \label{11.14}$ where the subscript “c” indicates that the complex is present. Defining $\Delta E_{1/2}$ as $\Delta E_{1 / 2}=\left(E_{1 / 2}\right)_{c}-\left(E_{1 / 2}\right)_{n c} \label{11.15}$ and substituting Equation \ref{11.12} and Equation \ref{11.14} into Equation \ref{11.15}, then expanding the log term, leaves us with the following equation. $\Delta E_{1 / 2}=-\frac{0.05916}{n} \log \beta_{p}-\frac{0.05916 p}{n} \log {[L]} \label{11.16}$ A plot of $\Delta E_{1/2}$ versus log[L] is a straight line, with a slope that is a function of the metal-ligand complex’s stoichiometric coefficient, p, and a y-intercept that is a function of its formation constant $\beta_p$. Example 11.4.4 A voltammogram for the two-electron reduction (n = 2) of a metal, M, has a half-wave potential of –0.226 V versus the SCE. In the presence of an excess of ligand, L, the following half-wave potentials are recorded. [L] (M) (E1/2)c (V vs.
SCE) 0.020 –0.494 0.040 –0.512 0.060 –0.523 0.080 –0.530 0.100 –0.536 Determine the stoichiometry of the metal-ligand complex and its formation constant. Solution We begin by calculating values of $\Delta E_{1/2}$ using Equation \ref{11.15}, obtaining the values in the following table. [L] (M) $\Delta E_{1/2}$ (V vs. SCE) 0.020 –0.268 0.040 –0.286 0.060 –0.297 0.080 –0.304 0.100 –0.310 Figure 11.4.22 shows the resulting plot of $\Delta E_{1/2}$ as a function of log[L]. A linear regression analysis gives the equation for the straight line as $\Delta E_{1 / 2}=-0.370 \mathrm{V}-0.0601 \log {[L]} \nonumber$ From Equation \ref{11.16} we know that the slope is equal to –0.05916p/n. Using the slope and n = 2, we solve for p, obtaining a value of 2.03 ≈ 2. The complex’s stoichiometry, therefore, is ML2. We also know, from Equation \ref{11.16}, that the y-intercept is equivalent to –(0.05916/n)log$\beta_p$. Solving for $\beta_2$ gives a formation constant of $3.2 \times 10^{12}$. Exercise 11.4.2 The voltammogram for 0.50 mM Cd2+ has an E1/2 of –0.565 V versus an SCE. After making the solution 0.115 M in ethylenediamine, E1/2 is –0.845 V, and E1/2 is –0.873 V when the solution is 0.231 M in ethylenediamine. Determine the stoichiometry of the Cd2+–ethylenediamine complex and its formation constant. The data in this problem come from Morinaga, K. “Polarographic Studies of Metal Complexes. V. Ethylenediamine Complexes of Cadmium, Nickel, and Zinc,” Bull. Chem. Soc. Japan 1956, 29, 793–799. Answer For simplicity, we will use en as a shorthand notation for ethylenediamine. From the three half-wave potentials we have a $\Delta E_{1/2}$ of –0.280 V for 0.115 M en and a $\Delta E_{1/2}$ of –0.308 V for 0.231 M en. Using Equation \ref{11.16} we write the following two equations. $\begin{array}{l}{-0.280=-\frac{0.05916}{2} \log \beta_{p}-\frac{0.05916 p}{2} \log (0.115)} \\ {-0.308=-\frac{0.05916}{2} \log \beta_{p}-\frac{0.05916 p}{2} \log (0.231)}\end{array} \nonumber$ To solve for the value of p, we first subtract the second equation from the first equation $0.028=-\frac{0.05916 p}{2} \log (0.115)-\left\{-\frac{0.05916 p}{2} \log (0.231)\right\} \nonumber$ which eliminates the term with $\beta_p$. Next we solve this equation for p $0.028=\left(2.778 \times 10^{-2}\right) \times p-\left(1.882 \times 10^{-2}\right) \times p =\left(8.96 \times 10^{-3}\right) \times p \nonumber$ obtaining a value of 3.1, or p ≈ 3. Thus, the complex is Cd(en)3. To find the formation constant, $\beta_3$, we return to Equation \ref{11.16}, using our value for p. Using the data for an en concentration of 0.115 M $\begin{aligned} -0.280 &= -\frac{0.05916}{2} \log \beta_{3}-\frac{0.05916 \times 3}{2} \log (0.115) \\ -0.363 &= -\frac{0.05916}{2} \log \beta_{3} \end{aligned} \nonumber$ gives a value for $\beta_3$ of $1.92 \times 10^{12}$. Using the data for an en concentration of 0.231 M gives a value of $2.10 \times 10^{12}$. As suggested by Figure 11.4.15 , cyclic voltammetry is one of the most powerful electrochemical techniques for exploring the mechanism of coupled electrochemical and chemical reactions. The treatment of this aspect of cyclic voltammetry is beyond the level of this text, although you can consult this chapter’s additional resources for more information. Evaluation Scale of Operation Detection levels at the parts-per-million level are routine. For some analytes and for some voltammetric techniques, lower detection limits are possible.
Detection limits at the parts-per-billion and the parts-per-trillion level are possible with stripping voltammetry. Although most analyses are carried out in conventional electrochemical cells using macro samples, the availability of microelectrodes with diameters as small as 2 μm allows for the analysis of samples with volumes under 50 μL. For example, the concentration of glucose in 200-μm pond snail neurons was monitored successfully using an amperometric glucose electrode with a 2 μm tip [Abe, T.; Lauw, L. L.; Ewing, A. G. J. Am. Chem. Soc. 1991, 113, 7421–7423]. Accuracy The accuracy of a voltammetric analysis usually is limited by our ability to correct for residual currents, particularly those due to charging. For an analyte at the parts-per-million level, an accuracy of ±1–3% is routine. Accuracy decreases for samples with significantly smaller concentrations of analyte. Precision Precision generally is limited by the uncertainty in measuring the limiting current or the peak current. Under most conditions, a precision of ±1–3% is reasonable. One exception is the analysis of ultratrace analytes in complex matrices by stripping voltammetry, in which the precision may be as poor as ±25%. Sensitivity In many voltammetric experiments, we can improve the sensitivity by adjusting the experimental conditions. For example, in stripping voltammetry we can improve sensitivity by increasing the deposition time, by increasing the rate of the linear potential scan, or by using a differential-pulse technique. One reason that potential pulse techniques are popular is that they provide an improvement in the faradaic current relative to a linear potential scan. Selectivity Selectivity in voltammetry is determined by the difference between half-wave potentials or peak potentials, with a minimum difference of ±0.2–0.3 V for a linear potential scan and ±0.04–0.05 V for differential pulse voltammetry. We often can improve selectivity by adjusting solution conditions. The addition of a complexing ligand, for example, can substantially shift the potential where a species is oxidized or reduced to a potential where it no longer interferes with the determination of an analyte. Other solution parameters, such as pH, also can be used to improve selectivity. Time, Cost, and Equipment Commercial instrumentation for voltammetry ranges from <$1000 for simple instruments to >$20,000 for a more sophisticated instrument. In general, less expensive instrumentation is limited to linear potential scans. More expensive instruments provide for more complex potential-excitation signals using potential pulses. Except for stripping voltammetry, which needs a long deposition time, voltammetric analyses are relatively rapid.
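Finally, the simultaneous equations in Example 11.4.2, whose solution was left as an exercise, provide a convenient numerical check on the multicomponent calculations in this section. The sketch below assumes Python with NumPy; the array names are illustrative only.

```python
import numpy as np

# Sensitivities, kA (ppm^-1), from the indium and cadmium standards in
# Example 11.4.2; each row is one potential (E1 = -0.557 V, E2 = -0.597 V)
# and each column is one analyte (In, Cd).
k = np.array([[250.6,  73.8],
              [109.4, 162.0]])

# Peak currents measured for the sample at E1 and E2 (arbitrary units).
delta_i = np.array([167.0, 99.5])

# Solve k @ C = delta_i for the concentrations C = [C_In, C_Cd] in ppm.
C_In, C_Cd = np.linalg.solve(k, delta_i)
print(round(C_In, 3), round(C_Cd, 3))
# approximately 0.606 ppm In and 0.205 ppm Cd, as reported in the example
```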
1. Identify the anode and the cathode for the following electrochemical cells, and identify the oxidation or the reduction reaction at each electrode. (a) Pt| FeCl2 (aq, 0.015), FeCl3 (aq, 0.045) || AgNO3 (aq, 0.1) | Ag (b) Ag | AgBr(s), NaBr (aq, 1.0) || CdCl2 (aq, 0.05) | Cd (c) Pb | PbSO4 (s), H2SO4 (aq, 1.5) || H2SO4 (aq, 2.0), PbSO4 (s) | PbO2 2. Calculate the potential for each electrochemical cell in problem 1. The values in parentheses are the activities of the associated species. 3. Calculate the activity of KI, x, in the following electrochemical cell if the potential is +0.294 V. Ag | AgCl (s), NaCl (aq, 0.1) || KI (aq, x), I2 (s) | Pt 4. What reaction prevents us from using Zn as an electrode of the first kind in an acidic solution? Which other metals do you expect to behave in the same manner as Zn when immersed in an acidic solution? 5. Creager and colleagues designed a salicylate ion-selective electrode using a PVC membrane impregnated with tetraalkylammonium salicylate [Creager, S. E.; Lawrence, K. D.; Tibbets, C. R. J. Chem. Educ. 1995, 72, 274–276]. To determine the ion-selective electrode’s selectivity coefficient for benzoate, they prepared a set of salicylate calibration standards in which the concentration of benzoate was held constant at 0.10 M. Using the following data, determine the value of the selectivity coefficient. [salicylate] (M) potential (mV) 1.0 20.2 $1.0 \times 10^{-1}$ 73.5 $1.0 \times 10^{-2}$ 126 $1.0 \times 10^{-3}$ 168 $1.0 \times 10^{-4}$ 182 $1.0 \times 10^{-5}$ 182 $1.0 \times 10^{-6}$ 177 What is the maximum acceptable concentration of benzoate if you plan to use this ion-selective electrode to analyze a sample that contains as little as 10–5 M salicylate with an accuracy of better than 1%? 6. Watanabe and co-workers described a new membrane electrode for the determination of cocaine, a weak base alkaloid with a pKa of 8.64 [Watanabe, K.; Okada, K.; Oda, H.; Furuno, K.; Gomita, Y.; Katsu, T. Anal. Chim. Acta 1995, 316, 371–375]. The electrode’s response for a fixed concentration of cocaine is independent of pH in the range of 1–8, but decreases sharply above a pH of 8. Offer an explanation for this pH dependency. 7. Figure 11.2.14 shows a schematic diagram for an enzyme electrode that responds to urea by using a gas-sensing NH3 electrode to measure the amount of ammonia released following the enzyme’s reaction with urea. In turn, the NH3 electrode uses a pH electrode to monitor the change in pH due to the ammonia. The response of the urea electrode is given by equation 11.2.12. Beginning with equation 11.2.19, which gives the potential of a pH electrode, show that equation 11.2.12 for the urea electrode is correct. 8. Explain why the response of an NH3-based urea electrode (Figure 11.2.14 and equation 11.2.12) is different from the response of a urea electrode in which the enzyme is coated on the glass membrane of a pH electrode (Figure 11.2.15 and equation 11.2.13). 9. A potentiometric electrode for HCN uses a gas-permeable membrane, a buffered internal solution of 0.01 M KAg(CN)2, and a Ag2S ISE electrode that is immersed in the internal solution. Consider the equilibrium reactions that take place within the internal solution and derive an equation that relates the electrode’s potential to the concentration of HCN in the sample. To check your work, search on-line for US Patent 3859191 and consult Figure 2. 10. 
Mifflin and associates described a membrane electrode for the quantitative analysis of penicillin in which the enzyme penicillinase is immobilized in a polyacrylamide gel coated on the glass membrane of a pH electrode [Mifflin, T. E.; Andriano, K. M.; Robbins, W. B. J. Chem. Educ. 1984, 61, 638–639]. The following data were collected using a set of penicillin standards. [penicillin] (M) potential (mV) $1.0 \times 10^{-2}$ 220 $2.0 \times 10^{-3}$ 204 $1.0 \times 10^{-3}$ 190 $2.0 \times 10^{-4}$ 153 $1.0 \times 10^{-4}$ 135 $1.0 \times 10^{-5}$ 96 $1.0 \times 10^{-6}$ 80 (a) Over what range of concentrations is there a linear response? (b) What is the calibration curve’s equation for this concentration range? (c) What is the concentration of penicillin in a sample that yields a potential of 142 mV? 11. An ion-selective electrode can be placed in a flow cell into which we inject samples or standards. As the analyte passes through the cell, a potential spike is recorded instead of a steady-state potential. The concentration of K+ in serum has been determined in this fashion using standards prepared in a matrix of 0.014 M NaCl [Meyerhoff, M. E.; Kovach, P. M. J. Chem. Educ. 1983, 9, 766–768]. [K+] (mM) E (arb. units) [K+] (mM) E (arb. units) 0.10 25.5 0.60 58.7 0.20 37.2 0.80 64.0 0.40 50.8 1.00 66.8 A 1.00-mL sample of serum is diluted to volume in a 10-mL volumetric flask and analyzed, giving a potential of 51.1 (arbitrary units). Report the concentration of K+ in the sample of serum. 12. Wang and Taha described an interesting application of potentiometry, which they call batch injection [Wang, J.; Taha, Z. Anal. Chim. Acta 1991, 252, 215–221]. As shown in the figure below, an ion-selective electrode is placed in an inverted position in a large volume tank, and a fixed volume of a sample or a standard solution is injected toward the electrode’s surface using a micropipet. The response of the electrode is a spike in potential that is proportional to the analyte’s concentration. The following data were collected using a pH electrode and a set of pH standards. pH potential (mV) 2.0 +300 3.0 +240 4.0 +168 5.0 +81 6.0 +35 8.0 –92 9.0 –168 10.0 –235 11.0 –279 Determine the pH of the following samples given the recorded peak potentials: tomato juice, 167 mV; tap water, –27 mV; coffee, 122 mV. 13. The concentration of $\text{NO}_3^-$ in a water sample is determined by a one-point standard addition using a $\text{NO}_3^-$ ion-selective electrode. A 25.00-mL sample is placed in a beaker and a potential of 0.102 V is measured. A 1.00-mL aliquot of a 200.0-mg/L standard solution of $\text{NO}_3^-$ is added, after which the potential is 0.089 V. Report the mg $\text{NO}_3^-$/L in the water sample. 14. In 1977, when I was an undergraduate student at Knox College, my lab partner and I completed an experiment to determine the concentration of fluoride in tap water and the amount of fluoride in toothpaste. The data in this problem are from my lab notebook. (a) To analyze tap water, we took three 25.0-mL samples and added 25.0 mL of TISAB to each. We measured the potential of each solution using a F ISE and an SCE reference electrode. Next, we made five 1.00-mL additions of a standard solution of 100.0 ppm F to each sample, and measured the potential after each addition, recording the potential three times. 
mL of standard added potential (mV), replicate 1 potential (mV), replicate 2 potential (mV), replicate 3 0.00 –79 –82 –83 1.00 –119 –119 –118 2.00 –133 –133 –133 3.00 –142 –142 –142 4.00 –149 –148 –148 5.00 –154 –153 –153 Report the parts-per-million of F in the tap water. (b) To analyze the toothpaste, we measured 0.3619 g into a 100-mL volumetric flask, added 50.0 mL of TISAB, and diluted to volume with distilled water. After we ensured that the sample was thoroughly mixed, we transferred three 20.0-mL portions into separate beakers and measured the potential of each using a F ISE and an SCE reference electrode. Next, we made five 1.00-mL additions of a standard solution of 100.0 ppm F to each sample, and measured the potential after each addition, recording the potential three times. mL of standard added potential (mV), replicate 1 potential (mV), replicate 2 potential (mV), replicate 3 0.00 –55 –54 –55 1.00 –82 –82 –83 2.00 –94 –94 –94 3.00 –102 –103 –102 4.00 –108 –108 –109 5.00 –112 –112 –113 Report the parts-per-million F in the toothpaste. 15. You are responsible for determining the amount of KI in iodized salt and decide to use an I ion-selective electrode. Describe how you would perform this analysis using external standards and how you would perform this analysis using the method of standard additions. 16. Explain why each of the following decreases the analysis time in controlled-potential coulometry: a larger surface area for the working electrode; a smaller volume of solution; and a faster stirring rate. 17. The purity of a sample of picric acid, C6H3N3O7, is determined by controlled-potential coulometry, converting picric acid to triaminophenol, C6H9N3O. A 0.2917-g sample of picric acid is placed in a 1000-mL volumetric flask and diluted to volume. A 10.00-mL portion of this solution is transferred to a coulometric cell and sufficient water added so that the Pt cathode is immersed. An exhaustive electrolysis of the sample requires 21.67 C of charge. Report the purity of the picric acid. 18. The concentration of H2S in the drainage from an abandoned mine is determined by a coulometric titration using KI as a mediator and $\text{I}_3^-$ as the titrant. $\text{H}_{2}\text{S}(a q)+\ \mathrm{I}_{3}^{-}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons2 \mathrm{H}_{3} \mathrm{O}^{+}(a q)+3 \mathrm{I}^{-}(a q)+\mathrm{S}(s) \nonumber$ A 50.00-mL sample of water is placed in a coulometric cell, along with an excess of KI and a small amount of starch as an indicator. Electrolysis is carried out at a constant current of 84.6 mA, requiring 386 s to reach the starch end point. Report the concentration of H2S in the sample in μg/mL. 19. One method for the determination of a given mass of H3AsO3 is a coulometric titration using $\text{I}_3^-$ as a titrant. The relevant standard-state reactions and potentials are summarized here $\begin{aligned} \mathrm{H}_{3} \mathrm{AsO}_{4}(a q)+2 \mathrm{H}^{+}(a q)+2 \mathrm{e}^{-} &\rightleftharpoons \ \mathrm{H}_{3} \mathrm{AsO}_{3}(a q)+\ \mathrm{H}_{2} \mathrm{O}(l) \\ \mathrm{I}_{3}^{-}(a q)+2 \mathrm{e}^{-} &\rightleftharpoons 3 \mathrm{I}^{-}(a q) \end{aligned} \nonumber$ with standard state reduction potentials of, respectively, +0.559 V and +0.536 V. Explain why the coulometric titration is carried out in a neutral solution (pH ≈ 7) instead of in a strongly acidic solution (pH < 0). 20. The production of adiponitrile, NC(CH2)4CN, from acrylonitrile, CH2=CHCN, is an important industrial process.
A 0.594-g sample of acrylonitrile is placed in a 1-L volumetric flask and diluted to volume. An exhaustive controlled-potential electrolysis of a 1.00-mL portion of the diluted acrylonitrile requires 1.080 C of charge. What is the value of n for the reduction of acrylonitrile to adiponitrile? 21. The linear-potential scan hydrodynamic voltammogram for a mixture of Fe2+ and Fe3+ is shown in the figure below where il,a and il,c are the anodic and cathodic limiting currents. (a) Show that the potential is given by $E = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} - 0.05916 \log \frac {K_{\text{Fe}^{3+}}} {K_{\text{Fe}^{2+}}} - 0.05916 \log \frac {i - i_{l,a}}{i_{l,c} - i} \nonumber$ (b) What is the potential when i = 0 for a solution that is 0.100 mM Fe3+ and 0.050 mM Fe2+? 22. The amount of sulfur in aromatic monomers is determined by differential pulse polarography. Standard solutions are prepared for analysis by dissolving 1.000 mL of the purified monomer in 25.00 mL of an electrolytic solvent, adding a known amount of sulfur, deaerating, and measuring the peak current. The following results were obtained for a set of calibration standards. µg S added peak current (µA) 0 0.14 28 0.70 56 1.23 112 2.41 168 3.42 Analysis of a 1.000-mL sample, treated in the same manner as the standards, gives a peak current of 1.77 μA. Report the mg S/mL in the sample. 23. The purity of a sample of K3Fe(CN)6 is determined using linear-potential scan hydrodynamic voltammetry at a glassy carbon electrode. The following data were obtained for a set of external calibration standards. [K3Fe(CN)6] (mM) limiting current (µA) 2.0 127 4.0 252 6.0 376 8.0 500 10.0 624 A sample of impure K3Fe(CN)6 is prepared for analysis by diluting a 0.246-g portion to volume in a 100-mL volumetric flask. The limiting current for the sample is 444 μA. Report the purity of this sample of K3Fe(CN)6. 24. One method for determining whether an individual recently fired a gun is to look for traces of antimony in residue collected from the individual’s hands. Anodic stripping voltammetry at a mercury film electrode is ideally suited for this analysis. In a typical analysis a sample is collected from a suspect using a cotton-tipped swab wetted with 5% v/v HNO3. After returning to the lab, the swab is placed in a vial that contains 5.0 mL of 4 M HCl that is 0.02 M in hydrazine sulfate. After soaking the swab, a 4.0-mL portion of the solution is transferred to an electrochemical cell along with 100 μL of 0.01 M HgCl2. After depositing the thin film of mercury and the antimony, the stripping step gives a peak current of 0.38 μA. After adding a standard addition of 100 μL of $5.00 \times 10^2$ ppb Sb, the peak current increases to 1.14 μA. How many nanograms of Sb were collected from the suspect’s hand? 25. Zinc is used as an internal standard in an analysis of thallium by differential pulse polarography. A standard solution of $5.00 \times 10^{-5}$ M Zn2+ and $2.50 \times 10^{-5}$ M Tl+ has peak currents of 5.71 μA and 3.19 μA, respectively. An 8.713-g sample of a zinc-free alloy is dissolved in acid, transferred to a 500-mL volumetric flask, and diluted to volume. A 25.0-mL portion of this solution is mixed with 25.0 mL of $5.00 \times 10^{-4}$ M Zn2+. Analysis of this solution gives peak currents of 12.3 μA and of 20.2 μA for Zn2+ and Tl+, respectively. Report the %w/w Tl in the alloy. 26. 
Differential pulse voltammetry at a carbon working electrode is used to determine the concentrations of ascorbic acid and caffeine in drug formulations [Lau, O.; Luk, S.; Cheung, Y. Analyst 1989, 114, 1047–1051]. In a typical analysis a 0.9183-g tablet is crushed and ground into a fine powder. A 0.5630-g sample of this powder is transferred to a 100-mL volumetric flask, brought into solution, and diluted to volume. A 0.500-mL portion of this solution is then transferred to a voltammetric cell that contains 20.00 mL of a suitable supporting electrolyte. The resulting voltammogram gives peak currents of 1.40 μA and 3.88 μA for ascorbic acid and for caffeine, respectively. A 0.500-mL aliquot of a standard solution that contains 250.0 ppm ascorbic acid and 200.0 ppm caffeine is then added. A voltammogram of this solution gives peak currents of 2.80 μA and 8.02 μA for ascorbic acid and caffeine, respectively. Report the milligrams of ascorbic acid and milligrams of caffeine in the tablet. 27. Ratana-ohpas and co-workers described a stripping analysis method for determining tin in canned fruit juices [Ratana-ohpas, R.; Kanatharana, P.; Ratana-ohpas, W.; Kongsawasdi, W. Anal. Chim. Acta 1996, 333, 115–118]. Standards of 50.0 ppb Sn4+, 100.0 ppb Sn4+, and 150.0 ppb Sn4+ were analyzed giving peak currents (arbitrary units) of 83.0, 171.6, and 260.2, respectively. A 2.00-mL sample of lychee juice is mixed with 20.00 mL of 1:1 HCl/HNO3. A 0.500-mL portion of this mixture is added to 10 mL of 6 M HCl and the volume adjusted to 30.00 mL. Analysis of this diluted sample gave a signal of 128.2 (arbitrary units). Report the parts-per-million Sn4+ in the original sample of lychee juice. 28. Sittampalam and Wilson described the preparation and use of an amperometric sensor for glucose [Sittampalam, G.; Wilson, G. S. J. Chem. Educ. 1982, 59, 70–73]. The sensor is calibrated by measuring the steady-state current when it is immersed in standard solutions of glucose. A typical set of calibration data is shown here. [glucose] (mg/100 mL) current (arb. units) 2.0 17.2 4.0 32.9 6.0 52.1 8.0 68.0 10.0 85.8 A 2.00-mL sample is diluted to 10 mL in a volumetric flask and a steady-state current of 23.6 (arbitrary units) is measured. What is the concentration of glucose in the sample in mg/100 mL? 29. Differential pulse polarography is used to determine the concentrations of lead, thallium, and indium in a mixture. Because the peaks for lead and thallium, and for thallium and indium overlap, a simultaneous analysis is necessary. Peak currents (in arbitrary units) at –0.385 V, –0.455 V, and –0.557 V are measured for a single standard solution, and for a sample, giving the results shown in the following table. Report the mg/mL of Pb2+, Tl+ and In3+ in the sample. analyte [standard] (µg/mL) peak current at –0.385 V peak current at –0.455 V peak current at –0.557 V Pb2+ 1.0 26.1 2.9 0 Tl+ 2.0 7.8 23.5 3.2 In3+ 0.4 0 0 22.9 sample 60.6 28.8 54.1 30. Abass and co-workers developed an amperometric biosensor for $\text{NH}_4^+$ that uses the enzyme glutamate dehydrogenase to catalyze the following reaction $2 \text { - oxyglutarate }(a q)+ \ \mathrm{NH}_{4}^{+}(a q)+\mathrm{NADH}(a q)\rightleftharpoons\text { glutamate }(a q)+\ \mathrm{NAD}^{+}(a q)+\ \mathrm{H}_{2} \mathrm{O}(l) \nonumber$ where NADH is the reduced form of nicotinamide adenine dinucleotide [Abass, A. K.; Hart, J. P.; Cowell, D. C.; Chapell, A. Anal. Chim. Acta 1988, 373, 1–8]. 
The biosensor actually responds to the concentration of NADH; however, the rate of the reaction depends on the concentration of $\text{NH}_4^+$. If the initial concentrations of 2-oxyglutarate and NADH are the same for all samples and standards, then the signal is proportional to the concentration of $\text{NH}_4^+$. As shown in the following table, the sensitivity of the method is dependent on pH. pH sensitivity (nA s–1 M–1) 6.2 $1.67 \times 10^3$ 6.75 $5.00 \times 10^3$ 7.3 $9.33 \times 10^3$ 7.7 $1.04 \times 10^4$ 8.3 $1.27 \times 10^4$ 9.3 $2.67 \times 10^3$ Two possible explanations for the effect of pH on the sensitivity of this analysis are the acid–base chemistry of $\text{NH}_4^+$ and the acid–base chemistry of the enzyme. Given that the pKa for $\text{NH}_4^+$ is 9.244, explain the source of this pH-dependent sensitivity. 31. The speciation scheme for trace metals in Table 11.4.2 divides them into seven operationally defined groups by collecting and analyzing two samples following each of four treatments, requiring a total of eight samples and eight measurements. After removing insoluble particulates by filtration (treatment 1), the solution is analyzed for the concentration of ASV labile metals and for the total concentration of metals. A portion of the filtered solution is then passed through an ion-exchange column (treatment 2), and the concentrations of ASV metal and of total metal are determined. A second portion of the filtered solution is irradiated with UV light (treatment 3), and the concentrations of ASV metal and of total metal are measured. Finally, a third portion of the filtered solution is irradiated with UV light and passed through an ion-exchange column (treatment 4), and the concentrations of ASV labile metal and of total metal again are determined. The groups that are included in each measurement are summarized in the following table. treatment groups removed by treatment groups contributing to ASV-labile metals groups contributing to total metals 1 none I, II, III I, II, III, IV, V, VI, VII 2 I, IV, V II, III II, III, VI, VII 3 none I, II, III, IV, VI I, II, III, IV, V, VI, VII 4 I, II, IV, V, VI III III, VII (a) Explain how you can use these eight measurements to determine the concentration of metals present in each of the seven groups identified in Table 11.4.2. (b) Batley and Florence report the following results for the speciation of cadmium, lead, and copper in a sample of seawater [Batley, G. E.; Florence, T. M. Anal. Lett. 1976, 9, 379–388]. Determine the speciation of each metal and comment on your results. measurement (treatment: ASV-labile or total) ppb Cd2+ ppb Pb2+ ppb Cu2+ 1: ASV-labile 0.24 0.39 0.26 1: total 0.28 0.50 0.40 2: ASV-labile 0.21 0.33 0.17 2: total 0.26 0.43 0.24 3: ASV-labile 0.26 0.37 0.33 3: total 0.28 0.5 0.43 4: ASV-labile 0.00 0.00 0.00 4: total 0.02 0.12 0.10 32. The concentration of Cu2+ in seawater is determined by anodic stripping voltammetry at a hanging mercury drop electrode after first releasing any copper bound to organic matter. To a 20.00-mL sample of seawater is added 1 mL of 0.05 M HNO3 and 1 mL of 0.1% H2O2. The sample is irradiated with UV light for 8 hr and then diluted to volume in a 25-mL volumetric flask. Deposition of Cu2+ takes place at –0.3 V versus an SCE for 10 min, producing a peak current of 26.1 (arbitrary units). A second 20.00-mL sample of the seawater is treated identically, except that 0.1 mL of a 5.00 μM solution of Cu2+ is added, producing a peak current of 38.4 (arbitrary units). 
Report the concentration of Cu2+ in the seawater in mg/L. 33. Thioamide drugs are determined by cathodic stripping analysis [Davidson, I. E.; Smyth, W. F. Anal. Chem. 1977, 49, 1195–1198]. Deposition occurs at +0.05 V versus an SCE. During the stripping step the potential is scanned cathodically and a stripping peak is observed at –0.52 V. In a typical application a 2.00-mL sample of urine is mixed with 2.00 mL of a pH 4.78 buffer. Following a 2.00 min deposition, a peak current of 0.562 μA is measured. A 0.10-mL addition of a 5.00 μM solution of the drug is added to the same solution. A peak current of 0.837 μA is recorded using the same deposition and stripping conditions. Report the drug’s molar concentration in the urine sample. 34. The concentration of vanadium (V) in sea water is determined by adsorptive stripping voltammetry after forming a complex with catechol [van der Berg, C. M. G.; Huang, Z. Q. Anal. Chem. 1984, 56, 2383–2386]. The catechol-V(V) complex is deposited on a hanging mercury drop electrode at a potential of –0.1 V versus a Ag/AgCl reference electrode. A cathodic potential scan gives a stripping peak that is proportional to the concentration of V(V). The following standard additions are used to analyze a sample of seawater. [V (V)]added (M) peak current (µA) $2.0 \times 10^{-8}$ 24 $4.0 \times 10^{-8}$ 33 $8.0 \times 10^{-8}$ 52 $1.2 \times 10^{-7}$ 69 $1.8 \times 10^{-7}$ 97 $2.8 \times 10^{-7}$ 140 Determine the molar concentration of V (V) in the sample of sea water, assuming that the standard additions result in a negligible change in the sample’s volume. 35. The standard-state reduction potential for Cu2+ to Cu is +0.342 V versus the SHE. Given that Cu2+ forms a very stable complex with the ligand EDTA, do you expect that the standard-state reduction potential for Cu(EDTA)2– is greater than +0.342 V, less than +0.342 V, or equal to +0.342 V? Explain your reasoning. 36. The polarographic half-wave potentials (versus the SCE) for Pb2+ and for Tl+ in 1 M HCl are, respectively, –0.44 V and –0.45 V. In an electrolyte of 1 M NaOH, however, the half-wave potentials are –0.76 V for Pb2+ and –0.48 V for Tl+. Why does the change in electrolyte have such a significant effect on the half-wave potential for Pb2+, but not on the half-wave potential for Tl+? 37. The following data for the reduction of Pb2+ were collected by normal-pulse polarography. potential (V vs. SCE) current (µA) –0.345 0.16 –0.370 0.98 –0.383 2.05 –0.393 3.13 –0.409 4.62 –0.420 5.16 The limiting current was 5.67 μA. Verify that the reduction reaction is reversible and determine values for n and E1/2. The half-wave potentials for the normal-pulse polarograms of Pb2+ in the presence of several different concentrations of OH are shown in the following table. [OH] (M) $E_{1/2}$ (V vs. SCE) [OH] (M) $E_{1/2}$ (V vs. SCE) 0.050 –0.646 0.150 –0.689 0.100 –0.673 0.300 –0.715 Determine the stoichiometry of the Pb-hydroxide complex and its formation constant. 38. In 1977, when I was an undergraduate student at Knox College, my lab partner and I completed an experiment to study the voltammetric behavior of Cd2+ (in 0.1 M KNO3) and Ni2+ (in 0.2 M KNO3) at a dropping mercury electrode. The data in this problem are from my lab notebook. All potentials are relative to an SCE reference electrode. 
potential for Cd2+ (V) current for Cd2+ (µA) potential for Ni2+ (V) current for Ni2+ (µA) –0.60 4.5 –1.07 1.90 –0.58 3.4 –1.05 1.75 –0.56 2.1 –1.03 1.50 –0.54 0.6 –1.02 1.25 –0.52 0.2 –1.00 1.00 The limiting current for Cd2+ was 4.8 μA and that for Ni2+ was 2.0 μA. Evaluate the electrochemical reversibility for each metal ion and comment on your results. 39. Baldwin and co-workers report the following data from a cyclic voltammetry study of the electrochemical behavior of p-phenylenediamine in a pH 7 buffer [Baldwin, R. P.; Ravichandran, K.; Johnson, R. K. J. Chem. Educ. 1984, 61, 820–823]. All potentials are measured relative to an SCE. scan rate (mV/s) Ep,a (V) Ep,c (V) ip,a (mA) ip,c (mA) 2 0.148 0.104 0.34 0.30 5 0.149 0.098 0.56 0.53 10 0.152 0.095 1.00 0.04 20 0.161 0.095 1.44 1.44 50 0.167 0.082 2.12 1.81 100 0.180 0.063 2.50 2.19 The initial scan is toward more positive potentials, leading to the oxidation reaction shown here. Use these data to show that the reaction is electrochemically irreversible. A reaction may show electrochemical irreversibility because of slow electron transfer kinetics or because the product of the oxidation reaction participates in a chemical reaction that produces a nonelectroactive species. Based on the data in this problem, what is the likely source of p-phenylenediamine's electrochemical irreversibility?
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/11%3A_Electrochemical_Methods/11.05%3A_Problems.txt
The following set of experiments introduces students to the applications of electrochemistry. Experiments are grouped into five categories: general electrochemistry, preparation of electrodes, potentiometry, coulometry, and voltammetry and amperometry. General Electrochemistry • Chatmontree, A.; Chairam, S.; Supasorn, S.; Amatatongchai, M.; Jarujamrus, P.; Tamuang, S.; Somsook, E. “Student Fabrication and Use of Simple, Low-Cost, Paper-Based Galvanic Cells to Investigate Electrochemistry,” J. Chem. Educ. 2015, 92, 1044–1048. • Mills, K. V.; Herrick, R. S.; Guilmette, L. W.; Nestor, L. P.; Shafer, H.; Ditzler, M. A. “Introducing Undergraduate Students to Electrochemistry: A Two-Week Discovery Chemistry Experiment,” J. Chem. Educ. 2008, 85, 1116–1119. Preparation of Electrodes • Christopoulos, T. K.; Diamandis, E. P. “Use of a Sintered Glass Crucible for Easy Construction of Liquid-Membrane Ion-Selective Electrodes,” J. Chem. Educ. 1988, 65, 648. • Fricke, G. H.; Kuntz, M. J. “Inexpensive Solid-State Ion-Selective Electrodes for Student Use,” J. Chem. Educ. 1977, 54, 517–520. • Inamdar, S. N.; Bhat, M. A.; Haram, S. K. “Construction of Ag/AgCl Reference Electrode from Used Felt-Tipped Pen Barrel for Undergraduate Laboratory,” J. Chem. Educ. 2009, 86, 355–356. • Lloyd, B. W.; O’Brien, F. L.; Wilson, W. D. “Student Preparation and Analysis of Chloride and Calcium Ion Selective Electrodes,” J. Chem. Educ. 1976, 53, 328–330. • Mifflin, T. E.; Andriano, K. M.; Robbins, W. B. “Determination of Penicillin Using an Immobilized Enzyme Electrode,” J. Chem. Educ. 1984, 61, 638–639. • Palanivel, A.; Riyazuddin, P. “Fabrication of an Inexpensive Ion-Selective Electrode,” J. Chem. Educ. 1984, 61, 290. • Ramaley, L.; Wedge, P. J.; Crain, S. M. “Inexpensive Instrumental Analysis: Part 1. Ion-Selective Electrodes,” J. Chem. Educ. 1994, 71, 164–167. • Selig, W. S. “Potentiometric Titrations Using Pencil and Graphite Sensors,” J. Chem. Educ. 1984, 61, 80–81. Potentiometry • Chan, W. H.; Wong, M. S.; Yip, C. W. “Ion-Selective Electrode in Organic Analysis: A Salicylate Electrode,” J. Chem. Educ. 1986, 63, 915–916. • Harris, T. M. “Potentiometric Measurement in a Freshwater Aquarium,” J. Chem. Educ. 1993, 70, 340–341. • Kauffman, C. A.; Muza, A. L.; Porambo, M. W.; Marsh, A. L. “Use of a Commercial Silver-Silver Chloride Electrode for the Measurement of Cell Potentials to Determine Mean Ionic Activity Coefficients,” Chem. Educator 2010, 15, 178–180. • Martínez-Fàbregas, E.; Alegret, S. “A Practical Approach to Chemical Sensors through Potentiometric Transducers: Determination of Urea in Serum by Means of a Biosensor,” J. Chem. Educ. 1994, 71, A67–A70. • Moresco, H.; Sansón, P.; Seoane, G. “Simple Potentiometric Determination of Reducing Sugars,” J. Chem. Educ. 2008, 85, 1091–1093. • Radic, N.; Komijenovic, J. “Potentiometric Determination of an Overall Formation Constant Using an Ion-Selective Membrane Electrode,” J. Chem. Educ. 1993, 70, 509–511. • Riyazuddin, P.; Devika, D. “Potentiometric Acid–Base Titrations with Activated Graphite Electrodes,” J. Chem. Educ. 1997, 74, 1198–1199. Coulometry • Bertotti, M.; Vaz, J. M.; Telles, R. “Ascorbic Acid Determination in Natural Orange Juice,” J. Chem. Educ. 1995, 72, 445–447. • Kalbus, G. E.; Lieu, V. T. “Dietary Fat and Health: An Experiment on the Determination of Iodine Number of Fats and Oils by Coulometric Titration,” J. Chem. Educ. 1991, 68, 64–65. • Lötz, A. “A Variety of Electrochemical Methods in a Coulometric Titration Experiment,” J. Chem. Educ. 1998, 75, 775–777. 
• Swim, J.; Earps, E.; Reed, L. M.; Paul, D. “Constant-Current Coulometric Titration of Hydrochloric Acid,” J. Chem. Educ. 1996, 73, 679–683. Voltammetry and Amperometry • Blanco-López, M. C.; Lobo-Castañón, M. J.; Miranda-Ordieres, A. J. “Homemade Bienzymatic-Amperometric Biosensor for Beverages Analysis,” J. Chem. Educ. 2007, 84, 677–680. • García-Armada, P.; Losada, J.; de Vicente-Pérez, S. “Cation Analysis Scheme by Differential Pulse Polarography,” J. Chem. Educ. 1996, 73, 544–547. • Herrera-Melián, J. A.; Doña-Rodríguez, J. M.; Hernández-Brito, J.; Pérez-Peña, J. “Voltammetric Determination of Ni and Co in Water Samples,” J. Chem. Educ. 1997, 74, 1444–1445. • King, D.; Friend, J.; Kariuki, J. “Measuring Vitamin C Content of Commercial Orange Juice Using a Pencil Lead Electrode,” J. Chem. Educ. 2010, 87, 507–509. • Marin, D.; Mendicuti, F. “Polarographic Determination of Composition and Thermodynamic Stability Constant of a Complex Metal Ion,” J. Chem. Educ. 1988, 65, 916–918. • Messersmith, S. J. “Cyclic Voltammetry Simulations with DigiSim Software: An Upper-Level Undergraduate Experiment,” J. Chem. Educ. 2014, 91, 1498–1500. • Sadik, O. A.; Brenda, S.; Joasil, P.; Lord, J. “Electropolymerized Conducting Polymers as Glucose Sensors,” J. Chem. Educ. 1999, 76, 967–970. • Sittampalam, G.; Wilson, G. S. “Amperometric Determination of Glucose at Parts Per Million Levels with Immobilized Glucose Oxidase,” J. Chem. Educ. 1982, 59, 70–73. • Town, J. L.; MacLaren, F.; Dewald, H. D. “Rotating Disk Voltammetry Experiment,” J. Chem. Educ. 1991, 68, 352–354. • Wang, J. “Sensitive Electroanalysis Using Solid Electrodes,” J. Chem. Educ. 1982, 59, 691–692. • Wang, J. “Anodic Stripping Voltammetry,” J. Chem. Educ. 1983, 60, 1074–1075. • Wang, J.; Maccà, C. “Use of Blood-Glucose Test Strips for Introducing Enzyme Electrodes and Modern Biosensors,” J. Chem. Educ. 1996, 73, 797–800. • Wang, Q.; Geiger, A.; Frias, R; Golden, T. D. “An Introduction to Electrochemistry for Undergraduates: Detection of Vitamin C (Ascorbic Acid) by Inexpensive Electrode Sensors,” Chem. Educator 2000, 5, 58–60. The following general references providing a broad introduction to electrochemistry. • Adams, R. N. Electrochemistry at Solid Surfaces, Marcel Dekker: New York, 1969. • Bard, A. J.; Faulkner, L. R. Electrochemical Methods, Wiley: New York, 1980. • Faulkner, L. R. “Electrochemical Characterization of Chemical Systems” in Kuwana, T. E., ed. Physical Methods in Modern Chemical Analysis, Vol. 3, Academic Press: New York, 1983, pp. 137–248. • Kissinger, P. T.; Heineman, W. R. Laboratory Techniques in Electroanalytical Chemistry, Marcel Dekker: New York, 1984. • Lingane, J. J. Electroanalytical Chemistry, 2nd Ed., Interscience: New York, 1958. • Sawyer, D. T.; Roberts, J. L., Jr. Experimental Electrochemistry for Chemists, Wiley-Interscience: New York, 1974. • Vassos, B. H.; Ewing, G. W. Electroanalytical Chemistry, Wiley-Interscience: New York, 1983. These short articles provide a good introduction to important principles of electrochemistry. • Faulkner, L. R. “Understanding Electrochemistry: Some Distinctive Concepts,” J. Chem. Educ. 1983, 60, 262–264. • Huddle, P. A.; White, M. D.; Rogers, F. “Using a Teaching Model to Correct Known Misconceptions in Electrochemistry,” J. Chem. Educ. 2000, 77, 104–110. • Maloy, J. T. “Factors Affecting the Shape of Current-Potential Curves,” J. Chem. Educ. 1983, 60, 285–289. • Miles, D. T. “Run-D.M.C.: A Mnemonic Aid for Explaining Mass Transfer in Electrochemical Systems,” J. Chem. 
Educ. 2013, 90, 1649–1653. • Thompson, R. Q.; Craig, N. C. “Unified Electroanalytical Chemistry: Application of the Concept of Equilibrium,” J. Chem. Educ. 2001, 78, 928–934. • Zoski, C. G. “Charging Current Discrimination in Analytical Voltammetry,” J. Chem. Educ. 1986, 63, 910–914. Additional information on potentiometry and ion-selective electrodes can be found in the following sources. • Bakker, E.; Diamond, D.; Lewenstam, A.; Pretsch, E. “Ions Sensors: Current Limits and New Trends,” Anal. Chim. Acta 1999, 393, 11–18. • Bates, R. G. Determination of pH: Theory and Practice, 2nd ed., Wiley: New York, 1973. • Bobacka, J.; Ivaska, A.; Lewenstam, A. “Potentiometric Ion Sensors,” Chem. Rev. 2008, 108, 329–351. • Buck, R. P. “Potentiometry: pH Measurements and Ion Selective Electrodes” in Weissberger, A., ed. Physical Methods of Organic Chemistry, Vol. 1, Part IIA, Wiley: New York, 1971, pp. 61–162. • Cammann, K. Working With Ion-Selective Electrodes, Springer-Verlag: Berlin, 1977. • Evans, A. Potentiometry and Ion-Selective Electrodes, Wiley: New York, 1987. • Frant, M. S. “Where Did Ion Selective Electrodes Come From?” J. Chem. Educ. 1997, 74, 159–166. • Light, T. S. “Industrial Use and Application of Ion-Selective Electrodes,” J. Chem. Educ. 1997, 74, 171–177. • Rechnitz, G. A. “Ion and Bio-Selective Membrane Electrodes,” J. Chem. Educ. 1983, 60, 282–284. • Ruzicka, J. “The Seventies—Golden Age for Ion-Selective Electrodes,” J. Chem. Educ. 1997, 74, 167– 170. • Young, C. C. “Evolution of Blood Chemistry Analyzers Based on Ion Selective Electrodes,” J. Chem. Educ. 1997, 74, 177–182. The following sources provide additional information on electrochemical biosensors. • Alvarez-Icasa, M.; Bilitewski, U. “Mass Production of Biosensors,” Anal. Chem. 1993, 65, 525A– 533A. • Meyerhoff, M. E.; Fu, B.; Bakker, E. Yun, J-H; Yang, V. C. “Polyion-Sensititve Membrane Electrodes for Biomedical Analysis,” Anal. Chem. 1996, 68, 168A–175A. • Nicolini, C.; Adami, M; Antolini, F.; Beltram, F.; Sartore, M.; Vakula, S. “Biosensors: A Step to Bioelectronics,” Phys. World, May 1992, 30–34. • Rogers, K. R.; Williams. L. R. “Biosensors for Environmental Monitoring: A Regulatory Perspective,” Trends Anal. Chem. 1995, 14, 289–294. • Schultz, J. S. “Biosensors,” Sci. Am. August 1991, 64–69. • Thompson, M.; Krull, U. “Biosensors and the Transduction of Molecular Recognition,” Anal. Chem. 1991, 63, 393A–405A. • Vadgama, P. “Designing Biosensors,” Chem. Brit. 1992, 28, 249–252. A good source covering the clinical application of electrochemistry is listed below. • Wang, J. Electroanalytical Techniques in Clinical Chemistry and Laboratory Medicine, VCH: New York, 1998. Coulometry is covered in the following texts. • Rechnitz, G. A. Controlled-Potential Analysis, Macmillan: New York, 1963. • Milner, G. W. C.; Philips, G. Coulometry in Analytical Chemistry, Pergamon: New York, 1967. For a description of electrogravimetry, see the following resource. • Tanaka, N. “Electrodeposition”, in Kolthoff, I. M.; Elving, P. J., eds. Treatise on Analytical Chemistry, Part I: Theory and Practice, Vol. 4, Interscience: New York, 1963. The following sources provide additional information on polarography and pulse polarography. • Flato, J. B. “The Renaissance in Polarographic and Voltammetric Analysis,” Anal. Chem. 1972, 44(11), 75A–87A. • Kolthoff, I. M.; Lingane, J. J. Polarography, Interscience: New York, 1952. • Osteryoung, J. “Pulse Voltammetry,” J. Chem. Educ. 1983, 60, 296–298. 
Additional Information on stripping voltammetry is available in the following text. • Wang, J. Stripping Analysis, VCH Publishers: Deerfield Beach, FL, 1985. The following papers discuss the numerical simulation of voltammetry. • Bozzini, B. “A Simple Numerical Procedure for the Simulation of “Lifelike” Linear-Sweep Voltammo- grams,” J. Chem. Educ. 2000, 77, 100–103. • Howard, E.; Cassidy, J. “Analysis with Microelectrodes Using Microsoft Excel Solver,” J. Chem. Educ. 2000, 77, 409–411. • Kätelhön, E.; Compton, R. G. “Testing and Validating Electroanalytical Simulations,” Analyst, 2015, 140, 2592–2598. • Messersmith, S. J. “Cyclic Voltammetry Simulations with DigiSim Software: An Upper-Level Undergraduate Experiment,” J. Chem. Educ. 2014, 91, 1498–1500. Gathered together here are many useful resources for cyclic voltammetry, including experiments. • Carriedo, G. A. “The Use of Cyclic Voltammetry in the Study of the Chemistry of Metal–Carbonyls,” J. Chem. Educ. 1988, 65, 1020–1022. • García-Jareño, J. J.; Benito, D.; Navarro-Laboulais, J.; Vicente, F. “Electrochemical Behavior of Electrodeposited Prussian Blue Films on ITO Electrodes,” J. Chem. Educ. 1998, 75, 881–884. • Gilles de Pelichy, L. D.; Smith, E. T. “A Study of the Oxidation Pathway of Adrenaline by Cyclic Voltammetry,” Chem. Educator 1997, 2(2), 1–13. • Gomez, M. E.; Kaifer, A. E. “Voltammetric Behavior of a Ferrocene Derivative,” J. Chem. Educ. 1992, 69, 502–505. • Heffner, J. E.; Raber, J. C.; Moe, O. A.; Wigal, C. T. “Using Cyclic Voltammetry and Molecular Modeling to Determine Substituent Effects in the One-Electron Reduction of Benzoquinones,” J. Chem. Educ. 1998, 75, 365–367. • Heinze, J. “Cyclic Voltammetry—Electrochemical Spectroscopy,” Angew. Chem, Int. Ed. Eng. 1984, 23, 831–918. • Holder, G. N.; Farrar, D. G.; McClure, L. L. “Voltammetric Reductions of Ring-Substituted Acetophenones. 1. Determination of an Electron-Transfer Mechanism Using Cyclic Voltammetry and Computer Modeling: The Formation and Fate of a Radical Anion,” Chem. Educator 2001, 6, 343–349. • Ibanez, J. G.; Gonzalez, I.; Cardenas, M. A. “The Effect of Complex Formation Upon the Redox Potentials of Metal Ions: Cyclic Voltammetry Experiments,” J. Chem. Educ. 1988, 65, 173–175. • Ito, T.; Perara, D. M. N. T.; Nagasaka, S. “Gold Electrodes Modified with Self-Assembled Monolayers for Measuring l-Ascobric acid,” J. Chem. Educ. 2008, 85, 1112–1115. • Kissinger, P. T.; Heineman, W. R. “Cyclic Voltammetry,” J. Chem. Educ. 1983, 60, 702–706. • Mabbott, G. A. “An Introduction to Cyclic Voltammetry,” J. Chem. Educ. 1983, 60, 697–702. • Petrovic, S. “Cyclic Voltammetry of Hexachloroiridate (IV): An Alternative to the Electrochemical Study of the Ferricyanide Ion,” Chem. Educator 2000, 5, 231–235. • Toma, H. E.; Araki, K.; Dovidauskas, S. “A Cyclic Voltammetry Experiment Illustrating Redox Potentials, Equilibrium Constants and Substitution Reaction in Coordination Chemistry,” J. Chem. Educ. 2000, 77, 1351–1353. • Walczak, M. W.; Dryer, D. A.; Jacobson, D. D,; Foss, M. G.; Flynn, N. T. “pH-Dependent Redox Couple: Illustrating the Nernst Equation Using Cyclic Voltammetry,” J. Chem. Educ. 1997, 74, 1195–1197.
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/11%3A_Electrochemical_Methods/11.06%3A_Additional_Resources.txt
Chapter Summary In this chapter we introduced three electrochemical methods of analysis: potentiometry, coulometry, and voltammetry. In potentiometry we measure the potential at an indicator electrode without allowing any significant current to pass through the electrochemical cell, and use the Nernst equation to calculate the analyte’s activity after accounting for junction potentials. There are two broad classes of potentiometric electrodes: metallic electrodes and membrane electrodes. The potential of a metallic electrode is the result of a redox reaction at the electrode’s surface. An electrode of the first kind responds to the concentration of its cation in solution; thus, the potential of a Ag wire is determined by the activity of Ag+ in solution. If another species is in equilibrium with the metal ion, the electrode’s potential also responds to the concentration of that species. For example, the potential of a Ag wire in a solution of Cl– responds to the concentration of Cl– because the relative concentrations of Ag+ and Cl– are fixed by the solubility product for AgCl. We call this an electrode of the second kind. The potential of a membrane electrode is determined by a difference in the composition of the solution on each side of the membrane. Electrodes that use a glass membrane respond to ions that bind to negatively charged sites on the membrane’s surface. A pH electrode is one example of a glass membrane electrode. Other kinds of membrane electrodes include those that use insoluble crystalline solids or liquid ion-exchangers incorporated into a hydrophobic membrane. The F– ion-selective electrode, which uses a single crystal of LaF3 as the ion-selective membrane, is an example of a solid-state electrode. The Ca2+ ion-selective electrode, in which the chelating ligand di-(n-decyl)phosphate is immobilized in a PVC membrane, is an example of a liquid-based ion-selective electrode. Potentiometric electrodes are designed to respond to molecules by using a chemical reaction that produces an ion whose concentration is determined using a traditional ion-selective electrode. A gas-sensing electrode, for example, includes a gas-permeable membrane that isolates the ion-selective electrode from the sample. When a gas-phase analyte diffuses across the membrane it alters the composition of the inner solution, which is monitored with an ion-selective electrode. Enzyme electrodes operate in the same way. Coulometric methods are based on Faraday’s law that the total charge or current passed during an electrolysis is proportional to the amount of reactants and products participating in the redox reaction. If the electrolysis is 100% efficient—which means that only the analyte is oxidized or reduced—then we can use the total charge or total current to determine the amount of analyte in a sample. In controlled-potential coulometry we apply a constant potential and measure the resulting current as a function of time. In controlled-current coulometry the current is held constant and we measure the time required to completely oxidize or reduce the analyte. In voltammetry we measure the current in an electrochemical cell as a function of the applied potential. There are several different voltammetric methods that differ in terms of the choice of working electrode, how we apply the potential, and whether we include convection (stirring) as a means for transporting material to the working electrode. Polarography is a voltammetric technique that uses a mercury electrode and an unstirred solution. 
Normal polarography uses a dropping mercury electrode, or a static mercury drop electrode, and a linear potential scan. Other forms of polarography include normal pulse polarography, differential pulse polarography, staircase polarography, and square-wave polarography, all of which use a series of potential pulses. In hydrodynamic voltammetry the solution is stirred either by using a magnetic stir bar or by rotating the electrode. Because the solution is stirred, a dropping mercury electrode is not used; instead we use a solid electrode. Both linear potential scans and potential pulses can be applied. In stripping voltammetry the analyte is deposited on the electrode, usually as the result of an oxidation or reduction reaction. The potential is then scanned, either linearly or using potential pulses, in a direction that removes the analyte by a reduction or oxidation reaction. Amperometry is a voltammetric method in which we apply a constant potential to the electrode and measure the resulting current. Amperometry is most often used in the construction of chemical sensors for the quantitative analysis of single analytes. One important example is the Clark O2 electrode, which responds to the concentration of dissolved O2 in solutions such as blood and water. Key Terms amalgam, amperometry, anode, anodic current, asymmetry potential, auxiliary electrode, cathode, cathodic current, charging current, controlled-current coulometry, controlled-potential coulometry, convection, coulometric titrations, coulometry, counter electrode, current efficiency, cyclic voltammetry, diffusion, diffusion layer, dropping mercury electrode, electrical double layer, electrochemically irreversible, electrochemically reversible, electrochemistry, electrode of the first kind, electrode of the second kind, electrogravimetry, enzyme electrodes, faradaic current, Faraday’s law, galvanostat, gas-sensing electrode, glass electrode, hanging mercury drop electrode, hydrodynamic voltammetry, indicator electrode, ion selective electrode, ionophore, junction potential, limiting current, liquid-based ion-selective electrode, mass transport, mediator, membrane potential, mercury film electrode, migration, nonfaradaic current, Ohm’s law, overpotential, peak current, polarography, potentiometer, potentiostat, pulse polarography, redox electrode, reference electrode, residual current, salt bridge, saturated calomel electrode, selectivity coefficient, silver/silver chloride electrode, solid-state ion-selective electrodes, standard hydrogen electrode, static mercury drop electrode, stripping voltammetry, total ionic strength adjustment buffer, voltammetry, voltammogram, working electrode
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/11%3A_Electrochemical_Methods/11.07%3A_Chapter_Summary_and_Key_Terms.txt
Drawing from an arsenal of analytical techniques—many of which were the subject of the preceding four chapters—analytical chemists design methods that detect increasingly smaller concentrations of analyte in increasingly more complex matrices. Despite the power of these analytical techniques, they often suffer from a lack of selectivity. For this reason, many analytical procedures include a step to separate the analyte from potential interferents. Although effective, each additional step in an analytical procedure increases the analysis time and the cost of the analysis, and introduces uncertainty. In this chapter we consider two analytical techniques that avoid these limitations by combining the separation and analysis: chromatography and electrophoresis. • 12.1: Overview of Analytical Separations In Chapter 7 we examined several methods for separating an analyte from potential interferents. For example, in a liquid–liquid extraction the analyte and interferent initially are present in a single liquid phase. We add a second, immiscible liquid phase and thoroughly mix them by shaking. During this process the analyte and interferents partition between the two phases to different extents, effecting their separation. • 12.2: General Theory of Column Chromatography Of the two methods for bringing the stationary phase and the mobile phases into contact, the most important is column chromatography. In this section we develop a general theory that we may apply to any form of column chromatography. • 12.3: Optimizing Chromatographic Separations Now that we have defined the solute retention factor, selectivity, and column efficiency we are able to consider how they affect the resolution of two closely eluting peaks. • 12.4: Gas Chromatography In gas chromatography (GC) we inject the sample, which may be a gas or a liquid, into an gaseous mobile phase (often called the carrier gas). The mobile phase carries the sample through a packed or a capillary column that separates the sample’s components based on their ability to partition between the mobile phase and the stationary phase. • 12.5: High-Performance Liquid Chromatography In high-performance liquid chromatography (HPLC) we inject the sample, which is in solution form, into a liquid mobile phase. The mobile phase carries the sample through a packed or capillary column that separates the sample’s components based on their ability to partition between the mobile phase and the stationary phase. • 12.6: Other Forms of Chromatography At the beginning of Chapter 12.5, we noted that there are several different types of solute/stationary phase interactions in liquid chromatography, but limited our discussion to liquid–liquid chromatography. In this section we turn our attention to liquid chromatography techniques in which partitioning occurs by liquid–solid adsorption, ion-exchange, and size exclusion. • 12.7: Electrophoresis Electrophoresis is a class of separation techniques in which we separate analytes by their ability to move through a conductive medium—usually an aqueous buffer—in response to an applied electric field. In the absence of other effects, cations migrate toward the electric field’s negatively charged cathode. • 12.8: Problems End-of-chapter problems to test your understanding of topics covered in this chapter. • 12.9: Additional Resources A compendium of resources to accompany topics in this chapter. • 12.10: Chapter Summary and Key Terms Summary of chapter's main topics and list of key terms included in this chapter. 
12: Chromatographic and Electrophoretic Methods In Chapter 7 we examined several methods for separating an analyte from potential interferents. For example, in a liquid–liquid extraction the analyte and interferent initially are present in a single liquid phase. We add a second, immiscible liquid phase and thoroughly mix them by shaking. During this process the analyte and interferents partition between the two phases to different extents, effecting their separation. After allowing the phases to separate, we draw off the phase enriched in analyte. Despite the power of liquid–liquid extractions, there are significant limitations. Two Limitations of Liquid-Liquid Extractions Suppose we have a sample that contains an analyte in a matrix that is incompatible with our analytical method. To determine the analyte’s concentration we first separate it from the matrix using a simple liquid–liquid extraction. If we have several analytes, we may need to complete a separate extraction for each analyte. For a complex mixture of analytes this quickly becomes a tedious process. This is one limitation to a liquid–liquid extraction. A more significant limitation is that the extent of a separation depends on the distribution ratio of each species in the sample. If the analyte’s distribution ratio is similar to that of another species, then their separation becomes impossible. For example, let’s assume that an analyte, A, and an interferent, I, have distribution ratios of, respectively, 5 and 0.5. If we use a liquid–liquid extraction with equal volumes of sample and extractant, then it is easy to show that a single extraction removes approximately 83% of the analyte and 33% of the interferent. Although we can remove 99% of the analyte with three extractions, we also remove 70% of the interferent. In fact, there is no practical combination of number of extractions or volumes of sample and extractant that produce an acceptable separation. From Chapter 7 we know that the distribution ratio, D, for a solute, S, is $D=\frac{[S]_{\mathrm{ext}}}{[S]_{\mathrm{samp}}} \nonumber$ where [S]ext is its equilibrium concentration in the extracting phase and [S]samp is its equilibrium concentration in the sample. We can use the distribution ratio to calculate the fraction of S that remains in the sample, qsamp, after an extraction $q_{\text {samp}}=\frac{V_{\text {samp }}}{D V_{\text {ext }}+V_{\text {samp }}} \nonumber$ where Vsamp is the volume of sample and Vext is the volume of the extracting phase. For example, if D = 10, Vsamp = 20, and Vext = 5, the fraction of S remaining in the sample after the extraction is $q_{\text { sanp }}=\frac{20}{10 \times 5+20}=0.29 \nonumber$ or 29%. The remaining 71% of the analyte is in the extracting phase. A Better Way to Separate Mixtures The problem with a liquid–liquid extraction is that the separation occurs in one direction only: from the sample to the extracting phase. Let’s take a closer look at the liquid–liquid extraction of an analyte and an interferent with distribution ratios of, respectively, 5 and 0.5. Figure 12.1.1 shows that a single extraction using equal volumes of sample and extractant transfers 83% of the analyte and 33% of the interferent to the extracting phase. If the original concentrations of A and I are identical, then their concentration ratio in the extracting phase after one extraction is $\frac{[A]}{[I]}=\frac{0.83}{0.33}=2.5 \nonumber$ A single extraction, therefore, enriches the analyte by a factor of $2.5 \times$. 
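The arithmetic behind these numbers is easy to script. The short Python sketch below (an illustration only; the function and variable names are our own, not from the text) evaluates the fraction of solute remaining, qsamp = Vsamp/(DVext + Vsamp), and reproduces the values quoted above: the 29% example for D = 10, and the single-extraction figures of 83% and 33% for an analyte with D = 5 and an interferent with D = 0.5.

```python
def fraction_remaining(D, V_samp, V_ext):
    """Fraction of a solute that remains in the sample phase after one extraction."""
    return V_samp / (D * V_ext + V_samp)

# worked example from the text: D = 10, V_samp = 20, V_ext = 5
print(round(fraction_remaining(10, 20, 5), 2))    # 0.29, so 29% remains in the sample

# analyte (D = 5) and interferent (D = 0.5) with equal volumes of sample and extractant
q_A = fraction_remaining(5.0, 1.0, 1.0)           # 0.167 remains, so 83% is extracted
q_I = fraction_remaining(0.5, 1.0, 1.0)           # 0.667 remains, so 33% is extracted
print(round(1 - q_A, 2), round(1 - q_I, 2))       # 0.83 0.33
print(round((1 - q_A) / (1 - q_I), 1))            # 2.5, the enrichment after one extraction
```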
After completing a second extraction (Figure 12.1.1 ) and combining the two extracting phases, the separation of the analyte and the interferent, surprisingly, is less efficient. $\frac{[A]}{[I]}=\frac{0.97}{0.55}=1.8 \nonumber$ Figure 12.1.1 makes it clear why the second extraction results in a poorer overall separation: the second extraction actually favors the interferent! We can improve the separation by first extracting the solutes from the sample into the extracting phase and then extracting them back into a fresh portion of solvent that matches the sample’s matrix (Figure 12.1.2 ). Because the analyte has the larger distribution ratio, more of it moves into the extractant during the first extraction and less of it moves back to the sample phase during the second extraction. In this case the concentration ratio in the extracting phase after two extractions is significantly greater. $\frac{[A]}{[I]}=\frac{0.69}{0.11}=6.3 \nonumber$ Not shown in Figure 12.1.2 is that we can add a fresh portion of the extracting phase to the sample that remains after the first extraction (the bottom row of the first stage in Figure 12.1.2 ), beginning the process anew. As we increase the number of extractions, the analyte and the interferent each spread out in space over a series of stages. Because the interferent’s distribution ratio is smaller than the analyte’s, the interferent lags behind the analyte. With a sufficient number of extractions—that is, a sufficient number of stages—a complete separation of the analyte and interferent is possible. This process of extracting the solutes back and forth between fresh portions of the two phases, which we call a countercurrent extraction, was developed by Craig in the 1940s [Craig, L. C. J. Biol. Chem. 1944, 155, 519–534]. The same phenomenon forms the basis of modern chromatography. See Appendix 16 for a more detailed consideration of the mathematics behind a countercurrent extraction; a brief numerical sketch of this staging also appears below. Chromatographic Separations In chromatography we pass a sample-free phase, which we call the mobile phase, over a second sample-free stationary phase that remains fixed in space (Figure 12.1.3 ). We inject or place the sample into the mobile phase. As the sample moves with the mobile phase, its components partition between the mobile phase and the stationary phase. A component whose distribution ratio favors the stationary phase requires more time to pass through the system. Given sufficient time and sufficient stationary and mobile phase, we can separate solutes even if they have similar distribution ratios. There are many ways in which we can identify a chromatographic separation: by describing the physical state of the mobile phase and the stationary phase; by describing how we bring the stationary phase and the mobile phase into contact with each other; or by describing the chemical or physical interactions between the solute and the stationary phase. Let’s briefly consider how we might use each of these classifications. We can trace the history of chromatography to the turn of the century when the Russian botanist Mikhail Tswett used a column packed with calcium carbonate and a mobile phase of petroleum ether to separate colored pigments from plant extracts. As the sample moved through the column, the plant’s pigments separated into individual colored bands. After effecting the separation, the calcium carbonate was removed from the column, sectioned, and the pigments recovered. 
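As a rough numerical illustration of the countercurrent staging described above (a sketch under the simplifying assumptions of equal phase volumes and complete equilibration at every stage; the function name is ours, not the text's), the following Python snippet distributes the analyte (D = 5) and the interferent (D = 0.5) over a series of Craig-style stages and shows that their concentration maxima end up in different tubes.

```python
def countercurrent(D, n_transfers):
    """Distribute one unit of solute over tubes 0..n_transfers in a Craig-style
    countercurrent extraction; with equal phase volumes the fraction carried
    forward at each transfer is D / (D + 1)."""
    p = D / (D + 1.0)
    tubes = [1.0] + [0.0] * n_transfers      # all of the solute starts in tube 0
    for _ in range(n_transfers):
        moved = [p * amt for amt in tubes]   # portion carried ahead by the fresh extracting phase
        stays = [(1 - p) * amt for amt in tubes]
        tubes = [stays[0]] + [stays[i] + moved[i - 1] for i in range(1, len(tubes))]
    return tubes

analyte = countercurrent(5.0, 10)       # D = 5
interferent = countercurrent(0.5, 10)   # D = 0.5
print(max(range(11), key=lambda i: analyte[i]))      # 9: the analyte is near the end of the series
print(max(range(11), key=lambda i: interferent[i]))  # 3: the interferent lags well behind
```

Running the sketch with a longer series of stages sharpens the separation, which is exactly the picture that carries over to column chromatography.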
Tswett named the technique chromatography, combining the Greek words for “color” and “to write.” There was little interest in Tswett’s technique until Martin and Synge’s pioneering development of a theory of chromatography (see Martin, A. J. P.; Synge, R. L. M. “A New Form of Chromatogram Employing Two Liquid Phases,” Biochem. J. 1941, 35, 1358–1366). Martin and Synge were awarded the 1952 Nobel Prize in Chemistry for this work. Types of Mobile Phases and Stationary Phases The mobile phase is a liquid or a gas, and the stationary phase is a solid or a liquid film coated on a solid substrate. We often name chromatographic techniques by listing the type of mobile phase followed by the type of stationary phase. In gas–liquid chromatography, for example, the mobile phase is a gas and the stationary phase is a liquid film coated on a solid substrate. If a technique’s name includes only one phase, as in gas chromatography, it is the mobile phase. Contact Between the Mobile Phase and the Stationary Phase There are two common methods for bringing the mobile phase and the stationary phase into contact. In column chromatography we pack the stationary phase into a narrow column and pass the mobile phase through the column using gravity or by applying pressure. The stationary phase is a solid particle or a thin liquid film coated on either a solid particulate packing material or on the column’s walls. In planar chromatography the stationary phase is coated on a flat surface—typically, a glass, metal, or plastic plate. One end of the plate is placed in a reservoir that contains the mobile phase, which moves through the stationary phase by capillary action. In paper chromatography, for example, paper is the stationary phase. Interaction Between the Solute and the Stationary Phase The interaction between the solute and the stationary phase provides a third method for describing a separation (Figure 12.1.4 ). In adsorption chromatography, solutes separate based on their ability to adsorb to a solid stationary phase. In partition chromatography, the stationary phase is a thin liquid film on a solid support. Separation occurs because there is a difference in the equilibrium partitioning of solutes between the stationary phase and the mobile phase. A stationary phase that consists of a solid support with covalently attached anionic (e.g., $-\text{SO}_3^-$ ) or cationic (e.g., $-\text{N(CH}_3)_3^+$) functional groups is the basis for ion-exchange chromatography in which ionic solutes are attracted to the stationary phase by electrostatic forces. In size-exclusion chromatography the stationary phase is a porous particle or gel, with separation based on the size of the solutes. Larger solutes are unable to penetrate as deeply into the porous stationary phase and pass more quickly through the column. There are other interactions that can serve as the basis of a separation. In affinity chromatography the interaction between an antigen and an antibody, between an enzyme and a substrate, or between a receptor and a ligand forms the basis of a separation. See this chapter’s additional resources for some suggested readings. Electrophoretic Separations In chromatography, a separation occurs because there is a difference in the equilibrium partitioning of solutes between the mobile phase and the stationary phase. Equilibrium partitioning, however, is not the only basis for effecting a separation. In an electrophoretic separation, for example, charged solutes migrate under the influence of an applied potential. 
A separation occurs because of differences in the charges and the sizes of the solutes (Figure 12.1.5 ).
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/12%3A_Chromatographic_and_Electrophoretic_Methods/12.01%3A_Overview_of_Analytical_Separations.txt
Of the two methods for bringing the stationary phase and the mobile phases into contact, the most important is column chromatography. In this section we develop a general theory that we may apply to any form of column chromatography. Figure 12.2.1 provides a simple view of a liquid–solid column chromatography experiment. The sample is introduced as a narrow band at the top of the column. Ideally, the solute’s initial concentration profile is rectangular (Figure 12.2.2 a). As the sample moves down the column, the solutes begin to separate (Figure 12.2.1 b,c) and the individual solute bands begin to broaden and develop a Gaussian profile (Figure 12.2.2 b,c). If the strength of each solute’s interaction with the stationary phase is sufficiently different, then the solutes separate into individual bands (Figure 12.2.1 d and Figure 12.2.2 d). Figure 12.2.2 . An alternative view of the separation in Figure 12.2.1 showing the concentration of each solute as a function of distance down the column. We can follow the progress of the separation by collecting fractions as they elute from the column (Figure 12.2.1 e,f), or by placing a suitable detector at the end of the column. A plot of the detector’s response as a function of elution time, or as a function of the volume of mobile phase, is known as a chromatogram (Figure 12.2.3 ), and consists of a peak for each solute. There are many possible detectors that we can use to monitor the separation. Later sections of this chapter describe some of the most popular. We can characterize a chromatographic peak’s properties in several ways, two of which are shown in Figure 12.2.4 . Retention time, tr, is the time between the sample’s injection and the maximum response for the solute’s peak. A chromatographic peak’s baseline width, w, as shown in Figure 12.2.4 , is determined by extending tangent lines from the inflection points on either side of the peak through the baseline. Although usually we report tr and w using units of time, we can report them using units of volume by multiplying each by the mobile phase’s velocity, or report them in linear units by measuring distances with a ruler. For example, a solute’s retention volume,Vr, is $t_\text{r} \times u$ where u is the mobile phase’s velocity through the column. In addition to the solute’s peak, Figure 12.2.4 also shows a small peak that elutes shortly after the sample is injected into the mobile phase. This peak contains all nonretained solutes, which move through the column at the same rate as the mobile phase. The time required to elute the nonretained solutes is called the column’s void time, tm. Chromatographic Resolution The goal of chromatography is to separate a mixture into a series of chromatographic peaks, each of which constitutes a single component of the mixture. The resolution between two chromatographic peaks, RAB, is a quantitative measure of their separation, and is defined as $R_{A B}=\frac{t_{t, B}-t_{t,A}}{0.5\left(w_{B}+w_{A}\right)}=\frac{2 \Delta t_{r}}{w_{B}+w_{A}} \label{12.1}$ where B is the later eluting of the two solutes. As shown in Figure 12.2.5 , the separation of two chromatographic peaks improves with an increase in RAB. If the areas under the two peaks are identical—as is the case in Figure 12.2.5 —then a resolution of 1.50 corresponds to an overlap of only 0.13% for the two elution profiles. Because resolution is a quantitative measure of a separation’s success, it is a useful way to determine if a change in experimental conditions leads to a better separation. 
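Equation 12.1 reduces to a one-line calculation. The sketch below (a Python illustration with names of our own choosing) computes RAB from the retention times and baseline widths of two peaks, using the limonene and γ-terpinene data that appear in Example 12.2.1, which follows.

```python
def resolution(t_rA, w_A, t_rB, w_B):
    """Resolution between two chromatographic peaks, Equation 12.1 (B elutes after A)."""
    return 2 * (t_rB - t_rA) / (w_B + w_A)

# limonene: t_r = 8.36 min, w = 0.96 min; gamma-terpinene: t_r = 9.54 min, w = 0.64 min
print(round(resolution(8.36, 0.96, 9.54, 0.64), 3))   # 1.475, reported as 1.48 in Example 12.2.1
```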
Example 12.2.1 In a chromatographic analysis of lemon oil a peak for limonene has a retention time of 8.36 min with a baseline width of 0.96 min. $\gamma$-Terpinene elutes at 9.54 min with a baseline width of 0.64 min. What is the resolution between the two peaks? Solution Using Equation \ref{12.1} we find that the resolution is $R_{A B}=\frac{2 \Delta t_{r}}{w_{B}+w_{A}}=\frac{2(9.54 \text{ min}-8.36 \text{ min})}{0.64 \text{ min}+0.96 \text{ min}}=1.48 \nonumber$ Exercise 12.2.1 Figure 12.2.6 shows the separation of a two-component mixture. What is the resolution between the two components? Use a ruler to measure $\Delta t_\text{r}$, wA, and wB in millimeters. Answer Because the relationship between elution time and distance is proportional, we can measure $\Delta t_\text{r}$, wA, and wB using a ruler. My measurements are 8.5 mm for $\Delta t_\text{r}$, and 12.0 mm each for wA and wB. Using these values, the resolution is $R_{A B}=\frac{2 \Delta t_{t}}{w_{A}+w_{B}}=\frac{2(8.5 \text{ mm})}{12.0 \text{ mm}+12.0 \text{ mm}}=0.70 \nonumber$ Your measurements for $\Delta t_\text{r}$, wA, and wB will depend on the relative size of your monitor or printout; however, your value for the resolution should be similar to the answer above. Equation \ref{12.1} suggests that we can improve resolution by increasing $\Delta t_\text{r}$, or by decreasing wA and wB (Figure 12.2.7 ). To increase $\Delta t_\text{r}$ we can use one of two strategies. One approach is to adjust the separation conditions so that both solutes spend less time in the mobile phase—that is, we increase each solute’s retention factor—which provides more time to effect a separation. A second approach is to increase selectivity by adjusting conditions so that only one solute experiences a significant change in its retention time. The baseline width of a solute’s peak depends on the solutes movement within and between the mobile phase and the stationary phase, and is governed by several factors that collectively we call column efficiency. We will consider each of these approaches for improving resolution in more detail, but first we must define some terms. Solute Retention Factor Let’s assume we can describe a solute’s distribution between the mobile phase and stationary phase using the following equilibrium reaction $S_{\text{m}} \rightleftharpoons S_{\text{s}} \nonumber$ where Sm is the solute in the mobile phase and Ss is the solute in the stationary phase. Following the same approach we used in Chapter 7.7 for liquid–liquid extractions, the equilibrium constant for this reaction is an equilibrium partition coefficient, KD. $K_{D}=\frac{\left[S_{\mathrm{s}}\right]}{\left[S_\text{m}\right]} \nonumber$ This is not a trivial assumption. In this section we are, in effect, treating the solute’s equilibrium between the mobile phase and the stationary phase as if it is identical to the equilibrium in a liquid–liquid extraction. You might question whether this is a reasonable assumption. There is an important difference between the two experiments that we need to consider. In a liquid–liquid extraction, which takes place in a separatory funnel, the two phases remain in contact with each other at all times, allowing for a true equilibrium. In chromatography, however, the mobile phase is in constant motion. A solute that moves into the stationary phase from the mobile phase will equilibrate back into a different portion of the mobile phase; this does not describe a true equilibrium. 
So, we ask again: Can we treat a solute’s distribution between the mobile phase and the stationary phase as an equilibrium process? The answer is yes, if the mobile phase velocity is slow relative to the kinetics of the solute’s movement back and forth between the two phases. In general, this is a reasonable assumption. In the absence of any additional equilibrium reactions in the mobile phase or the stationary phase, KD is equivalent to the distribution ratio, D, $D=\frac{\left[S_\text{s}\right]}{\left[S_\text{m}\right]}=\frac{(\operatorname{mol} \text{S})_\text{s} / V_\text{s}}{(\operatorname{mol} \text{S})_\text{m} / V_\text{m}}=K_{D} \label{12.2}$ where Vs and Vm are the volumes of the stationary phase and the mobile phase, respectively. A conservation of mass requires that the total moles of solute remain constant throughout the separation; thus, we know that the following equation is true. $(\operatorname{mol} \text{S})_{\operatorname{tot}}=(\operatorname{mol} \text{S})_{\mathrm{m}}+(\operatorname{mol} \text{S})_\text{s} \label{12.3}$ Solving Equation \ref{12.3} for the moles of solute in the stationary phase and substituting into Equation \ref{12.2} leaves us with $D = \frac{\left\{(\text{mol S})_{\text{tot}} - (\text{mol S})_\text{m}\right\} / V_{\mathrm{s}}}{(\text{mol S})_{\mathrm{m}} / V_{\mathrm{m}}} \nonumber$ Rearranging this equation and solving for the fraction of solute in the mobile phase, fm, gives $f_\text{m} = \frac {(\text{mol S})_\text{m}} {(\text{mol S})_\text{tot}} = \frac {V_\text{m}} {DV_\text{s} + V_\text{m}} \label{12.4}$ which is identical to the result for a liquid–liquid extraction (see Chapter 7). Because we may not know the exact volumes of the stationary phase and the mobile phase, we simplify Equation \ref{12.4} by dividing both the numerator and the denominator by Vm; thus $f_\text{m} = \frac {V_\text{m}/V_\text{m}} {DV_\text{s}/V_\text{m} + V_\text{m}/V_\text{m}} = \frac {1} {DV_\text{s}/V_\text{m} + 1} = \frac {1} {1+k} \label{12.5}$ where k $k=D \times \frac{V_\text{s}}{V_\text{m}} \label{12.6}$ is the solute’s retention factor. Note that the larger the retention factor, the more the distribution ratio favors the stationary phase, leading to a more strongly retained solute and a longer retention time. Other (older) names for the retention factor are capacity factor, capacity ratio, and partition ratio, and it sometimes is given the symbol $k^{\prime}$. Keep this in mind if you are using other resources. Retention factor is the approved name from the IUPAC Gold Book. We can determine a solute’s retention factor from a chromatogram by measuring the column’s void time, tm, and the solute’s retention time, tr (see Figure 12.2.4 ). Solving Equation \ref{12.5} for k, we find that $k=\frac{1-f_\text{m}}{f_\text{m}} \label{12.7}$ Earlier we defined fm as the fraction of solute in the mobile phase. Assuming a constant mobile phase velocity, we also can define fm as $f_\text{m}=\frac{\text {time spent in the mobile phase}}{\text {total time spent in the column}}=\frac{t_\text{m}}{t_\text{r}} \nonumber$ Substituting back into Equation \ref{12.7} and rearranging leaves us with $k=\frac{1-\frac{t_\text{m}}{t_\text{r}}}{\frac{t_\text{m}}{t_\text{r}}}=\frac{t_\text{r}-t_\text{m}}{t_\text{m}}=\frac{t_\text{r}^{\prime}}{t_\text{m}} \label{12.8}$ where $t_\text{r}^{\prime}$ is the adjusted retention time. Example 12.2.2 In a chromatographic analysis of low molecular weight acids, butyric acid elutes with a retention time of 7.63 min. 
The column’s void time is 0.31 min. Calculate the retention factor for butyric acid. Solution $k_{\mathrm{but}}=\frac{t_{\mathrm{r}}-t_{\mathrm{m}}}{t_{\mathrm{m}}}=\frac{7.63 \text{ min}-0.31 \text{ min}}{0.31 \text{ min}}=23.6 \nonumber$ Exercise 12.2.2 Figure 12.2.8 is the chromatogram for a two-component mixture. Determine the retention factor for each solute assuming the sample was injected at time t = 0. Answer Because the relationship between elution time and distance is proportional, we can measure tm, tr,1, and tr,2 using a ruler. My measurements are 7.8 mm, 40.2 mm, and 51.5 mm, respectively. Using these values, the retention factors for solute A and solute B are $k_{1}=\frac{t_{\mathrm{r} 1}-t_\text{m}}{t_\text{m}}=\frac{40.2 \text{ mm}-7.8 \text{ mm}}{7.8 \text{ mm}}=4.15 \nonumber$ $k_{2}=\frac{t_{\mathrm{r} 2}-t_\text{m}}{t_\text{m}}=\frac{51.5 \text{ mm}-7.8 \text{ mm}}{7.8 \text{ mm}}=5.60 \nonumber$ Your measurements for tm, tr,1, and tr,2 will depend on the relative size of your monitor or printout; however, your values for the retention factors should be similar to the answers above. Selectivity Selectivity is a relative measure of the retention of two solutes, which we define using a selectivity factor, $\alpha$ $\alpha=\frac{k_{B}}{k_{A}}=\frac{t_{r, B}-t_{\mathrm{m}}}{t_{r, A}-t_{\mathrm{m}}} \label{12.9}$ where solute A has the smaller retention time. When two solutes elute with identical retention times, $\alpha = 1.00$; for all other conditions $\alpha > 1.00$. Example 12.2.3 In the chromatographic analysis for low molecular weight acids described in Example 12.2.2 , the retention time for isobutyric acid is 5.98 min. What is the selectivity factor for isobutyric acid and butyric acid? Solution First we must calculate the retention factor for isobutyric acid. Using the void time from Example 12.2.2 we have $k_{\mathrm{iso}}=\frac{t_{\mathrm{r}}-t_{\mathrm{m}}}{t_{\mathrm{m}}}=\frac{5.98 \text{ min}-0.31 \text{ min}}{0.31 \text{ min}}=18.3 \nonumber$ The selectivity factor, therefore, is $\alpha=\frac{k_{\text {but }}}{k_{\text {iso }}}=\frac{23.6}{18.3}=1.29 \nonumber$ Exercise 12.2.3 Determine the selectivity factor for the chromatogram in Exercise 12.2.2 . Answer Using the results from Exercise 12.2.2 , the selectivity factor is $\alpha=\frac{k_{2}}{k_{1}}=\frac{5.60}{4.15}=1.35 \nonumber$ Your answer may differ slightly due to differences in your values for the two retention factors. Column Efficiency Suppose we inject a sample that has a single component. At the moment we inject the sample it is a narrow band of finite width. As the sample passes through the column, the width of this band continually increases in a process we call band broadening. Column efficiency is a quantitative measure of the extent of band broadening. See Figure 12.2.1 and Figure 12.2.2 . When we inject the sample it has a uniform, or rectangular concentration profile with respect to distance down the column. As it passes through the column, the band broadens and takes on a Gaussian concentration profile. In their original theoretical model of chromatography, Martin and Synge divided the chromatographic column into discrete sections, which they called theoretical plates. Within each theoretical plate there is an equilibrium between the solute present in the stationary phase and the solute present in the mobile phase [Martin, A. J. P.; Synge, R. L. M. Biochem. J. 1941, 35, 1358–1366].
They described column efficiency in terms of the number of theoretical plates, N, $N=\frac{L}{H} \label{12.10}$ where L is the column’s length and H is the height of a theoretical plate. For any given column, the column efficiency improves—and chromatographic peaks become narrower—when there are more theoretical plates. If we assume that a chromatographic peak has a Gaussian profile, then the extent of band broadening is given by the peak’s variance or standard deviation. The height of a theoretical plate is the peak’s variance per unit length of the column $H=\frac{\sigma^{2}}{L} \label{12.11}$ where the standard deviation, $\sigma$, has units of distance. Because retention times and peak widths usually are measured in seconds or minutes, it is more convenient to express the standard deviation in units of time, $\tau$, by dividing $\sigma$ by the solute’s average linear velocity, $\overline{u}$, which is equivalent to dividing the distance it travels, L, by its retention time, tr. $\tau=\frac{\sigma}{\overline{u}}=\frac{\sigma t_{r}}{L} \label{12.12}$ For a Gaussian peak shape, the width at the baseline, w, is four times its standard deviation, $\tau$. $w = 4 \tau \label{12.13}$ Combining Equation \ref{12.11}, Equation \ref{12.12}, and Equation \ref{12.13} defines the height of a theoretical plate in terms of the easily measured chromatographic parameters tr and w. $H=\frac{L w^{2}}{16 t_\text{r}^{2}} \label{12.14}$ Combining Equation \ref{12.14} and Equation \ref{12.10} gives the number of theoretical plates. $N=16 \frac{t_{\mathrm{r}}^{2}}{w^{2}}=16\left(\frac{t_{\mathrm{r}}}{w}\right)^{2} \label{12.15}$ Example 12.2.4 A chromatographic analysis for the chlorinated pesticide Dieldrin gives a peak with a retention time of 8.68 min and a baseline width of 0.29 min. Calculate the number of theoretical plates. Given that the column is 2.0 m long, what is the height of a theoretical plate in mm? Solution Using Equation \ref{12.15}, the number of theoretical plates is $N=16 \frac{t_{\mathrm{r}}^{2}}{w^{2}}=16 \times \frac{(8.68 \text{ min})^{2}}{(0.29 \text{ min})^{2}}=14300 \text{ plates} \nonumber$ Solving Equation \ref{12.10} for H gives the average height of a theoretical plate as $H=\frac{L}{N}=\frac{2.0 \text{ m}}{14300 \text{ plates}} \times \frac{1000 \text{ mm}}{\mathrm{m}}=0.14 \text{ mm} / \mathrm{plate} \nonumber$ Exercise 12.2.4 For each solute in the chromatogram for Exercise 12.2.2 , calculate the number of theoretical plates and the average height of a theoretical plate. The column is 0.5 m long. Answer Because the relationship between elution time and distance is proportional, we can measure tr,1, tr,2, w1, and w2 using a ruler. My measurements are 40.2 mm, 51.5 mm, 8.0 mm, and 13.5 mm, respectively.
Using these values, the number of theoretical plates for each solute is $N_{1}=16 \frac{t_{r,1}^{2}}{w_{1}^{2}}=16 \times \frac{(40.2 \text{ mm})^{2}}{(8.0 \text{ mm})^{2}}=400 \text { theoretical plates } \nonumber$ $N_{2}=16 \frac{t_{r,2}^{2}}{w_{2}^{2}}=16 \times \frac{(51.5 \text{ mm})^{2}}{(13.5 \text{ mm})^{2}}=233 \text { theoretical plates } \nonumber$ The height of a theoretical plate for each solute is $H_{1}=\frac{L}{N_{1}}=\frac{0.500 \text{ m}}{400 \text { plates }} \times \frac{1000 \text{ mm}}{\mathrm{m}}=1.2 \text{ mm} / \mathrm{plate} \nonumber$ $H_{2}=\frac{L}{N_{2}}=\frac{0.500 \text{ m}}{233 \text { plates }} \times \frac{1000 \text{ mm}}{\mathrm{m}}=2.15 \text{ mm} / \mathrm{plate} \nonumber$ Your measurements for tr,1, tr,2, w1, and w2 will depend on the relative size of your monitor or printout; however, your values for N and for H should be similar to the answer above. It is important to remember that a theoretical plate is an artificial construct and that a chromatographic column does not contain physical plates. In fact, the number of theoretical plates depends on both the properties of the column and the solute. As a result, the number of theoretical plates for a column may vary from solute to solute. Peak Capacity One advantage of improving column efficiency is that we can separate more solutes with baseline resolution. One estimate of the number of solutes that we can separate is $n_{c}=1+\frac{\sqrt{N}}{4} \ln \frac{V_{\max }}{V_{\min }} \label{12.16}$ where nc is the column’s peak capacity, and Vmin and Vmax are the smallest and the largest volumes of mobile phase in which we can elute and detect a solute [Giddings, J. C. Unified Separation Science, Wiley-Interscience: New York, 1991]. A column with 10 000 theoretical plates, for example, can resolve no more than $n_{c}=1+\frac{\sqrt{10000}}{4} \ln \frac{30 \mathrm{mL}}{1 \mathrm{mL}}=86 \text { solutes } \nonumber$ if Vmin and Vmax are 1 mL and 30 mL, respectively. This estimate provides an upper bound on the number of solutes and may help us exclude from consideration a column that does not have enough theoretical plates to separate a complex mixture. Just because a column’s theoretical peak capacity is larger than the number of solutes, however, does not mean that a separation is feasible. In most situations the practical peak capacity is less than the theoretical peak capacity because the retention characteristics of some solutes are so similar that a separation is impossible. Nevertheless, columns with more theoretical plates, or with a greater range of possible elution volumes, are more likely to separate a complex mixture. The smallest volume we can use is the column’s void volume. The largest volume is determined either by our patience—the maximum analysis time we can tolerate—or by our inability to detect solutes because there is too much band broadening. Asymmetric Peaks Our treatment of chromatography in this section assumes that a solute elutes as a symmetrical Gaussian peak, such as that shown in Figure 12.2.4 . This ideal behavior occurs when the solute’s partition coefficient, KD $K_{\mathrm{D}}=\frac{[S_\text{s}]}{\left[S_\text{m}\right]} \nonumber$ is the same for all concentrations of solute. If this is not the case, then the chromatographic peak has an asymmetric peak shape similar to those shown in Figure 12.2.9 . The chromatographic peak in Figure 12.2.9 a is an example of peak tailing, which occurs when some sites on the stationary phase retain the solute more strongly than other sites. 
Figure 12.2.9 b, which is an example of peak fronting, most often is the result of overloading the column with sample. As shown in Figure 12.2.9 a, we can report a peak’s asymmetry by drawing a horizontal line at 10% of the peak’s maximum height and measuring the distance from each side of the peak to a line drawn vertically through the peak’s maximum. The asymmetry factor, T, is defined as $T=\frac{b}{a} \nonumber$ The number of theoretical plates for an asymmetric peak shape is approximately $N \approx \frac{41.7 \times \frac{t_{r}^{2}}{\left(w_{0.1}\right)^{2}}}{T+1.25}=\frac{41.7 \times \frac{t_{r}^{2}}{(a+b)^{2}}}{T+1.25} \nonumber$ where w0.1 is the width at 10% of the peak’s height [Foley, J. P.; Dorsey, J. G. Anal. Chem. 1983, 55, 730–737]. Asymmetric peaks have fewer theoretical plates, and the more asymmetric the peak the smaller the number of theoretical plates. For example, the following table gives values for N for a solute eluting with a retention time of 10.0 min and a width at 10% of the peak’s height of 1.00 min.
b | a | T | N
0.5 | 0.5 | 1.00 | 1850
0.6 | 0.4 | 1.50 | 1520
0.7 | 0.3 | 2.33 | 1160
0.8 | 0.2 | 4.00 | 790
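The relationships developed in this section lend themselves to quick numerical checks. The following Python sketch is illustrative only and is not part of the original text; the helper function names are ours. It reproduces the calculations from Examples 12.2.1–12.2.4 and the peak capacity estimate from Equation \ref{12.16}.

```python
# Minimal sketch (assumed helper functions, not from the text) that reproduces
# the worked examples in this section.
import math

def resolution(tr_A, tr_B, w_A, w_B):
    """Resolution between two peaks from retention times and baseline widths."""
    return 2 * (tr_B - tr_A) / (w_A + w_B)

def retention_factor(tr, tm):
    """Retention factor, k = (tr - tm)/tm."""
    return (tr - tm) / tm

def selectivity(k_A, k_B):
    """Selectivity factor, alpha = kB/kA, where solute A elutes first."""
    return k_B / k_A

def plates(tr, w):
    """Number of theoretical plates for a Gaussian peak, N = 16 (tr/w)^2."""
    return 16 * (tr / w) ** 2

def plate_height(L, N):
    """Height of a theoretical plate, H = L/N."""
    return L / N

def peak_capacity(N, V_min, V_max):
    """Peak capacity, nc = 1 + (sqrt(N)/4) ln(Vmax/Vmin)."""
    return 1 + (math.sqrt(N) / 4) * math.log(V_max / V_min)

# Example 12.2.1: limonene and gamma-terpinene
print(resolution(8.36, 9.54, 0.96, 0.64))          # ~1.48

# Examples 12.2.2 and 12.2.3: butyric and isobutyric acid
k_but = retention_factor(7.63, 0.31)               # ~23.6
k_iso = retention_factor(5.98, 0.31)               # ~18.3
print(k_but, k_iso, selectivity(k_iso, k_but))     # alpha ~1.29

# Example 12.2.4: Dieldrin on a 2.0 m column
N = plates(8.68, 0.29)                             # ~14300 plates
print(N, plate_height(2.0, N) * 1000, "mm/plate")  # ~0.14 mm/plate

# Peak capacity for N = 10000, Vmin = 1 mL, Vmax = 30 mL
print(peak_capacity(10000, 1, 30))                 # ~86 solutes
```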
Now that we have defined the solute retention factor, selectivity, and column efficiency we are able to consider how they affect the resolution of two closely eluting peaks. Because the two peaks have similar retention times, it is reasonable to assume that their peak widths are nearly identical. If the number of theoretical plates is the same for all solutes—not strictly true, but not a bad assumption—then from equation 12.2.15, the ratio tr/w is a constant. If two solutes have similar retention times, then their peak widths must be similar. Equation 12.2.1, therefore, becomes $R_{A B}=\frac{t_{r, B}-t_{r, A}}{0.5\left(w_{B}+w_{A}\right)} \approx \frac{t_{r, B}-t_{r, A}}{0.5\left(2 w_{B}\right)}=\frac{t_{r, B}-t_{r, A}}{w_{B}} \label{12.1}$ where B is the later eluting of the two solutes. Solving equation 12.2.15 for wB and substituting into Equation \ref{12.1} leaves us with the following result. $R_{A B}=\frac{\sqrt{N_{B}}}{4} \times \frac{t_{r, B}-t_{r, A}}{t_{r, B}} \label{12.2}$ Rearranging equation 12.2.8 provides us with the following equations for the retention times of solutes A and B. $t_{r, A}=k_{A} t_{\mathrm{m}}+t_{\mathrm{m}} \quad \text { and } \quad t_{\mathrm{r}, B}=k_{B} t_{\mathrm{m}}+t_{\mathrm{m}} \nonumber$ After substituting these equations into Equation \ref{12.2} and simplifying, we have $R_{A B}=\frac{\sqrt{N_{B}}}{4} \times \frac{k_{B}-k_{A}}{1+k_{B}} \nonumber$ Finally, we can eliminate solute A’s retention factor by substituting in equation 12.2.9. After rearranging, we end up with the following equation for the resolution between the chromatographic peaks for solutes A and B. $R_{A B}=\frac{\sqrt{N_{B}}}{4} \times \frac{\alpha-1}{\alpha} \times \frac{k_{B}}{1+k_{B}} \label{12.3}$ In addition to resolution, another important factor in chromatography is the amount of time needed to elute a pair of solutes, which we can approximate using the retention time for solute B. $t_{r, B}=\frac{16 R_{AB}^{2} H}{u} \times\left(\frac{\alpha}{\alpha-1}\right)^{2} \times \frac{\left(1+k_{B}\right)^{3}}{k_{B}^{2}} \label{12.4}$ where u is the mobile phase’s velocity. Although Equation \ref{12.3} is useful for considering how a change in N, $\alpha$, or k qualitatively affects resolution—which suits our purpose here—it is less useful for making accurate quantitative predictions of resolution, particularly for smaller values of N and for larger values of R. For more accurate predictions use the equation $R_{A B}=\frac{\sqrt{N}}{4} \times(\alpha-1) \times \frac{k_{B}}{1+k_{\mathrm{avg}}} \nonumber$ where kavg is (kA + kB)/2. For a derivation of this equation and for a deeper discussion of resolution in column chromatography, see Foley, J. P. “Resolution Equations for Column Chromatography,” Analyst, 1991, 116, 1275-1279. Equation \ref{12.3} and Equation \ref{12.4} contain terms that correspond to column efficiency, selectivity, and the solute retention factor. We can vary these terms, more or less independently, to improve resolution and analysis time. The first term, which is a function of the number of theoretical plates (for Equation \ref{12.3}) or the height of a theoretical plate (for Equation \ref{12.4}), accounts for the effect of column efficiency. The second term is a function of $\alpha$ and accounts for the influence of column selectivity. Finally, the third term in both equations is a function of kB and accounts for the effect of solute B’s retention factor.
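To see how Equation \ref{12.3} behaves numerically, the short Python sketch below (an illustration of ours, not part of the original text) evaluates the resolution for a given N, $\alpha$, and kB, and rearranges the same equation to estimate the number of theoretical plates needed to reach a target resolution, which is the calculation that underlies Table 12.3.1 later in this section.

```python
# Sketch (not from the text): evaluate the resolution equation
#   R = (sqrt(N)/4) * ((alpha - 1)/alpha) * (kB/(1 + kB))
# and its rearrangement for the plates needed to reach a target resolution.
import math

def resolution(N, alpha, kB):
    return (math.sqrt(N) / 4) * ((alpha - 1) / alpha) * (kB / (1 + kB))

def plates_needed(R, alpha, kB):
    return 16 * R**2 * (alpha / (alpha - 1))**2 * ((1 + kB) / kB)**2

# With N = 10000, alpha = 1.05, and kB = 2.0 the predicted resolution is ~0.79 ...
print(resolution(10000, 1.05, 2.0))

# ... and a resolution of 1.25 with alpha = 1.05 and kB = 2.0 requires ~24800
# theoretical plates, in agreement with Table 12.3.1.
print(plates_needed(1.25, 1.05, 2.0))
```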
A discussion of how we can use these parameters to improve resolution is the subject of the remainder of this section. Using the Retention Factor to Optimize Resolution One of the simplest ways to improve resolution is to adjust the retention factor for solute B. If all other terms in Equation \ref{12.3} remain constant, an increase in kB will improve resolution. As shown by the green curve in Figure 12.3.1 , however, the improvement is greatest if the initial value of kB is small. Once kB exceeds a value of approximately 10, a further increase produces only a marginal improvement in resolution. For example, if the original value of kB is 1, increasing its value to 10 gives an 82% improvement in resolution; a further increase to 15 provides a net improvement in resolution of only 87.5%. Any improvement in resolution from increasing the value of kB generally comes at the cost of a longer analysis time. The red curve in Figure 12.3.1 shows the relative change in the retention time for solute B as a function of its retention factor. Note that the minimum retention time is for kB = 2. Increasing kB from 2 to 10, for example, approximately doubles solute B’s retention time. The relationship between retention factor and analysis time in Figure 12.3.1 works to our advantage if a separation produces an acceptable resolution with a large kB. In this case we may be able to decrease kB with little loss in resolution and with a significantly shorter analysis time. To increase kB without changing selectivity, $\alpha$, any change to the chromatographic conditions must result in a general, nonselective increase in the retention factor for both solutes. In gas chromatography, we can accomplish this by decreasing the column’s temperature. Because a solute’s vapor pressure is smaller at lower temperatures, it spends more time in the stationary phase and takes longer to elute. In liquid chromatography, the easiest way to increase a solute’s retention factor is to use a mobile phase that is a weaker solvent. When the mobile phase has a lower solvent strength, solutes spend proportionally more time in the stationary phase and take longer to elute. Adjusting the retention factor to improve the resolution between one pair of solutes may lead to unacceptably long retention times for other solutes. For example, suppose we need to analyze a four-component mixture with baseline resolution and with a run-time of less than 20 min. Our initial choice of conditions gives the chromatogram in Figure 12.3.2 a. Although we successfully separate components 3 and 4 within 15 min, we fail to separate components 1 and 2. Adjusting conditions to improve the resolution for the first two components by increasing k2 provides a good separation of all four components, but the run-time is too long (Figure 12.3.2 b). This problem of finding a single set of acceptable operating conditions is known as the general elution problem. One solution to the general elution problem is to make incremental adjustments to the retention factor as the separation takes place. At the beginning of the separation we set the initial chromatographic conditions to optimize the resolution for early eluting solutes. As the separation progresses, we adjust the chromatographic conditions to decrease the retention factor—and, therefore, to decrease the retention time—for each of the later eluting solutes (Figure 12.3.2 c). In gas chromatography this is accomplished by temperature programming. 
The column’s initial temperature is selected such that the first solutes to elute are resolved fully. The temperature is then increased, either continuously or in steps, to bring off later eluting components with both an acceptable resolution and a reasonable analysis time. In liquid chromatography the same effect is obtained by increasing the solvent’s eluting strength. This is known as a gradient elution. We will have more to say about each of these in later sections of this chapter. Using Selectivity to Optimize Resolution A second approach to improving resolution is to adjust the selectivity, $\alpha$. In fact, for $\alpha \approx 1$ usually it is not possible to improve resolution by adjusting the solute retention factor, kB, or the column efficiency, N. A change in $\alpha$ often has a more dramatic effect on resolution than a change in kB. For example, changing $\alpha$ from 1.1 to 1.5, while holding constant all other terms, improves resolution by 267%. In gas chromatography, we adjust $\alpha$ by changing the stationary phase; in liquid chromatography, we change the composition of the mobile phase to adjust $\alpha$. To change $\alpha$ we need to selectively adjust individual solute retention factors. Figure 12.3.3 shows one possible approach for the liquid chromatographic separation of a mixture of substituted benzoic acids. Because the retention times of a compound’s weak acid form and its weak base form are different, its retention time will vary with the pH of the mobile phase, as shown in Figure 12.3.3 a. The intersections of the curves in Figure 12.3.3 a show pH values where two solutes co-elute. For example, at a pH of 3.8 terephthalic acid and p-hydroxybenzoic acid elute as a single chromatographic peak. Figure 12.3.3 a shows that there are many pH values where some separation is possible. To find the optimum separation, we plot $\alpha$ for each pair of solutes. The red, green, and orange curves in Figure 12.3.3 b show the variation in $\alpha$ with pH for the three pairs of solutes that are hardest to separate (for all other pairs of solutes, $\alpha$ > 2 at all pH levels). The blue shading shows windows of pH values in which at least a partial separation is possible—this figure is sometimes called a window diagram—and the highest point in each window gives the optimum pH within that range. The best overall separation is the highest point in any window, which, for this example, is a pH of 3.5. Because the analysis time at this pH is more than 40 min (Figure 12.3.3 a), choosing a pH between 4.1–4.4 might produce an acceptable separation with a much shorter analysis time. Let’s use benzoic acid, C6H5COOH, to explain why pH can affect a solute’s retention time. The separation uses an aqueous mobile phase and a nonpolar stationary phase. At lower pHs, benzoic acid predominately is in its weak acid form, C6H5COOH, and partitions easily into the nonpolar stationary phase. At more basic pHs, however, benzoic acid is in its weak base form, C6H5COO–. Because it now carries a charge, its solubility in the mobile phase increases and its solubility in the nonpolar stationary phase decreases. As a result, it spends more time in the mobile phase and has a shorter retention time. Although the usual way to adjust pH is to change the concentration of buffering agents, it also is possible to adjust pH by changing the column’s temperature because a solute’s pKa value is temperature-dependent; for a review, see Gagliardi, L. G.; Tascon, M.; Castells, C. B. “Effect of Temperature on Acid–Base Equilibria in Separation Techniques: A Review,” Anal. Chim. Acta, 2015, 889, 35–57. Using Column Efficiency to Optimize Resolution A third approach to improving resolution is to adjust the column’s efficiency by increasing the number of theoretical plates, N. If we have values for kB and $\alpha$, then we can use Equation \ref{12.3} to calculate the number of theoretical plates for any resolution. Table 12.3.1 provides some representative values. For example, if $\alpha$ = 1.05 and kB = 2.0, a resolution of 1.25 requires approximately 24 800 theoretical plates. If our column provides only 12 400 plates, half of what is needed, then a separation is not possible. How can we double the number of theoretical plates? The easiest way is to double the length of the column, although this also doubles the analysis time. A better approach is to cut the height of a theoretical plate, H, in half, providing the desired resolution without changing the analysis time. Even better, if we can decrease H by more than 50%, it may be possible to achieve the desired resolution with an even shorter analysis time by also decreasing kB or $\alpha$.
Table 12.3.1 . Minimum Number of Theoretical Plates to Achieve Desired Resolution for Selected Values of kB and $\alpha$
kB | RAB = 1.00, $\alpha = 1.05$ | RAB = 1.00, $\alpha = 1.10$ | RAB = 1.25, $\alpha = 1.05$ | RAB = 1.25, $\alpha = 1.10$ | RAB = 1.50, $\alpha = 1.05$ | RAB = 1.50, $\alpha = 1.10$
0.5 | 63500 | 17400 | 99200 | 27200 | 143000 | 39200
1.0 | 28200 | 7740 | 44100 | 12100 | 63500 | 17400
1.5 | 19600 | 5380 | 30600 | 8400 | 44100 | 12100
2.0 | 15900 | 4360 | 24800 | 6810 | 35700 | 9800
3.0 | 12500 | 3440 | 19600 | 5380 | 28200 | 7740
5.0 | 10200 | 2790 | 15900 | 4360 | 22900 | 6270
10.0 | 8540 | 2340 | 13300 | 3660 | 19200 | 5270
To decrease the height of a theoretical plate we need to understand the experimental factors that affect band broadening. There are several theoretical treatments of band broadening. We will consider one approach that includes four contributions: variations in path lengths, longitudinal diffusion, mass transfer in the stationary phase, and mass transfer in the mobile phase. Multiple Paths: Variations in Path Length As solute molecules pass through the column they travel paths that differ in length. Because of this difference in path length, two solute molecules that enter the column at the same time will exit the column at different times. The result, as shown in Figure 12.3.4 , is a broadening of the solute’s profile on the column. The contribution of multiple paths to the height of a theoretical plate, Hp, is $H_{p}=2 \lambda d_{p} \label{12.5}$ where dp is the average diameter of the particulate packing material and $\lambda$ is a constant that accounts for the consistency of the packing. A smaller range of particle sizes and a more consistent packing produce a smaller value for $\lambda$. For a column without packing material, Hp is zero and there is no contribution to band broadening from multiple paths. An inconsistent packing creates channels that allow some solute molecules to travel quickly through the column. It also can create pockets that temporarily trap some solute molecules, slowing their progress through the column. A more uniform packing minimizes these problems. Longitudinal Diffusion The second contribution to band broadening is the result of the solute’s longitudinal diffusion in the mobile phase. Solute molecules are in constant motion, diffusing from regions of higher solute concentration to regions where the concentration of solute is smaller.
The result is an increase in the solute’s band width (Figure 12.3.5 ). The contribution of longitudinal diffusion to the height of a theoretical plate, Hd, is $H_{d}=\frac{2 \gamma D_{m}}{u} \label{12.6}$ where Dm is the solute’s diffusion coefficient in the mobile phase, u is the mobile phase’s velocity, and $\gamma$ is a constant related to the efficiency of column packing. Note that the effect of Hd on band broadening is inversely proportional to the mobile phase velocity: a higher velocity provides less time for longitudinal diffusion. Because a solute’s diffusion coefficient is larger in the gas phase than in a liquid phase, longitudinal diffusion is a more serious problem in gas chromatography. Mass Transfer As the solute passes through the column it moves between the mobile phase and the stationary phase. We call this movement between phases mass transfer. As shown in Figure 12.3.6 , band broadening occurs if the solute’s movement within the mobile phase or within the stationary phase is not fast enough to maintain an equilibrium in its concentration between the two phases. On average, a solute molecule in the mobile phase moves farther down the column than expected before it passes into the stationary phase. A solute molecule in the stationary phase, on the other hand, takes longer than expected to move back into the mobile phase. The contributions of mass transfer in the stationary phase, Hs, and mass transfer in the mobile phase, Hm, are given by the following equations $H_{s}=\frac{q k d_{f}^{2}}{(1+k)^{2} D_{s}} u \label{12.7}$ $H_{m}=\frac{\text{fn}\left(d_{p}^{2}, d_{c}^{2}\right)}{D_{m}} u \label{12.8}$ where df is the thickness of the stationary phase, dc is the diameter of the column, Ds and Dm are the diffusion coefficients for the solute in the stationary phase and the mobile phase, k is the solute’s retention factor, and q is a constant related to the column packing material. Although the exact form of Hm is not known, it is a function of particle size and column diameter. Note that the effect of Hs and Hm on band broadening is directly proportional to the mobile phase velocity because a smaller velocity provides more time for mass transfer. The abbreviation fn in Equation \ref{12.8} means “is a function of.” Putting It All Together The height of a theoretical plate is a summation of the contributions from each of the terms affecting band broadening. $H=H_{p}+H_{d}+H_{s}+H_{m} \label{12.9}$ An alternative form of this equation is the van Deemter equation $H=A+\frac{B}{u}+C u \label{12.10}$ which emphasizes the importance of the mobile phase’s velocity. In the van Deemter equation, A accounts for the contribution of multiple paths (Hp), B/u accounts for the contribution of longitudinal diffusion (Hd), and Cu accounts for the combined contribution of mass transfer in the stationary phase and in the mobile phase (Hs and Hm). There is some disagreement on the best equation for describing the relationship between plate height and mobile phase velocity [Hawkes, S. J. J. Chem. Educ. 1983, 60, 393–398]. In addition to the van Deemter equation, other equations include $H=\frac{B}{u}+\left(C_s+C_{m}\right) u \nonumber$ where Cs and Cm are the mass transfer terms for the stationary phase and the mobile phase and $H=A u^{1 / 3}+\frac{B}{u}+C u \nonumber$ All three equations, and others, have been used to characterize chromatographic systems, with no single equation providing the best explanation in every case [Kennedy, R. T.; Jorgenson, J. W. Anal. Chem. 1989, 61, 1128–1135].
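Because the van Deemter equation (Equation \ref{12.10}) is a simple function of u, it is easy to explore numerically. The sketch below uses illustrative, assumed values for A, B, and C (they are not taken from the text) to tabulate H over a range of mobile phase velocities and to locate the optimum velocity, which for this form of the equation occurs at $u = \sqrt{B/C}$ with a minimum plate height of $A + 2\sqrt{BC}$.

```python
# Sketch of the van Deemter equation H = A + B/u + C*u with assumed,
# illustrative coefficients (these are not values from the text).
import math

A = 0.05   # multiple-path term, mm
B = 0.50   # longitudinal-diffusion term, mm * (mm/s)
C = 0.02   # mass-transfer term, mm / (mm/s)

def plate_height(u):
    """Height of a theoretical plate as a function of mobile phase velocity (mm/s)."""
    return A + B / u + C * u

# Setting dH/du = -B/u**2 + C = 0 gives the optimum velocity and minimum H.
u_opt = math.sqrt(B / C)            # = 5.0 mm/s with these coefficients
H_min = A + 2 * math.sqrt(B * C)    # = 0.25 mm

for u in (1, 2, u_opt, 10, 20):
    print(f"u = {u:5.1f}  H = {plate_height(u):.3f} mm")
print(f"optimum velocity ~{u_opt:.1f} mm/s, minimum plate height ~{H_min:.2f} mm")
```

At low velocities the B/u term dominates and at high velocities the Cu term dominates, which is why the plot of H versus u shows a minimum, as described in the next paragraph.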
To increase the number of theoretical plates without increasing the length of the column, we need to decrease one or more of the terms in Equation \ref{12.9}. The easiest way to decrease H is to adjust the velocity of the mobile phase. For smaller mobile phase velocities, column efficiency is limited by longitudinal diffusion, and for higher mobile phase velocities efficiency is limited by the two mass transfer terms. As shown in Figure 12.3.7 —which uses the van Deemter equation—the optimum mobile phase velocity corresponds to the minimum in a plot of H as a function of u. The remaining parameters that affect the terms in Equation \ref{12.9} are functions of the column’s properties and suggest other possible approaches to improving column efficiency. For example, both Hp and Hm are a function of the size of the particles used to pack the column. Decreasing particle size, therefore, is another useful method for improving efficiency. For a more detailed discussion of ways to assess the quality of a column, see Desmet, G.; Cabooter, D.; Broeckhoven, K. “Graphical Data Representation Methods to Assess the Quality of LC Columns,” Anal. Chem. 2015, 87, 8593–8602. Perhaps the most important advancement in chromatography columns is the development of open-tubular, or capillary columns. These columns have very small diameters (dc ≈ 50–500 μm) and contain no packing material (dp = 0). Instead, the capillary column’s interior wall is coated with a thin film of the stationary phase. Plate height is reduced because the contribution to H from Hp (Equation \ref{12.5}) disappears and the contribution from Hm (Equation \ref{12.8}) becomes smaller. Because the column does not contain any solid packing material, it takes less pressure to move the mobile phase through the column, which allows for longer columns. The combination of a longer column and a smaller height for a theoretical plate increases the number of theoretical plates by approximately $100 \times$. Capillary columns are not without disadvantages. Because they are much narrower than packed columns, they require a significantly smaller amount of sample, which may be difficult to inject reproducibly. Another approach to improving resolution is to use thin films of stationary phase, which decreases the contribution to H from Hs (Equation \ref{12.7}). Note, as well, that decreasing the particle size carries its own cost: the smaller the particles, the more pressure is needed to push the mobile phase through the column and, as a result, for any form of chromatography there is a practical limit to particle size.
In gas chromatography (GC) we inject the sample, which may be a gas or a liquid, into a gaseous mobile phase (often called the carrier gas). The mobile phase carries the sample through a packed or a capillary column that separates the sample’s components based on their ability to partition between the mobile phase and the stationary phase. Figure 12.4.1 shows an example of a typical gas chromatograph, which consists of several key components: a supply of compressed gas for the mobile phase; a heated injector, which rapidly volatilizes the components in a liquid sample; a column, which is placed within an oven whose temperature we can control during the separation; and a detector to monitor the eluent as it comes off the column. Let’s consider each of these components. Mobile Phase The most common mobile phases for gas chromatography are He, Ar, and N2, which have the advantage of being chemically inert toward both the sample and the stationary phase. The choice of carrier gas often is determined by the needs of the instrument’s detector. For a packed column the mobile phase flow rate usually is 25–150 mL/min. The typical flow rate for a capillary column is 1–25 mL/min. Chromatographic Columns There are two broad classes of chromatographic columns: packed columns and capillary columns. In general, a packed column can handle larger samples and a capillary column can separate more complex mixtures. Packed Columns Packed columns are constructed from glass, stainless steel, copper, or aluminum, and typically are 2–6 m in length with internal diameters of 2–4 mm. The column is filled with a particulate solid support, with particle diameters ranging from 37–44 μm to 250–354 μm. Figure 12.4.2 shows a typical example of a packed column. The most widely used particulate support is diatomaceous earth, which is composed of the silica skeletons of diatoms. These particles are very porous, with surface areas ranging from 0.5–7.5 m2/g, which provides ample contact between the mobile phase and the stationary phase. When hydrolyzed, the surface of a diatomaceous earth contains silanol groups (–SiOH) that serve as active sites for adsorbing solute molecules in gas-solid chromatography (GSC). In gas-liquid chromatography (GLC), we coat the packing material with a liquid stationary phase. To prevent uncoated packing material from adsorbing solutes, which degrades the quality of the separation, surface silanols are deactivated by reacting them with dimethyldichlorosilane and rinsing with an alcohol—typically methanol—before coating the particles with stationary phase. The packed column in Figure 12.4.2 , for example, has approximately 1800 plates/m, or a total of approximately 3600 theoretical plates. If we assume a Vmax/Vmin ≈ 50, then it has a peak capacity (equation 12.2.16) of $n_{c}=1+\frac{\sqrt{3600}}{4} \ln (50) \approx 60 \nonumber$ Capillary Columns A capillary, or open tubular, column is constructed from fused silica and is coated with a protective polymer coating. Columns range from 15–100 m in length with an internal diameter of approximately 150–300 μm. Figure 12.4.3 shows an example of a typical capillary column. Capillary columns are of three principal types. In a wall-coated open tubular column (WCOT) a thin layer of stationary phase, typically 0.25 μm thick, is coated on the capillary’s inner wall. In a porous-layer open tubular column (PLOT), a porous solid support—alumina, silica gel, and molecular sieves are typical examples—is attached to the capillary’s inner wall.
A support-coated open tubular column (SCOT) is a PLOT column that includes a liquid stationary phase. Figure 12.4.4 shows the differences between these types of capillary columns. A capillary column provides a significant improvement in separation efficiency because it has more theoretical plates per meter and is longer than a packed column. For example, the capillary column in Figure 12.4.3 has almost 4300 plates/m, or a total of 129 000 theoretical plates. If we assume a Vmax/Vmin ≈ 50, then it has a peak capacity of approximately 350. On the other hand, a packed column can handle a larger sample. Because of its smaller diameter, a capillary column requires a smaller sample, typically less than $10^{-2}$ μL. Stationary Phases for Gas-Liquid Chromatography Elution order in gas–liquid chromatography depends on two factors: the boiling point of the solutes, and the interaction between the solutes and the stationary phase. If a mixture’s components have significantly different boiling points, then the choice of stationary phase is less critical. If two solutes have similar boiling points, then a separation is possible only if the stationary phase selectively interacts with one of the solutes. As a general rule, nonpolar solutes are separated more easily when using a nonpolar stationary phase, and polar solutes are easier to separate when using a polar stationary phase. There are several important criteria for choosing a stationary phase: it must not react with the solutes, it must be thermally stable, it must have a low volatility, and it must have a polarity that is appropriate for the sample’s components. Table 12.4.1 summarizes the properties of several popular stationary phases.
Table 12.4.1 . Selected Examples of Stationary Phases for Gas-Liquid Chromatography
stationary phase | polarity | trade name | temperature limit (°C) | representative applications
squalane | nonpolar | Squalane | 150 | low-boiling aliphatic hydrocarbons
Apiezon L | nonpolar | Apiezon L | 300 | amides, fatty acid methyl esters, terpenoids
polydimethyl siloxane | slightly polar | SE-30 | 300–350 | alkaloids, amino acid derivatives, drugs, pesticides, phenols, steroids
phenylmethyl polysiloxane (50% phenyl, 50% methyl) | moderately polar | OV-17 | 375 | alkaloids, drugs, pesticides, polyaromatic hydrocarbons, polychlorinated biphenyls
trifluoropropylmethyl polysiloxane (50% trifluoropropyl, 50% methyl) | moderately polar | OV-210 | 275 | alkaloids, amino acid derivatives, drugs, halogenated compounds, ketones
cyanopropylphenylmethyl polysiloxane (50% cyanopropyl, 50% phenylmethyl) | polar | OV-225 | 275 | nitriles, pesticides, steroids
polyethylene glycol | polar | Carbowax 20M | 225 | aldehydes, esters, ethers, phenols
Many stationary phases have the general structure shown in Figure 12.4.5 a. A stationary phase of polydimethyl siloxane, in which all the –R groups are methyl groups, –CH3, is nonpolar and often makes a good first choice for a new separation. The order of elution when using polydimethyl siloxane usually follows the boiling points of the solutes, with lower boiling solutes eluting first. Replacing some of the methyl groups with other substituents increases the stationary phase’s polarity and provides greater selectivity. For example, replacing 50% of the –CH3 groups with phenyl groups, –C6H5, produces a slightly polar stationary phase. Increasing polarity is provided by substituting trifluoropropyl, –C3H6CF3, and cyanopropyl, –C3H6CN, functional groups, or by using a stationary phase of polyethylene glycol (Figure 12.4.5 b).
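The difference in separating power between the packed column and the capillary column described above is easy to quantify with the peak capacity expression from equation 12.2.16. The short sketch below is illustrative only; it simply repeats the two estimates, assuming Vmax/Vmin ≈ 50 in both cases, as the text does.

```python
# Sketch: compare peak capacities for the packed and capillary columns
# described in the text (Vmax/Vmin is assumed to be ~50 for both).
import math

def peak_capacity(N, volume_ratio):
    return 1 + (math.sqrt(N) / 4) * math.log(volume_ratio)

print(peak_capacity(3600, 50))     # packed column, ~60 solutes
print(peak_capacity(129000, 50))   # capillary column, ~350 solutes
```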
An important problem with all liquid stationary phases is their tendency to elute, or bleed, from the column when it is heated. The temperature limits in Table 12.4.1 minimize this loss of stationary phase. Capillary columns with bonded or cross-linked stationary phases provide superior stability. A bonded stationary phase is attached chemically to the capillary’s silica surface. Cross-linking, which is done after the stationary phase is in the capillary column, links together separate polymer chains to provide greater stability. Another important consideration is the thickness of the stationary phase. From equation 12.3.7 we know that separation efficiency improves with thinner films of stationary phase. The most common thickness is 0.25 μm, although a thicker film is useful for highly volatile solutes, such as gases, because it has a greater capacity for retaining such solutes. Thinner films are used when separating low volatility solutes, such as steroids. A few stationary phases take advantage of chemical selectivity. The most notable are stationary phases that contain chiral functional groups, which are used to separate enantiomers [Hinshaw, J. V. LC•GC 1993, 11, 644–648]. Sample Introduction Three factors determine how we introduce a sample to the gas chromatograph. First, all of the sample’s constituents must be volatile. Second, the analytes must be present at an appropriate concentration. Finally, the physical process of injecting the sample must not degrade the separation. Each of these needs is considered in this section. Preparing a Volatile Sample Not every sample can be injected directly into a gas chromatograph. To move through the column, the sample’s constituents must be sufficiently volatile. A solute of low volatility, for example, may be retained by the column and continue to elute during the analysis of subsequent samples. A nonvolatile solute will condense at the top of the column, degrading the column’s performance. We can separate a sample’s volatile analytes from its nonvolatile components using any of the extraction techniques described in Chapter 7. A liquid–liquid extraction of analytes from an aqueous matrix into methylene chloride or another organic solvent is a common choice. Solid-phase extractions also are used to remove a sample’s nonvolatile components. An attractive approach to isolating analytes is a solid-phase microextraction (SPME). In one approach, which is illustrated in Figure 12.4.6 , a fused-silica fiber is placed inside a syringe needle. The fiber, which is coated with a thin film of an adsorbent material, such as polydimethyl siloxane, is lowered into the sample by depressing a plunger and is exposed to the sample for a predetermined time. After withdrawing the fiber into the needle, it is transferred to the gas chromatograph for analysis. Two additional methods for isolating volatile analytes are a purge-and-trap and headspace sampling. In a purge-and-trap, we bubble an inert gas, such as He or N2, through the sample, releasing—or purging—the volatile compounds. These compounds are carried by the purge gas through a trap that contains an absorbent material, such as Tenax, where they are retained. Heating the trap and back-flushing with carrier gas transfers the volatile compounds to the gas chromatograph. In headspace sampling we place the sample in a closed vial with an overlying air space.
After allowing time for the volatile analytes to equilibrate between the sample and the overlying air, we use a syringe to extract a portion of the vapor phase and inject it into the gas chromatograph. Alternatively, we can sample the headspace with an SPME. Thermal desorption is a useful method for releasing volatile analytes from solids. We place a portion of the solid in a glass-lined, stainless steel tube. After purging with carrier gas to remove any O2 that might be present, we heat the sample. Volatile analytes are swept from the tube by an inert gas and carried to the GC. Because volatilization is not a rapid process, the volatile analytes often are concentrated at the top of the column by cooling the column inlet below room temperature, a process known as cryogenic focusing. Once volatilization is complete, the column inlet is heated rapidly, releasing the analytes to travel through the column. The reason for removing O2 is to prevent the sample from undergoing an oxidation reaction when it is heated. To analyze a nonvolatile analyte we must convert it to a volatile form. For example, amino acids are not sufficiently volatile to analyze directly by gas chromatography. Reacting an amino acid, such as valine, with 1-butanol and acetyl chloride produces an esterified amino acid. Subsequent treatment with trifluoroacetic acid gives the amino acid’s volatile N-trifluoroacetyl-n-butyl ester derivative. Adjusting the Analyte's Concentration If an analyte’s concentration is too small to give an adequate signal, then we must concentrate the analyte before we inject the sample into the gas chromatograph. A side benefit of many extraction methods is that they often concentrate the analytes. Volatile organic materials isolated from an aqueous sample by a purge-and-trap, for example, are concentrated by as much as $1000 \times$. If an analyte is too concentrated, it is easy to overload the column, resulting in peak fronting (see Figure 12.2.9) and a poor separation. In addition, the analyte’s concentration may exceed the detector’s linear response. Injecting less sample or diluting the sample with a volatile solvent, such as methylene chloride, are two possible solutions to this problem. Injecting the Sample In Chapter 12.3 we examined several explanations for why a solute’s band increases in width as it passes through the column, a process we called band broadening. We can introduce an additional source of band broadening if we fail to inject the sample into the minimum possible volume of mobile phase. There are two principal sources of this precolumn band broadening: injecting the sample into a moving stream of mobile phase and injecting a liquid sample instead of a gaseous sample. The design of a gas chromatograph’s injector helps minimize these problems. An example of a simple injection port for a packed column is shown in Figure 12.4.7 . The top of the column fits within a heated injector block, with carrier gas entering from the bottom. The sample is injected through a rubber septum using a microliter syringe, such as the one shown in Figure 12.4.8 . Injecting the sample directly into the column minimizes band broadening because it mixes the sample with the smallest possible amount of carrier gas. The injector block is heated to a temperature at least 50°C above the boiling point of the least volatile solute, which ensures a rapid vaporization of the sample’s components.
Because a capillary column’s volume is significantly smaller than that for a packed column, it requires a different style of injector to avoid overloading the column with sample. Figure 12.4.9 shows a schematic diagram of a typical split/splitless injector for use with a capillary column. In a split injection we inject the sample through a rubber septum using a microliter syringe. Instead of injecting the sample directly into the column, it is injected into a glass liner where it mixes with the carrier gas. At the split point, a small fraction of the carrier gas and sample enters the capillary column with the remainder exiting through the split vent. By controlling the flow rate of the carrier gas as it enters the injector, and its flow rate through the septum purge and the split vent, we can control the fraction of sample that enters the capillary column, typically 0.1–10%. For example, if the carrier gas flow rate is 50 mL/min, and the flow rates for the septum purge and the split vent are 2 mL/min and 47 mL/min, respectively, then the flow rate through the column is 1 mL/min (= 50 – 2 – 47). The ratio of sample entering the column is 1/50, or 2%. In a splitless injection, which is useful for trace analysis, we close the split vent and allow all the carrier gas that passes through the glass liner to enter the column—this allows virtually all the sample to enter the column. Because the flow rate through the injector is low, significant precolumn band broadening is a problem. Holding the column’s temperature approximately 20–25°C below the solvent’s boiling point allows the solvent to condense at the entry to the capillary column, forming a barrier that traps the solutes. After allowing the solutes to concentrate, the column’s temperature is increased and the separation begins. For samples that decompose easily, an on-column injection may be necessary. In this method the sample is injected directly into the column without heating. The column temperature is then increased, volatilizing the sample with as low a temperature as is practical. Temperature Control Control of the column’s temperature is critical to attaining a good separation when using gas chromatography. For this reason the column is placed inside a thermostated oven (see Figure 12.4.1 ). In an isothermal separation we maintain the column at a constant temperature. To increase the interaction between the solutes and the stationary phase, the temperature usually is set slightly below that of the lowest-boiling solute. One difficulty with an isothermal separation is that a temperature that favors the separation of a low-boiling solute may lead to an unacceptably long retention time for a higher-boiling solute. Temperature programming provides a solution to this problem. At the beginning of the analysis we set the column’s initial temperature below that for the lowest-boiling solute. As the separation progresses, we slowly increase the temperature at either a uniform rate or in a series of steps. Detectors for Gas Chromatography The final part of a gas chromatograph is the detector. The ideal detector has several desirable features: a low detection limit, a linear response over a wide range of solute concentrations (which makes quantitative work easier), sensitivity for all solutes or selectivity for a specific class of solutes, and an insensitivity to a change in flow rate or temperature.
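The flow balance for the split injection described earlier in this section is simple bookkeeping, as the short sketch below illustrates. The flow rates are the ones used in the example above; the function name is ours and is purely illustrative.

```python
# Sketch: flow balance for a split injection. The column flow is whatever
# carrier gas is not lost through the septum purge or the split vent.
def split_injection(carrier, septum_purge, split_vent):
    column_flow = carrier - septum_purge - split_vent
    fraction_on_column = column_flow / carrier
    return column_flow, fraction_on_column

flow, fraction = split_injection(carrier=50, septum_purge=2, split_vent=47)  # mL/min
print(flow, f"{fraction:.0%}")   # 1 mL/min through the column; 2% of the sample reaches it
```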
Thermal Conductivity Detector (TCD) One of the earliest gas chromatography detectors takes advantage of the mobile phase’s thermal conductivity. As the mobile phase exits the column it passes over a tungsten-rhenium wire filament (see Figure 12.4.10 ). The filament’s electrical resistance depends on its temperature, which, in turn, depends on the thermal conductivity of the mobile phase. Because of its high thermal conductivity, helium is the mobile phase of choice when using a thermal conductivity detector (TCD). Thermal conductivity, as the name suggests, is a measure of how easily a substance conducts heat. A gas with a high thermal conductivity moves heat away from the filament—and, thus, cools the filament—more quickly than does a gas with a low thermal conductivity. When a solute elutes from the column, the thermal conductivity of the mobile phase in the TCD cell decreases and the temperature of the wire filament, and thus its resistance, increases. A reference cell, through which only the mobile phase passes, corrects for any time-dependent variations in flow rate, pressure, or electrical power, all of which affect the filament’s resistance. Because all solutes affect the mobile phase’s thermal conductivity, the thermal conductivity detector is a universal detector. Another advantage is the TCD’s linear response over a concentration range spanning $10^4$–$10^5$ orders of magnitude. The detector also is non-destructive, which allows us to recover analytes using a postdetector cold trap. One significant disadvantage of the TCD is its poor detection limit for most analytes. Flame Ionization Detector (FID) The combustion of an organic compound in an H2/air flame results in a flame that contains electrons and organic cations, presumably CHO+. Applying a potential of approximately 300 volts across the flame creates a small current of roughly $10^{-9}$ to $10^{-12}$ amps. When amplified, this current provides a useful analytical signal. This is the basis of the popular flame ionization detector, a schematic diagram of which is shown in Figure 12.4.11 . Most carbon atoms—except those in carbonyl and carboxylic groups—generate a signal, which makes the FID an almost universal detector for organic compounds. Most inorganic compounds and many gases, such as H2O and CO2, are not detected, which makes the FID a useful detector for the analysis of organic analytes in atmospheric and aqueous environmental samples. Advantages of the FID include a detection limit that is approximately two to three orders of magnitude smaller than that for a thermal conductivity detector, and a linear response over $10^6$–$10^7$ orders of magnitude in the amount of analyte injected. The sample, of course, is destroyed when using a flame ionization detector. Electron Capture Detector (ECD) The electron capture detector is an example of a selective detector. As shown in Figure 12.4.12 , the detector consists of a $\beta$-emitter, such as 63Ni. The emitted electrons ionize the mobile phase, usually N2, generating a standing current between a pair of electrodes. When a solute with a high affinity for capturing electrons elutes from the column, the current decreases, which serves as the signal. The ECD is highly selective toward solutes with electronegative functional groups, such as halogens and nitro groups, and is relatively insensitive to amines, alcohols, and hydrocarbons. Although its detection limit is excellent, its linear range extends over only about two orders of magnitude. A $\beta$-particle is an electron.
Mass Spectrometer (MS) A mass spectrometer is an instrument that ionizes a gaseous molecule using sufficient energy that the resulting ion breaks apart into smaller ions. Because these ions have different mass-to-charge ratios, it is possible to separate them using a magnetic field or an electrical field. The resulting mass spectrum contains both quantitative and qualitative information about the analyte. Figure 12.4.13 shows a mass spectrum for toluene. Figure 12.4.14 shows a block diagram of a typical gas chromatography-mass spectrometer (GC–MS) instrument. The effluent from the column enters the mass spectrometer’s ion source in a manner that eliminates the majority of the carrier gas. In the ionization chamber the remaining molecules—a mixture of carrier gas, solvent, and solutes—undergo ionization and fragmentation. The mass spectrometer’s mass analyzer separates the ions by their mass-to-charge ratio and a detector counts the ions and displays the mass spectrum. There are several options for monitoring a chromatogram when using a mass spectrometer as the detector. The most common method is to continuously scan the entire mass spectrum and report the total signal for all ions that reach the detector during each scan. This total ion scan provides universal detection for all analytes. We can achieve some degree of selectivity by monitoring one or more specific mass-to-charge ratios, a process called selective-ion monitoring. A mass spectrometer provides excellent detection limits, typically 25 fg to 100 pg, with a linear range of $10^5$ orders of magnitude. Because we continuously record the mass spectrum of the column’s eluent, we can go back and examine the mass spectrum for any time increment. This is a distinct advantage for GC–MS because we can use the mass spectrum to help identify a mixture’s components. For more details on mass spectrometry see Introduction to Mass Spectrometry by Michael Samide and Olujide Akinbo, a resource that is part of the Analytical Sciences Digital Library. Other Detectors Two additional detectors are similar in design to a flame ionization detector. In the flame photometric detector, optical emission from phosphorus and sulfur provides a detector selective for compounds that contain these elements. The thermionic detector responds to compounds that contain nitrogen or phosphorus. A Fourier transform infrared spectrophotometer (FT–IR) also can serve as a detector. In GC–FT–IR, effluent from the column flows through an optical cell constructed from a 10–40 cm Pyrex tube with an internal diameter of 1–3 mm. The cell’s interior surface is coated with a reflecting layer of gold. Multiple reflections of the source radiation as it is transmitted through the cell increase the optical path length through the sample. As is the case with GC–MS, an FT–IR detector continuously records the column eluent’s spectrum, which allows us to examine the IR spectrum for any time increment. See Section 10.3 for a discussion of FT-IR spectroscopy and instrumentation. Quantitative Applications Gas chromatography is widely used for the analysis of a diverse array of samples in environmental, clinical, pharmaceutical, biochemical, forensic, food science and petrochemical laboratories. Table 12.4.2 provides some representative examples of applications. Table 12.4.2 .
Representative Applications of Gas Chromatography
area | applications
environmental analysis | greenhouse gases (CO2, CH4, NOx) in air; pesticides in water, wastewater, and soil; vehicle emissions; trihalomethanes in drinking water
clinical analysis | drugs; blood alcohols
forensic analysis | analysis of arson accelerants; detection of explosives
consumer products | volatile organics in spices and fragrances; trace organics in whiskey; monomers in latex paint
petrochemical and chemical industry | purity of solvents; refinery gas; composition of gasoline
Quantitative Calculations In a GC analysis the area under the peak is proportional to the amount of analyte injected onto the column. A peak’s area is determined by integration, which usually is handled by the instrument’s computer or by an electronic integrating recorder. If two peaks are resolved fully, the determination of their respective areas is straightforward. Before electronic integrating recorders and computers, two methods were used to find the area under a curve. One method used a manual planimeter; as you use the planimeter to trace an object’s perimeter, it records the area. A second approach for finding a peak’s area is the cut-and-weigh method. The chromatogram is recorded on a piece of paper and each peak of interest is cut out and weighed. Assuming the paper is uniform in thickness and density of fibers, the ratio of weights for two peaks is the same as the ratio of areas. Of course, this approach destroys your chromatogram. Overlapping peaks, however, require a choice between one of several options for dividing up the area shared by the two peaks (Figure 12.4.15 ). Which method we use depends on the relative size of the two peaks and their resolution. In some cases, the use of peak heights provides more accurate results [(a) Bicking, M. K. L. Chromatography Online, April 2006; (b) Bicking, M. K. L. Chromatography Online, June 2006]. For quantitative work we need to establish a calibration curve that relates the detector’s response to the analyte’s concentration. If the injection volume is identical for every standard and sample, then an external standardization provides both accurate and precise results. Unfortunately, even under the best conditions the relative precision for replicate injections may differ by 5%; often it is substantially worse. For quantitative work that requires high accuracy and precision, the use of internal standards is recommended. To review the method of internal standards, see Chapter 5.3. Example 12.4.1 Marriott and Carpenter report the following data for five replicate injections of a mixture that contains 1% v/v methyl isobutyl ketone and 1% v/v p-xylene in dichloromethane [Marriott, P. J.; Carpenter, P. D. J. Chem. Educ. 1996, 73, 96–99].
injection | peak | peak area (arb. units)
I | 1 | 48075
I | 2 | 78112
II | 1 | 85829
II | 2 | 135404
III | 1 | 84136
III | 2 | 132332
IV | 1 | 71681
IV | 2 | 112889
V | 1 | 58054
V | 2 | 91287
Assume that p-xylene (peak 2) is the analyte, and that methyl isobutyl ketone (peak 1) is the internal standard. Determine the 95% confidence interval for a single-point standardization with and without using the internal standard. Solution For a single-point external standardization we ignore the internal standard and determine the relationship between the peak area for p-xylene, A2, and the concentration, C2, of p-xylene. $A_{2}=k C_{2} \nonumber$ Substituting the known concentration for p-xylene (1% v/v) and the appropriate peak areas, gives the following values for the constant k.
$78112 \quad 135404 \quad 132332 \quad 112889 \quad 91287 \nonumber$

The average value for k is 110 000 with a standard deviation of 25 100 (a relative standard deviation of 22.8%). The 95% confidence interval is

$\mu=\overline{X} \pm \frac{t s}{\sqrt{n}}=110000 \pm \frac{(2.78)(25100)}{\sqrt{5}}=110000 \pm 31200 \nonumber$

For an internal standardization, the relationship between the analyte’s peak area, A2, the internal standard’s peak area, A1, and their respective concentrations, C2 and C1, is

$\frac{A_{2}}{A_{1}}=k \frac{C_{2}}{C_{1}} \nonumber$

Substituting in the known concentrations and the appropriate peak areas gives the following values for the constant k.

$1.5917 \quad 1.5776 \quad 1.5728 \quad 1.5749 \quad 1.5724 \nonumber$

The average value for k is 1.5779 with a standard deviation of 0.0080 (a relative standard deviation of 0.507%). The 95% confidence interval is

$\mu=\overline{X} \pm \frac{t s}{\sqrt{n}}=1.5779 \pm \frac{(2.78)(0.0080)}{\sqrt{5}}=1.5779 \pm 0.0099 \nonumber$

Although there is a substantial variation in the individual peak areas for this set of replicate injections, the internal standard compensates for these variations, providing a more accurate and precise calibration.

Exercise 12.4.1

Figure 12.4.16 shows chromatograms for five standards and for one sample. Each standard and sample contains the same concentration of an internal standard, which is 2.50 mg/mL. For the five standards, the concentrations of analyte are 0.20 mg/mL, 0.40 mg/mL, 0.60 mg/mL, 0.80 mg/mL, and 1.00 mg/mL, respectively. Determine the concentration of analyte in the sample by (a) ignoring the internal standard and creating an external standards calibration curve, and by (b) creating an internal standard calibration curve. For each approach, report the analyte’s concentration and the 95% confidence interval. Use peak heights instead of peak areas.

Answer

The following table summarizes my measurements of the peak heights for each standard and the sample, and their ratio (although your absolute values for peak heights will differ from mine, depending on the size of your monitor or printout, your relative peak height ratios should be similar to mine).

[standard] (mg/mL)  peak height of internal standard (mm)  peak height of analyte (mm)  peak height ratio
0.20  35  7  0.20
0.40  41  16  0.39
0.60  44  27  0.61
0.80  48  39  0.81
1.00  41  41  1.00
sample  39  21  0.54

Figure (a) shows the calibration curve and the calibration equation when we ignore the internal standard. Substituting the sample’s peak height into the calibration equation gives the analyte’s concentration in the sample as 0.49 mg/mL. The 95% confidence interval is ±0.24 mg/mL. The calibration curve shows quite a bit of scatter in the data because of uncertainty in the injection volumes. Figure (b) shows the calibration curve and the calibration equation when we include the internal standard. Substituting the sample’s peak height ratio into the calibration equation gives the analyte’s concentration in the sample as 0.54 mg/mL. The 95% confidence interval is ±0.04 mg/mL.

To review the use of Excel or R for regression calculations and confidence intervals, see Chapter 5.5.

The data for this exercise were created so that the analyte’s actual concentration is 0.55 mg/mL. Given the resolution of my ruler’s scale, my answer is pretty reasonable. Your measurements may be slightly different, but your answers should be close to the actual values.
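The arithmetic in Example 12.4.1 is easy to script. The following Python sketch (an illustration, not part of the original experiment) recalculates the single-point values of k, their mean and standard deviation, and the 95% confidence intervals for both the external and the internal standardization; the value 2.78 is the t-value for four degrees of freedom at the 95% confidence level.

```python
import math
import statistics

# peak areas from Example 12.4.1 (arbitrary units)
mibk = [49075, 85829, 84136, 71681, 58054]       # peak 1: internal standard (1% v/v)
xylene = [78112, 135404, 132332, 112889, 91287]  # peak 2: analyte, p-xylene (1% v/v)

t_95 = 2.78  # t-value for 4 degrees of freedom at the 95% confidence level

def summarize(k_values, label):
    """Report the mean k, its 95% confidence interval, and the relative standard deviation."""
    k_bar = statistics.mean(k_values)
    s = statistics.stdev(k_values)
    ci = t_95 * s / math.sqrt(len(k_values))
    print(f"{label}: k = {k_bar:.6g} +/- {ci:.6g} (RSD = {100 * s / k_bar:.3g}%)")

# external standardization: A2 = k*C2 with C2 = 1% v/v, so each k is simply A2
summarize(xylene, "external standard")

# internal standardization: A2/A1 = k*(C2/C1) with C2 = C1 = 1% v/v, so each k is A2/A1
summarize([a2 / a1 for a1, a2 in zip(mibk, xylene)], "internal standard")
```

Running the sketch reproduces the values above: approximately 110 000 ± 31 200 for the external standardization and 1.5779 ± 0.0099 when the internal standard is used.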
Qualitative Applications

In addition to a quantitative analysis, we also can use chromatography to identify the components of a mixture. As noted earlier, when using an FT–IR or a mass spectrometer as the detector we have access to the eluent’s full spectrum for any retention time. By interpreting the spectrum or by searching against a library of spectra, we can identify the analyte responsible for each chromatographic peak.

In addition to identifying the component responsible for a particular chromatographic peak, we also can use the saved spectra to evaluate peak purity. If only one component is responsible for a chromatographic peak, then the spectra should be identical throughout the peak’s elution. If a spectrum at the beginning of the peak’s elution is different from a spectrum taken near the end of the peak’s elution, then at least two components are co-eluting.

When using a nonspectroscopic detector, such as a flame ionization detector, we must find another approach if we wish to identify the components of a mixture. One approach is to spike a sample with the suspected compound and look for an increase in peak height. We also can compare a peak’s retention time to the retention time for a known compound if we use identical operating conditions.

Because a compound’s retention times on two identical columns are not likely to be the same—differences in packing efficiency, for example, will affect a solute’s retention time on a packed column—creating a table of standard retention times is not possible.

Kovat’s retention index provides one solution to the problem of matching retention times. Under isothermal conditions, the adjusted retention times for normal alkanes increase logarithmically. Kovat defined the retention index, I, for a normal alkane as 100 times the number of carbon atoms. For example, the retention index is 400 for butane, C4H10, and 500 for pentane, C5H12. To determine a compound’s retention index, Icpd, we use the following formula

$I_{cpd} = 100 \times \frac {\log t_{r,cpd}^{\prime} - \log t_{r,x}^{\prime}} {\log t_{r, x+1}^{\prime} - \log t_{r,x}^{\prime}} + I_x \label{12.1}$

where $t_{r,cpd}^{\prime}$ is the compound’s adjusted retention time, $t_{r,x}^{\prime}$ and $t_{r,x+1}^{\prime}$ are the adjusted retention times for the normal alkanes that elute immediately before the compound and immediately after the compound, respectively, and Ix is the retention index for the normal alkane that elutes immediately before the compound. A compound’s retention index for a particular set of chromatographic conditions—stationary phase, mobile phase, column type, column length, temperature, etc.—is reasonably consistent from day-to-day and between different columns and instruments.

Tables of Kovat’s retention indices are available; see, for example, the NIST Chemistry Webbook. A search for toluene returns 341 values of I for over 20 different stationary phases, and for both packed columns and capillary columns.

Example 12.4.2

In a separation of a mixture of hydrocarbons the following adjusted retention times are measured: 2.23 min for propane, 5.71 min for isobutane, and 6.67 min for butane. What is the Kovat’s retention index for each of these hydrocarbons?

Solution

Kovat’s retention index for a normal alkane is 100 times the number of carbons; thus, for propane, I = 300 and for butane, I = 400. To find Kovat’s retention index for isobutane we use Equation \ref{12.1}.
$I_\text{isobutane} =100 \times \frac{\log (5.71)-\log (2.23)}{\log (6.67)-\log (2.23)}+300=386 \nonumber$ Exercise 12.4.2 When using a column with the same stationary phase as in Example 12.4.2 , you find that the retention times for propane and butane are 4.78 min and 6.86 min, respectively. What is the expected retention time for isobutane? Answer Because we are using the same column we can assume that isobutane’s retention index of 386 remains unchanged. Using Equation \ref{12.1}, we have $386=100 \times \frac{\log x-\log (4.78)}{\log (6.86)-\log (4.78)}+300 \nonumber$ where x is the retention time for isobutane. Solving for x, we find that $0.86=\frac{\log x-\log (4.78)}{\log (6.86)-\log (4.78)} \nonumber$ $0.135=\log x-0.679 \nonumber$ $0.814=\log x \nonumber$ $x=6.52 \nonumber$ the retention time for isobutane is 6.5 min. The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of trihalomethanes in drinking water provides an instructive example of a typical procedure. The description here is based on a Method 6232B in Standard Methods for the Examination of Water and Wastewater, 20th Ed., American Public Health Association: Washing- ton, DC, 1998. Representative Method 12.4.1: Determination of Trihalomethanes in Drinking Water Description of Method Trihalomethanes, such as chloroform, CHCl3, and bromoform, CHBr3, are found in most chlorinated waters. Because chloroform is a suspected carcinogen, the determination of trihalomethanes in public drinking water supplies is of considerable importance. In this method the trihalomethanes CHCl3, CHBrCl2, CHBr2Cl, and CHBr3 are isolated using a liquid–liquid extraction with pentane and determined using a gas chromatograph equipped with an electron capture detector. Procedure Collect the sample in a 40-mL glass vial equipped with a screw-cap lined with a TFE-faced septum. Fill the vial until it overflows, ensuring that there are no air bubbles. Add 25 mg of ascorbic acid as a reducing agent to quench the further production of trihalomethanes. Seal the vial and store the sample at 4oC for no longer than 14 days. Prepare a standard stock solution for each trihalomethane by placing 9.8 mL of methanol in a 10-mL volumetric flask. Let the flask stand for 10 min, or until all surfaces wetted with methanol are dry. Weigh the flask to the nearest ±0.1 mg. Using a 100-μL syringe, add 2 or more drops of trihalomethane to the volumetric flask, allowing each drop to fall directly into the methanol. Reweigh the flask before diluting to volume and mixing. Transfer the solution to a 40-mL glass vial equipped with a TFE-lined screw-top and report the concentration in μg/mL. Store the stock solutions at –10 to –20oC and away from the light. Prepare a multicomponent working standard from the stock standards by making appropriate dilutions of the stock solution with methanol in a volumetric flask. Choose concentrations so that calibration standards (see below) require no more than 20 μL of working standard per 100 mL of water. Using the multicomponent working standard, prepare at least three, but preferably 5–7 calibration standards. At least one standard must be near the detection limit and the standards must bracket the expected concentration of trihalomethanes in the samples. 
Using an appropriate volumetric flask, prepare the standards by injecting at least 10 μL of the working standard below the surface of the water and dilute to volume. Gently mix each standard three times only. Discard the solution in the neck of the volumetric flask and then transfer the remaining solution to a 40-mL glass vial with a TFE-lined screw-top. If the standard has a headspace, it must be analyzed within 1 hr; standards without a headspace may be held for up to 24 hr. Prepare an internal standard by dissolving 1,2-dibromopentane in hexane. Add a sufficient amount of this solution to pentane to give a final concentration of 30 μg 1,2-dibromopentane/L. To prepare the calibration standards and samples for analysis, open the screw top vial and remove 5 mL of the solution. Recap the vial and weigh to the nearest ±0.1 mg. Add 2.00 mL of pentane (with the internal standard) to each vial and shake vigorously for 1 min. Allow the two phases to separate for 2 min and then use a glass pipet to transfer at least 1 mL of the pentane (the upper phase) to a 1.8-mL screw top sample vial equipped with a TFE septum, and store at 4oC until you are ready to inject them into the GC. After emptying, rinsing, and drying the sample’s original vial, weigh it to the nearest ±0.1 mg and calculate the sample’s weight to ±0.1 g. If the density is 1.0 g/mL, then the sample’s weight is equivalent to its volume. Inject a 1–5 μL aliquot of the pentane extracts into a GC equipped with a 2-mm ID, 2-m long glass column packed with a stationary phase of 10% squalane on a packing material of 80/100 mesh Chromosorb WAW. Operate the column at 67oC and a flow rate of 25 mL/min. A variety of other columns can be used. Another option, for example, is a 30-m fused silica column with an internal diameter of 0.32 mm and a 1 µm coating of the stationary phase DB-1. A linear flow rate of 20 cm/s is used with the following temperature program: hold for 5 min at 35oC; increase to 70oC at 10oC/min; increase to 200oC at 20oC/min. Questions 1. A simple liquid–liquid extraction rarely extracts 100% of the analyte. How does this method account for incomplete extractions? Because we use the same extraction procedure for the samples and the standards, we reasonably expect that the extraction efficiency is the same for all samples and standards; thus, the relative amount of analyte in any two samples or standards is unaffected by an incomplete extraction. 2. Water samples are likely to contain trace amounts of other organic compounds, many of which will extract into pentane along with the trihalomethanes. A short, packed column, such as the one used in this method, generally does not do a particularly good job of resolving chromatographic peaks. Why do we not need to worry about these other compounds? An electron capture detector responds only to compounds, such as the trihalomethanes, that have electronegative functional groups. Because an electron capture detector will not respond to most of the potential interfering compounds, the chromatogram will have relatively few peaks other than those for the trihalomethanes and the internal standard. 3. Predict the order in which the four analytes elute from the GC column. Retention time should follow the compound’s boiling points, eluting from the lowest boiling point to the highest boiling points. The expected elution order is CHCl3 (61.2oC), CHCl2Br (90oC), CHClBr2 (119oC), and CHBr3 (149.1oC). 4. 
Although chloroform is an analyte, it also is an interferent because it is present at trace levels in the air. Any chloroform present in the laboratory air, for example, may enter the sample by diffusing through the sample vial’s silicon septum. How can we determine whether samples are contaminated in this manner? A sample blank of trihalomethane-free water is kept with the samples at all times. If the sample blank shows no evidence for chloroform, then we can safely assume that the samples also are free from contamination. 5. Why is it necessary to collect samples without a headspace (a layer of air that overlays the liquid) in the sample vial? Because trihalomethanes are volatile, the presence of a headspace allows for the loss of analyte from the sample to the headspace, resulting in a negative determinate error. 6. In preparing the stock solution for each trihalomethane, the procedure specifies that we add two or more drops of the pure compound by dropping them into a volumetric flask that contains methanol. When preparing the calibration standards, however, the working standard must be injected below the surface of the methanol. Explain the reason for this difference. When preparing a stock solution, the potential loss of the volatile trihalomethane is unimportant because we determine its concentration by weight after adding it to the methanol and diluting to volume. When we prepare the calibration standard, however, we must ensure that the addition of trihalomethane is quantitative; thus, we inject it below the surface to avoid the potential loss of analyte. Evaluation Scale of Operation Gas chromatography is used to analyze analytes present at levels ranging from major to ultratrace components. Depending on the detector, samples with major and minor analytes may need to be diluted before analysis. The thermal conductivity and flame ionization detectors can handle larger amounts of analyte; other detectors, such as an electron capture detector or a mass spectrometer, require substantially smaller amounts of analyte. Although the injection volume for gas chromatography is quite small—typically about a microliter—the amount of available sample must be sufficient that the injection is a representative subsample. For a trace analyte, the actual amount of injected analyte is often in the picogram range. Using Representative Method 12.4.1 as an example, a 3.0-μL injection of 1 μg/L CHCl3 is equivalent to 15 pg of CHCl3, assuming a 100% extraction efficiency. Accuracy The accuracy of a gas chromatographic method varies substantially from sample-to-sample. For routine samples, accuracies of 1–5% are common. For analytes present at very low concentration levels, for samples with complex matrices, or for samples that require significant processing before analysis, accuracy may be substantially poorer. In the analysis for trihalomethanes described in Representative Method 12.4.1, for example, determinate errors as large as ±25% are possible. Precision The precision of a gas chromatographic analysis includes contributions from sampling, sample preparation, and the instrument. The relative standard deviation due to the instrument typically is 1–5%, although it can be significantly higher. The principal limitations are detector noise, which affects the determination of peak area, and the reproducibility of injection volumes. In quantitative work, the use of an internal standard compensates for any variability in injection volumes. 
Sensitivity

In a gas chromatographic analysis, sensitivity is determined by the detector’s characteristics. Of particular importance for quantitative work is the detector’s linear range; that is, the range of concentrations over which a calibration curve is linear. Detectors with a wide linear range, such as the thermal conductivity detector and the flame ionization detector, can be used to analyze samples over a wide range of concentrations without adjusting operating conditions. Other detectors, such as the electron capture detector, have a much narrower linear range.

Selectivity

Because they combine separation with analysis, chromatographic methods provide excellent selectivity. By adjusting conditions it usually is possible to design a separation so that the analytes elute by themselves, even when the mixture is complex. Additional selectivity is obtained by using a detector, such as the electron capture detector, that does not respond to all compounds.

Time, Cost, and Equipment

Analysis time can vary from several minutes for samples that contain only a few constituents to more than an hour for more complex samples. Preliminary sample preparation may substantially increase the analysis time. Instrumentation for gas chromatography ranges in price from inexpensive (a few thousand dollars) to expensive (>$50,000). The more expensive models are designed for capillary columns, include a variety of injection options, and use more sophisticated detectors, such as a mass spectrometer, or include multiple detectors. Packed columns typically cost <$200, and the cost of a capillary column is typically $300–$1000.
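Before turning to liquid chromatography, note that the Kovat’s retention index calculation of Equation \ref{12.1} is simple to automate. The short Python sketch below is an illustration only; the function names are arbitrary. It reproduces Example 12.4.2 and Exercise 12.4.2 using the adjusted retention times given there.

```python
import math

def kovats_index(t_cpd, t_x, t_x1, carbons_x):
    """Kovat's retention index from adjusted retention times (isothermal conditions).

    t_cpd     -- adjusted retention time of the compound
    t_x, t_x1 -- adjusted retention times of the n-alkanes eluting just before and just after it
    carbons_x -- number of carbons in the earlier-eluting n-alkane (its index is 100*carbons_x)
    """
    return (100 * (math.log10(t_cpd) - math.log10(t_x))
            / (math.log10(t_x1) - math.log10(t_x)) + 100 * carbons_x)

def retention_time(index, t_x, t_x1, carbons_x):
    """Solve Equation 12.1 for the adjusted retention time, given a known retention index."""
    frac = (index - 100 * carbons_x) / 100
    return 10 ** (frac * (math.log10(t_x1) - math.log10(t_x)) + math.log10(t_x))

# Example 12.4.2: isobutane bracketed by propane (C3, 2.23 min) and butane (C4, 6.67 min)
print(round(kovats_index(5.71, 2.23, 6.67, 3)))        # 386

# Exercise 12.4.2: predict isobutane's retention time on a second column (same stationary phase)
print(round(retention_time(386, 4.78, 6.86, 3), 2))    # 6.52 min
```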
In high-performance liquid chromatography (HPLC) we inject the sample, which is in solution form, into a liquid mobile phase. The mobile phase carries the sample through a packed or capillary column that separates the sample’s components based on their ability to partition between the mobile phase and the stationary phase. Figure 12.5.1 shows an example of a typical HPLC instrument, which has several key components: reservoirs that store the mobile phase; a pump for pushing the mobile phase through the system; an injector for introducing the sample; a column for separating the sample into its component parts; and a detector for monitoring the eluent as it comes off the column. Let’s consider each of these components. A solute’s retention time in HPLC is determined by its interaction with the stationary phase and the mobile phase. There are several different types of solute/stationary phase interactions, including liquid–solid adsorption, liquid–liquid partitioning, ion-exchange, and size-exclusion. This chapter deals exclusively with HPLC separations based on liquid–liquid partitioning. Other forms of liquid chromatography receive consideration in Chapter 12.6. HPLC Columns An HPLC typically includes two columns: an analytical column, which is responsible for the separation, and a guard column that is placed before the analytical column to protect it from contamination. Analytical Columns The most common type of HPLC column is a stainless steel tube with an internal diameter between 2.1 mm and 4.6 mm and a length between 30 mm and 300 mm (Figure 12.5.2 ). The column is packed with 3–10 µm porous silica particles with either an irregular or a spherical shape. Typical column efficiencies are 40000–60000 theoretical plates/m. Assuming a Vmax/Vmin of approximately 50, a 25-cm column with 50 000 plates/m has 12 500 theoretical plates and a peak capacity of 110. Capillary columns use less solvent and, because the sample is diluted to a lesser extent, produce larger signals at the detector. These columns are made from fused silica capillaries with internal diameters from 44–200 μm and lengths of 50–250 mm. Capillary columns packed with 3–5 μm particles have been prepared with column efficiencies of up to 250 000 theoretical plates [Novotony, M. Science, 1989, 246, 51–57]. One limitation to a packed capillary column is the back pressure that develops when pumping the mobile phase through the small interstitial spaces between the particulate micron-sized packing material (Figure 12.5.3 ). Because the tubing and fittings that carry the mobile phase have pressure limits, a higher back pressure requires a lower flow rate and a longer analysis time. Monolithic columns, in which the solid support is a single, porous rod, offer column efficiencies equivalent to a packed capillary column while allowing for faster flow rates. A monolithic column—which usually is similar in size to a conventional packed column, although smaller, capillary columns also are available—is prepared by forming the mono- lithic rod in a mold and covering it with PTFE tubing or a polymer resin. Monolithic rods made of a silica-gel polymer typically have macropores with diameters of approximately 2 μm and mesopores—pores within the macropores—with diameters of approximately 13 nm [Cabrera, K. Chromatography Online, April 1, 2008]. Guard Columns Two problems tend to shorten the lifetime of an analytical column. 
First, solutes that bind irreversibly to the stationary phase degrade the column’s performance by decreasing the amount of stationary phase available for effecting a separation. Second, particulate material injected with the sample may clog the analytical column. To minimize these problems we place a guard column before the analytical column. A Guard column usually contains the same particulate packing material and stationary phase as the analytical column, but is significantly shorter and less expensive—a length of 7.5 mm and a cost one-tenth of that for the corresponding analytical column is typical. Because they are intended to be sacrificial, guard columns are replaced regularly. If you look closely at Figure 12.5.1 , you will see the small guard column just above the analytical column. Stationary Phases for Liquid-Liquid Chromatography In liquid–liquid chromatography the stationary phase is a liquid film coated on a packing material, typically 3–10 μm porous silica particles. Because the stationary phase may be partially soluble in the mobile phase, it may elute, or bleed from the column over time. To prevent the loss of stationary phase, which shortens the column’s lifetime, it is bound covalently to the silica particles. Bonded stationary phases are created by reacting the silica particles with an organochlorosilane of the general form Si(CH3)2RCl, where R is an alkyl or substituted alkyl group. To prevent unwanted interactions between the solutes and any remaining –SiOH groups, Si(CH3)3Cl is used to convert unreacted sites to $–\text{SiOSi(CH}_3)_3$; such columns are designated as end-capped. The properties of a stationary phase depend on the organosilane’s alkyl group. If R is a polar functional group, then the stationary phase is polar. Examples of polar stationary phases include those where R contains a cyano (–C2H4CN), a diol (–C3H6OCH2CHOHCH2OH), or an amino (–C3H6NH2) functional group. Because the stationary phase is polar, the mobile phase is a nonpolar or a moderately polar solvent. The combination of a polar stationary phase and a nonpolar mobile phase is called normal- phase chromatography. In reversed-phase chromatography, which is the more common form of HPLC, the stationary phase is nonpolar and the mobile phase is polar. The most common nonpolar stationary phases use an organochlorosilane where the R group is an n-octyl (C8) or n-octyldecyl (C18) hydrocarbon chain. Most reversed-phase separations are carried out using a buffered aqueous solution as a polar mobile phase, or using other polar solvents, such as methanol and acetonitrile. Because the silica substrate may undergo hydrolysis in basic solutions, the pH of the mobile phase must be less than 7.5. It seems odd that the more common form of liquid chromatography is identified as reverse-phase instead of normal phase. You might recall that one of the earliest examples of chromatography was Mikhail Tswett’s separation of plant pigments using a polar column of calcium carbonate and a nonpolar mobile phase of petroleum ether. The assignment of normal and reversed, therefore, is all about precedence. Mobile Phases The elution order of solutes in HPLC is governed by polarity. For a normal-phase separation, a solute of lower polarity spends proportionally less time in the polar stationary phase and elutes before a solute that is more polar. Given a particular stationary phase, retention times in normal-phase HPLC are controlled by adjusting the mobile phase’s properties. 
For example, if the resolution between two solutes is poor, switching to a less polar mobile phase keeps the solutes on the column for a longer time and provides more opportunity for their separation. In reversed-phase HPLC the order of elution is the opposite that in a normal-phase separation, with more polar solutes eluting first. Increasing the polarity of the mobile phase leads to longer retention times. Shorter retention times require a mobile phase of lower polarity. Choosing a Mobile Phase: Using the Polarity Index There are several indices that help in selecting a mobile phase, one of which is the polarity index [Snyder, L. R.; Glajch, J. L.; Kirkland, J. J. Practical HPLC Method Development, Wiley-Inter- science: New York, 1988]. Table 12.5.1 provides values of the polarity index, $P^{\prime}$, for several common mobile phases, where larger values of $P^{\prime}$ correspond to more polar solvents. Mixing together two or more mobile phases—assuming they are miscible—creates a mobile phase of intermediate polarity. For example, a binary mobile phase made by combining solvent A and solvent B has a polarity index, $P_{AB}^{\prime}$, of $P_{A B}^{\prime}=\Phi_{A} P_{A}^{\prime}+\Phi_{B} P_{B}^{\prime} \label{12.1}$ where $P_A^{\prime}$ and $P_B^{\prime}$ are the polarity indices for solvents A and B, and $\Phi_A$ and $\Phi_B$ are the volume fractions for the two solvents. Table 12.5.1 . Properties of HPLC Mobile Phases mobile phase polarity index ($P^{\prime}$) UV cutoff (nm) cyclohexane 0.04 210 n-hexane 0.1 210 carbon tetrachloride 1.6 265 i-propyl ether 2.4 220 toluene 2.4 286 diethyl ether 2.8 218 tetrahydrofuran 4.0 220 ethanol 4.3 210 ethyl acetate 4.4 255 dioxane 4.8 215 methanol 5.1 210 acetonitrile 5.8 190 water 10.2 Example 12.5.1 A reversed-phase HPLC separation is carried out using a mobile phase of 60% v/v water and 40% v/v methanol. What is the mobile phase’s polarity index? Solution Using Equation \ref{12.1} and the values in Table 12.5.1 , the polarity index for a 60:40 water–methanol mixture is $P_{A B}^{\prime}=\Phi_\text{water} P_\text{water}^{\prime}+\Phi_\text{methanol} P_\text{methanol}^{\prime} \nonumber$ $P_{A B}^{\prime}=0.60 \times 10.2+0.40 \times 5.1=8.2 \nonumber$ Exercise 12.5.1 Suppose you need a mobile phase with a polarity index of 7.5. Explain how you can prepare this mobile phase using methanol and water. Answer If we let x be the fraction of water in the mobile phase, then 1 – x is the fraction of methanol. Substituting these values into Equation \ref{12.1} and solving for x $7.5=10.2 x+5.1(1-x) \nonumber$ $7.5=10.2 x+5.1-5.1 x \nonumber$ $2.4=5.1 x \nonumber$ gives x as 0.47. The mobile phase is 47% v/v water and 53% v/v methanol. As a general rule, a two unit change in the polarity index corresponds to an approximately 10-fold change in a solute’s retention factor. Here is a simple example. If a solute’s retention factor, k, is 22 when using water as a mobile phase ($P^{\prime}$ = 10.2), then switching to a mobile phase of 60:40 water–methanol ($P^{\prime}$ = 8.2) decreases k to approximately 2.2. Note that the retention factor becomes smaller because we are switching from a more polar mobile phase to a less polar mobile phase in a reversed-phase separation. Choosing a Mobile Phase: Adjusting Selectivity Changing the mobile phase’s polarity index changes a solute’s retention factor. As we learned in Chapter 12.3, however, a change in k is not an effective way to improve resolution when the initial value of k is greater than 10. 
To effect a better separation between two solutes we must improve the selectivity factor, $\alpha$. There are two common methods for increasing $\alpha$: adding a reagent to the mobile phase that reacts with the solutes in a secondary equilibrium reaction or switching to a different mobile phase. Taking advantage of a secondary equilibrium reaction is a useful strategy for improving a separation [(a) Foley, J. P. Chromatography, 1987, 7, 118–128; (b) Foley, J. P.; May, W. E. Anal. Chem. 1987, 59, 102–109; (c) Foley, J. P.; May, W. E. Anal. Chem. 1987, 59, 110–115]. Figure 12.3.3, which we considered earlier in this chapter, shows the reversed-phase separation of four weak acids—benzoic acid, terephthalic acid, p-aminobenzoic acid, and p-hydroxybenzoic acid—on a nonpolar C18 column using an aqueous buffer of acetic acid and sodium acetate as the mobile phase. The retention times for these weak acids are shorter when using a less acidic mobile phase because each solute is present in an anionic, weak base form that is less soluble in the nonpolar stationary phase. If the mobile phase’s pH is sufficiently acidic, the solutes are present as neutral weak acids that are more soluble in the stationary phase and take longer to elute. Because the weak acid solutes do not have identical pKa values, the pH of the mobile phase has a different effect on each solute’s retention time, allowing us to find the optimum pH for effecting a complete separation of the four solutes. Acid–base chemistry is not the only example of a secondary equilibrium reaction. Other examples include ion-pairing, complexation, and the interaction of solutes with micelles. We will consider the last of these in Chapter 12.7 when we discuss micellar electrokinetic capillary chromatography. In Example 12.5.1 we learned how to adjust the mobile phase’s polarity by blending together two solvents. A polarity index, however, is just a guide, and binary mobile phase mixtures with identical polarity indices may not resolve equally a pair of solutes. Table 12.5.2 , for example, shows retention times for four weak acids in two mobile phases with nearly identical values for $P^{\prime}$. Although the order of elution is the same for both mobile phases, each solute’s retention time is affected differently by the choice of organic solvent. If we switch from using acetonitrile to tetrahydrofuran, for example, we find that benzoic acid elutes more quickly and that p-hydroxybenzoic acid elutes more slowly. Although we can resolve fully these two solutes using mobile phase that is 16% v/v acetonitrile, we cannot resolve them if the mobile phase is 10% tetrahydrofuran. Table 12.5.2 . Retention Times for Four Weak Acids in Mobile Phases With Similar Polarity Indexes retention time (min) 16% acetonitrile (CH3CN) 84% pH 4.11 aqueous buffer ($P^{\prime}$ = 9.5) 10% tetrahydrofuran (THF) 90% pH 4.11 aqueous buffer ($P^{\prime}$ = 9.6) $t_\text{r, BA}$ 5.18 4.01 $t_\text{r, PH}$ 1.67 2.91 $t_\text{r, PA}$ 1.21 1.05 $t_\text{r, TP}$ 0.23 0.54 Key: BA is benzoic acid; PH is p-hydroxybenzoic acid; PA is p-aminobenzoic acid; TP is terephthalic acid Source: Harvey, D. T.; Byerly, S.; Bowman, A.; Tomlin, J. “Optimization of HPLC and GC Separations Using Re- sponse Surfaces,” J. Chem. Educ. 1991, 68, 162–168. One strategy for finding the best mobile phase is to use the solvent triangle shown in Figure 12.5.4 , which allows us to explore a broad range of mobile phases with only seven experiments. 
We begin by adjusting the amount of acetonitrile in the mobile phase to produce the best possible separation within the desired analysis time. Next, we use Table 12.5.3 to estimate the composition of methanol/H2O and tetrahydrofuran/H2O mobile phases that will produce similar analysis times. Four additional mobile phases are prepared using the binary and ternary mobile phases shown in Figure 12.5.4 . When we examine the chromatograms from these seven mobile phases we may find that one or more provides an adequate separation, or we may identify a region within the solvent triangle where a separation is feasible. Figure 12.5.5 shows a resolution map for the reversed-phase separation of benzoic acid, terephthalic acid, p-aminobenzoic acid, and p-hydroxybenzoic acid on a nonpolar C18 column in which the maximum desired analysis time is set to 6 min [Harvey, D. T.; Byerly, S.; Bowman, A.; Tomlin, J. J. Chem. Educ. 1991, 68, 162–168]. The areas in blue, green, and red show mobile phase compositions that do not provide baseline resolution. The unshaded area represents mobile phase compositions where a separation is possible.

The choice to start with acetonitrile is arbitrary—we can just as easily choose to begin with methanol or with tetrahydrofuran.

Table 12.5.3. Composition of Mobile Phases With Approximately Equal Solvent Strengths

% v/v CH3OH  % v/v CH3CN  % v/v THF
0  0  0
10  6  4
20  14  10
30  22  16
40  32  24
50  40  30
60  50  36
70  60  44
80  72  52
90  87  62
100  99  71

Choosing a Mobile Phase: Isocratic and Gradient Elutions

A separation using a mobile phase that has a fixed composition is an isocratic elution. One difficulty with an isocratic elution is that an appropriate mobile phase strength for resolving early-eluting solutes may lead to unacceptably long retention times for late-eluting solutes. Optimizing the mobile phase for late-eluting solutes, on the other hand, may provide an inadequate separation of early-eluting solutes. Changing the mobile phase’s composition as the separation progresses is one solution to this problem. For a reversed-phase separation we use an initial mobile phase that is more polar. As the separation progresses, we adjust the composition of the mobile phase so that it becomes less polar (see Figure 12.5.6 ). Such separations are called gradient elutions.

HPLC Plumbing

In a gas chromatograph the pressure from a compressed gas cylinder is sufficient to push the mobile phase through the column. Pushing a liquid mobile phase through a column, however, takes a great deal more effort, generating pressures in excess of several hundred atmospheres. In this section we consider the basic plumbing needed to move the mobile phase through the column and to inject the sample into the mobile phase.

Moving the Mobile Phase

A typical HPLC includes between one and four reservoirs for storing mobile phase solvents. The instrument in Figure 12.5.1 , for example, has two mobile phase reservoirs that are used for an isocratic elution or a gradient elution by drawing solvents from one or both reservoirs.

Before using a mobile phase solvent we must remove dissolved gases, such as N2 and O2, and small particulate matter, such as dust. Because there is a large drop in pressure across the column—the pressure at the column’s entrance is as much as several hundred atmospheres, but it is atmospheric pressure at the column’s exit—gases dissolved in the mobile phase are released as gas bubbles that may interfere with the detector’s response.
Degassing is accomplished in several ways, but the most common are the use of a vacuum pump or sparging with an inert gas, such as He, which has a low solubility in the mobile phase. Particulate materials, which may clog the HPLC tubing or column, are removed by filtering the solvents. Bubbling an inert gas through the mobile phase releases volatile dissolved gases. This process is called sparging. The mobile phase solvents are pulled from their reservoirs by the action of one or more pumps. Figure 12.5.7 shows a close-up view of the pumps for the instrument in Figure 12.5.1 . The working pump and the equilibrating pump each have a piston whose back and forth movement maintains a constant flow rate of up to several mL/min and provides the high output pressure needed to push the mobile phase through the chromatographic column. In this particular instrument, each pump sends its mobile phase to a mixing chamber where they combine to form the final mobile phase. The relative speed of the two pumps determines the mobile phase’s final composition. The back and forth movement of a reciprocating pump creates a pulsed flow that contributes noise to the chromatogram. To minimize these pulses, each pump in Figure 12.5.7 has two cylinders. During the working cylinder’s forward stoke it fills the equilibrating cylinder and establishes flow through the column. When the working cylinder is on its reverse stroke, the flow is maintained by the piston in the equilibrating cylinder. The result is a pulse-free flow. There are other possible ways to control the mobile phase’s composition and flow rate. For example, instead of the two pumps in Figure 12.5.7 , we can place a solvent proportioning valve before a single pump. The solvent proportioning value connects two or more solvent reservoirs to the pump and determines how much of each solvent is pulled during each of the pump’s cycles. Another approach for eliminating a pulsed flow is to include a pulse damper between the pump and the column. A pulse damper is a chamber filled with an easily compressed fluid and a flexible diaphragm. During the piston’s forward stroke the fluid in the pulse damper is compressed. When the piston withdraws to refill the pump, pressure from the expanding fluid in the pulse damper maintains the flow rate. Injecting the Sample The operating pressure within an HPLC is sufficiently high that we cannot inject the sample into the mobile phase by inserting a syringe through a septum, as is possible in gas chromatography. Instead, we inject the sample using a loop injector, a diagram of which is shown in Figure 12.5.8 . In the load position a sample loop—which is available in a variety of sizes ranging from 0.5 μL to 5 mL—is isolated from the mobile phase and open to the atmosphere. The sample loop is filled using a syringe with a capacity several times that of the sample loop, with excess sample exiting through the waste line. After loading the sample, the injector is turned to the inject position, which redirects the mobile phase through the sample loop and onto the column. The instrument in Figure 12.5.1 uses an autosampler to inject samples. Instead of using a syringe to push the sample into the sample loop, the syringe draws sample into the sample loop. Detectors for HPLC Many different types of detectors have been use to monitor HPLC separations, most of which use the spectroscopic techniques from Chapter 10 or the electrochemical techniques from Chapter 11. 
Spectroscopic Detectors The most popular HPLC detectors take advantage of an analyte’s UV/Vis absorption spectrum. These detectors range from simple designs, in which the analytical wavelength is selected using appropriate filters, to a modified spectrophotometer in which the sample compartment includes a flow cell. Figure 12.5.9 shows the design of a typical flow cell when using a diode array spectrometer as the detector. The flow cell has a volume of 1–10 μL and a path length of 0.2–1 cm. To review the details of how we measure absorbance, see Chapter 10.2. More information about different types of instruments, including the diode array spectrometer, is in Chapter 10.3. When using a UV/Vis detector the resulting chromatogram is a plot of absorbance as a function of elution time (see Figure 12.5.10 ). If the detector is a diode array spectrometer, then we also can display the result as a three-dimensional chromatogram that shows absorbance as a function of wavelength and elution time. One limitation to using absorbance is that the mobile phase cannot absorb at the wavelengths we wish to monitor. Table 12.5.1 lists the minimum useful UV wavelength for several common HPLC solvents. Absorbance detectors provide detection limits of as little as 100 pg–1 ng of injected analyte. If an analyte is fluorescent, we can place the flow cell in a spectrofluorimeter. As shown in Figure 12.5.11 , a fluorescence detector provides additional selectivity because only a few of a sample’s components are fluorescent. Detection limits are as little as 1–10 pg of injected analyte. See Chapter 10.6 for a review of fluorescence spectroscopy and spectrofluorimeters. Electrochemical Detectors Another common group of HPLC detectors are those based on electrochemical measurements such as amperometry, voltammetry, coulometry, and conductivity. Figure 12.5.12 , for example, shows an amperometric flow cell. Effluent from the column passes over the working electrode—held at a constant potential relative to a downstream reference electrode—that completely oxidizes or reduces the analytes. The current flowing between the working electrode and the auxiliary electrode serves as the analytical signal. Detection limits for amperometric electrochemical detection are from 10 pg–1 ng of injected analyte. See Chapter 11.4 for a review of amperometry. Other Detectors Several other detectors have been used in HPLC. Measuring a change in the mobile phase’s refractive index is analogous to monitoring the mobile phase’s thermal conductivity in gas chromatography. A refractive index detector is nearly universal, responding to almost all compounds, but has a relatively poor detection limit of 0.1–1 μg of injected analyte. An additional limitation of a refractive index detector is that it cannot be used for a gradient elution unless the mobile phase components have identical refractive indexes. Another useful detector is a mass spectrometer. Figure 12.5.13 shows a block diagram of a typical HPLC–MS instrument. The effluent from the column enters the mass spectrometer’s ion source using an interface the removes most of the mobile phase, an essential need because of the incompatibility between the liquid mobile phase and the mass spectrometer’s high vacuum environment. In the ionization chamber the remaining molecules—a mixture of the mobile phase components and solutes—undergo ionization and fragmentation. The mass spectrometer’s mass analyzer separates the ions by their mass-to-charge ratio (m/z). 
A detector counts the ions and displays the mass spectrum. There are several options for monitoring the chromatogram when using a mass spectrometer as the detector. The most common method is to continuously scan the entire mass spectrum and report the total signal for all ions reaching the detector during each scan. This total ion scan provides universal detection for all analytes. As seen in Figure 12.5.14 , we can achieve some degree of selectivity by monitoring only specific mass-to-charge ratios, a process called selective-ion monitoring. The advantages of using a mass spectrometer in HPLC are the same as for gas chromatography. Detection limits are very good, typically 0.1–1 ng of injected analyte, with values as low as 1–10 pg for some samples. In addition, a mass spectrometer provides qualitative, structural information that can help to identify the analytes. The interface between the HPLC and the mass spectrometer is technically more difficult than that in a GC–MS because of the incompatibility of a liquid mobile phase with the mass spectrometer’s high vacuum requirement. For more details on mass spectrometry see Introduction to Mass Spectrometry by Michael Samide and Olujide Akinbo, a resource that is part of the Analytical Sciences Digital Library. Quantitative Applications High-performance liquid chromatography is used routinely for both qualitative and quantitative analyses of environmental, pharmaceutical, industrial, forensic, clinical, and consumer product samples. Preparing Samples for Analysis Samples in liquid form are injected into the HPLC after a suitable clean-up to remove any particulate materials, or after a suitable extraction to remove matrix interferents. In determining polyaromatic hydrocarbons (PAH) in wastewater, for example, an extraction with CH2Cl2 serves the dual purpose of concentrating the analytes and isolating them from matrix interferents. Solid samples are first dissolved in a suitable solvent or the analytes of interest brought into solution by extraction. For example, an HPLC analysis for the active ingredients and the degradation products in a pharmaceutical tablet often begins by extracting the powdered tablet with a portion of mobile phase. Gas samples are collected by bubbling them through a trap that contains a suitable solvent. Organic isocyanates in industrial atmospheres are collected by bubbling the air through a solution of 1-(2-methoxyphenyl)piperazine in toluene. The reaction between the isocyanates and 1-(2-methoxyphenyl)piperazine both stabilizes them against degradation before the HPLC analysis and converts them to a chemical form that can be monitored by UV absorption. Quantitative Calculations A quantitative HPLC analysis is often easier than a quantitative GC analysis because a fixed volume sample loop provides a more precise and accurate injection. As a result, most quantitative HPLC methods do not need an internal standard and, instead, use external standards and a normal calibration curve. An internal standard is necessary when using HPLC–MS because the interface between the HPLC and the mass spectrometer does not allow for a reproducible transfer of the column’s eluent into the MS’s ionization chamber. Example 12.5.2 The concentration of polynuclear aromatic hydrocarbons (PAH) in soil is determined by first extracting the PAHs with methylene chloride. The extract is diluted, if necessary, and the PAHs separated by HPLC using a UV/Vis or fluorescence detector. Calibration is achieved using one or more external standards. 
In a typical analysis a 2.013-g sample of dried soil is extracted with 20.00 mL of methylene chloride. After filtering to remove the soil, a 1.00-mL portion of the extract is removed and diluted to 10.00 mL with acetonitrile. Injecting 5 μL of the diluted extract into an HPLC gives a signal of 0.217 (arbitrary units) for the PAH fluoranthene. When 5 μL of a 20.0-ppm fluoranthene standard is analyzed using the same conditions, a signal of 0.258 is measured. Report the parts per million of fluoranthene in the soil.

Solution

For a single-point external standard, the relationship between the signal, S, and the concentration, C, of fluoranthene is

$S = kC \nonumber$

Substituting in values for the standard’s signal and concentration gives the value of k as

$k=\frac{S}{C}=\frac{0.258}{20.0 \text{ ppm}}=0.0129 \text{ ppm}^{-1} \nonumber$

Using this value for k and the sample’s HPLC signal gives a fluoranthene concentration of

$C=\frac{S}{k}=\frac{0.217}{0.0129 \text{ ppm}^{-1}}=16.8 \text{ ppm} \nonumber$

for the extracted and diluted soil sample. The concentration of fluoranthene in the soil is

$\frac{16.8 \ \mu \text{g/mL} \times \frac{10.00 \text{ mL}}{1.00 \text{ mL}} \times 20.00 \text{ mL}}{2.013 \text{ g sample}}=1670 \text{ ppm fluoranthene} \nonumber$

Exercise 12.5.2

The concentration of caffeine in beverages is determined by a reversed-phase HPLC separation using a mobile phase of 20% acetonitrile and 80% water, and using a nonpolar C8 column. Results for a series of 10-μL injections of caffeine standards are in the following table.

[caffeine] (mg/L)  peak area (arb. units)
50.0  226724
100.0  453762
125.0  559443
250.0  1093637

What is the concentration of caffeine in a sample if a 10-μL injection gives a peak area of 424195? The data in this problem come from Kusch, P.; Knupp, G. “Simultaneous Determination of Caffeine in Cola Drinks and Other Beverages by Reversed-Phase HPTLC and Reversed-Phase HPLC,” Chem. Educator, 2003, 8, 201–205.

Answer

The figure below shows the calibration curve and calibration equation for the set of external standards. Substituting the sample’s peak area into the calibration equation gives the concentration of caffeine in the sample as 94.4 mg/L.

The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of fluoxetine in serum provides an instructive example of a typical procedure. The description here is based on Smyth, W. F. Analytical Chemistry of Complex Matrices, Wiley Teubner: Chichester, England, 1996, pp. 187–189.

Representative Method 12.5.1: Determination of Fluoxetine in Serum

Description of Method

Fluoxetine is another name for the antidepressant drug Prozac. The determination of fluoxetine in serum is an important part of monitoring its therapeutic use. The analysis is complicated by the complex matrix of serum samples. A solid-phase extraction followed by an HPLC analysis using a fluorescence detector provides the necessary selectivity and detection limits.

Procedure

Add a known amount of the antidepressant protriptyline, which serves as an internal standard, to each serum sample and to each external standard. To remove matrix interferents, pass a 0.5-mL aliquot of each serum sample or standard through a C18 solid-phase extraction cartridge.
After washing the cartridge to remove the interferents, elute the remaining constituents, including the analyte and the internal standard, by washing the cartridge with 0.25 mL of a 25:75 v/v mixture of 0.1 M HClO4 and acetonitrile. Inject a 20-μL aliquot onto a 15-cm $\times$ 4.6-mm column packed with a 5 μm C8-bonded stationary phase. The isocratic mobile phase is 37.5:62.5 v/v acetonitrile and water (that contains 1.5 g of tetramethylammonium perchlorate and 0.1 mL of 70% v/v HClO4). Monitor the chromatogram using a fluorescence detector set to an excitation wavelength of 235 nm and an emission wavelength of 310 nm.

Questions

1. The solid-phase extraction is important because it removes constituents in the serum that might interfere with the analysis. What types of interferences are possible?

Blood serum, which is a complex mixture of compounds, is approximately 92% water, 6–8% soluble proteins, and less than 1% each of various salts, lipids, and glucose. A direct injection of serum is not advisable for three reasons. First, any particulate materials in the serum will clog the column and restrict the flow of mobile phase. Second, some of the compounds in the serum may adsorb too strongly to the stationary phase, degrading the column’s performance. Finally, although an HPLC can separate and analyze complex mixtures, an analysis is difficult if the number of constituents exceeds the column’s peak capacity.

2. One advantage of an HPLC analysis is that a loop injector often eliminates the need for an internal standard. Why is an internal standard used in this analysis? What assumption(s) must we make when using the internal standard?

An internal standard is necessary because of uncertainties introduced during the solid-phase extraction. For example, the volume of serum transferred to the solid-phase extraction cartridge, 0.5 mL, and the volume of solvent used to remove the analyte and internal standard, 0.25 mL, are very small. The precision and accuracy with which we can measure these volumes is not as good as when we use larger volumes. For example, if we extract the analyte into a volume of 0.24 mL instead of a volume of 0.25 mL, then the analyte’s concentration increases by slightly more than 4%. In addition, the concentration of eluted analytes may vary from trial-to-trial due to variations in the amount of solution held up by the cartridge. Using an internal standard compensates for these variations. To be useful we must assume that the analyte and the internal standard are retained completely during the initial loading, that they are not lost when the cartridge is washed, and that they are extracted completely during the final elution.

3. Why does the procedure monitor fluorescence instead of monitoring UV absorption?

Fluorescence is a more selective technique for detecting analytes. Many other commonly prescribed antidepressants (and their metabolites) elute with retention times similar to that of fluoxetine. These compounds, however, either do not fluoresce or are only weakly fluorescent.

4. If the peaks for fluoxetine and protriptyline are resolved insufficiently, how might you alter the mobile phase to improve their separation?

Decreasing the amount of acetonitrile and increasing the amount of water in the mobile phase will increase retention times, providing more time to effect a separation.

Evaluation

With a few exceptions, the scale of operation, accuracy, precision, sensitivity, selectivity, analysis time, and cost for an HPLC method are similar to GC methods.
Injection volumes for an HPLC method usually are larger than for a GC method because HPLC columns have a greater capacity. Because it uses a loop injection, the precision of an HPLC method often is better than a GC method. HPLC is not limited to volatile analytes, which means we can analyze a broader range of compounds. Capillary GC columns, on the other hand, have more theoretical plates, and can separate more complex mixtures.
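As a computational footnote to the quantitative applications discussed above, the external-standard calibration in Exercise 12.5.2 amounts to an ordinary least-squares fit. The following Python sketch (an illustration, not part of any standard method) fits the caffeine standards from the exercise and predicts the sample’s concentration from its peak area.

```python
# caffeine standards from Exercise 12.5.2
conc = [50.0, 100.0, 125.0, 250.0]        # mg/L
area = [226724, 453762, 559443, 1093637]  # peak area (arbitrary units)

n = len(conc)
x_bar = sum(conc) / n
y_bar = sum(area) / n

# ordinary least-squares fit of area = slope * conc + intercept
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(conc, area))
         / sum((x - x_bar) ** 2 for x in conc))
intercept = y_bar - slope * x_bar

# invert the calibration equation for the sample's peak area
sample_area = 424195
print(f"caffeine: {(sample_area - intercept) / slope:.1f} mg/L")
```

The fitted line returns a concentration of approximately 94.4 mg/L, in agreement with the answer given in the exercise.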
At the beginning of Section 12.5, we noted that there are several different types of solute/stationary phase interactions in liquid chromatography, but limited our discussion to liquid–liquid chromatography. In this section we turn our attention to liquid chromatography techniques in which partitioning occurs by liquid–solid adsorption, ion-exchange, and size exclusion.

Liquid-Solid Chromatography

In liquid–solid adsorption chromatography (LSC) the column packing also serves as the stationary phase. In Tswett’s original work the stationary phase was finely divided CaCO3, but modern columns employ porous 3–10 μm particles of silica or alumina. Because the stationary phase is polar, the mobile phase usually is a nonpolar or a moderately polar solvent. Typical mobile phases include hexane, isooctane, and methylene chloride. The usual order of elution—from shorter to longer retention times—is

olefins < aromatic hydrocarbons < ethers < esters, aldehydes, ketones < alcohols, amines < amides < carboxylic acids

Nonpolar stationary phases, such as charcoal-based adsorbents, also are used. For most samples, liquid–solid chromatography does not offer any special advantages over liquid–liquid chromatography. One exception is the analysis of isomers, where LSC excels.

Ion-Exchange Chromatography

In ion-exchange chromatography (IEC) the stationary phase is a cross-linked polymer resin, usually divinylbenzene cross-linked polystyrene, with covalently attached ionic functional groups (see Figure 12.6.1 and Table 12.6.1 ). The counterions to these fixed charges are mobile and are displaced by ions that compete more favorably for the exchange sites. Ion-exchange resins are divided into four categories: strong acid cation exchangers; weak acid cation exchangers; strong base anion exchangers; and weak base anion exchangers.

Figure 12.6.1 . Structures of styrene, divinylbenzene, and a styrene–divinylbenzene co-polymer modified for use as an ion-exchange resin are shown on the left. The ion-exchange sites, indicated by R and shown in blue, are mostly in the para position and are not necessarily bound to all styrene units. The cross-linking is shown in red. The photo on the right shows an example of the polymer beads. These beads are approximately 0.30–0.85 mm in diameter. Resins for use in ion-exchange chromatography typically are 5–11 μm in diameter.

Table 12.6.1. Examples of Common Ion-Exchange Resins

type  functional group  examples
strong acid cation exchanger  sulfonic acid  $-\text{SO}_3^-$, $-\text{CH}_2\text{CH}_2\text{SO}_3^-$
weak acid cation exchanger  carboxylic acid  $-\text{COO}^-$, $-\text{CH}_2\text{COO}^-$
strong base anion exchanger  quaternary amine  $-\text{CH}_2\text{N(CH}_3)_3^+$, $-\text{CH}_2\text{CH}_2\text{N(CH}_2\text{CH}_3)_3^+$
weak base anion exchanger  amine  $-\text{NH}_3^+$, $-\text{CH}_2\text{CH}_2\text{NH(CH}_2\text{CH}_3)_2^+$

Strong acid cation exchangers include a sulfonic acid functional group that retains its anionic form—and thus its capacity for ion-exchange—in strongly acidic solutions. The functional groups for a weak acid cation exchanger, on the other hand, are fully protonated at pH levels less than 4 and lose their exchange capacity. The strong base anion exchangers include a quaternary amine, which retains a positive charge even in strongly basic solutions. Weak base anion exchangers remain protonated only at pH levels that are moderately basic. Under more basic conditions a weak base anion exchanger loses a proton and its exchange capacity.
The ion-exchange reaction of a monovalent cation, M+, at a strong acid exchange site is

$-\mathrm{SO}_{3}^{-} \mathrm{H}^{+}(s)+\mathrm{M}^{+}(a q)\rightleftharpoons-\mathrm{SO}_{3}^{-} \mathrm{M}^{+}(s)+\mathrm{H}^{+}(a q) \nonumber$

The equilibrium constant for this ion-exchange reaction, which we call the selectivity coefficient, K, is

$K=\frac{\left\{-\mathrm{SO}_{3}^{-} \mathrm{M}^{+}\right\}\left[\mathrm{H}^{+}\right]}{\left\{-\mathrm{SO}_{3}^{-} \mathrm{H}^{+}\right\}\left[\mathrm{M}^{+}\right]} \label{12.1}$

where we use curly brackets, { }, to indicate a surface concentration instead of a solution concentration. We don't usually think about a solid's concentration. There is a good reason for this. In most cases, a solid's concentration is a constant. If you break a piece of chalk into two parts, for example, the mass and the volume of each piece retains the same proportional relationship as in the original piece of chalk. When we consider an ion binding to a reactive site on the solid's surface, however, the fraction of sites that are bound, and thus the concentration of bound sites, can take on any value between 0 and some maximum value that is proportional to the density of reactive sites.

Rearranging Equation \ref{12.1} shows us that the distribution ratio, D, for the exchange reaction

$D=\frac{\text { amount of } \mathrm{M}^{+} \text { in the stationary phase }}{\text { amount of } \mathrm{M}^{+} \text { in the mobile phase }} \nonumber$

$D=\frac{\left\{-\mathrm{SO}_{3}^{-} \mathrm{M}^{+}\right\}}{\left[\mathrm{M}^{+}\right]}=K \times \frac{\left\{-\mathrm{SO}_{3}^{-} \mathrm{H}^{+}\right\}}{\left[\mathrm{H}^{+}\right]} \label{12.2}$

is a function of the concentration of H+ and, therefore, the pH of the mobile phase.

An ion-exchange resin's selectivity is somewhat dependent on whether it includes strong or weak exchange sites and on the extent of cross-linking. The latter is particularly important as it controls the resin's permeability, and, therefore, the accessibility of exchange sites. An approximate order of selectivity for a typical strong acid cation exchange resin, in order of decreasing D, is

Al3+ > Ba2+ > Pb2+ > Ca2+ > Ni2+ > Cd2+ > Cu2+ > Co2+ > Zn2+ > Mg2+ > Ag+ > K+ > $\text{NH}_4^+$ > Na+ > H+ > Li+

Note that highly charged cations bind more strongly than cations of lower charge, and that for cations of similar charge, those with a smaller hydrated radius (see Table 6.9.1 in Chapter 6), or that are more polarizable, bind more strongly. For a strong base anion exchanger the approximate order of selectivity, again in order of decreasing D, is

$\text{SO}_4^{2-}$ > $\text{I}^-$ > $\text{HSO}_4^-$ > $\text{NO}_3^-$ > $\text{Br}^-$ > $\text{NO}_2^-$ > $\text{Cl}^-$ > $\text{HCO}_3^-$ > $\text{CH}_3\text{COO}^-$ > $\text{OH}^-$ > $\text{F}^-$

Anions of higher charge and of smaller hydrated radius bind more strongly than anions with a lower charge and a larger hydrated radius.

The mobile phase in IEC usually is an aqueous buffer, the pH and ionic composition of which determines a solute's retention time. Gradient elutions are possible in which the mobile phase's ionic strength or pH is changed with time. For example, an IEC separation of cations might use a dilute solution of HCl as the mobile phase. Increasing the concentration of HCl speeds the elution rate for more strongly retained cations because the higher concentration of H+ allows it to compete more successfully for the ion-exchange sites. From Equation \ref{12.2}, a cation's distribution ratio, D, becomes smaller when the concentration of H+ in the mobile phase increases.
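To see how the mobile phase's acidity controls retention, consider a hypothetical monovalent cation for which K = 1.5, and take the surface concentration of protonated exchange sites, {–SO3–H+}, as 0.10 in the same units as [H+]; both values are assumed here purely for illustration, and we assume the surface sites remain largely in their protonated form. From Equation \ref{12.2}, a mobile phase that is 0.010 M in HCl gives

$D=K \times \frac{\left\{-\mathrm{SO}_{3}^{-} \mathrm{H}^{+}\right\}}{\left[\mathrm{H}^{+}\right]}=1.5 \times \frac{0.10}{0.010}=15 \nonumber$

while increasing the HCl concentration to 0.10 M decreases D to 1.5. A ten-fold increase in [H+] produces a ten-fold decrease in D, which is why a gradient of increasing HCl strips the more strongly retained cations from the column.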
An ion-exchange resin is incorporated into an HPLC column either as 5–11 μm porous polymer beads or by coating the resin on porous silica particles. Columns typically are 250 mm in length with internal diameters ranging from 2–5 mm. Measuring the conductivity of the mobile phase as it elutes from the column serves as a universal detector for cationic and anionic analytes. Because the mobile phase contains a high concentration of ions—a mobile phase of dilute HCl, for example, contains significant concentrations of H+ and Cl—we need a method for detecting the analytes in the presence of a significant background conductivity. To minimize the mobile phase’s contribution to conductivity, an ion-suppressor column is placed between the analytical column and the detector. This column selectively removes mobile phase ions without removing solute ions. For example, in cation-exchange chromatography using a dilute solution of HCl as the mobile phase, the suppressor column contains a strong base anion-exchange resin. The exchange reaction $\mathrm{H}^{+}(a q)+\mathrm{Cl}^{-}(a q)+\mathrm{Resin}^{+} \mathrm{OH}^{-}(s)\rightleftharpoons\operatorname{Resin}^{+} \mathrm{Cl}^{-}(s)+\mathrm{H}_{2} \mathrm{O}(l ) \nonumber$ replaces the mobile phase ions H+ and Cl with H2O. A similar process is used in anion-exchange chromatography where the suppressor column contains a cation-exchange resin. If the mobile phase is a solution of Na2CO3, the exchange reaction $2 \mathrm{Na}^{+}(a q)+\mathrm{CO}_{3}^{2-}(a q)+2 \operatorname{Resin}^{-} \mathrm{H}^{+}(s)\rightleftharpoons2 \operatorname{Resin}^{-} \mathrm{Na}^{+}(s)+\mathrm{H}_{2} \mathrm{CO}_{3}(a q) \nonumber$ replaces a strong electrolyte, Na2CO3, with a weak electrolyte, H2CO3. Ion-suppression is necessary when the mobile phase contains a high concentration of ions. Single-column ion chromatography, in which an ion-suppressor column is not needed, is possible if the concentration of ions in the mobile phase is small. Typically the stationary phase is a resin with a low capacity for ion-exchange and the mobile phase is a very dilute solution of methane sulfonic acid for cationic analytes, or potassium benzoate or potassium hydrogen phthalate for anionic analytes. Because the background conductivity is sufficiently small, it is possible to monitor a change in conductivity as the analytes elute from the column. A UV/Vis absorbance detector can be used if the analytes absorb ultraviolet or visible radiation. Alternatively, we can detect indirectly analytes that do not absorb in the UV/Vis if the mobile phase contains a UV/Vis absorbing species. In this case, when a solute band passes through the detector, a decrease in absorbance is measured at the detector. Ion-exchange chromatography is an important technique for the analysis of anions and cations in water. For example, an ion-exchange chromatographic analysis for the anions F, Cl, Br, $\text{NO}_2^-$, $\text{NO}_3^-$, $\text{PO}_4^{3-}$, and $\text{SO}_4^{2-}$ takes approximately 15 minutes (Figure 12.6.2 ). A complete analysis of the same set of anions by a combination of potentiometry and spectrophotometry requires 1–2 days. Ion-exchange chromatography also is used for the analysis of proteins, amino acids, sugars, nucleotides, pharmaceuticals, consumer products, and clinical samples. Size-Exclusion Chromatography We have considered two classes of micron-sized stationary phases in this section: silica particles and cross-linked polymer resin beads. 
Both materials are porous, with pore sizes ranging from approximately 5–400 nm for silica particles, and from 5 nm to 100 μm for divinylbenzene cross-linked polystyrene resins. In size-exclusion chromatography—which also is known by the terms molecular-exclusion or gel permeation chromatography—the separation of solutes depends upon their ability to enter into the pores of the stationary phase. Smaller solutes spend proportionally more time within the pores and take longer to elute from the column.

A stationary phase's size selectivity extends over a finite range. All solutes significantly smaller than the pores move through the column's entire volume and elute simultaneously, with a retention volume, Vr, of

$V_{r}=V_{i}+V_{o} \label{12.3}$

where Vi is the volume of mobile phase occupying the stationary phase's pore space and Vo is the volume of mobile phase in the remainder of the column. The largest solute for which Equation \ref{12.3} holds is the column's inclusion limit, or permeation limit. Those solutes too large to enter the pores elute simultaneously with a retention volume of

$V_{r} = V_{o} \label{12.4}$

Equation \ref{12.4} defines the column's exclusion limit. For a solute whose size is between the inclusion limit and the exclusion limit, the amount of time it spends in the stationary phase's pores is proportional to its size. The retention volume for these solutes is

$V_{r}=DV_{i}+V_{o} \label{12.5}$

where D is the solute's distribution ratio, which ranges from 0 at the exclusion limit to 1 at the inclusion limit. Equation \ref{12.5} assumes that size-exclusion is the only interaction between the solute and the stationary phase that affects the separation. For this reason, stationary phases using silica particles are deactivated as described earlier, and polymer resins are synthesized without exchange sites.

Size-exclusion chromatography provides a rapid means for separating larger molecules, including polymers and biomolecules. A stationary phase for proteins that consists of particles with 30 nm pores has an inclusion limit of 7500 g/mol and an exclusion limit of $1.2 \times 10^6$ g/mol. Mixtures of proteins that span a wider range of molecular weights are separated by joining together in series several columns with different inclusion and exclusion limits.

Another important application of size-exclusion chromatography is the estimation of a solute's molecular weight (MW). Calibration curves are prepared by measuring the retention volume for each of a series of standards of known molecular weight. As shown in Figure 12.6.3, a plot of log(MW) versus Vr is roughly linear between the exclusion limit and the inclusion limit. Because a solute's retention volume is influenced by both its size and its shape, a reasonably accurate estimation of molecular weight is possible only if the standards are chosen carefully to minimize the effect of shape. Size-exclusion chromatography is carried out using conventional HPLC instrumentation, replacing the HPLC column with an appropriate size-exclusion column. A UV/Vis detector is the most common means for obtaining the chromatogram.

Supercritical Fluid Chromatography

Although there are many analytical applications of gas chromatography and liquid chromatography, they cannot separate and analyze all types of samples. Capillary column GC separates complex mixtures with excellent resolution and short analysis times. Its application is limited, however, to volatile analytes or to analytes made volatile by a suitable derivatization reaction.
Liquid chromatography separates a wider range of solutes than GC, but the most common detectors—UV, fluorescence, and electrochemical— have poorer detection limits and smaller linear ranges than GC detectors, and are not as universal in their selectivity. For some samples, supercritical fluid chromatography (SFC) provides a useful alternative to gas chromatography and liquid chromatography. The mobile phase in supercritical fluid chromatography is a gas held at a temperature and pressure that exceeds its critical point (Figure 12.6.4 ). Under these conditions the mobile phase is neither a gas nor a liquid. Instead, the mobile phase is a supercritical fluid. Some properties of a supercritical fluid, as shown in Table 12.6.2 , are similar to a gas; other properties, however, are similar to a liquid. The viscosity of a supercritical fluid, for example, is similar to a gas, which means we can move a supercritical fluid through a capillary column or a packed column without the high pressures needed in HPLC. Analysis time and resolution, although not as good as in GC, usually are better than in conventional HPLC. The density of a supercritical fluid, on the other hand, is much closer to that of a liquid, which explains why supercritical fluids are good solvents. In terms of its separation power, a mobile phase in SFC behaves more like the liquid mobile phase in HPLC than the gaseous mobile phase in GC. Table 12.6.2 . Typical Properties of Gases, Liquids, and Supercritical Fluids phase density (g/cm3) viscosity (g cm-1 s-1) diffusion coefficient (cm2 s-1) gas $\approx 10^{-3}$ $\approx 10^{-4}$ $\approx 0.1$ supercritical fluid $\approx 0.1 - 1$ $\approx 10^{-4} - 10^{-3}$ $\approx 10^{-4} - 10^{-3}$ liquid $\approx 1$ $\approx 10^{-2}$ $\approx 10^{-3}$ The most common mobile phase for supercritical fluid chromatography is CO2. Its low critical temperature of 31.1oC and its low critical pressure of 72.9 atm are relatively easy to achieve and maintain. Although supercritical CO2 is a good solvent for nonpolar organics, it is less useful for polar solutes. The addition of an organic modifier, such as methanol, improves the mobile phase’s elution strength. Other common mobile phases and their critical temperatures and pressures are listed in Table 12.6.3 . Table 12.6.3 . Critical Points for Selected Supercritical Fluids compound critical temperature (oC) critical pressure (atm) carbon dioxide 31.3 72.9 ethane 32.4 48.3 nitrous oxide 36.5 71.4 ammonia 132.3 111.3 diethyl ether 193.6 36.3 isopropanol 235.3 47.0 methanol 240.5 78.9 ethanol 243.4 63.0 water 374.4 226.8 The instrumentation for supercritical fluid chromatography essentially is the same as that for a standard HPLC. The only important additions are a heated oven for the column and a pressure restrictor downstream from the column to maintain the critical pressure. Gradient elutions are accomplished by changing the applied pressure over time. The resulting change in the mobile phase’s density affects its solvent strength. Detection is accomplished using standard GC detectors or HPLC detectors. Supercritical fluid chromatography has many applications in the analysis of polymers, fossil fuels, waxes, drugs, and food products.
Electrophoresis is a class of separation techniques in which we separate analytes by their ability to move through a conductive medium—usually an aqueous buffer—in response to an applied electric field. In the absence of other effects, cations migrate toward the electric field’s negatively charged cathode. Cations with larger charge-to-size ratios—which favors ions of greater charge and of smaller size—migrate at a faster rate than larger cat- ions with smaller charges. Anions migrate toward the positively charged anode and neutral species do not experience the electrical field and remain stationary. As we will see shortly, under normal conditions even neutral species and anions migrate toward the cathode. There are several forms of electrophoresis. In slab gel electrophoresis the conducting buffer is retained within a porous gel of agarose or polyacrylamide. Slabs are formed by pouring the gel between two glass plates separated by spacers. Typical thicknesses are 0.25–1 mm. Gel electrophoresis is an important technique in biochemistry where it frequently is used to separate DNA fragments and proteins. Although it is a powerful tool for the qualitative analysis of complex mixtures, it is less useful for quantitative work. In capillary electrophoresis the conducting buffer is retained within a capillary tube with an inner diameter that typically is 25–75 μm. The sample is injected into one end of the capillary tube, and as it migrates through the capillary the sample’s components separate and elute from the column at different times. The resulting electropherogram looks similar to a GC or an HPLC chromatogram, and provides both qualitative and quantitative information. Only capillary electrophoretic methods receive further consideration in this section. Theory of Electrophoresis In capillary electrophoresis we inject the sample into a buffered solution retained within a capillary tube. When an electric field is applied across the capillary tube, the sample’s components migrate as the result of two types of actions: electrophoretic mobility and electroosmotic mobility. Electrophoretic mobility is the solute’s response to the applied electrical field in which cations move toward the negatively charged cathode, anions move toward the positively charged anode, and neutral species remain stationary. The other contribution to a solute’s migration is electroosmotic flow, which occurs when the buffer moves through the capillary in response to the applied electrical field. Under normal conditions the buffer moves toward the cathode, sweeping most solutes, including the anions and neutral species, toward the negatively charged cathode. Electrophoretic Mobility The velocity with which a solute moves in response to the applied electric field is called its electrophoretic velocity, $\nu_{ep}$; it is defined as $\nu_{ep}=\mu_{ep} E \label{12.1}$ where $\mu_{ep}$ is the solute’s electrophoretic mobility, and E is the magnitude of the applied electrical field. A solute’s electrophoretic mobility is defined as $\mu_{ep}=\frac{q}{6 \pi \eta r} \label{12.2}$ where q is the solute’s charge, $\eta$ is the buffer’s viscosity, and r is the solute’s radius. Using Equation \ref{12.1} and Equation \ref{12.2} we can make several important conclusions about a solute’s electrophoretic velocity. Electrophoretic mobility and, therefore, electrophoretic velocity, increases for more highly charged solutes and for solutes of smaller size. 
Because q is positive for a cation and negative for an anion, these species migrate in opposite directions. A neutral species, for which q is zero, has an electrophoretic velocity of zero. Electroosmotic Mobility When an electric field is applied to a capillary filled with an aqueous buffer we expect the buffer’s ions to migrate in response to their electrophoretic mobility. Because the solvent, H2O, is neutral we might reasonably expect it to remain stationary. What we observe under normal conditions, however, is that the buffer moves toward the cathode. This phenomenon is called the electroosmotic flow. Electroosmotic flow occurs because the walls of the capillary tubing carry a charge. The surface of a silica capillary contains large numbers of silanol groups (–SiOH). At a pH level greater than approximately 2 or 3, the silanol groups ionize to form negatively charged silanate ions (–SiO). Cations from the buffer are attracted to the silanate ions. As shown in Figure 12.7.1 , some of these cations bind tightly to the silanate ions, forming a fixed layer. Because the cations in the fixed layer only partially neutralize the negative charge on the capillary walls, the solution adjacent to the fixed layer—which is called the diffuse layer—contains more cations than anions. Together these two layers are known as the double layer. Cations in the diffuse layer migrate toward the cathode. Because these cations are solvated, the solution also is pulled along, producing the electroosmotic flow. The anions in the diffuse layer, which also are solvated, try to move toward the anode. Because there are more cations than anions, however, the cations win out and the electroosmotic flow moves in the direction of the cathode. The rate at which the buffer moves through the capillary, what we call its electroosmotic flow velocity, $\nu_{eof}$, is a function of the applied electric field, E, and the buffer’s electroosmotic mobility, $\mu_{eof}$. $\nu_{eof}=\mu_{e o f} E \label{12.3}$ Electroosmotic mobility is defined as $\mu_{eof}=\frac{\varepsilon \zeta}{4 \pi \eta} \label{12.4}$ where $\epsilon$ is the buffer dielectric constant, $\zeta$ is the zeta potential, and $\eta$ is the buffer’s viscosity. The zeta potential—the potential of the diffuse layer at a finite distance from the capillary wall—plays an important role in determining the electroosmotic flow velocity. Two factors determine the zeta potential’s value. First, the zeta potential is directly proportional to the charge on the capillary walls, with a greater density of silanate ions corresponding to a larger zeta potential. Below a pH of 2 there are few silanate ions and the zeta potential and the electroosmotic flow velocity approach zero. As the pH increases, both the zeta potential and the electroosmotic flow velocity increase. Second, the zeta potential is directly proportional to the thickness of the double layer. Increasing the buffer’s ionic strength provides a higher concentration of cations, which decreases the thickness of the double layer and decreases the electroosmotic flow. The definition of zeta potential given here admittedly is a bit fuzzy. For a more detailed explanation see Delgado, A. V.; González-Caballero, F.; Hunter, R. J.; Koopal, L. K.; Lyklema, J. “Measurement and Interpretation of Electrokinetic Phenomena,” Pure. Appl. Chem. 2005, 77, 1753–1805. Although this is a very technical report, Sections 1.3–1.5 provide a good introduction to the difficulty of defining the zeta potential and of measuring its value. 
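To get a feel for the magnitudes involved, consider an assumed electroosmotic mobility of $5 \times 10^{-4}$ cm2 V–1 s–1—a value typical in order of magnitude for a fused-silica capillary at a moderately basic pH—and an assumed applied field of 250 V/cm; both values are illustrative only. From Equation \ref{12.3} the buffer moves through the capillary with a velocity of

$\nu_{eof}=\mu_{eof} E=\left(5 \times 10^{-4} \text{ cm}^{2} \text{ V}^{-1} \text{ s}^{-1}\right)\left(250 \text{ V} \text{ cm}^{-1}\right)=0.125 \text{ cm} \text{ s}^{-1} \nonumber$

or roughly a millimeter every second; a neutral solute carried only by the electroosmotic flow therefore needs a little more than five minutes to travel 40 cm to the detector.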
The electroosmotic flow profile is very different from that of a fluid moving under forced pressure. Figure 12.7.2 compares the electroosmotic flow profile with the hydrodynamic flow profile in gas chromatography and liquid chromatography. The uniform, flat profile for electroosmosis helps minimize band broadening in capillary electrophoresis, improving separation efficiency.

Total Mobility

A solute's total velocity, $\nu_{tot}$, as it moves through the capillary is the sum of its electrophoretic velocity and the electroosmotic flow velocity.

$\nu_{t o t}=\nu_{ep}+\nu_{eof} \nonumber$

As shown in Figure 12.7.3, under normal conditions the following general relationships hold true.

$(\nu_{tot})_{cations} > \nu_{eof} \nonumber$

$(\nu_{tot})_{neutrals} = \nu_{eof} \nonumber$

$(\nu_{tot})_{anions} < \nu_{eof} \nonumber$

Cations elute first in an order that corresponds to their electrophoretic mobilities, with small, highly charged cations eluting before larger cations of lower charge. Neutral species elute as a single band with an elution rate equal to the electroosmotic flow velocity. Finally, anions are the last components to elute, with smaller, highly charged anions having the longest elution time.

Migration Time

Another way to express a solute's velocity is to divide the distance it travels by the elapsed time

$\nu_{tot}=\frac{l}{t_{m}} \label{12.5}$

where l is the distance between the point of injection and the detector, and tm is the solute's migration time. To understand the experimental variables that affect migration time, we begin by noting that

$\nu_{tot} = \mu_{tot}E = (\mu_{ep} + \mu_{eof})E \label{12.6}$

Combining Equation \ref{12.5} and Equation \ref{12.6} and solving for tm leaves us with

$t_{\mathrm{m}}=\frac{l}{\left(\mu_{ep}+\mu_{eof}\right) E} \label{12.7}$

The magnitude of the electrical field is

$E=\frac{V}{L} \label{12.8}$

where V is the applied potential and L is the length of the capillary tube. Finally, substituting Equation \ref{12.8} into Equation \ref{12.7} leaves us with the following equation for a solute's migration time.

$t_{\mathrm{m}}=\frac{lL}{\left(\mu_{ep}+\mu_{eof}\right) V} \label{12.9}$

To decrease a solute's migration time—which shortens the analysis time—we can apply a higher voltage or use a shorter capillary tube. We can also shorten the migration time by increasing the electroosmotic flow, although this decreases resolution.

Efficiency

As we learned in Section 12.2, the efficiency of a separation is given by the number of theoretical plates, N. In capillary electrophoresis the number of theoretical plates is

$N=\frac{l^{2}}{2 D t_{m}}=\frac{\left(\mu_{ep}+\mu_{eof}\right) V l}{2 D L} \label{12.10}$

where D is the solute's diffusion coefficient. From Equation \ref{12.10}, the efficiency of a capillary electrophoretic separation increases with higher voltages. Increasing the electroosmotic flow velocity improves efficiency, but at the expense of resolution. Two additional observations deserve comment. First, solutes with larger electrophoretic mobilities—in the same direction as the electroosmotic flow—have greater efficiencies; thus, smaller, more highly charged cations are not only the first solutes to elute, but do so with greater efficiency. Second, efficiency in capillary electrophoresis is independent of the capillary's length. Theoretical plate counts of approximately 100 000–200 000 are not unusual.
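As an illustration of Equations \ref{12.9} and \ref{12.10}, consider a hypothetical separation that uses a 50-cm capillary with 40 cm between the point of injection and the detector, an applied potential of 20 kV, a total mobility, μep + μeof, of $5.0 \times 10^{-4}$ cm2 V–1 s–1, and a solute diffusion coefficient of $2 \times 10^{-5}$ cm2 s–1; all of these values are assumed for the purpose of the calculation. The solute's migration time is

$t_{m}=\frac{l L}{\left(\mu_{ep}+\mu_{eof}\right) V}=\frac{(40 \text{ cm})(50 \text{ cm})}{\left(5.0 \times 10^{-4} \text{ cm}^{2} \text{ V}^{-1} \text{ s}^{-1}\right)(20000 \text{ V})}=200 \text{ s} \nonumber$

and the corresponding number of theoretical plates is

$N=\frac{l^{2}}{2 D t_{m}}=\frac{(40 \text{ cm})^{2}}{2\left(2 \times 10^{-5} \text{ cm}^{2} \text{ s}^{-1}\right)(200 \text{ s})}=2 \times 10^{5} \nonumber$

a migration time of just over three minutes and an efficiency consistent with the plate counts noted above.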
It is possible to design an electrophoretic experiment so that anions elute before cations—more about this later—in which smaller, more highly charged anions elute with greater efficiencies.

Selectivity

In chromatography we defined the selectivity between two solutes as the ratio of their retention factors. In capillary electrophoresis the analogous expression for selectivity is

$\alpha=\frac{\mu_{ep, 1}}{\mu_{ep, 2}} \nonumber$

where $\mu_{ep,1}$ and $\mu_{ep,2}$ are the electrophoretic mobilities for the two solutes, chosen such that $\alpha \ge 1$. We can often improve selectivity by adjusting the pH of the buffer solution. For example, $\text{NH}_4^+$ is a weak acid with a pKa of 9.75. At a pH of 9.75 the concentrations of $\text{NH}_4^+$ and NH3 are equal. Decreasing the pH below 9.75 increases the solute's electrophoretic mobility because a greater fraction of the solute is present as the cation $\text{NH}_4^+$. On the other hand, raising the pH above 9.75 increases the proportion of neutral NH3, decreasing the solute's electrophoretic mobility.

Resolution

The resolution between two solutes is

$R = \frac {0.177(\mu_{ep,2} - \mu_{ep,1})\sqrt{V}} {\sqrt{D\left(\mu_{avg} + \mu_{eof}\right)}} \label{12.11}$

where $\mu_{avg}$ is the average electrophoretic mobility for the two solutes. Increasing the applied voltage and decreasing the electroosmotic flow velocity improves resolution. The latter effect is particularly important. Although increasing electroosmotic flow improves analysis time and efficiency, it decreases resolution.

Instrumentation

The basic instrumentation for capillary electrophoresis is shown in Figure 12.7.4 and includes a power supply for applying the electric field, anode and cathode compartments that contain reservoirs of the buffer solution, a sample vial that contains the sample, the capillary tube, and a detector. Each part of the instrument receives further consideration in this section.

Capillary Tubes

Figure 12.7.5 shows a cross-section of a typical capillary tube. Most capillary tubes are made from fused silica coated with a 15–35 μm layer of polyimide to give it mechanical strength. The inner diameter is typically 25–75 μm, which is smaller than the internal diameter of a capillary GC column, with an outer diameter of 200–375 μm. The capillary column's narrow opening and the thickness of its walls are important. When an electric field is applied to the buffer solution, current flows through the capillary. This current leads to the release of heat, which we call Joule heating. The amount of heat released is proportional to the capillary's radius and to the magnitude of the electrical field. Joule heating is a problem because it changes the buffer's viscosity, with the solution at the center of the capillary being less viscous than that near the capillary walls. Because a solute's electrophoretic mobility depends on the buffer's viscosity (see Equation \ref{12.2}), solute species in the center of the capillary migrate at a faster rate than those near the capillary walls. The result is an additional source of band broadening that degrades the separation. Capillaries with smaller inner diameters generate less Joule heating, and capillaries with larger outer diameters are more effective at dissipating the heat. Placing the capillary tube inside a thermostated jacket is another method for minimizing the effect of Joule heating; in this case a smaller outer diameter allows for a more rapid dissipation of thermal energy.
Injecting the Sample There are two common methods for injecting a sample into a capillary electrophoresis column: hydrodynamic injection and electrokinetic injection. In both methods the capillary tube is filled with the buffer solution. One end of the capillary tube is placed in the destination reservoir and the other end is placed in the sample vial. Hydrodynamic injection uses pressure to force a small portion of sample into the capillary tubing. A difference in pressure is applied across the capillary either by pressurizing the sample vial or by applying a vacuum to the destination reservoir. The volume of sample injected, in liters, is given by the following equation $V_{\text {inj}}=\frac{\Delta P d^{4} \pi t}{128 \eta L} \times 10^{3} \label{12.12}$ where $\Delta P$ is the difference in pressure across the capillary in pascals, d is the capillary’s inner diameter in meters, t is the amount of time the pressure is applied in seconds, $\eta$ is the buffer’s viscosity in kg m–1 s–1, and L is the length of the capillary tubing in meters. The factor of 103 changes the units from cubic meters to liters. For a hydrodynamic injection we move the capillary from the source reservoir to the sample. The anode remains in the source reservoir. A hydrodynamic injection also is possible if we raise the sample vial above the destination reservoir and briefly insert the filled capillary. Example 12.7.1 In a hydrodynamic injection we apply a pressure difference of $2.5 \times 10^3$ Pa (a $\Delta P \approx 0.02 \text{ atm}$) for 2 s to a 75-cm long capillary tube with an internal diameter of 50 μm. Assuming the buffer’s viscosity is 10–3 kg m–1 s–1, what volume and length of sample did we inject? Solution Making appropriate substitutions into Equation \ref{12.12} gives the sample’s volume as $V_{inj}=\frac{\left(2.5 \times 10^{3} \text{ kg} \text{ m}^{-1} \text{ s}^{-2}\right)\left(50 \times 10^{-6} \text{ m}\right)^{4}(3.14)(2 \text{ s})}{(128)\left(0.001 \text{ kg} \text{ m}^{-1} \text{ s}^{-1}\right)(0.75 \text{ m})} \times 10^{3} \mathrm{L} / \mathrm{m}^{3} \nonumber$ $V_{inj} = 1 \times 10^{-9} \text{ L} = 1 \text{ nL} \nonumber$ Because the interior of the capillary is cylindrical, the length of the sample, l, is easy to calculate using the equation for the volume of a cylinder; thus $l=\frac{V_{\text {inj}}}{\pi r^{2}}=\frac{\left(1 \times 10^{-9} \text{ L}\right)\left(10^{-3} \text{ m}^{3} / \mathrm{L}\right)}{(3.14)\left(25 \times 10^{-6} \text{ m}\right)^{2}}=5 \times 10^{-4} \text{ m}=0.5 \text{ mm} \nonumber$ Exercise 12.7.1 Suppose you need to limit your injection to less than 0.20% of the capillary’s length. Using the information from Example 12.7.1 , what is the maximum injection time for a hydrodynamic injection? Answer The capillary is 75 cm long, which means that 0.20% of that sample’s maximum length is 0.15 cm. To convert this to the maximum volume of sample we use the equation for the volume of a cylinder. $V_{i n j}=l \pi r^{2}=(0.15 \text{ cm})(3.14)\left(25 \times 10^{-4} \text{ cm}\right)^{2}=2.94 \times 10^{-6} \text{ cm}^{3} \nonumber$ Given that 1 cm3 is equivalent to 1 mL, the maximum volume is $2.94 \times 10^{-6}$ mL or $2.94 \times 10^{-9}$ L. To find the maximum injection time, we first solve Equation \ref{12.12} for t $t=\frac{128 V_{inj} \eta L}{P d^{4} \pi} \times 10^{-3} \text{ m}^{3} / \mathrm{L} \nonumber$ and then make appropriate substitutions. 
$t=\frac{(128)\left(2.94 \times 10^{-9} \text{ L}\right)\left(0.001 \text{ kg } \text{ m}^{-1} \text{ s}^{-1}\right)(0.75 \text{ m})}{\left(2.5 \times 10^{3} \text{ kg } \mathrm{m}^{-1} \text{ s}^{-2}\right)\left(50 \times 10^{-6} \text{ m}\right)^{4}(3.14)} \times \frac{10^{-3} \text{ m}^{3}}{\mathrm{L}} = 5.8 \text{ s} \nonumber$

The maximum injection time, therefore, is 5.8 s.

In an electrokinetic injection we place both the capillary and the anode into the sample and briefly apply a potential. The volume of injected sample is the product of the capillary's cross sectional area and the length of the capillary occupied by the sample. In turn, this length is the product of the solute's velocity (see Equation \ref{12.6}) and time; thus

$V_{inj} = \pi r^2 L = \pi r^2 (\mu_{ep} + \mu_{eof})E^{\prime}t \label{12.13}$

where r is the capillary's radius, L is the length of the capillary occupied by the sample, and $E^{\prime}$ is the effective electric field in the sample. An important consequence of Equation \ref{12.13} is that an electrokinetic injection is biased toward solutes with larger electrophoretic mobilities. If two solutes have equal concentrations in a sample, we inject a larger volume—and thus more moles—of the solute with the larger $\mu_{ep}$.

The electric field in the sample is different from the electric field in the rest of the capillary because the sample and the buffer have different ionic compositions. In general, the sample's ionic strength is smaller, which makes its conductivity smaller. The effective electric field is

$E^{\prime} = E \times \frac {\chi_\text{buffer}} {\chi_\text{sample}} \nonumber$

where $\chi_\text{buffer}$ and $\chi_\text{sample}$ are the conductivities of the buffer and the sample, respectively.

When an analyte's concentration is too small to detect reliably, it may be possible to inject it in a manner that increases its concentration. This method of injection is called stacking. Stacking is accomplished by placing the sample in a solution whose ionic strength is significantly less than that of the buffer in the capillary tube. Because the sample plug has a lower concentration of buffer ions, the effective field strength across the sample plug, $E^{\prime}$, is larger than that in the rest of the capillary. We know from Equation \ref{12.1} that electrophoretic velocity is directly proportional to the electrical field. As a result, the cations in the sample plug migrate toward the cathode with a greater velocity, and the anions migrate more slowly—neutral species are unaffected and move with the electroosmotic flow. When the ions reach their respective boundaries between the sample plug and the buffer, the electrical field decreases and the electrophoretic velocity of the cations decreases and that for the anions increases. As shown in Figure 12.7.6, the result is a stacking of cations and anions into separate, smaller sampling zones. Over time, the buffer within the capillary becomes more homogeneous and the separation proceeds without additional stacking.

Applying the Electrical Field

Migration in electrophoresis occurs in response to an applied electric field. The ability to apply a large electric field is important because higher voltages lead to shorter analysis times (Equation \ref{12.9}), more efficient separations (Equation \ref{12.10}), and better resolution (Equation \ref{12.11}). Because narrow-bored capillary tubes dissipate Joule heating so efficiently, voltages of up to 40 kV are possible.
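Because migration time is inversely proportional to the applied potential, the plate count is directly proportional to it, and resolution scales with its square root (Equations \ref{12.9}–\ref{12.11}), the benefit of a larger potential is easy to quantify. Doubling the applied potential from an assumed 15 kV to an assumed 30 kV, for example, gives

$\frac{t_{m, 30 \text{ kV}}}{t_{m, 15 \text{ kV}}}=\frac{15}{30}=0.50 \quad \quad \frac{N_{30 \text{ kV}}}{N_{15 \text{ kV}}}=\frac{30}{15}=2.0 \quad \quad \frac{R_{30 \text{ kV}}}{R_{15 \text{ kV}}}=\sqrt{\frac{30}{15}} \approx 1.4 \nonumber$

halving the analysis time, doubling the number of theoretical plates, and improving resolution by about 40%, provided that Joule heating remains under control.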
Because of the high voltages, be sure to follow your instrument's safety guidelines.

Detectors

Most of the detectors used in HPLC also find use in capillary electrophoresis. Among the more common detectors are those based on the absorption of UV/Vis radiation, fluorescence, conductivity, amperometry, and mass spectrometry. Whenever possible, detection is done "on-column" before the solutes elute from the capillary tube and additional band broadening occurs.

UV/Vis detectors are among the most popular. Because absorbance is directly proportional to path length, the capillary tubing's small diameter leads to signals that are smaller than those obtained in HPLC. Several approaches have been used to increase the pathlength, including a Z-shaped sample cell and multiple reflections (see Figure 12.7.7). Detection limits are about $10^{-7}$ M.

Better detection limits are obtained using fluorescence, particularly when using a laser as an excitation source. When using fluorescence detection a small portion of the capillary's protective coating is removed and the laser beam is focused on the inner portion of the capillary tubing. Emission is measured at an angle of 90° to the laser. Because the laser provides an intense source of radiation that can be focused to a narrow spot, detection limits are as low as $10^{-16}$ M.

Solutes that do not absorb UV/Vis radiation or that do not undergo fluorescence can be detected by other detectors. Table 12.7.1 provides a list of detectors for capillary electrophoresis along with some of their important characteristics.

Table 12.7.1. Characteristics of Detectors for Capillary Electrophoresis
detector | selectivity (universal or analyte must ...) | detection limit (moles injected) | detection limit (molarity) | on-column detection?
UV/Vis absorbance | have a UV/Vis chromophore | $10^{-13} - 10^{-16}$ | $10^{-5} - 10^{-7}$ | yes
indirect absorbance | universal | $10^{-12} - 10^{-15}$ | $10^{-4} - 10^{-6}$ | yes
fluorescence | have a favorable quantum yield | $10^{-13} - 10^{-17}$ | $10^{-7} - 10^{-9}$ | yes
laser fluorescence | have a favorable quantum yield | $10^{-18} - 10^{-20}$ | $10^{-13} - 10^{-16}$ | yes
mass spectrometer | universal (total ion) or selective (single ion) | $10^{-16} - 10^{-17}$ | $10^{-8} - 10^{-10}$ | no
amperometry | undergo oxidation or reduction | $10^{-18} - 10^{-19}$ | $10^{-7} - 10^{-10}$ | no
conductivity | universal | $10^{-15} - 10^{-16}$ | $10^{-7} - 10^{-9}$ | no
radiometric | be radioactive | $10^{-17} - 10^{-19}$ | $10^{-10} - 10^{-12}$ | yes

Capillary Electrophoresis Methods

There are several different forms of capillary electrophoresis, each of which has its particular advantages. Four of these methods are described briefly in this section.

Capillary Zone Electrophoresis (CZE)

The simplest form of capillary electrophoresis is capillary zone electrophoresis. In CZE we fill the capillary tube with a buffer and, after loading the sample, place the ends of the capillary tube in reservoirs that contain additional buffer. Usually the end of the capillary containing the sample is the anode and solutes migrate toward the cathode at a velocity determined by their respective electrophoretic mobilities and the electroosmotic flow. Cations elute first, with smaller, more highly charged cations eluting before larger cations with smaller charges. Neutral species elute as a single band. Anions are the last species to elute, with smaller, more negatively charged anions eluting last. We can reverse the direction of electroosmotic flow by adding an alkylammonium salt to the buffer solution.
As shown in Figure 12.7.8, the positively charged end of the alkylammonium ion binds to the negatively charged silanate ions on the capillary's walls. The tail of the alkylammonium ion is hydrophobic and associates with the tail of another alkylammonium ion. The result is a layer of positive charges that attract anions in the buffer. The migration of these solvated anions toward the anode reverses the electroosmotic flow's direction. The order of elution is exactly opposite that observed under normal conditions.

Coating the capillary's walls with a nonionic reagent eliminates the electroosmotic flow. In this form of CZE the cations migrate from the anode to the cathode. Anions elute into the source reservoir and neutral species remain stationary.

Capillary zone electrophoresis provides effective separations of charged species, including inorganic anions and cations, organic acids and amines, and large biomolecules such as proteins. For example, CZE was used to separate a mixture of 36 inorganic and organic ions in less than three minutes [Jones, W. R.; Jandik, P. J. Chromatog. 1992, 608, 385–393]. A mixture of neutral species, of course, cannot be resolved.

Micellar Electrokinetic Capillary Chromatography (MEKC)

One limitation to CZE is its inability to separate neutral species. Micellar electrokinetic capillary chromatography overcomes this limitation by adding a surfactant, such as sodium dodecylsulfate (Figure 12.7.9a), to the buffer solution. Sodium dodecylsulfate, or SDS, consists of a long-chain hydrophobic tail and a negatively charged ionic functional group at its head. When the concentration of SDS is sufficiently large a micelle forms. A micelle consists of a spherical agglomeration of 40–100 surfactant molecules in which the hydrocarbon tails point inward and the negatively charged heads point outward (Figure 12.7.9b).

Because micelles have a negative charge, they migrate toward the cathode with a velocity less than the electroosmotic flow velocity. Neutral species partition themselves between the micelles and the buffer solution in a manner similar to the partitioning of solutes between the two liquid phases in HPLC. Because there is a partitioning between two phases, we include the descriptive term chromatography in the technique's name. Note that in MEKC both phases are mobile.

The elution order for neutral species in MEKC depends on the extent to which each species partitions into the micelles. Hydrophilic neutrals are insoluble in the micelle's hydrophobic inner environment and elute as a single band, as they would in CZE. Neutral solutes that are extremely hydrophobic are completely soluble in the micelle, eluting with the micelles as a single band. Those neutral species that exist in a partition equilibrium between the buffer and the micelles elute between the completely hydrophilic and completely hydrophobic neutral species. Those neutral species that favor the buffer elute before those favoring the micelles. Micellar electrokinetic chromatography is used to separate a wide variety of samples, including mixtures of pharmaceutical compounds, vitamins, and explosives.

Capillary Gel Electrophoresis (CGE)

In capillary gel electrophoresis the capillary tubing is filled with a polymeric gel. Because the gel is porous, a solute migrates through the gel with a velocity determined both by its electrophoretic mobility and by its size. The ability to effect a separation using size is helpful when the solutes have similar electrophoretic mobilities.
For example, fragments of DNA of varying length have similar charge-to-size ratios, making their separation by CZE difficult. Because the DNA fragments are of different size, a CGE separation is possible. The capillary used for CGE usually is treated to eliminate electroosmotic flow to prevent the gel from extruding from the capillary tubing. Samples are injected electrokinetically because the gel provides too much resistance for hydrodynamic sampling. The primary application of CGE is the separation of large biomolecules, including DNA fragments, proteins, and oligonucleotides. Capillary Electrochromatography (CEC) Another approach to separating neutral species is capillary electrochromatography. In CEC the capillary tubing is packed with 1.5–3 μm particles coated with a bonded stationary phase. Neutral species separate based on their ability to partition between the stationary phase and the buffer, which is moving as a result of the electroosmotic flow; Figure 12.7.10 provides a representative example for the separation of a mixture of hydrocarbons. A CEC separation is similar to the analogous HPLC separation, but without the need for high pressure pumps. Efficiency in CEC is better than in HPLC, and analysis times are shorter. The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of a vitamin B complex by capillary zone electrophoresis or by micellar electrokinetic capillary chromatography provides an instructive example of a typical procedure. The description here is based on Smyth, W. F. Analytical Chemistry of Complex Matrices, Wiley Teubner: Chichester, England, 1996, pp. 154–156. Representative Method 12.7.1: Determination of Vitamin B Complex by CZE or MEKC Description of Method The water soluble vitamins B1 (thiamine hydrochloride), B2 (riboflavin), B3 (niacinamide), and B6 (pyridoxine hydrochloride) are determined by CZE using a pH 9 sodium tetraborate-sodium dihydrogen phosphate buffer, or by MEKC using the same buffer with the addition of sodium dodecyl sulfate. Detection is by UV absorption at 200 nm. An internal standard of o-ethoxybenzamide is used to standardize the method. Procedure Crush a vitamin B complex tablet and place it in a beaker with 20.00 mL of a 50 % v/v methanol solution that is 20 mM in sodium tetraborate and 100.0 ppm in o-ethoxybenzamide. After mixing for 2 min to ensure that the B vitamins are dissolved, pass a 5.00-mL portion through a 0.45-μm filter to remove insoluble binders. Load an approximately 4 nL sample into a capillary column with an inner diameter of a 50 μm. For CZE the capillary column contains a 20 mM pH 9 sodium tetraborate-sodium dihydrogen phosphate buffer. For MEKC the buffer is also 150 mM in sodium dodecyl sulfate. Apply a 40 kV/m electrical field to effect both the CZE and MEKC separations. Questions 1. Methanol, which elutes at 4.69 min, is included as a neutral species to indicate the electroosmotic flow. When using standard solutions of each vitamin, CZE peaks are found at 3.41 min, 4.69 min, 6.31 min, and 8.31 min. Examine the structures and pKa information in Figure 12.7.11 and identify the order in which the four B vitamins elute. At a pH of 9, vitamin B1 is a cation and elutes before the neutral species methanol; thus it is the compound that elutes at 3.41 min. Vitamin B3 is a neutral species at a pH of 9 and elutes with methanol at 4.69 min. 
The remaining two B vitamins are weak acids that partially ionize to weak base anions at a pH of 9. Of the two, vitamin B6 is the stronger acid (a pKa of 9.0 versus a pKa of 9.7) and is present to a greater extent in its anionic form. Vitamin B6, therefore, is the last of the vitamins to elute. 2. The order of elution when using MEKC is vitamin B3 (5.58 min), vitamin B6 (6.59 min), vitamin B2 (8.81 min), and vitamin B1 (11.21 min). What conclusions can you make about the solubility of the B vitamins in the sodium dodecylsulfate micelles? The micelles elute at 17.7 min. The elution time for vitamin B1 shows the greatest change, increasing from 3.41 min to 11.21 minutes. Clearly vitamin B1 has the greatest solubility in the micelles. Vitamin B2 and vitamin B3 have a more limited solubility in the micelles, and show only slightly longer elution times in the presence of the micelles. Interestingly, the elution time for vitamin B6 decreases in the presence of the micelles. 3. For quantitative work an internal standard of o-ethoxybenzamide is added to all samples and standards. Why is an internal standard necessary? Although the method of injection is not specified, neither a hydrodynamic injection nor an electrokinetic injection is particularly reproducible. The use of an internal standard compensates for this limitation. Evaluation When compared to GC and HPLC, capillary electrophoresis provides similar levels of accuracy, precision, and sensitivity, and it provides a comparable degree of selectivity. The amount of material injected into a capillary electrophoretic column is significantly smaller than that for GC and HPLC—typically 1 nL versus 0.1 μL for capillary GC and 1–100 μL for HPLC. Detection limits for capillary electrophoresis, however, are 100–1000 times poorer than that for GC and HPLC. The most significant advantages of capillary electrophoresis are improvements in separation efficiency, time, and cost. Capillary electrophoretic columns contain substantially more theoretical plates ($\approx 10^6$ plates/m) than that found in HPLC ($\approx 10^5$ plates/m) and capillary GC columns ($\approx 10^3$ plates/m), providing unparalleled resolution and peak capacity. Separations in capillary electrophoresis are fast and efficient. Furthermore, the capillary column’s small volume means that a capillary electrophoresis separation requires only a few microliters of buffer, compared to 20–30 mL of mobile phase for a typical HPLC separation.
1. The following data were obtained for four compounds separated on a 20-m capillary column.

compound | tr (min) | w (min)
A | 8.04 | 0.15
B | 8.26 | 0.15
C | 8.43 | 0.16

(a) Calculate the number of theoretical plates for each compound and the average number of theoretical plates for the column. (b) Calculate the average height of a theoretical plate, in mm. (c) Explain why it is possible for each compound to have a different number of theoretical plates.

2. Using the data from Problem 1, calculate the resolution and the selectivity factors for each pair of adjacent compounds. For resolution, use both equation 12.2.1 and equation 12.3.3, and compare your results. Discuss how you might improve the resolution between compounds B and C. The retention time for a nonretained solute is 1.19 min.

3. Use the chromatogram in Figure 12.8.1, obtained using a 2-m column, to determine values for tr, w, $t_r^{\prime}$, k, N, and H.

4. Use the partial chromatogram in Figure 12.8.2 to determine the resolution between the two solute bands.

5. The chromatogram in Problem 4 was obtained on a 2-m column with a column dead time of 50 s. Suppose you want to increase the resolution between the two components to 1.5. Without changing the height of a theoretical plate, what length column do you need? What height of a theoretical plate do you need to achieve a resolution of 1.5 without increasing the column's length?

6. Complete the following table.

NB | $\alpha$ | kB | R
100000 | 1.05 | 0.50 |
10000 | 1.10 | | 1.50
10000 | | 4.0 | 1.00
| 1.05 | 3.0 | 1.75

7. Moody studied the efficiency of a GC separation of 2-butanone on a dinonyl phthalate packed column [Moody, H. W. J. Chem. Educ. 1982, 59, 218–219]. Evaluating plate height as a function of flow rate gave a van Deemter equation for which A is 1.65 mm, B is 25.8 mm•mL min–1, and C is 0.0236 mm•min mL–1. (a) Prepare a graph of H versus u for flow rates between 5–120 mL/min. (b) For what range of flow rates does each term in the van Deemter equation have the greatest effect? (c) What is the optimum flow rate and the corresponding height of a theoretical plate? (d) For open-tubular columns the A term no longer is needed. If the B and C terms remain unchanged, what is the optimum flow rate and the corresponding height of a theoretical plate? (e) Compared to the packed column, how many more theoretical plates are in the open-tubular column?

8. Hsieh and Jorgenson prepared 12–33 μm inner diameter HPLC columns packed with 5.44-μm spherical stationary phase particles [Hsieh, S.; Jorgenson, J. W. Anal. Chem. 1996, 68, 1212–1217]. To evaluate these columns they measured reduced plate height, h, as a function of reduced flow rate, v,

$h=\frac{H}{d_{p}} \quad v=\frac{u d_{p}}{D_{m}} \nonumber$

where dp is the particle diameter and Dm is the solute's diffusion coefficient in the mobile phase. The data were analyzed using van Deemter plots. The following table contains a portion of their results for norepinephrine.

internal diameter (µm) | A | B | C
33 | 0.63 | 1.32 | 0.10
33 | 0.67 | 1.30 | 0.08
23 | 0.40 | 1.34 | 0.09
23 | 0.58 | 1.11 | 0.09
17 | 0.31 | 1.47 | 0.11
17 | 0.40 | 1.41 | 0.11
12 | 0.22 | 1.53 | 0.11
12 | 0.19 | 1.27 | 0.12

(a) Construct separate van Deemter plots using the data in the first row and in the last row for reduced flow rates in the range 0.7–15. Determine the optimum flow rate and plate height for each case given dp = 5.44 μm and Dm = $6.23 \times 10^{-6}$ cm2 s–1. (b) The A term in the van Deemter equation is strongly correlated with the column's inner diameter, with smaller diameter columns providing smaller values of A.
Offer an explanation for this observation. Hint: consider how many particles can fit across a capillary of each diameter. When comparing columns, chromatographers often use dimensionless, reduced parameters. By including particle size and the solute’s diffusion coefficient, the reduced plate height and reduced flow rate correct for differences between the packing material, the solute, and the mobile phase. 9. A mixture of n-heptane, tetrahydrofuran, 2-butanone, and n-propanol elutes in this order when using a polar stationary phase such as Carbowax. The elution order is exactly the opposite when using a nonpolar stationary phase such as polydimethyl siloxane. Explain the order of elution in each case. 10. The analysis of trihalomethanes in drinking water is described in Representative Method 12.4.1. A single standard that contains all four trihalomethanes gives the following results. compound concentration (ppb) peak area CHCl3 1.30 $1.35 \times 10^4$ CHCl2Br 0.90 $6.12 \times 10^4$ CHClBr2 4.00 $1.71 \times 10^4$ CHBr3 1.20 $1.52 \times 10^4$ Analysis of water collected from a drinking fountain gives areas of $1.56 \times 10^4$, $5.13 \times10^4$, $1.49 \times 10^4$, and $1.76 \times 10^4$ for, respectively, CHCl3, CHCl2Br, CHClBr2, and CHBr3. All peak areas were corrected for variations in injection volumes using an internal standard of 1,2-dibromopentane. Determine the concentration of each of the trihalomethanes in the sample of water. 11. Zhou and colleagues determined the %w/w H2O in methanol by capillary column GC using a polar stationary phase and a thermal conductivity detector [Zhou, X.; Hines, P. A.; White, K. C.; Borer, M. W. Anal. Chem. 1998, 70, 390–394]. A series of calibration standards gave the following results. %w/w H2O peak height (arb. units) 0.00 1.15 0.0145 2.74 0.0472 6.33 0.0951 11.58 0.1757 20.43 0.2901 32.97 (a) What is the %w/w H2O in a sample that has a peak height of 8.63? (b) The %w/w H2O in a freeze-dried antibiotic is determined in the following manner. A 0.175-g sample is placed in a vial along with 4.489 g of methanol. Water in the vial extracts into the methanol. Analysis of the sample gave a peak height of 13.66. What is the %w/w H2O in the antibiotic? 12. Loconto and co-workers describe a method for determining trace levels of water in soil [Loconto, P. R.; Pan, Y. L.; Voice, T. C. LC•GC 1996, 14, 128–132]. The method takes advantage of the reaction of water with calcium carbide, CaC2, to produce acetylene gas, C2H2. By carrying out the reaction in a sealed vial, the amount of acetylene produced is determined by sampling the headspace. In a typical analysis a sample of soil is placed in a sealed vial with CaC2. Analysis of the headspace gives a blank corrected signal of $2.70 \times 10^5$. A second sample is prepared in the same manner except that a standard addition of 5.0 mg H2O/g soil is added, giving a blank-corrected signal of $1.06 \times 10^6$. Determine the milligrams H2O/g soil in the soil sample. 13. Van Atta and Van Atta used gas chromatography to determine the %v/v methyl salicylate in rubbing alcohol [Van Atta, R. E.; Van Atta, R. L. J. Chem. Educ. 1980, 57, 230–231]. A set of standard additions was prepared by transferring 20.00 mL of rubbing alcohol to separate 25-mL volumetric flasks and pipeting 0.00 mL, 0.20 mL, and 0.50 mL of methyl salicylate to the flasks. All three flasks were diluted to volume using isopropanol. Analysis of the three samples gave peak heights for methyl salicylate of 57.00 mm, 88.5 mm, and 132.5 mm, respectively. 
Determine the %v/v methyl salicylate in the rubbing alcohol. 14. The amount of camphor in an analgesic ointment is determined by GC using the method of internal standards [Pant, S. K.; Gupta, P. N.; Thomas, K. M.; Maitin, B. K.; Jain, C. L. LC•GC 1990, 8, 322–325]. A standard sample is prepared by placing 45.2 mg of camphor and 2.00 mL of a 6.00 mg/mL internal standard solution of terpene hydrate in a 25-mL volumetric flask and diluting to volume with CCl4. When an approximately 2-μL sample of the standard is injected, the FID signals for the two components are measured (in arbitrary units) as 67.3 for camphor and 19.8 for terpene hydrate. A 53.6-mg sample of an analgesic ointment is prepared for analysis by placing it in a 50-mL Erlenmeyer flask along with 10 mL of CCl4. After heating to 50oC in a water bath, the sample is cooled to below room temperature and filtered. The residue is washed with two 5-mL portions of CCl4 and the combined filtrates are collected in a 25-mL volumetric flask. After adding 2.00 mL of the internal standard solution, the contents of the flask are diluted to volume with CCl4. Analysis of an approximately 2-μL sample gives FID signals of 13.5 for the terpene hydrate and 24.9 for the camphor. Report the %w/w camphor in the analgesic ointment. 15. The concentration of pesticide residues on agricultural products, such as oranges, is determined by GC-MS [Feigel, C. Varian GC/MS Application Note, Number 52]. Pesticide residues are extracted from the sample using methylene chloride and concentrated by evaporating the methylene chloride to a smaller volume. Calibration is accomplished using anthracene-d10 as an internal standard. In a study to determine the parts per billion heptachlor epoxide on oranges, a 50.0-g sample of orange rinds is chopped and extracted with 50.00 mL of methylene chloride. After removing any insoluble material by filtration, the methylene chloride is reduced in volume, spiked with a known amount of the internal standard and diluted to 10 mL in a volumetric flask. Analysis of the sample gives a peak–area ratio (Aanalyte/Aintstd) of 0.108. A series of calibration standards, each containing the same amount of anthracene-d10 as the sample, gives the following results. ppb heptachlor epoxide Aanalyte/Aintstd 20.0 0.065 60.0 0.153 200.0 0.637 500.0 1.554 1000.0 3.198 Report the nanograms per gram of heptachlor epoxide residue on the oranges. 16. The adjusted retention times for octane, toluene, and nonane on a particular GC column are 15.98 min, 17.73 min, and 20.42 min, respectively. What is the retention index for each compound? 17. The following data were collected for a series of normal alkanes using a stationary phase of Carbowax 20M. alkane $t_r^{\prime}$ (min) pentane 0.79 hexane 1.99 heptane 4.47 octane 14.12 nonane 33.11 What is the retention index for a compound whose adjusted retention time is 9.36 min? 18. The following data were reported for the gas chromatographic analysis of p-xylene and methylisobutylketone (MIBK) on a capillary column [Marriott, P. J.; Carpenter, P. D. J. Chem. Educ. 1996, 73, 96–99]. injection mode compound tr (min) peak area (arb. units) peak width (min) split MIBK 1.878 54285 0.028 p-xylene 5.234 123483 0.044 splitless MIBK 3.420 2493005 1.057 p-xylene 5.795 3396656 1.051 Explain the difference in the retention times, the peak areas, and the peak widths when switching from a split injection to a splitless injection. 19. 
Otto and Wegscheider report the following retention factors for the reversed-phase separation of 2-aminobenzoic acid on a C18 column when using 10% v/v methanol as a mobile phase [Otto, M.; Wegscheider, W. J. Chromatog. 1983, 258, 11–22]. pH k 2.0 10.5 3.0 16.7 4.0 15.8 5.0 8.0 6.0 2.2 7.0 1.8 Explain the effect of pH on the retention factor for 2-aminobenzoic acid. 20. Haddad and associates report the following retention factors for the reversed-phase separation of salicylamide and caffeine [Haddad, P.; Hutchins, S.; Tuffy, M. J. Chem. Educ. 1983, 60, 166–168]. %v/v methanol 30% 35% 40% 45% 50% 55% ksal 2.4 1.6 1.6 1.0 0.7 0.7 kcaff 4.3 2.8 2.3 1.4 1.1 0.9 (a) Explain the trends in the retention factors for these compounds. (b) What is the advantage of using a mobile phase with a smaller %v/v methanol? Are there any disadvantages? 21. Suppose you need to separate a mixture of benzoic acid, aspartame, and caffeine in a diet soda. The following information is available. tr in aqueous mobile phase of pH compound 3.0 3.5 4.0 4.5 benzoic acid 7.4 7.0 6.9 4.4 aspartame 5.9 6.0 7.1 8.1 caffeine 3.6 3.7 4.1 4.4 (a) Explain the change in each compound’s retention time. (b) Prepare a single graph that shows retention time versus pH for each compound. Using your plot, identify a pH level that will yield an acceptable separation. 22. The composition of a multivitamin tablet is determined using an HPLC with a diode array UV/Vis detector. A 5-μL standard sample that contains 170 ppm vitamin C, 130 ppm niacin, 120 ppm niacinamide, 150 ppm pyridoxine, 60 ppm thiamine, 15 ppm folic acid, and 10 ppm riboflavin is injected into the HPLC, giving signals (in arbitrary units) of, respectively, 0.22, 1.35, 0.90, 1.37, 0.82, 0.36, and 0.29. The multivitamin tablet is prepared for analysis by grinding into a powder and transferring to a 125-mL Erlenmeyer flask that contains 10 mL of 1% v/v NH3 in dimethyl sulfoxide. After sonicating in an ultrasonic bath for 2 min, 90 mL of 2% acetic acid is added and the mixture is stirred for 1 min and sonicated at 40oC for 5 min. The extract is then filtered through a 0.45-μm membrane filter. Injection of a 5-μL sample into the HPLC gives signals of 0.87 for vitamin C, 0.00 for niacin, 1.40 for niacinamide, 0.22 for pyridoxine, 0.19 for thiamine, 0.11 for folic acid, and 0.44 for riboflavin. Report the milligrams of each vitamin present in the tablet. 23. The amount of caffeine in an analgesic tablet was determined by HPLC using a normal calibration curve. Standard solutions of caffeine were prepared and analyzed using a 10-μL fixed-volume injection loop. Results for the standards are summarized in the following table. concentration (ppm) signal (arb. units) 50.0 8354 100.0 16925 150.0 25218 200.0 33584 250.0 42002 The sample is prepared by placing a single analgesic tablet in a small beaker and adding 10 mL of methanol. After allowing the sample to dissolve, the contents of the beaker, including the insoluble binder, are quantitatively transferred to a 25-mL volumetric flask and diluted to volume with methanol. The sample is then filtered, and a 1.00-mL aliquot transferred to a 10-mL volumetric flask and diluted to volume with methanol. When analyzed by HPLC, the signal for caffeine is found to be 21 469. Report the milligrams of caffeine in the analgesic tablet. 24.
Kagel and Farwell report a reversed-phase HPLC method for determining the concentration of acetylsalicylic acid (ASA) and caffeine (CAF) in analgesic tablets using salicylic acid (SA) as an internal standard [Kagel, R. A.; Farwell, S. O. J. Chem. Educ. 1983, 60, 163–166]. A series of standards was prepared by adding known amounts of ace- tylsalicylic acid and caffeine to 250-mL Erlenmeyer flasks and adding 100 mL of methanol. A 10.00-mL aliquot of a standard solution of salicylic acid was then added to each. The following results were obtained for a typical set of standard solutions. milligrams of peak height ratios for standard ASA CAF ASA/SA CAF/SA 1 200.0 20.0 20.5 10.6 2 250.0 40.0 25.1 23.0 3 300.0 60.0 30.9 36.8 A sample of an analgesic tablet was placed in a 250-mL Erlenmeyer flask and dissolved in 100 mL of methanol. After adding a 10.00-mL portion of the internal standard, the solution was filtered. Analysis of the sample gave a peak height ratio of 23.2 for ASA and of 17.9 for CAF. (a) Determine the milligrams of ASA and CAF in the tablet. (b) Why is it necessary to filter the sample? (c) The directions indicate that approximately 100 mL of methanol is used to dissolve the standards and samples. Why is it not necessary to measure this volume more precisely? (d) In the presence of moisture, ASA decomposes to SA and acetic acid. What complication might this present for this analysis? How might you evaluate whether this is a problem? 25. Bohman and colleagues described a reversed-phase HPLC method for the quantitative analysis of vitamin A in food using the method of standard additions Bohman, O.; Engdahl, K. A.; Johnsson, H. J. Chem. Educ. 1982, 59, 251–252]. In a typical example, a 10.067-g sample of cereal is placed in a 250-mL Erlenmeyer flask along with 1 g of sodium ascorbate, 40 mL of ethanol, and 10 mL of 50% w/v KOH. After refluxing for 30 min, 60 mL of ethanol is added and the solution cooled to room temperature. Vitamin A is extracted using three 100-mL portions of hexane. The combined portions of hexane are evaporated and the residue containing vitamin A transferred to a 5-mL volumetric flask and diluted to volume with methanol. A standard addition is prepared in a similar manner using a 10.093-g sample of the cereal and spiking with 0.0200 mg of vitamin A. Injecting the sample and standard addition into the HPLC gives peak areas of, respectively, $6.77 \times10^3$ and $1.32 \times 10^4$. Report the vitamin A content of the sample in milligrams/100 g cereal. 26. Ohta and Tanaka reported on an ion-exchange chromatographic method for the simultaneous analysis of several inorganic anions and the cations Mg2+ and Ca2+ in water [Ohta, K.; Tanaka, K. Anal. Chim. Acta 1998, 373, 189–195]. The mobile phase includes the ligand 1,2,4-benzenetricarboxylate, which absorbs strongly at 270 nm. Indirect detection of the analytes is possible because its absorbance decreases when complexed with an anion. (a) The procedure also calls for adding the ligand EDTA to the mobile phase. What role does the EDTA play in this analysis? (b) A standard solution of 1.0 mM NaHCO3, 0.20 mM NaNO2, 0.20 mM MgSO4, 0.10 mM CaCl2, and 0.10 mM Ca(NO3)2 gives the following peak areas (arbitrary units). ion $\text{HCO}_3^-$ Cl $\text{NO}_2^-$ $\text{NO}_3^-$ peak area 373.5 322.5 264.8 262.7 ion Ca2+ Mg2+ $\text{SO}_4^{2-}$ peak area 458.9 352.0 341.3 Analysis of a river water sample (pH of 7.49) gives the following results. 
ion $\text{HCO}_3^-$ Cl $\text{NO}_2^-$ $\text{NO}_3^-$ peak area 310.0 403.1 3.97 157.6 ion Ca2+ Mg2+ $\text{SO}_4^{2-}$ peak area 734.3 193.6 324.3 Determine the concentration of each ion in the sample. (c) The detection of $\text{HCO}_3^-$ actually gives the total concentration of carbonate in solution ([$\text{CO}_3^{2-}$]+[$\text{HCO}_3^-$]+[H2CO3]). Given that the pH of the water is 7.49, what is the actual concentration of $\text{HCO}_3^-$? (d) An independent analysis gives the following additional concentrations for ions in the sample: [Na+] = 0.60 mM; [$\text{NH}_4^+$] = 0.014 mM; and [K+] = 0.046 mM. A solution’s ion balance is defined as the ratio of the total cation charge to the total anion charge. Determine the charge balance for this sample of water and comment on whether the result is reasonable. 27. The concentrations of Cl, $\text{NO}_2^-$, and $\text{SO}_4^{2-}$ are determined by ion chromatography. A 50-μL standard sample of 10.0 ppm Cl, 2.00 ppm $\text{NO}_2^-$, and 5.00 ppm $\text{SO}_4^{2-}$ gave signals (in arbitrary units) of 59.3, 16.1, and 6.08 respectively. A sample of effluent from a wastewater treatment plant is diluted tenfold and a 50-μL portion gives signals of 44.2 for Cl, 2.73 for $\text{NO}_2^-$, and 5.04 for $\text{SO}_4^{2-}$. Report the parts per million for each anion in the effluent sample. 28. A series of polyvinylpyridine standards of different molecular weight was analyzed by size-exclusion chromatography, yielding the following results. formula weight retention volume (mL) 600000 6.42 100000 7.98 30000 9.30 3000 10.94 When a preparation of polyvinylpyridine of unknown formula weight is analyzed, the retention volume is 8.45 mL. Report the average formula weight for the preparation. 29. Diet soft drinks contain appreciable quantities of aspartame, benzoic acid, and caffeine. What is the expected order of elution for these compounds in a capillary zone electrophoresis separation using a pH 9.4 buffer given that aspartame has pKa values of 2.964 and 7.37, benzoic acid has a pKa of 4.2, and the pKa for caffeine is less than 0. Figure 12.8.3 provides the structures of these compounds. 30. Janusa and coworkers describe the determination of chloride by CZE [Janusa, M. A.; Andermann, L. J.; Kliebert, N. M.; Nannie, M. H. J. Chem. Educ. 1998, 75, 1463–1465]. Analysis of a series of external standards gives the following calibration curve. $\text { area }=-883+5590 \times \mathrm{ppm} \text{ Cl}^{-} \nonumber$ A standard sample of 57.22% w/w Cl is analyzed by placing 0.1011-g portions in separate 100-mL volumetric flasks and diluting to volume. Three unknowns are prepared by pipeting 0.250 mL, 0.500 mL, an 0.750 mL of the bulk unknown in separate 50-mL volumetric flasks and diluting to volume. Analysis of the three unknowns gives areas of 15 310, 31 546, and 47 582, respectively. Evaluate the accuracy of this analysis. 31. The analysis of $\text{NO}_3^-$ in aquarium water is carried out by CZE using $\text{IO}_4^-$ as an internal standard. A standard solution of 15.0 ppm $\text{NO}_3^-$ and 10.0 ppm $\text{IO}_4^-$ gives peak heights (arbitrary units) of 95.0 and 100.1, respectively. A sample of water from an aquarium is diluted 1:100 and sufficient internal standard added to make its concentration 10.0 ppm in $\text{IO}_4^-$. Analysis gives signals of 29.2 and 105.8 for $\text{NO}_3^-$ and $\text{IO}_4^-$, respectively. Report the ppm $\text{NO}_3^-$ in the sample of aquarium water. 32. 
Suggest conditions to separate a mixture of 2-aminobenzoic acid (pKa1 = 2.08, pKa2 = 4.96), benzylamine (pKa = 9.35), and 4-methylphenol (pKa = 10.26) by capillary zone electrophoresis. Figure 12.8.4 provides the structures of these compounds. 33. McKillop and associates examined the electrophoretic separation of some alkylpyridines by CZE [McKillop, A. G.; Smith, R. M.; Rowe, R. C.; Wren, S. A. C. Anal. Chem. 1999, 71, 497–503]. Separations were carried out using either 50-μm or 75-μm inner diameter capillaries, with a total length of 57 cm and a length of 50 cm from the point of injection to the detector. The run buffer was a pH 2.5 lithium phosphate buffer. Separations were achieved using an applied voltage of 15 kV. The electroosmotic mobility, μeof, as measured using a neutral marker, was found to be $6.398 \times 10^{-5}$ cm2 V–1 s–1. The diffusion coefficient for alkylpyridines is $1.0 \times 10^{-5}$ cm2 s–1. (a) Calculate the electrophoretic mobility for 2-ethylpyridine given that its elution time is 8.20 min. (b) How many theoretical plates are there for 2-ethylpyridine? (c) The electrophoretic mobilities for 3-ethylpyridine and 4-ethylpyridine are $3.366 \times 10^{-4}$ cm2 V–1 s–1 and $3.397 \times 10^{-4} \text{ cm}^2 \text{ V}^{-1} \text{ s}^{-1}$, respectively. What is the expected resolution between these two alkylpyridines? (d) Explain the trends in electrophoretic mobility shown in the following table. alkylpyridine $\mu_{ep}$ (cm2 V–1 s–1) 2-methylpyridine $3.581 \times 10^{-4}$ 2-ethylpyridine $3.222 \times 10^{-4}$ 2-propylpyridine $2.923 \times 10^{-4}$ 2-pentylpyridine $2.534 \times 10^{-4}$ 2-hexylpyridine $2.391 \times 10^{-4}$ (e) Explain the trends in electrophoretic mobility shown in the following table. alkylpyridine $\mu_{ep}$ (cm2 V–1 s–1) 2-ethylpyridine $3.222 \times 10^{-4}$ 3-ethylpyridine $3.366 \times 10^{-4}$ 4-ethylpyridine $3.397 \times 10^{-4}$ (f) The pKa for pyridine is 5.229. At a pH of 2.5 the electrophoretic mobility of pyridine is $4.176 \times 10^{-4}$ cm2 V–1 s–1. What is the expected electrophoretic mobility if the run buffer’s pH is 7.5?
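Parts (a) through (c) of Problem 33 reduce to substituting values into the capillary electrophoresis relations developed earlier in the chapter. The short Python sketch below, which is not part of the original problem set, shows one way to organize that arithmetic; the expressions used for total mobility, plate count, and resolution are the standard CE forms, so check them against the chapter's equations before relying on the numerical results.

```python
# A minimal sketch (not part of the original problem set) showing how the
# arithmetic in Problem 33(a)-(c) might be organized.  The relations used are
# the standard capillary electrophoresis expressions for total mobility,
# plate count, and resolution; verify them against the chapter's equations.
import math

L_tot  = 57.0        # total capillary length, cm
L_det  = 50.0        # length from injection to detector, cm
V      = 15e3        # applied voltage, V
mu_eof = 6.398e-5    # electroosmotic mobility, cm^2 V^-1 s^-1
D      = 1.0e-5      # diffusion coefficient, cm^2 s^-1

# (a) electrophoretic mobility from the elution time of 2-ethylpyridine
t_elution = 8.20 * 60                         # s
mu_total = (L_det * L_tot) / (V * t_elution)  # total (apparent) mobility
mu_ep = mu_total - mu_eof
print(f"mu_ep = {mu_ep:.3e} cm^2 V^-1 s^-1")  # approx 3.22e-4

# (b) number of theoretical plates
N = (mu_total * V * L_det) / (2 * D * L_tot)
print(f"N     = {N:.2e} plates")              # approx 2.5e5

# (c) resolution between 3-ethylpyridine and 4-ethylpyridine
mu_1, mu_2 = 3.366e-4, 3.397e-4
mu_avg = (mu_1 + mu_2) / 2
R = 0.177 * (mu_2 - mu_1) * math.sqrt(V / (D * (mu_avg + mu_eof)))
print(f"R     = {R:.2f}")                     # approx 1.1
```

Running the sketch gives an electrophoretic mobility near $3.22 \times 10^{-4}$ cm2 V–1 s–1, which matches the tabulated value for 2-ethylpyridine in part (d), a useful consistency check.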
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/12%3A_Chromatographic_and_Electrophoretic_Methods/12.08%3A_Problems.txt
The following set of experiments introduce students to the applications of chromatography and electrophoresis. Experiments are grouped into five categories: gas chromatography, high-performance liquid chromatography, ion-exchange chromatography, size-exclusion chromatography, and electrophoresis. Gas Chromatography • Bishop, R. D., Jr. “Using GC–MS to Determine Relative Reactivity Ratios,” J. Chem. Educ. 1995, 72, 743–745. • Elderd, D. M.; Kildahl, N. K.; Berka, L. H. “Experiments for Modern Introductory Chemistry: Identification of Arson Accelerants by Gas Chromatography,” J. Chem. Educ. 1996, 73, 675–677. • Fleurat-Lessard, P.; Pointet, K.; Renou-Gonnord, M.-F. “Quantitative Determination of PAHs in Diesel Engine Exhausts by GC–MS,” J. Chem. Educ. 1999, 76, 962–965. • Galipo, R. C.; Canhoto, A. J.; Walla, M. D.; Morgan, S. L. “Analysis of Volatile Fragrance and Flavor Compounds by Headspace Solid Phase Microextraction and GC–MS,” J. Chem. Educ. 1999, 76, 245–248. • Graham, R. C.; Robertson, J. K. “Analysis of Trihalomethanes in Soft Drinks,” J. Chem. Educ. 1988, 65, 735–737. • Heinzen, H.; Moyan, P.; Grompone, A. “Gas Chromatographic Determination of Fatty Acid Compositions,” J. Chem. Educ. 1985, 62, 449–450. • Kegley, S. E.; Hansen, K. J.; Cunningham, K. L. “Determination of Polychlorinated Biphenyls (PCBs) in River and Bay Sediments,” J. Chem. Educ. 1996, 73, 558–562. • Kostecka, K. S.; Rabah, A.; Palmer, C. F., Jr. “GC/MS Analysis of the Aromatic Composition of Gasoline,” J. Chem. Educ. 1995, 72, 853–854. • Quach, D. T.; Ciszkowski, N. A.; Finlayson-Pitts, B. J. “A New GC-MS Experiment for the Undergraduate Instrumental Analysis Laboratory in Environmental Chemistry: Methyl-t-butyl Ether and Benzene in Gasoline,” J. Chem. Educ. 1998, 75, 1595–1598. • Ramachandran, B. R.; Allen, J. M.; Halpern, A. M. “Air–Water Partitioning of Environmentally Important Organic Compounds,” J. Chem. Educ. 1996, 73, 1058–1061. • Rice, G. W. “Determination of Impurities in Whiskey Using Internal Standard Techniques,” J. Chem. Educ. 1987, 64, 1055–1056. • Rubinson, J. F.; Neyer-Hilvert, J. “Integration of GC-MS Instrumentation into the Undergraduate Laboratory: Separation and Identification of Fatty Acids in Commercial Fats and Oils,” J. Chem. Educ. 1997, 74, 1106–1108. • Rudzinski, W. E.; Beu, S. “Gas Chromatographic Determination of Environmentally Significant Pesticides,” J. Chem. Educ. 1982, 59, 614–615. • Sobel, R. M.; Ballantine, D. S.; Ryzhov, V. “Quantitation of Phenol Levels in Oil of Wintergreen Using Gas Chromatography–Mass Spectrometry with Selected Ion Monitoring,” J. Chem. Educ. 2005, 82, 601–603. • Welch, W. C.; Greco, T. G. “An Experiment in Manual Multiple Headspace Extraction for Gas Chromatography,” J. Chem. Educ. 1993, 70, 333–335. • Williams, K. R.; Pierce, R. E. “The Analysis of Orange Oil and the Aqueous Solubility of d-Limone,” J. Chem. Educ. 1998, 75, 223–226. • Wong, J. W.; Ngim, K. K.; Shibamoto, T.; Mabury, S. A.; Eiserich, J. P.; Yeo, H. C. H. “Determination of Formaldehyde in Cigarette Smoke,” J. Chem. Educ. 1997, 74, 1100–1103. • Yang, M. J.; Orton, M. L., Pawliszyn, J. “Quantitative Determination of Caffeine in Beverages Using a Combined SPME-GC/MS Method,” J. Chem. Educ. 1997, 74, 1130–1132. High-Performance Liquid Chromatography • Batchelor, J. D.; Jones, B. T. “Determination of the Scoville Heat Value for Hot Sauces and Chilies: An HPLC Experiment,” J. Chem. Educ. 2000, 77, 266–267. • Beckers, J. L. “The Determination of Caffeine in Coffee: Sense or Nonsense?” J. Chem. 
Educ. 2004, 81, 90–93. • Betts, T. A. “Pungency Quantitation of Hot Pepper Sauces Using HPLC,” J. Chem. Educ. 1999, 76, 240–244. • Bidlingmeyer, B. A.; Schmitz, S. “The Analysis of Artificial Sweeteners and Additives in Beverages by HPLC,” J. Chem. Educ. 1991, 68, A195–A200. • Bohman, O.; Engdahl, K.-A.; Johnsson, H. “High Performance Liquid Chromatography of Vitamin A: A Quantitative Determination,” J. Chem. Educ. 1982, 59, 251–252. • Brenneman, C. A.; Ebeler, S. E. “Chromatographic Separations Using Solid-Phase Extraction Cartridges: Separation of Wine Phenolics,” J. Chem. Educ. 1999, 76, 1710–1711. • Cantwell, F. F.; Brown, D. W. “Liquid Chromatographic Determination of Nitroanilines,” J. Chem. Educ. 1981, 58, 820–823. • DiNunzio, J. E. “Determination of Caffeine in Beverages by High Performance Liquid Chromatography,” J. Chem. Educ. 1985, 62, 446–447. • Ferguson, G. K. “Quantitative HPLC Analysis of an Analgesic/Caffeine Formulation: Determination of Caffeine,” J. Chem. Educ. 1998, 75, 467–469. • Ferguson, G. K. “Quantitative HPLC Analysis of a Psychotherapeutic Medication: Simultaneous Determination of Amitriptyline Hydrochloride and Perphenazine,” J. Chem. Educ. 1998, 75, 1615–1618. • Goodney, D. E. “Analysis of Vitamin C by High-Pressure Liquid Chromatography,” J. Chem. Educ. 1987, 64, 187–188. • Guevremont, R.; Quigley, M. N. “Determination of Paralytic Shellfish Poisons Using Liquid Chromatography,” J. Chem. Educ. 1994, 71, 80–81. • Haddad, P.; Hutchins, S.; Tuffy, M. “High Performance Liquid Chromatography of Some Analgesic Compounds,” J. Chem. Educ. 1983, 60, 166–168. • Huang, J.; Mabury, S. A.; Sagebiel, J. C. “Hot Chili Peppers: Extraction, Cleanup, and Measurement of Capscaicin,” J. Chem. Educ. 2000, 77, 1630–1631. • Joeseph, S. M.; Palasota, J. A. “The Combined Effect of pH and Percent Methanol on the HPLC Separation of Benzoic Acid and Phenol,” J. Chem. Educ. 2001, 78, 1381–1383. • Lehame, S. “The Separation of Copper, Iron, and Cobalt Tetramethylene Dithiocarbamates by HPLC,” J. Chem. Educ. 1986, 63, 727–728. • Luo, P.; Luo, M. Z.; Baldwin, R. P. “Determination of Sugars in Food Products,” J. Chem. Educ. 1993, 70, 679–681. • Mueller, B. L.; Potts, L. W. “HPLC Analysis of an Asthma Medication,” J. Chem. Educ. 1988, 65, 905–906. • Munari, M.; Miurin, M.; Goi, G. “Didactic Application to Riboflavin HPLC Analysis,” J. Chem. Educ. 1991, 68, 78–79. • Orth, D. L. “HPLC Determination of Taurine in Sports Drinks,” J. Chem. Educ. 2001, 78, 791– 792. • Remcho, V. T.; McNair, H. M.; Rasmussen, H. T. “HPLC Method Development with the Photodiode Array Detector,” J. Chem. Educ. 1992, 69, A117–A119. • Richardson, W. W., III; Burns, L. “HPLC of the Polypeptides in a Hydrolyzate of Egg-White Lysozyme,” J. Chem. Educ. 1988, 65, 162–163. • Silveira, A., Jr.; Koehler, J. A.; Beadel, E. F., Jr.; Monore, P. A. “HPLC Analysis of Chlorophyll a, Chlorophyll b, and $\beta$-Carotene in Collard Greens,” J. Chem. Educ. 1984, 61, 264–265. • Siturmorang, M.; Lee, M. T. B.; Witzeman, L. K.; Heineman, W. R. “Liquid Chromatography with Electrochemical Detection (LC-EC): An Experiment Using 4-Aminophenol,” J. Chem. Educ. 1998, 75, 1035–1038. • Sottofattori, E.; Raggio, R.; Bruno, O. “Milk as a Drug Analysis Medium: HPLC Determination of Isoniazid,” J. Chem. Educ. 2003, 80, 547–549. • Strohl, A. N. “A Study of Colas: An HPLC Experiment,” J. Chem. Educ. 1985, 62, 447–448. • Tran, C. D.; Dotlich, M. “Enantiomeric Separation of Beta-Blockers by High Performance Liquid Chromatography,” J. Chem. Educ. 
1995, 72, 71–73. • Van Arman, S. A.; Thomsen, M. W. “HPLC for Undergraduate Introductory Laboratories,” J. Chem. Educ. 1997, 74, 49–50. • Wingen, L. M.; Low, J. C.; Finlayson-Pitts, B. J. “Chromatography, Absorption, and Fluorescence: A New Instrumental Analysis Experiment on the Measurement of Polycyclic Aromatic Hydrocarbons in Cigarette Smoke,” J. Chem. Educ. 1998, 75, 1599–1603. Ion-Exchange Chromatography • Bello, M. A.; Gustavo González, A. “Determination of Phosphate in Cola Beverages Using Nonsuppressed Ion Chromatography,” J. Chem. Educ. 1996, 73, 1174–1176. • Kieber, R. J.; Jones, S. B. “An Undergraduate Laboratory for the Determination of Sodium, Potassium, and Chloride,” J. Chem. Educ. 1994, 71, A218–A222. • Koubek, E.; Stewart, A. E. “The Analysis of Sulfur in Coal,” J. Chem. Educ. 1992, 69, A146–A148. • Sinniah, K.; Piers, K. “Ion Chromatography: Analysis of Ions in Pond Water,” J. Chem. Educ. 2001, 78, 358–362. • Xia, K.; Pierzynski, G. “Competitive Sorption between Oxalate and Phosphate in Soil: An Environmental Chemistry Laboratory Using Ion Chromatography,” J. Chem. Educ. 2003, 80, 71–75. Size-Exchange Chromatography • Brunauer, L. S.; Davis, K. K. “Size Exclusion Chromatography: An Experiment for High School and Community College Chemistry and Biotechnology Laboratory Programs,” J. Chem. Educ. 2008, 85, 683–685. • Saiz, E.; Tarazona, M. P. “Size-Exclusion Chromatography Using Dual Detection,” Chem. Educator 2000, 5, 324–328. Electrophoresis • Almarez, R. T.; Kochis, M. “Microscale Capillary Electrophoresis: A Complete Instrumentation Experiment for Chemistry Students at the Undergraduate Junior or Senior Level,” J. Chem. Educ. 2003, 80, 316–319. • Beckers, J. L. “The Determination of Caffeine in Coffee: Sense or Nonsense?” J. Chem. Educ. 2004, 81, 90–93. • Beckers, J. L. “The Determination of Vanillin in a Vanilla Extract,” J. Chem. Educ. 2005, 82, 604– 606. • Boyce, M. “Separation and Quantification of Simple Ions by Capillary Zone Electrophoresis,” J. Chem. Educ. 1999, 76, 815–819. • Conradi, S.; Vogt, C.; Rohde, E. “Separation of Enatiomeric Barbiturates by Capillary Electrophoresis Using a Cyclodextrin-Containing Run Buffer,” J. Chem. Educ. 1997, 74, 1122–1125. • Conte, E. D.; Barry, E. F.; Rubinstein, H. “Determination of Caffeine in Beverages by Capillary Zone Electrophoresis,” J. Chem. Educ. 1996, 73, 1169–1170. • Demay, S.; Martin-Girardeau, A.; Gonnord, M.-F. “Capillary Electrophoretic Quantitative Analysis of Anions in Drinking Water,” J. Chem. Educ. 1999, 76, 812–815. • Emry, R.; Cutright, R. D.; Wright, J.; Markwell, J. “Candies to Dye for: Cooperative, Open-Ended Student Activities to Promote Understanding of Electrophoretic Fractionation,” J. Chem. Educ. 2000, 77, 1323–1324. • Gardner, W. P.; Girard, J. E. “Analysis of Common Household Cleaner-Disinfectants by Capillary Electrophoresis,” J. Chem. Educ. 2000, 77, 1335–1338. • Gruenhagen, J. A.; Delaware, D.; Ma, Y. “Quantitative Analysis of Non-UV-Absorbing Cations in Soil Samples by High-Performance Capillary Electrophoresis,” J. Chem. Educ. 2000, 77, 1613–1616. • Hage, D. S.; Chattopadhyay, A.; Wolfe, C. A. C.; Grundman, J.; Kelter, P. B. “Determination of Nitrate and Nitrite in Water by Capillary Electrophoresis,” J. Chem. Educ. 1998, 75, 1588–1590. • Herman, H. B.; Jezorek, J. R.; Tang, Z. “Analysis of Diet Tonic Water Using Capillary Electrophoresis,” J. Chem. Educ. 2000, 77, 743–744. • Janusa, M. A.; Andermann, L. J.; Kliebert, N. M.; Nannie, M. H. 
“Determination of Chloride Concentration Using Capillary Electrophoresis,” J. Chem. Educ. 1998, 75, 1463–1465. • McDevitt, V. L.; Rodríguez, A.; Williams, K. R. “Analysis of Soft Drinks: UV Spectrophotometry, Liquid Chromatography, and Capillary Electrophoresis,” J. Chem. Educ. 1998, 75, 625–629. • Palmer, C. P. “Demonstrating Chemical and Analytical Concepts in the Undergraduate Laboratory Using Capillary Electrophoresis and Micellar Electrokinetic Chromatography,” J. Chem. Educ. 1999, 76, 1542–1543. • Pursell, C. J.; Chandler, B.; Bushey, M. M. “Capillary Electrophoresis Analysis of Cations in Water Samples,” J. Chem. Educ. 2004, 81, 1783–1786. • Solow, M. “Weak Acid pKa Determination Using Capillary Zone Electrophoresis,” J. Chem. Educ. 2006, 83, 1194–1195. • Thompson, L.; Veening, H.; Strain, T. G. “Capillary Electrophoresis in the Undergraduate Instrumental Analysis Laboratory: Determination of Common Analgesic Formulations,” J. Chem. Educ. 1997, 74, 1117–1121. • Vogt, C.; Conradi, S.; Rhode, E. “Determination of Caffeine and Other Purine Compounds in Food and Pharmaceuticals by Micellar Electrokinetic Chromatography” J. Chem. Educ. 1997, 74, 1126– 1130. • Weber, P. L.; Buck, D. R. “Capillary Electrophoresis: A Fast and Simple Method for the Determination of the Amino Acid Composition of Proteins,” J. Chem. Educ. 1994, 71, 609–612. • Welder, F.; Colyer, C. L. “Using Capillary Electrophoresis to Determine the Purity of Acetylsalicylic Acid Synthesized in the Undergraduate Laboratory,” J. Chem. Educ. 2001, 78, 1525–1527. • Williams, K. R.; Adhyaru, B.; German, I.; Russell, T. “Determination of a Diffusion Coefficient by Capillary Electrophoresis,” J. Chem. Educ. 2002, 79, 1475–1476. The following texts provide a good introduction to the broad field of separations, including chromatography and electrophoresis. • Giddings, J. C. Unified Separation Science, Wiley-Interscience: New York 1991. • Karger, B. L.; Snyder, L. R.; Harvath, C. An Introduction to Separation Science, Wiley-Interscience: New York, 1973 • Miller, J. M. Separation Methods in Chemical Analysis, Wiley-Interscience: New York, 1975. • Poole, C. F. The Essence of Chromatography, Elsevier: Amsterdam, 2003. A more recent discussion of peak capacity is presented in the following papers. • Chester, T. L. “Further Considerations of Exact Equations for Peak Capacity in Isocratic Liquid Chromatography,” Anal. Chem. 2014, 86, 7239–7241. • Davis, J. M.; Stoll, D. R.; Carr, P. W. “Dependence of Effective Peak Capacity in Comprehensive Two-Dimensional Separations on the Distribution of Peak Capacity between the Two Dimensions,” Anal. Chem. 2008, 80, 8122–8134. • Li, X.; Stoll, D. R.; Carr, P. W. “Equation for Peak Capacity Estimation in Two-Dimensional Liquid Chromatography,” Anal. Chem. 2009, 81, 845–850. • Shen, Y.; Lee, M. “General Equation for Peak Capacity in Column Chromatography,” Anal. Chem. 1998, 70, 3853–3856. The following references may be consulted for more information on gas chromatography. • Grob, R. L., ed, Modern Practice of Gas Chromatography, Wiley-Interscience: New York, 1972. • Hinshaw, J. V. “A Compendium of GC Terms and Techniques,” LC•GC 1992, 10, 516–522. • Ioffe, B. V.; Vitenberg, A. G. Head-Space Analysis and Related Methods in Gas Chromatography, Wiley-Interscience: New York, 1982. • Kitson, F. G.; Larsen, B. S.; McEwen, C. N. Gas Chromatography and Mass Spectrometry: A Practical Guide, Academic Press: San Diego, 1996. • McMaster, M. C. GC/MS: A Practical User’s Guide, Wiley-Interscience: Hoboken, NJ, 2008. 
The following references provide more information on high-performance liquid chromatography. • Dorschel, C. A.; Ekmanis, J. L.; Oberholtzer, J. E.; Warren, Jr. F. V.; Bidlingmeyer, B. A. “LC Detectors,” Anal. Chem. 1989, 61, 951A–968A. • Ehlert, S.; Tallarek, U. “High-pressure liquid chromatography in lab-on-a-chip devices,” Anal. Bioanal. Chem. 2007, 388, 517–520. • Francois, I.; Sandra, K.; Sandra, P. “Comprehensive liquid chromatography: Fundamental aspects and practical considerations—A review,” Anal. Chim. Acta 2009, 641, 14–31. • Harris, C. M. “Shrinking the LC Landscape,” Anal. Chem. 2003, 75, 64A–69A. • Meyer, V. R. Pitfalls and Errors of HPLC in Pictures, Wiley-VCH: Weinheim, Germany, 2006. • Pozo, O. J.; Van Eenoo, P.; Deventer, K.; Delbeke, F. T. “Detection and characterization of anabolic steroids in doping analysis by LC–MS,” Trends Anal. Chem. 2008, 27, 657–671. • Scott, R. P. W. “Modern Liquid Chromatography,” Chem. Soc. Rev. 1992, 21, 137–145. • Simpson, C. F., ed. Techniques in Liquid Chromatography, Wiley-Hayden: Chichester, England; 1982. • Snyder, L. R.; Glajch, J. L.; Kirkland, J. J. Practical HPLC Method Development, Wiley-Interscience: New York,1988. • van de Merbel, N. C. “Quantitative determination of endogenous compounds in biological samples using chromatographic techniques,” Trends Anal. Chem. 2008, 27, 924–933. • Yeung, E. S. “Chromatographic Detectors: Current Status and Future Prospects,” LC•GC 1989, 7, 118–128. The following references may be consulted for more information on ion chromatography. • Shpigun, O. A.; Zolotov, Y. A. Ion Chromatography in Water Analysis, Ellis Horwood: Chichester, England, 1988. • Smith, F. C. Jr.; Chang, R. C. The Practice of Ion Chromatography, Wiley-Interscience: New York, 1983. The following references may be consulted for more information on supercritical fluid chromatography. • Palmieri, M. D. “An Introduction to Supercritical Fluid Chromatography. Part I: Principles and Applications,” J. Chem. Educ. 1988, 65, A254–A259. • Palmieri, M. D. “An Introduction to Supercritical Fluid Chromatography. Part II: Applications and Future Trends,” J. Chem. Educ. 1989, 66, A141–A147. The following references may be consulted for more information on capillary electrophoresis. • Baker, D. R. Capillary Electrophoresis, Wiley-Interscience: New York, 1995. • Copper, C. L. “Capillary Electrophoresis: Part I. Theoretical and Experimental Background,” J. Chem. Educ. 1998, 75, 343–347. • Copper, C. L.; Whitaker, K. W. “Capillary Electrophoresis: Part II. Applications,” J. Chem. Educ. 1998, 75, 347–351. • DeFrancesco, L. “Capillary Electrophoresis: Finding a Niche,” Today’s Chemist at Work, February 2002, 59–64. • Ekins, R. P. “Immunoassay, DNA Analysis, and Other Ligand Binding Assay Techniques: From Electropherograms to Multiplexed, Ultrasensative Microarrays on a Chip,” J. Chem. Educ. 1999, 76, 769– 780. • Revermann, T.; Götz, S.; Künnemeyer, J.; Karst, U. “Quantitative analysis by microchip capillary electrophoresis—current limitations and problem-solving strategies,” Analyst 2008, 133, 167–174. • Timerbaev, A. R. “Capillary electrophoresis coupled to mass spectrometry for biospeciation analysis: critical evaluation,” Trends Anal. Chem. 2009, 28, 416–425. • Unger, K. K.; Huber, M.; Hennessy, T. P.; Hearn, M. T. W.; Walhagen, K. “A Critical Appraisal of Capillary Electrochromatography,” Anal. Chem. 2002, 74, 200A–207A. • Varenne, A.; Descroix, S. “Recent strategies to improve resolution in capillary electrophoresis—A review,” Anal. Chim. 
Acta 2008, 628, 9–23. • Vetter, A. J.; McGowan, G. J. “The Escalator—An Analogy for Explaining Electroosmotic Flow,” J. Chem. Educ. 2001, 78, 209–211. • Xu, Y. “Tutorial: Capillary Electrophoresis,” Chem. Educator, 1996, 1(2), 1–14. The application of spreadsheets and computer programs for modeling chromatography is described in the following papers. • Abbay, G. N.; Barry, E. F.; Leepipatpiboon, S.; Ramstad, T.; Roman, M. C.; Siergiej, R. W.; Snyder, L. R.; Winniford, W. L. “Practical Applications of Computer Simulation for Gas Chromatography Method Development,” LC•GC 1991, 9, 100–114. • Drouen, A.; Dolan, J. W.; Snyder, L. R.; Poile, A.; Schoenmakers, P. J. “Software for Chromatographic Method Development,” LC•GC 1991, 9, 714–724. • Kevra, S. A.; Bergman, D. L.; Maloy, J. T. “A Computational Introduction to Chromatographic Bandshape Analysis,” J. Chem. Educ. 1994, 71, 1023–1028. • Rittenhouse, R. C. “HPLC for Windows: A Computer Simulation of High-Performance Liquid Chromatography,” J. Chem. Educ. 1995, 72, 1086–1087. • Shalliker, R. A.; Kayillo, S.; Dennis, G. R. “Optimizing Chromatographic Separations: An Experiment Using an HPLC Simulator,” J. Chem. Educ. 2008, 85, 1265–1268. • Sundheim, B. R. “Column Operations: A Spreadsheet Model,” J. Chem. Educ. 1992, 69, 1003– 1005. The following papers discuss column efficiency, peak shapes, and overlapping chromatographic peaks. • Bildingmeyer, B. A.; Warren, F. V., Jr. “Column Efficiency Measurement,” Anal. Chem. 1984, 56, 1583A–1596A. • Hawkes, S. J. “Distorted Chromatographic Peaks,” J. Chem. Educ. 1994, 71, 1032–1033. • Hinshaw, J. “Pinning Down Tailing Peaks,” LC•GC 1992, 10, 516–522. • Meyer, V. K. “Chromatographic Integration Errors: A Closer Look at a Small Peak,” LC•GC North America 2009, 27, 232–244. • Reid, V. R.; Synovec, R. E. “High-speed gas chromatography: The importance of instrumentation optimization and the elimination of extra-column band broadening,” Talanta 2008, 76, 703–717.
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/12%3A_Chromatographic_and_Electrophoretic_Methods/12.09%3A_Additional_Resources.txt
Chapter Summary Chromatography and electrophoresis are powerful analytical techniques that both separate a sample into its components and provide a means for determining each component’s concentration. Chromatographic separations utilize the selective partitioning of the sample’s components between a stationary phase that is immobilized within a column and a mobile phase that passes through the column. The effectiveness of a chromatographic separation is described by the resolution between two chromatographic bands and is a function of each component’s retention factor, the column’s efficiency, and the column’s selectivity. A solute’s retention factor is a measure of its partitioning into the stationary phase, with larger retention factors corresponding to more strongly retained solutes. The column’s selectivity for two solutes is the ratio of their retention factors, providing a relative measure of the column’s ability to retain the two solutes. Column efficiency accounts for those factors that cause a solute’s chromatographic band to increase in width during the separation. Column efficiency is defined in terms of the number of theoretical plates and the height of a theoretical plate, the latter of which is a function of a number of parameters, most notably the mobile phase’s flow rate. Chromatographic separations are optimized by increasing the number of theoretical plates, by increasing the column’s selectivity, or by increasing the solute retention factor. In gas chromatography the mobile phase is an inert gas and the stationary phase is a nonpolar or polar organic liquid that either is coated on a particulate material and packed into a wide-bore column, or coated on the walls of a narrow-bore capillary column. Gas chromatography is useful for the analysis of volatile components. In high-performance liquid chromatography the mobile phase is either a nonpolar solvent (normal phase) or a polar solvent (reversed-phase). A stationary phase of opposite polarity, which is bonded to a particulate material, is packed into a wide-bore column. HPLC is applied to a wider range of samples than GC; however, the separation efficiency for HPLC is not as good as that for capillary GC. Together, GC and HPLC account for the largest number of chromatographic separations. Other separation techniques, however, find specialized applications: of particular importance are ion-exchange chromatography for separating anions and cations; size-exclusion chromatography for separating large molecules; and supercritical fluid chromatography for the analysis of samples that are not easily analyzed by GC or HPLC. In capillary zone electrophoresis a sample’s components are separated based on their ability to move through a conductive medium under the influence of an applied electric field. Positively charged solutes elute first, with smaller, more highly charged cations eluting before larger cations of lower charge. Neutral species elute without undergoing further separation. Finally, anions elute last, with smaller, more negatively charged anions being the last to elute. By adding a surfactant, neutral species can be separated by micellar electrokinetic capillary chromatography. Electrophoretic separations also can take advantage of the ability of polymeric gels to separate solutes by size (capillary gel electrophoresis), and the ability of solutes to partition into a stationary phase (capillary electrochromatography).
In comparison to GC and HPLC, capillary electrophoresis provides faster and more efficient separations. Key Terms: adjusted retention time, baseline width, capillary column, capillary gel electrophoresis, chromatography, cryogenic focusing, electroosmotic flow velocity, electrophoresis, exclusion limit, gas chromatography, general elution problem, headspace sampling, inclusion limit, isocratic elution, Kovat’s retention index, loop injector, mass transfer, mobile phase, nonretained solutes, open tubular column, peak capacity, porous-layer open tubular column, retention factor, selectivity factor, split injection, stationary phase, tailing, thermal conductivity detector, wall-coated open-tubular column, adsorption chromatography, bleed, capillary electrochromatography, capillary zone electrophoresis, column chromatography, electrokinetic injection, electron capture detector, electrophoretic mobility, flame ionization detector, gas–liquid chromatography, guard column, high-performance liquid chromatography, ion-exchange chromatography, isothermal, liquid–solid adsorption chromatography, mass spectrometer, micelle, monolithic column, normal-phase chromatography, packed columns, planar chromatography, purge-and-trap, retention time, single-column ion chromatography, splitless injection, supercritical fluid chromatography, temperature programming, van Deemter equation, zeta potential, band broadening, bonded stationary phase, capillary electrophoresis, chromatogram, counter-current extraction, electroosmotic flow, electropherogram, electrophoretic velocity, fronting, gas–solid chromatography, gradient elution, hydrodynamic injection, ion suppressor column, Joule heating, longitudinal diffusion, mass spectrum, micellar electrokinetic capillary chromatography, multiple paths, on-column injection, partition chromatography, polarity index, resolution, reversed-phase chromatography, solid-phase microextraction, stacking, support-coated open tubular column, theoretical plate, void time
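The summary above describes a separation's quality in terms of the retention factor, the selectivity factor, the number of theoretical plates, and the resolution. The following minimal sketch, which uses hypothetical retention data rather than any chromatogram from the text, evaluates these figures of merit from the usual definitions, k = (tr - tm)/tm, alpha = kB/kA, N = 16(tr/w)^2, and R = 2(tr,B - tr,A)/(wA + wB).

```python
# A short sketch (with made-up retention data) of the figures of merit named
# in the chapter summary: retention factor, selectivity, plate count, and
# resolution, computed from their usual chromatographic definitions.
t_m = 1.0                      # void time, min (hypothetical)
t_rA, w_A = 5.0, 0.40          # solute A: retention time and baseline width, min
t_rB, w_B = 5.6, 0.44          # solute B: retention time and baseline width, min

k_A = (t_rA - t_m) / t_m       # retention factors
k_B = (t_rB - t_m) / t_m
alpha = k_B / k_A              # selectivity factor (k_B > k_A by convention)
N_B = 16 * (t_rB / w_B) ** 2   # theoretical plates from baseline width
R = 2 * (t_rB - t_rA) / (w_A + w_B)   # resolution between the two bands

print(f"k_A = {k_A:.2f}, k_B = {k_B:.2f}, alpha = {alpha:.2f}")
print(f"N_B = {N_B:.0f} plates, R = {R:.2f}")
```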
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/12%3A_Chromatographic_and_Electrophoretic_Methods/12.10%3A_Chapter_Summary_and_Key_Terms.txt
There are many ways to categorize analytical techniques, several of which we introduced in earlier chapters. In Chapter 3 we classified techniques by whether the signal is proportional to the absolute amount of analyte or the relative amount of analyte. For example, precipitation gravimetry is a total analysis technique because the precipitate’s mass is proportional to the absolute amount, or moles, of analyte. UV/Vis absorption spectroscopy, on the other hand, is a concentration technique because absorbance is proportional to the relative amount, or concentration, of analyte. A second way to classify analytical techniques is to consider the source of the analytical signal. For example, gravimetry encompasses all techniques in which the analytical signal is a measurement of mass or a change in mass. Spectroscopy, on the other hand, includes those techniques in which we probe a sample with an energetic particle, such as the absorption of a photon. This is the classification scheme used in organizing Chapters 8–11. An additional way to classify analytical techniques is by whether the analyte’s concentration is determined under a state of equilibrium or by the kinetics of a chemical reaction or a physical process. The analytical methods described in Chapter 8–11 mostly involve measurements made on systems in which the analyte is at equilibrium. In this chapter we turn our attention to measurements made under nonequilibrium conditions. • 13.1: Kinetic Techniques versus Equilibrium Techniques In a kinetic method the analytical signal is determined by the rate of a reaction that involves the analyte or by a nonsteady-state process. As a result, the analyte’s concentration changes during the time in which we monitor the signal. • 13.2: Chemical Kinetics The earliest analytical methods based on chemical kinetics—which first appear in the late nineteenth century—took advantage of the catalytic activity of enzymes. Despite the diversity of chemical kinetic methods, by 1960 they no longer were in common use. By the 1980s, improvements in instrumentation and data analysis methods compensated for these limitations, ensuring the further development of chemical kinetic methods of analysis. • 13.3: Radiochemistry Atoms that have the same number of protons but a different number of neutrons are isotopes. Although an element’s different isotopes have the same chemical properties, their nuclear properties are not identical. The most important difference between isotopes is their stability. The nuclear configuration of a stable isotope remains constant with time. Unstable isotopes, however, disintegrate spontaneously, emitting radioactive particles as they transform into a more stable form. • 13.4: Flow Injection Analysis In this section we consider the technique of flow injection analysis in which we inject the sample into a flowing carrier stream that gives rise to a transient signal at the detector. Because the shape of this transient signal depends on the physical and chemical kinetic processes that take place in the carrier stream during the time between injection and detection, we include flow injection analysis in this chapter. • 13.5: Problems End-of-chapter problems to test your understanding of topics in this chapter. • 13.6: Additional Resources A compendium of resources to accompany topics in this chapter. • 13.7: Chapter Summary and Key Terms Summary of chapter's main topics and a list of key terms introduced in this chapter. 
Thumbnail: Determination of a reaction’s intermediate rate from the slope of a line tangent to a curve showing the change in the analyte’s concentration as a function of time. 13: Kinetic Methods In an equilibrium method the analytical signal is determined by an equilibrium reaction that involves the analyte or by a steady-state process that maintains the analyte’s concentration. When we determine the concentration of iron in water by measuring the absorbance of the orange-red $\text{Fe(phen)}_3^{2+}$ complex, the signal depends upon the concentration of $\text{Fe(phen)}_3^{2+}$, which, in turn, is determined by the complex’s formation constant. In the flame atomic absorption determination of Cu and Zn in tissue samples, the concentration of each metal in the flame remains constant because each step in the process of atomizing the sample is in a steady-state. In a kinetic method the analytical signal is determined by the rate of a reaction that involves the analyte or by a nonsteady-state process. As a result, the analyte’s concentration changes during the time in which we monitor the signal. In many cases we can choose to complete an analysis using either an equilibrium method or a kinetic method by changing when we measure the analytical signal. For example, one method for determining the concentration of nitrite, $\text{NO}_2^-$, in groundwater utilizes the two-step diazotization reaction shown in Figure 13.1.1 [Method 4500-NO2 B in Standard Methods for the Analysis of Waters and Wastewaters, American Public Health Association: Washington, DC, 20th Ed., 1998]. The final product, which is a reddish-purple azo dye, absorbs visible light at a wavelength of 543 nm. Because neither reaction in Figure 13.1.1 is rapid, the absorbance—which is directly proportional to the concentration of nitrite—is measured 10 min after we add the last reagent, a lapse of time that ensures that the concentration of the azo dye reaches the steady-state value required of an equilibrium method. We can use the same set of reactions as the basis for a kinetic method if we measure the solution’s absorbance during this 10-min development period, obtaining information about the reaction’s rate. If the measured rate is a function of the concentration of $\text{NO}_2^-$, then we can use the rate to determine its concentration in the sample [Karayannis, M. I.; Piperaki, E. A.; Maniadaki, M. M. Anal. Lett. 1986, 19, 13–23]. There are many potential advantages to a kinetic method of analysis, perhaps the most important of which is the ability to use chemical reactions and systems that are slow to reach equilibrium. In this chapter we examine three techniques that rely on measurements made while the analytical system is under kinetic control: chemical kinetic techniques, in which we measure the rate of a chemical reaction; radiochemical techniques, in which we measure the decay of a radioactive element; and flow injection analysis, in which we inject the analyte into a continuously flowing carrier stream, where it mixes with and reacts with reagents in the stream under conditions controlled by the kinetic processes of convection and diffusion.
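The nitrite example can be made concrete with a small simulation. The sketch below is only illustrative: the rate constant and absorptivity are assumed values, not those of the diazotization chemistry, but it shows how the same absorbance-versus-time record supports either a fixed-time (equilibrium) readout or a rate-based (kinetic) readout, both of which scale with the analyte's initial concentration.

```python
# A minimal illustration (hypothetical rate constant and absorptivity) of the
# two ways of using the same color-forming reaction: an equilibrium method
# reads the absorbance after the reaction is essentially complete, while a
# kinetic method uses the rate measured during the development period.
import numpy as np

k = 0.010          # pseudo-first-order rate constant, s^-1 (assumed)
eps_b = 2.0e4      # molar absorptivity x path length, M^-1 (assumed)
t = np.linspace(0, 60, 61)          # first minute of the development period, s

for C0 in (1.0e-5, 2.0e-5, 4.0e-5):                      # analyte concentrations, M
    absorbance = eps_b * C0 * (1 - np.exp(-k * t))       # product grows toward eps_b*C0
    fixed_time_signal = eps_b * C0 * (1 - np.exp(-k * 600))   # read once, after 10 min
    initial_rate = np.polyfit(t[:10], absorbance[:10], 1)[0]  # slope of the early data
    print(f"C0 = {C0:.1e} M  fixed-time signal = {fixed_time_signal:.3f}"
          f"  initial rate = {initial_rate:.2e} abs/s")
```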
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/13%3A_Kinetic_Methods/13.01%3A_Kinetic_Techniques_versus__Equilibrium_Techniques.txt
The earliest analytical methods based on chemical kinetics—which first appear in the late nineteenth century—took advantage of the catalytic activity of enzymes. In a typical method of that era, an enzyme was added to a solution that contained a suitable substrate and their reaction was monitored for a fixed time. The enzyme’s activity was determined by the change in the substrate’s concentration. Enzymes also were used for the quantitative analysis of hydrogen peroxide and carbohydrates. The development of chemical kinetic methods continued in the first half of the twentieth century with the introduction of nonenzymatic catalysts and noncatalytic reactions. Despite the diversity of chemical kinetic methods, by 1960 they no longer were in common use. The principal limitation to their broader acceptance was a susceptibility to significant errors from uncontrolled or poorly controlled variables—temperature and pH are two such examples—and the presence of interferents that activate or inhibit catalytic reactions. By the 1980s, improvements in instrumentation and data analysis methods compensated for these limitations, ensuring the further development of chemical kinetic methods of analysis [Pardue, H. L. Anal. Chim. Acta 1989, 216, 69–107]. Theory and Practice Every chemical reaction occurs at a finite rate, which makes it a potential candidate for a chemical kinetic method of analysis. To be effective, however, the chemical reaction must meet three necessary conditions: (1) the reaction must not occur too quickly or too slowly; (2) we must know the reaction’s rate law; and (3) we must be able to monitor the change in concentration for at least one species. Let’s take a closer look at each of these requirements. The material in this section assumes some familiarity with chemical kinetics, which is part of most courses in general chemistry. For a review of reaction rates, rate laws, and integrated rate laws, see the material in Appendix 17. Reaction Rate The rate of the chemical reaction—how quickly the concentrations of reactants and products change during the reaction—must be fast enough that we can complete the analysis in a reasonable time, but also slow enough that the reaction does not reach equilibrium while the reagents are mixing. As a practical limit, it is not easy to study a reaction that reaches equilibrium within several seconds without the aid of special equipment for rapidly mixing the reactants. We will consider two examples of instrumentation for studying reactions with fast kinetics later in this chapter. Rate Law The second requirement is that we must know the reaction’s rate law—the mathematical equation that describes how the concentrations of reagents affect the rate—for the period in which we are making measurements. For example, the rate law for a reaction that is first order in the concentration of an analyte, A, is $\text { rate }=-\frac{d[A]}{d t}=k[A] \label{13.1}$ where k is the reaction’s rate constant. Because the concentration of A decreases during the reactions, d[A] is negative. The minus sign in Equation \ref{13.1} makes the rate positive. If we choose to follow a product, P, then d[P] is positive because the product’s concentration increases throughout the reaction. In this case we omit the minus sign. An integrated rate law often is a more useful form of the rate law because it is a function of the analyte’s initial concentration. 
For example, the integrated rate law for Equation \ref{13.1} is $\ln{[A]_t} = \ln{[A]_0} - kt \label{13.2}$ or $[A]_{t}=[A]_{0} e^{-k t} \label{13.3}$ where [A]0 is the analyte’s initial concentration and [A]t is the analyte’s concentration at time t. Unfortunately, most reactions of analytical interest do not follow a simple rate law. Consider, for example, the following reaction between an analyte, A, and a reagent, R, to form a single product, P $A + R \rightleftharpoons P \nonumber$ where kf is the rate constant for the forward reaction, and kr is the rate constant for the reverse reaction. If the forward and the reverse reactions occur as single steps, then the rate law is $\text { rate }=-\frac{d[A]}{d t}=k_{f}[A][R]-k_{r}[P] \label{13.4}$ The first term, kf[A][R], accounts for the loss of A as it reacts with R to make P, and the second term, kr[P], accounts for the formation of A as P converts back to A and to R. Although we know the reaction’s rate law, there is no simple integrated form that we can use to determine the analyte’s initial concentration. We can simplify Equation \ref{13.4} by restricting our measurements to the beginning of the reaction when the concentration of product is negligible. Under these conditions we can ignore the second term in Equation \ref{13.4}, which simplifies to $\text { rate }=-\frac{d[A]}{d t}=k_{f}[A][R] \label{13.5}$ The integrated rate law for Equation \ref{13.5}, however, is still too complicated to be analytically useful. We can simplify the kinetics further by adjusting the reaction conditions [Mottola, H. A. Anal. Chim. Acta 1993, 280, 279–287]. For example, we can ensure pseudo-first-order kinetics by using a large excess of R so that its concentration remains essentially constant during the time we monitor the reaction. Under these conditions Equation \ref{13.5} simplifies to $\text { rate }=-\frac{d[A]}{d t}=k_{f}[A][R]_{0}=k^{\prime}[A] \label{13.6}$ where k′ = kf[R]0. The integrated rate law for Equation \ref{13.6} then is $\ln{[A]_t} = \ln{[A]_0} - k^{\prime}t \label{13.7}$ or $[A]_{t}=[A]_{0} e^{-k^{\prime} t} \label{13.8}$ It may even be possible to adjust the conditions so that we use the reaction under pseudo-zero-order conditions. $\text { rate }=-\frac{d[A]}{d t}=k_{f}[A]_{0}[R]_{0}=k^{\prime \prime} \label{13.9}$ $[A]_{t}=[A]_{0}-k^{\prime \prime} t \label{13.10}$ where $k^{\prime \prime}$ = kf [A]0[R]0. To say that the reaction is pseudo-first-order in A means the reaction behaves as if it is first order in A and zero order in R even though the underlying kinetics are more complicated. We call $k^{\prime}$ a pseudo-first-order rate constant. To say that a reaction is pseudo-zero-order means the reaction behaves as if it is zero order in A and zero order in R even though the underlying kinetics are more complicated. We call $k^{\prime \prime}$ the pseudo-zero-order rate constant.
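It is easy to verify numerically that a large excess of R really does linearize the kinetics. The sketch below, which uses an assumed rate constant and assumed concentrations rather than a system from the text, integrates the full second-order rate law and confirms that a plot of ln[A] versus time is essentially linear with a slope close to -kf[R]0, the pseudo-first-order rate constant of Equation \ref{13.6}.

```python
# A numerical check (with assumed rate constant and concentrations) of the
# pseudo-first-order simplification: when R is present in large excess, the
# second-order rate law -d[A]/dt = kf[A][R] behaves like a first-order decay
# with an effective rate constant k' = kf[R]0.
import numpy as np

kf = 50.0                    # second-order rate constant, M^-1 s^-1 (assumed)
A0, R0 = 1.0e-4, 1.0e-1      # R is in 1000-fold excess (assumed)
dt, steps = 1.0e-3, 20000    # simple fixed-step integration over 20 s

A, R = A0, R0
times, A_vals = [], []
for i in range(steps):
    times.append(i * dt)
    A_vals.append(A)
    rate = kf * A * R        # full second-order rate law
    A -= rate * dt           # Euler step for A
    R -= rate * dt           # R barely changes because it is in excess

# the slope of ln[A] versus t should be close to -kf*[R]0 = -5 s^-1
slope = np.polyfit(times, np.log(A_vals), 1)[0]
print(f"observed k' = {-slope:.3f} s^-1, predicted kf*[R]0 = {kf * R0:.3f} s^-1")
```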
$\mathrm{H}_{3} \mathrm{PO}_{4}(a q)+6 \mathrm{Mo}(\mathrm{VI})(a q) \longrightarrow 12-\mathrm{MPA}(a q)+9 \mathrm{H}^{+}(a q) \label{13.11}$ Next, we reduce 12-MPA to heteropolyphosphomolybdenum blue, PMB. The rate of formation of PMB is measured spectrophotometrically, and is proportional to the concentration of 12-MPA. The concentration of 12-MPA, in turn, is proportional to the concentration of phosphate [see, for example, (a) Crouch, S. R.; Malmstadt, H. V. Anal. Chem. 1967, 39, 1084–1089; (b) Crouch, S. R.; Malmstadt, H. V. Anal. Chem. 1967, 39, 1090–1093; (c) Malmstadt, H. V.; Cordos, E. A.; Delaney, C. J. Anal. Chem. 1972, 44(12), 26A–41A]. We also can follow reaction 13.11 spectrophotometrically by monitoring the formation of the yellow-colored 12-MPA [Javier, A. C.; Crouch, S. R.; Malmstadt, H. V. Anal. Chem. 1969, 41, 239–243]. Reaction \ref{13.11} is, of course, unbalanced; the additional hydrogens on the reaction’s right side come from the six Mo(VI) that appear on the reaction’s left side where Mo(VI) is thought to be present as the molybdate dimer HMo2O6+. Classifying Chemical Kinetic Methods Figure 13.2.1 provides one useful scheme for classifying chemical kinetic methods of analysis. Methods are divided into two broad categories: direct- computation methods and curve-fitting methods. In a direct-computation method we calculate the analyte’s initial concentration, [A]0, using the appropriate rate law. For example, if the reaction is first-order in analyte, we can use Equation \ref{13.2} to determine [A]0 given values for k, t, and [A]t. With a curve-fitting method, we use regression to find the best fit between the data—for example, [A]t as a function of time—and the known mathematical model for the rate law. If the reaction is first-order in analyte, then we fit Equation \ref{13.2} to the data using k and [A]0 as adjustable parameters. Direct-Computation Fixed-Time Integral Methods A direct-computation integral method uses the integrated form of the rate law. In a one-point fixed-time integral method, for example, we determine the analyte’s concentration at a single time and calculate the analyte’s initial concentration, [A]0, using the appropriate integrated rate law. To determine the reaction’s rate constant, k, we run a separate experiment using a standard solution of analyte. Alternatively, we can determine the analyte’s initial concentration by measuring [A]t for several standards that contain known concentrations of analyte and construct a calibration curve. Example 13.2.1 The concentration of nitromethane, CH3NO2, is determined from the kinetics of its decomposition reaction. In the presence of excess base the reaction is pseudo-first-order in nitromethane. For a standard solution of 0.0100 M nitromethane, the concentration of nitromethane after 2.00 s is $4.24 \times 10^{-4}$ M. When a sample that contains an unknown amount of nitromethane is analyzed, the concentration of nitromethane remaining after 2.00 s is $5.35 \times10^{-4}$ M. What is the initial concentration of nitromethane in the sample? Solution First, we determine the value for the pseudo-first-order rate constant, $k^{\prime}$. Using Equation \ref{13.7} and the result for the standard, we find its value is $k^{\prime} = \frac {\ln{[A]_0} - \ln{[A]_t}} {t} = \frac {\ln{(0.0100)} - \ln{(4.24 \times 10^{-4})}} {2.00 \text{ s}} = 1.58 \text{ s}^{-1} \nonumber$ Next we use Equation \ref{13.8} to calculate the initial concentration of nitromethane in the sample. 
$[A]_0 = \frac {[A]_t} {e^{-k^{\prime}t}} = \frac {5.35 \times 10^{-4} \text{ M}} {e^{-(1.58 \text{ s}^{-1})(2.00 \text{ s})}} = 0.0126 \text{ M} \nonumber$ Equation \ref{13.7} and Equation \ref{13.8} are equally appropriate integrated rate laws for a pseudo-first-order reaction. The decision to use Equation \ref{13.7} to calculate $k^{\prime}$ and Equation \ref{13.8} to calculate [A]0 is a matter of convenience. Exercise 13.2.1 In a separate determination for nitromethane, a series of external standards gives the following concentrations of nitromethane after a 2.00 s decomposition under pseudo-first-order conditions. [CH3NO2]0 (M) [CH3NO2]t = 2.00 s (M) 0.0100 $3.82 \times 10^{-4}$ 0.0200 $8.19 \times 10^{-3}$ 0.0300 $1.15 \times 10^{-3}$ 0.0400 $1.65 \times 10^{-3}$ 0.0500 $2.14 \times 10^{-3}$ 0.0600 $2.53 \times 10^{-3}$ 0.0700 $3.21 \times 10^{-3}$ 0.0800 $3.35 \times 10^{-3}$ 0.0900 $3.99 \times 10^{-3}$ 0.100 $4.13 \times 10^{-3}$ Analysis of a sample under the same conditions gives a nitromethane concentration of $2.21 \times 10^{-3}$ M after 2 s. What is the initial concentration of nitromethane in the sample? Answer The calibration curve and the calibration equation for the external standards are shown below. Substituting $2.21 \times 10^{-3}$ M for [CH3NO2]t = 2s gives [CH3NO2]0 as $5.21 \times 10^{-2}$ M. In Example 13.2.1 we determine the analyte’s initial concentration by measuring the amount of analyte that has not reacted. Sometimes it is more convenient to measure the concentration of a reagent that reacts with the analyte, or to measure the concentration of one of the reaction’s products. We can use a one-point fixed-time integral method if we know the reaction’s stoichiometry. For example, if we measure the concentration of the product, P, in the reaction $A+R \rightarrow P \nonumber$ then the concentration of the analyte at time t is $[A]_{t}=[A]_{0}-[P]_{t} \label{13.12}$ because the stoichiometry between the analyte and product is 1:1. If the reaction is pseudo-first-order in A, then substituting Equation \ref{13.12} into Equation \ref{13.7} gives $\ln \left([A]_{0}-[P]_{t}\right) = \ln{[A]_{0}} - k^{\prime} t \label{13.13}$ which we simplify by writing in exponential form. $[A]_0 - [P]_t = [A]_0 e^{-k^{\prime}t} \label{13.14}$ Finally, solving Equation \ref{13.14} for [A]0 gives the following equation. $[A]_{0}=\frac{[P]_{t}}{1-e^{-k^{\prime}t}} \label{13.15}$ Example 13.2.2 The concentration of thiocyanate, SCN, is determined from the pseudo-first-order kinetics of its reaction with excess Fe3+ to form a reddish-colored complex of Fe(SCN)2+. The reaction’s progress is monitored by measuring the absorbance of Fe(SCN)2+ at a wavelength of 480 nm. When using a standard solution of 0.100 M SCN, the concentration of Fe(SCN)2+ after 10 s is 0.0516 M. The concentration of Fe(SCN)2+ in a sample that contains an unknown amount of SCN is 0.0420 M after 10 s. What is the initial concentration of SCN in the sample? Solution First, we must determine a value for the pseudo-first-order rate constant, $k^{\prime}$. Using Equation \ref{13.13}, we find that its value is $k^{\prime} = \frac{\ln{[A]_{0}} - \ln \left([A]_{0} - [P]_{1}\right)}{t}= \frac {\ln(0.100) - \ln(0.100 - 0.0516)} {10.0 \text{ s}} = 0.0726 \text{ s}^{-1} \nonumber$ Next, we use Equation \ref{13.15} to determine the initial concentration of SCN in the sample. 
$[A]_{0}=\frac{[P]_{t}}{1-e^{-k^{\prime} t}}=\frac{0.0420 \mathrm{M}}{1-e^{-\left(0.0726 \text{ s}^{-1}\right)(10.0 \text{ s})}}=0.0814 \mathrm{M} \nonumber$ Exercise 13.2.2 In a separate determination for SCN, a series of external standards gives the following concentrations of Fe(SCN)2+ after a 10.0 s reaction with excess Fe3+ under pseudo-first-order conditions. [SCN-] (M) [Fe(SCN)2+]t = 10.0 s (M) $5.00 \times 10^{-3}$ $1.79 \times 10^{-3}$ $1.50 \times 10^{-2}$ $8.24 \times 10^{-3}$ $2.50 \times 10^{-2}$ $1.28 \times 10^{-2}$ $3.50 \times 10^{-2}$ $1.85 \times 10^{-2}$ $4.50 \times 10^{-2}$ $2.21 \times 10^{-2}$ $5.50 \times 10^{-2}$ $2.81 \times 10^{-2}$ $6.50 \times 10^{-2}$ $3.27 \times 10^{-2}$ $7.50\times 10^{-2}$ $3.91 \times 10^{-2}$ $8.50 \times 10^{-2}$ $4.23 \times 10^{-2}$ $9.50 \times 10^{-2}$ $4.89 \times 10^{-2}$ Analysis of a sample under the same conditions gives an Fe(SCN)2+ concentration of $3.52 \times 10^{-2}$ M after 10 s. What is the initial concentration of SCN in the sample? Answer The calibration curve and the calibration equation for the external standards are shown below. Substituting $3.52 \times 10^{-2}$ M for [Fe(SCN)2+]t = 10 s gives [SCN]0 as $6.87 \times 10^{-2}$ M. A one-point fixed-time integral method has the advantage of simplicity because we need only a single measurement to determine the analyte’s initial concentration. As with any method that relies on a single determination, a one-point fixed-time integral method cannot compensate for a constant determinate error. In a two-point fixed-time integral method we correct for constant determinate errors by making measurements at two points in time and using the difference between the measurements to determine the analyte’s initial concentration. Because it affects both measurements equally, the difference between the measurements is independent of a constant determinate error. For a pseudo-first-order reaction in which we measure the analyte’s concentration at times t1 and t2, we can write the following two equations. $[A]_{t_{1}}=[A]_{0} e^{-k^{\prime} t_1} \label{13.16}$ $[A]_{t_{2}}=[A]_{0} e^{-k^{\prime} t_2} \label{13.17}$ Subtracting Equation \ref{13.17} from Equation \ref{13.16} and solving for [A]0 leaves us with $[A]_{0}=\frac{[A]_{t_1}-[A]_{t_2}}{e^{-k^{\prime} t_{1}}-e^{-k^{\prime} t_{2}}} \label{13.18}$ To determine the rate constant, $k^{\prime}$, we measure $[A]_{t_1}$ and $[A]_{t_2}$ for a standard solution of analyte. Having obtained a value for $k^{\prime}$, we can determine [A]0 by measuring the analyte’s concentration at t1 and t2. We also can determine the analyte’s initial concentration using a calibration curve consisting of a plot of ($[A]_{t_1}$ – $[A]_{t_2}$) versus [A]0. A fixed-time integral method is particularly useful when the signal is a linear function of concentration because we can replace the reactant’s concentration with the corresponding signal. For example, if we follow a reaction spectrophotometrically under conditions where the analyte’s concentration obeys Beer’s law $(A b s)_{t}=\varepsilon b[A]_{t} \nonumber$ then we can rewrite Equation \ref{13.8} and Equation \ref{13.18} as $(A b s)_{t}=[A]_{0} e^{-k^{\prime}t} \varepsilon b=c[A]_{0} \nonumber$ $[A]_0 = \frac {(Abs)_{t_1} - (Abs)_{t_2}} {e^{-k^{\prime}t_1} - e^{-k^{\prime}t_2}} \times (\varepsilon b)^{-1} = c^{\prime}[(Abs)_{t_1} - (Abs)_{t_2}] \nonumber$ where (Abs)t is the absorbance at time t, and c and $c^{\prime}$ are constants.
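The calculations in these fixed-time examples are simple enough to script. The following short Python sketch (illustrative only; the function names are our own) implements Equation \ref{13.7}, Equation \ref{13.8}, and Equation \ref{13.15} and reproduces the results of Example 13.2.1 and Example 13.2.2.

```python
import math

def k_prime_from_standard(A0_std, At_std, t):
    """Pseudo-first-order rate constant from a single standard (Equation 13.7)."""
    return (math.log(A0_std) - math.log(At_std)) / t

def A0_from_reactant(At, k_prime, t):
    """Initial analyte concentration when we monitor the unreacted analyte (Equation 13.8)."""
    return At / math.exp(-k_prime * t)

def A0_from_product(Pt, k_prime, t):
    """Initial analyte concentration when we monitor a 1:1 product (Equation 13.15)."""
    return Pt / (1 - math.exp(-k_prime * t))

# numbers from Example 13.2.1 (nitromethane)
k1 = k_prime_from_standard(0.0100, 4.24e-4, 2.00)   # ~1.58 s^-1
print(A0_from_reactant(5.35e-4, k1, 2.00))          # ~0.0126 M

# numbers from Example 13.2.2 (thiocyanate, monitoring the Fe(SCN)2+ product)
k2 = (math.log(0.100) - math.log(0.100 - 0.0516)) / 10.0   # ~0.0726 s^-1
print(A0_from_product(0.0420, k2, 10.0))                   # ~0.081 M
```

For a two-point method we would instead code Equation \ref{13.18} and supply measurements made at two different times.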
Direct-Computation Variable-Time Integral Methods In a variable-time integral method we measure the total time, $\Delta t$, needed to effect a specific change in concentration for one species in the chemical reaction. One important application is the quantitative analysis of catalysts, which takes advantage of the catalyst’s ability to increase the rate of reaction. As the concentration of catalyst increases, $\Delta t$ decreases. For many catalytic systems the relationship between $\Delta t$ and the catalyst’s concentration is $\frac {1} {\Delta t} = F_{cat}[A]_0 + F_{uncat} \label{13.19}$ where [A]0 is the catalyst’s concentration, and Fcat and Funcat are constants that account for the rate of the catalyzed and uncatalyzed reactions [Mark, H. B.; Rechnitz, G. A. Kinetics in Analytical Chemistry, Interscience: New York, 1968]. Example 13.2.3 Sandell and Kolthoff developed a quantitative method for iodide based on its ability to catalyze the following redox reaction [Sandell, E. B.; Kolthoff, I. M. J. Am. Chem. Soc. 1934, 56, 1426]. $\mathrm{As}^{3+}(a q)+2 \mathrm{Ce}^{4+}(a q) \longrightarrow \mathrm{As}^{\mathrm{5+}}(a q)+2 \mathrm{Ce}^{3+}(a q) \nonumber$ An external standards calibration curve was prepared by adding 1 mL of a KI standard to a mixture of 2 mL of 0.05 M As3+, 1 mL of 0.1 M Ce4+, and 1 mL of 3 M H2SO4, and measuring the time for the yellow color of Ce4+ to disappear. The following table summarizes the results for one analysis. [I-] (µg/mL) $\Delta t$ (min) 5.0 0.9 2.5 1.8 1.0 4.5 What is the concentration of I in a sample if $\Delta t$ is 3.2 min? Solution Figure 13.2.2 shows the calibration curve and the calibration equation for the external standards based on Equation \ref{13.19}. Substituting 3.2 min for $\Delta t$ gives the concentration of I in the sample as 1.4 μg/mL. Direct-Computation Rate Methods In a rate method we use the differential form of the rate law—Equation \ref{13.1} is one example of a differential rate law—to determine the analyte’s concentration. As shown in Figure 13.2.3 , the rate of a reaction at time t, (rate)t, is the slope of a line tangent to a curve that shows the change in concentration as a function of time. For a reaction that is first-order in analyte, the rate at time t is $(r a t e)_{t}=k[A]_{t} \nonumber$ Substituting in Equation \ref{13.3} leaves us with the following equation relating the rate at time t to the analyte’s initial concentration. $(\text {rate})_{t}=k[A]_{0} e^{-k t} \nonumber$ If we measure the rate at a fixed time, then both $k$ and $e^{-kt}$ are constant and we can use a calibration curve of (rate)t versus [A]0 for the quantitative analysis of the analyte. There are several advantages to using the reaction’s initial rate (t = 0). First, because the reaction’s rate decreases over time, the initial rate provides the greatest sensitivity. Second, because the initial rate is measured under nearly pseudo-zero-order conditions, in which the change in concentration with time effectively is linear, it is easier to determine the slope. Finally, as the reaction of interest progresses competing reactions may develop, which complicates the kinetics; using the initial rate eliminates these complications. One disadvantage of the initial rate method is that there may be insufficient time to completely mix the reactants. This problem is avoided by using an intermediate rate measured at a later time (t > 0). As a general rule (see Mottola, H. A. “Kinetic Determinations of Reactants Utilizing Uncatalyzed Reactions,” Anal. Chim.
Acta 1993, 280, 279–287), the time for measuring a reaction’s initial rate should result in the consumption of no more than 2% of the reactants. The smaller this percentage, the more linear the change in concentration as a function of time. Example 13.2.4 The concentration of norfloxacin, a commonly prescribed antibacterial agent, is determined using the initial rate method. Norfloxacin is converted to an N-vinylpiperazine derivative and reacted with 2,3,5,6-tetrachloro-1,4-benzoquinone to form an N-vinylpiperazino-substituted benzoquinone derivative that absorbs strongly at 625 nm [Darwish, I. A.; Sultan, M. A.; Al-Arfaj, H. A. Talanta 2009, 78, 1383–1388]. The initial rate of the reaction—as measured by the change in absorbance as a function of time (AU/min)—is pseudo-first-order in norfloxacin. The following data were obtained for a series of external norfloxacin standards. [norfloxacin] (µg/mL) initial rate (AU/min) 63 0.0139 125 0.0355 188 0.0491 251 0.0656 313 0.0859 To analyze a sample of prescription eye drops, a 10.00-mL portion is extracted with dichloromethane. The extract is dried and the norfloxacin reconstituted in methanol and diluted to 10 mL in a volumetric flask. A 5.00-mL portion of this solution is diluted to volume in a 100-mL volumetric flask. Analysis of this sample gives an initial rate of 0.0394 AU/min. What is the concentration of norfloxacin in the eye drops in mg/mL? Solution Figure 13.2.4 shows the calibration curve and the calibration equation for the external standards. Substituting 0.0394 AU/min for the initial rate and solving for the concentration of norfloxacin gives a result of 152 μg/mL. This is the concentration in a diluted sample of the extract. The concentration in the extract before dilution is $\frac{152 \: \mu \text{g}}{\mathrm{mL}} \times \frac{100.0 \: \mathrm{mL}}{5.00 \: \mathrm{mL}} \times \frac{1 \:\mathrm{mg}}{1000 \: \mu \mathrm{g}}=3.04 \: \mathrm{mg} / \mathrm{mL} \nonumber$ Because the dried extract was reconstituted using a volume identical to that of the original sample, the concentration of norfloxacin in the eye drops is 3.04 mg/mL. Curve-Fitting Methods In a direct-computation method we determine the analyte’s concentration by solving the appropriate rate equation at one or two discrete times. The relationship between the analyte’s concentration and the measured response is a function of the rate constant, which we determine in a separate experiment using a single external standard (see Example 13.2.1 or Example 13.2.2), or a calibration curve (see Example 13.2.3 or Example 13.2.4). In a curve-fitting method we continuously monitor the concentration of a reactant or a product as a function of time and use a regression analysis to fit the data to an appropriate differential rate law or integrated rate law. For example, if we are monitoring the concentration of a product for a reaction that is pseudo-first-order in the analyte, then we can fit the data to the following rearranged form of Equation \ref{13.15} $[P]_{t}=[A]_{0}\left(1-e^{-k^{\prime} t}\right) \nonumber$ using [A]0 and $k^{\prime}$ as adjustable parameters. Because we use data from more than one or two discrete times, a curve-fitting method is capable of producing more reliable results. Example 13.2.5 The data shown in the following table were collected for a reaction that is known to be pseudo-zero-order in analyte. What is the initial concentration of analyte in the sample and the rate constant for the reaction?
time (s) [A]t (mM) time (s) [A]t (mM) 3 0.0731 8 0.0448 4 0.0728 9 0.0404 5 0.0681 10 0.0339 6 0.0582 11 0.0217 7 0.0511 12 0.0143 Solution From Equation \ref{13.10} we know that for a pseudo-zero-order reaction a plot of [A]t versus time is linear with a slope of $-k^{\prime \prime}$ and a y-intercept of [A]0. Figure 13.2.5 shows a plot of the kinetic data and the result of a linear regression analysis. The initial concentration of analyte is 0.0986 mM and the rate constant is 0.00677 mM/s. The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of creatinine in urine provides an instructive example of a typical procedure. The description here is based on Diamandis, E. P.; Koupparis, M. A.; Hadjiioannou, T. P. “Kinetic Studies with Ion Selective Electrodes: Determination of Creatinine in Urine with a Picrate Ion Selective Electrode,” J. Chem. Educ. 1983, 60, 74–76. Representative Method 13.2.1: Determination of Creatinine in Urine Description of Method Creatine is an organic acid in muscle tissue that supplies energy for muscle contractions. One of its metabolic products is creatinine, which is excreted in urine. Because the concentration of creatinine in urine and serum is an important indication of renal function, a rapid method for its analysis is clinically important. In this method the rate of reaction between creatinine and picrate in an alkaline medium is used to determine the concentration of creatinine in urine. Under the conditions of the analysis the reaction is first order in picrate, creatinine, and hydroxide. $\text { rate }=k[\text { picrate }][\text { creatinine }]\left[\mathrm{OH}^{-}\right] \nonumber$ The reaction is monitored using a picrate ion selective electrode. Procedure Prepare a set of external standards that contain 0.5–3.0 g/L creatinine using a stock solution of 10.00 g/L creatinine in 5 mM H2SO4, diluting each standard to volume using 5 mM H2SO4. Prepare a solution of $1.00 \times 10^{-2}$ M sodium picrate. Pipet 25.00 mL of 0.20 M NaOH, adjusted to an ionic strength of 1.00 M using Na2SO4, into a thermostated reaction cell at 25°C. Add 0.500 mL of the $1.00 \times 10^{-2}$ M picrate solution to the reaction cell. Suspend a picrate ion selective electrode in the solution and monitor the potential until it stabilizes. When the potential is stable, add 2.00 mL of a creatinine external standard and record the potential as a function of time. Repeat this procedure using the remaining external standards. Construct a calibration curve of $\Delta E / \Delta t$ versus the initial concentration of creatinine. Use the same procedure to analyze samples, using 2.00 mL of urine in place of the external standard. Determine the concentration of creatinine in the sample using the calibration curve. Questions 1. The analysis is carried out under conditions that are pseudo-first order in picrate. Show that under these conditions the change in potential as a function of time is linear. The potential, E, of the picrate ion selective electrode is given by the Nernst equation $E=K-\frac{R T}{F} \ln{[\text { picrate }]} \nonumber$ where K is a constant that accounts for the reference electrodes, the junction potentials, and the ion selective electrode’s asymmetry potential, R is the gas constant, T is the temperature, and F is Faraday’s constant.
We know from Equation \ref{13.7} that for a pseudo-first-order reaction, the concentration of picrate at time t is $\ln {[\text{picrate}]_t}=\ln{[\text {picrate}]}_{0}-k^{\prime} t \nonumber$ where $k^{\prime}$ is the pseudo-first-order rate constant. Substituting this integrated rate law into the ion selective electrode’s Nernst equation leaves us with the following result. $E_{t} = K - \frac{R T} {F} \left( \ln{[\text {picrate}]}_{0} - k^{\prime} t\right) \nonumber$ $E_{t} = K - \frac{R T} {F} \ln{[\text {picrate}]}_{0} + \frac{R T} {F} k^{\prime}t \nonumber$ Because K and (RT/F)ln[picrate]0 are constants, a plot of Et versus t is a straight line with a slope of $\frac{R T} {F} k^{\prime}$. 2. Under the conditions of the analysis, the rate of the reaction is pseudo-first-order in picrate and pseudo-zero-order in creatinine and OH. Explain why it is possible to prepare a calibration curve of $\Delta E / \Delta t$ versus the concentration of creatinine. The slope of a plot of Et versus t is $\Delta E / \Delta t = RTk^{\prime}/F$ (see the previous question). Because the reaction is carried out under conditions where it is pseudo-zero-order in creatinine and OH, the rate law is $\text{rate} = k[\text{picrate}][\text{creatinine}]_0[\text{OH}^-]_0 = k^{\prime}[\text{picrate}] \nonumber$ The pseudo-first-order rate constant, $k^{\prime}$, is $k^{\prime}=k[\text { creatinine }]_{0}\left[\mathrm{OH}^{-}\right]_{0}=c[\text {creatinine}]_{0} \nonumber$ where c is a constant equivalent to k[OH-]0. The slope of a plot of Et versus t, therefore, is a linear function of creatinine’s initial concentration $\frac{\Delta E}{\Delta t}=\frac{R T k^{\prime}}{F}=\frac{R T c}{F}[\text {creatinine}]_{0} \nonumber$ and a plot of $\Delta E / \Delta t$ versus the concentration of creatinine can serve as a calibration curve. 3. Why is it necessary to thermostat the reaction cell? The rate of a reaction is temperature-dependent. The reaction cell is thermostated to maintain a constant temperature to prevent a determinate error from a systematic change in temperature, and to minimize indeterminate errors from random fluctuations in temperature. 4. Why is it necessary to prepare the NaOH solution so that it has an ionic strength of 1.00 M? The potential of the picrate ion selective electrode actually responds to the activity of the picrate anion in solution. By adjusting the NaOH solution to a high ionic strength we maintain a constant ionic strength in all standards and samples. Because the relationship between activity and concentration is a function of ionic strength, the use of a constant ionic strength allows us to write the Nernst equation in terms of picrate’s concentration instead of its activity. Making Kinetic Measurements When using Representative Method 13.2.1 to determine the concentration of creatinine in urine, we follow the reaction’s kinetics using an ion selective electrode. In principle, we can use any of the analytical techniques in Chapters 8–12 to follow a reaction’s kinetics provided that the reaction does not proceed to an appreciable extent during the time it takes to make a measurement. As you might expect, this requirement places a serious limitation on kinetic methods of analysis. If the reaction’s kinetics are slow relative to the analysis time, then we can make a measurement without the analyte undergoing a significant change in concentration. If the reaction’s rate is too fast—which often is the case—then we introduce a significant error if our analysis time is too long.
One solution to this problem is to stop, or quench the reaction by adjusting experimental conditions. For example, many reactions show a strong dependence on pH and are quenched by adding a strong acid or a strong base. Figure 13.2.6 shows a typical example for the enzymatic analysis of p-nitrophenylphosphate, which uses the enzyme wheat germ acid phosphatase to hydrolyze the analyte to p-nitrophenol. The reaction has a maximum rate at a pH of 5. Increasing the pH by adding NaOH quenches the reaction and converts the colorless p-nitrophenol to the yellow-colored p-nitrophenolate, which absorbs at 405 nm. An additional problem when the reaction’s kinetics are fast is ensuring that we rapidly and reproducibly mix the sample and the reagents. For a fast reaction, we need to make our measurements within a few seconds—or even a few milliseconds—of combining the sample and reagents. This presents us with a problem and an advantage. The problem is that rapidly and reproducibly mixing the sample and the reagent requires a dedicated instrument, which adds an additional expense to the analysis. The advantage is that a rapid, automated analysis allows for a high throughput of samples. Instruments for the automated kinetic analysis of phosphate using reaction \ref{13.11}, for example, have sampling rates of approximately 3000 determinations per hour. A variety of instruments have been developed to automate the kinetic analysis of fast reactions. One example, which is shown in Figure 13.2.7 , is the stopped-flow analyzer. The sample and the reagents are loaded into separate syringes and precisely measured volumes are dispensed into a mixing chamber by the action of a syringe drive. The continued action of the syringe drive pushes the mixture through an observation cell and into a stopping syringe. The back pressure generated when the stopping syringe hits the stopping block completes the mixing, after which the reaction’s progress is monitored spectrophotometrically. With a stopped-flow analyzer it is possible to complete the mixing of sample and reagent, and initiate the kinetic measurements in approximately 0.5 ms. By attaching an autosampler to the sample syringe it is possible to analyze up to several hundred samples per hour. Another instrument for kinetic measurements is the centrifugal analyzer, a partial cross section of which is shown in Figure 13.2.8 . The sample and the reagents are placed in separate wells, which are oriented radially around a circular transfer disk. As the centrifuge spins, the centrifugal force pulls the sample and the reagents into the cuvette where mixing occurs. A single optical source and detector, located below and above the transfer disk’s outer edge, measures the absorbance each time the cuvette passes through the optical beam. When using a transfer disk with 30 cuvettes and rotating at 600 rpm, we can collect 10 data points per second for each sample. The ability to collect lots of data and to collect it quickly requires appropriate hardware and software. Not surprisingly, automated kinetic analyzers developed in parallel with advances in analog and digital circuitry—the hardware—and computer software for smoothing, integrating, and differentiating the analytical signal. For an early discussion of the importance of hardware and software, see Malmstadt, H. V.; Delaney, C. J.; Cordos, E. A. “Instruments for Rate Determinations,” Anal. Chem. 1972, 44(12), 79A–89A. 
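The smoothing, integrating, and differentiating software mentioned above is also where the curve-fitting calculations described earlier in this section take place. As a minimal, hypothetical sketch (the simulated data, noise level, and parameter values are invented for illustration, and the code assumes the NumPy and SciPy libraries are available), the following Python code fits product-concentration data to the rearranged form of Equation \ref{13.15}, using [A]0 and $k^{\prime}$ as adjustable parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def product_model(t, A0, k):
    """Pseudo-first-order growth of product: [P]_t = [A]_0 (1 - exp(-k' t))."""
    return A0 * (1 - np.exp(-k * t))

# simulated product-concentration data (illustrative values, not from the text)
t = np.linspace(0.5, 20, 40)                         # time, s
rng = np.random.default_rng(1)
P = product_model(t, 0.050, 0.25) + rng.normal(0, 5e-4, t.size)

# regression with [A]_0 and k' as adjustable parameters
popt, pcov = curve_fit(product_model, t, P, p0=[0.04, 0.1])
A0_fit, k_fit = popt
print(f"[A]0 = {A0_fit:.4f} M, k' = {k_fit:.3f} s^-1")
```

The same approach works for any of the rate laws in this section; only the model function changes.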
Quantitative Applications Chemical kinetic methods of analysis continue to find use for the analysis of a variety of analytes, most notably in clinical laboratories where automated methods aid in handling the large volume of samples. In this section we consider several general quantitative applications. Enzyme-Catalyzed Reactions Enzymes are highly specific catalysts for biochemical reactions, with each enzyme showing a selectivity for a single reactant, or substrate. For example, the enzyme acetylcholinesterase catalyzes the decomposition of the neurotransmitter acetylcholine to choline and acetic acid. Many enzyme–substrate reactions follow a simple mechanism that consists of the initial formation of an enzyme–substrate complex, ES, which subsequently decomposes to form product, releasing the enzyme to react again. $E + S \underset{k_{-1}}{\stackrel{k_1}{\rightleftharpoons}} ES \underset{k_{-2}}{\stackrel{k_2}{\rightleftharpoons}} E + P \label{13.20}$ where k1, k–1, k2, and k–2 are rate constants. If we make measurements early in the reaction, the concentration of products is negligible and we can ignore the step described by the rate constant k–2. Under these conditions the reaction’s rate is $\text { rate }=\frac{d[P]}{d t}=k_{2}[E S] \label{13.21}$ To be analytically useful we need to write Equation \ref{13.21} in terms of the concentrations of the enzyme, E, and the substrate, S. To do this we use the steady-state approximation, in which we assume the concentration of ES remains essentially constant. Following an initial period, during which the enzyme–substrate complex first forms, the rate at which ES forms $\frac{d[E S]}{d t}=k_{1}[E][S]=k_{1}\left([E]_{0}-[E S]\right)[S] \label{13.22}$ is equal to the rate at which it disappears $-\frac{d[E S]}{d t}=k_{-1}[E S]+k_{2}[E S] \label{13.23}$ where [E]0 is the enzyme’s original concentration. Combining Equation \ref{13.22} and Equation \ref{13.23} gives $k_{1}\left([E]_{0}-[E S]\right)[S]=k_{-1}[E S]+k_{2}[E S] \nonumber$ which we solve for the concentration of the enzyme–substrate complex $[E S]=\frac{[E]_{0}[S]}{\frac{k_{-1}+k_{2}}{k_{1}}+[S]}=\frac{[E]_{0}[S]}{K_{m}+[S]} \label{13.24}$ where Km is the Michaelis constant. Substituting Equation \ref{13.24} into Equation \ref{13.21} leaves us with our final rate equation. $\frac{d[P]}{d t}=\frac{k_{2}[E]_{0}[S]}{K_{m}+[S]} \label{13.25}$ A plot of Equation \ref{13.25}, as shown in Figure 13.2.9 , helps us define conditions where we can use the rate of an enzymatic reaction for the quantitative analysis of an enzyme or a substrate. For high substrate concentrations, where [S] >> Km, Equation \ref{13.25} simplifies to $\frac{d[P]}{d t}=\frac{k_{2}[E]_{0}[S]}{K_{m}+[S]} \approx \frac{k_{2}[E]_{0}[S]}{[S]}=k_{2}[E]_{0}=V_{\max } \label{13.26}$ where Vmax is the maximum rate for the catalyzed reaction. Under these conditions the reaction is pseudo-zero-order in substrate, and we can use Vmax to calculate the enzyme’s concentration, typically using a variable-time method. At lower substrate concentrations, where [S] << Km, Equation \ref{13.25} becomes $\frac{d[P]}{d t}=\frac{k_{2}[E]_{0}[S]}{K_{m}+[S]} \approx \frac{k_{2}[E]_{0}[S]}{K_{m}}=\frac{V_{\max }[S]}{K_{m}} \label{13.27}$ Because the reaction is first-order in substrate we can use the reaction’s rate to determine the substrate’s concentration using a fixed-time method. Chemical kinetic methods have been applied to the quantitative analysis of a number of enzymes and substrates [Guilbault, G. G.
Handbook of Enzymatic Methods of Analysis, Marcel Dekker: New York, 1976]. One example is the determination of glucose based on its oxidation by the enzyme glucose oxidase $\text{glucose}(aq) + \text{O}_2(aq) \xrightarrow{\text{glucose oxidase}} \text{gluconolactone}(aq) + \text{H}_2\text{O}_2(aq) \nonumber$ under conditions where Equation \ref{13.20} is valid. The reaction is monitored by following the rate of change in the concentration of dissolved O2 using an appropriate voltammetric technique. One method for measuring the concentration of dissolved O2 is the Clark amperometric sensor described in Chapter 11. Nonenzyme-Catalyzed Reactions The variable-time method also is used to determine the concentration of nonenzymatic catalysts. One example uses the reduction of H2O2 by thiosulfate, iodide, or hydroquinone, a reaction catalyzed by trace amounts of selected metal ions. For example, the reduction of H2O2 by I $2 \mathrm{I}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}_{2}(a q)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q) \longrightarrow 4 \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{I}_{2}(a q) \nonumber$ is catalyzed by Mo(VI), W(VI), and Zr(IV). A variable-time analysis is conducted by adding a small, fixed amount of ascorbic acid to each solution. As I2 is produced it rapidly oxidizes the ascorbic acid and is reduced back to I. Once all the ascorbic acid is consumed, the presence of excess I2 provides a visual endpoint. Noncatalytic Reactions Chemical kinetic methods are not as common for the quantitative analysis of analytes in noncatalytic reactions. Because they lack the enhancement of reaction rate that a catalyst affords, a noncatalytic method generally is not useful for determining small concentrations of analyte. Noncatalytic methods for inorganic analytes usually are based on a complexation reaction. One example is the determination of aluminum in serum by measuring the initial rate for the formation of its complex with 2-hydroxy-1-naphthaldehyde p-methoxybenzoyl-hydrazone [Ioannou. P. C.; Piperaki, E. A. Clin. Chem. 1986, 32, 1481–1483]. The greatest number of noncatalytic methods, however, are for the quantitative analysis of organic analytes. For example, the insecticide methyl parathion has been determined by measuring its rate of hydrolysis in alkaline solutions [Cruces Blanco, C.; Garcia Sanchez, F. Int. J. Environ. Anal. Chem. 1990, 38, 513–523]. Characterization Applications Chemical kinetic methods also find use in determining rate constants and in elucidating reaction mechanisms. Two examples from the kinetic analysis of enzymes illustrate these applications. Determining Vmax and Km for Enzyme-Catalyzed Reactions The values of Vmax and Km for an enzymatic reaction are of significant interest in the study of cellular chemistry. For an enzyme that follows the mechanism in reaction \ref{13.20}, Vmax is equivalent to k2 $\times$ [E]0, where [E]0 is the enzyme’s concentration and k2 is the enzyme’s turnover number. An enzyme’s turnover number is the maximum number of substrate molecules converted to product by a single active site on the enzyme, per unit time. A turnover number, therefore, provides a direct indication of the active site’s catalytic efficiency. The Michaelis constant, Km, is significant because it provides an estimate of the substrate’s intracellular concentration [(a) Northup, D. B. J. Chem. Educ. 1998, 75, 1153–1157; (b) Zubay, G. Biochemistry, Macmillan Publishing Co.: New York, 2nd Ed., p 269]. An enzyme’s turnover number also is known as kcat and is equal to Vmax/[E]0.
For the mechanism in reaction \ref{13.20}, kcat is equivalent to k2. For more complicated mechanisms, kcat is a function of additional rate constants. As shown in Figure 13.2.9 , we can find values for Vmax and Km by measuring the reaction’s rate for small and for large concentrations of the substrate. Unfortunately, this is not always practical as the substrate’s limited solubility may prevent us from using the large substrate concentrations needed to determine Vmax. Another approach is to rewrite Equation \ref{13.25} by taking its reciprocal $\frac{1}{d[P] / d t}=\frac{1}{v}=\frac{K_{m}}{V_{\max }} \times \frac{1}{[S]}+\frac{1}{V_{\max }} \label{13.28}$ where v is the reaction’s rate. As shown in Figure 13.2.10 , a plot of 1/v versus 1/[S], which is called a double reciprocal or Lineweaver–Burk plot, is a straight line with a slope of Km/Vmax, a y-intercept of 1/Vmax, and an x-intercept of –1/Km. In Chapter 5 we noted that when faced with a nonlinear model—and Equation \ref{13.25} is one example of a nonlinear model—it may be possible to rewrite the equation in a linear form. This is the strategy used here. Linearizing a nonlinear model is not without limitations, two of which deserve a brief mention. First, because we are unlikely to have data for large substrate concentrations, we will not have many data points for small values of 1/[S]. As a result, our determination of the y-intercept’s value relies on a significant extrapolation. Second, taking the reciprocal of the rate distorts the experimental error in a way that may invalidate the assumptions of a linear regression. Nonlinear regression provides a more rigorous method for fitting Equation \ref{13.25} to experimental data. The details are beyond the level of this textbook, but you may consult Massart, D. L.; Vandeginste, B. G. M.; Buydens, L. M. C.; De Jong, S.; Lewi, P. J.; Smeyers-Verbeke, J. “Nonlinear Regression,” which is Chapter 11 in Handbook of Chemometrics and Qualimetrics: Part A, Elsevier: Amsterdam, 1997, for additional details. The simplex algorithm described in Chapter 14 of this text also can be used to fit a nonlinear equation to experimental data. Example 13.2.6 The reaction between nicotinamide mononucleotide and ATP to form nicotinamide–adenine dinucleotide and pyrophosphate is catalyzed by the enzyme nicotinamide mononucleotide adenylyltransferase [(a) Atkinson, M. R.; Jackson, J. F.; Morton, R. K. Biochem. J. 1961, 80, 318–323; (b) Wilkinson, G. N. Biochem. J. 1961, 80, 324–332]. The following table provides typical data obtained at a pH of 4.95. The substrate, S, is nicotinamide mononucleotide and the initial rate, v, is the μmol of nicotinamide–adenine dinucleotide formed in a 3-min reaction period. [S] (mM) v (µmol) [S] (mM) v (µmol) 0.138 0.148 0.560 0.324 0.220 0.171 0.766 0.390 0.291 0.234 1.460 0.493 Determine values for Vmax and Km. Solution Figure 13.2.11 shows the Lineweaver–Burk plot for this data and the resulting regression equation. Using the y-intercept, we calculate Vmax as $V_{\max }=\frac{1}{y\text {-intercept }}=\frac{1}{1.708 \: \mu \mathrm{mol}^{-1}}=0.585 \: \mu \mathrm{mol} \nonumber$ and using the slope we find that Km is $K_{m} = \text {slope} \times V_{\max}=0.7528 \: \mu \mathrm{mol}^{-1} \mathrm{mM} \times 0.585 \: \mu \mathrm{mol}=0.440 \mathrm{ mM} \nonumber$ Exercise 13.2.3 The following data were collected during the oxidation of catechol (the substrate) to o-quinone by the enzyme o-diphenyl oxidase. The reaction was followed by monitoring the change in absorbance at 540 nm.
The data in this exercise are adapted from jkimball. [catechol] (mM): 0.3 0.6 1.2 4.8 rate ($\Delta$ AU/min): 0.020 0.035 0.048 0.081 Answer The figure below shows the Lineweaver–Burk plot and the equation for the data. The y-intercept of 9.974 min/$\Delta$AU is equivalent to 1/Vmax; thus, Vmax is 0.10 $\Delta$AU/min. The slope of 11.89 mM•min/$\Delta$AU is equivalent to Km/Vmax; thus, Km is 1.2 mM. Elucidating Mechanisms for the Inhibition of Enzyme Catalysis When an inhibitor interacts with an enzyme it decreases the enzyme’s catalytic efficiency. An irreversible inhibitor binds covalently to the enzyme’s active site, producing a permanent loss in catalytic efficiency even if we decrease the inhibitor’s concentration. A reversible inhibitor forms a noncovalent complex with the enzyme, resulting in a temporary decrease in catalytic efficiency. If we remove the inhibitor, the enzyme’s catalytic efficiency returns to its normal level. There are several pathways for the reversible binding of an inhibitor and an enzyme, as shown in Figure 13.2.12 . In competitive inhibition the substrate and the inhibitor compete for the same active site on the enzyme. Because the substrate cannot bind to an enzyme–inhibitor complex, EI, the enzyme’s catalytic efficiency for the substrate decreases. With noncompetitive inhibition the substrate and the inhibitor bind to different active sites on the enzyme, forming an enzyme–substrate–inhibitor, or ESI complex. The formation of an ESI complex decreases catalytic efficiency because only the enzyme–substrate complex reacts to form the product. Finally, in uncompetitive inhibition the inhibitor binds to the enzyme–substrate complex, forming an inactive ESI complex. We can identify the type of reversible inhibition by observing how a change in the inhibitor’s concentration affects the relationship between the rate of reaction and the substrate’s concentration. As shown in Figure 13.2.13 , when we display kinetic data using a Lineweaver–Burk plot it is easy to determine which mechanism is in effect. For example, an increase in slope, a decrease in the x-intercept, and no change in the y-intercept indicates competitive inhibition. Because the inhibitor’s binding is reversible, we can still obtain the same maximum velocity—thus the constant value for the y-intercept—by adding enough substrate to completely displace the inhibitor. Because it takes more substrate, the value of Km increases, which explains the increase in the slope and the decrease in the x-intercept’s value. Example 13.2.7 Exercise 13.2.3 provides kinetic data for the oxidation of catechol (the substrate) to o-quinone by the enzyme o-diphenyl oxidase in the absence of an inhibitor. The following additional data are available when the reaction is run in the presence of p-hydroxybenzoic acid, PBHA. Is PBHA an inhibitor for this reaction and, if so, what type of inhibitor is it? The data in this exercise are adapted from jkimball. [catechol] (mM): 0.3 0.6 1.2 4.8 rate ($\Delta$ AU/min): 0.011 0.019 0.022 0.060 Solution Figure 13.2.14 shows the resulting Lineweaver–Burk plot for the data in Exercise 13.2.3 and Example 13.2.7 . Although the two y-intercepts are not identical in value—the result of uncertainty in measuring the rates—the plot suggests that PBHA is a competitive inhibitor for the enzyme’s reaction with catechol. Exercise 13.2.4 Exercise 13.2.3 provides kinetic data for the oxidation of catechol (the substrate) to o-quinone by the enzyme o-diphenyl oxidase in the absence of an inhibitor.
The following additional data are available when the reaction is run in the presence of phenylthiourea. Is phenylthiourea an inhibitor for this reaction and, if so, what type of inhibitor is it? The data in this exercise are adapted from jkimball. [catechol] (mM): 0.3 0.6 1.2 4.8 rate ($\Delta$ AU/min): 0.010 0.016 0.024 0.040 Answer The figure below shows the Lineweaver–Burk plots for the two sets of data. The nearly identical x-intercepts suggest that phenylthiourea is a noncompetitive inhibitor. Evaluation of Chemical Kinetic Methods Scale of Operation The detection limit for a chemical kinetic method ranges from minor components to ultratrace components, and is determined by two factors: the rate of the reaction and the instrumental technique used to monitor the rate. Because the signal is directly proportional to the reaction’s rate, a faster reaction generally results in a lower detection limit. All other factors being equal, detection limits are smaller for catalytic reactions than for noncatalytic reactions. Not surprisingly, some of the earliest chemical kinetic methods took advantage of catalytic reactions. For example, ultratrace levels of Cu (<1 ppb) are determined by measuring its catalytic effect on the redox reaction between hydroquinone and H2O2. In the absence of a catalyst, most chemical kinetic methods for organic compounds use reactions with relatively slow rates, which limits the analysis to minor and to higher concentration trace analytes. Noncatalytic chemical kinetic methods for inorganic compounds that use metal–ligand complexation reactions may be fast or slow, with detection limits ranging from trace to minor analyte. The second factor that influences a method’s detection limit is the instrumentation used to monitor the reaction’s progress. Most reactions are monitored spectrophotometrically or electrochemically. The scale of operation for these techniques is discussed in Chapter 10 and Chapter 11. Accuracy As noted earlier, a chemical kinetic method potentially is subject to larger errors than an equilibrium method due to the effect of uncontrolled or poorly controlled variables, such as temperature or pH. Although a direct-computation chemical kinetic method can achieve moderately accurate results (a relative error of 1–5%), the accuracy often is much worse. Curve-fitting methods provide significant improvements in accuracy because they use more data. In one study, for example, accuracy was improved by two orders of magnitude—from errors of 500% to 5%—by replacing a direct-computation analysis with a curve-fitting analysis [Pausch, J. B.; Margerum, D. W. Anal. Chem. 1969, 41, 226–232]. Although not discussed in this chapter, data analysis methods that include the ability to compensate for experimental errors can lead to a significant improvement in accuracy [(a) Holler, F. J.; Calhoun, R. K.; McClanahan, S. F. Anal. Chem. 1982, 54, 755–761; (b) Wentzell, P. D.; Crouch, S. R. Anal. Chem. 1986, 58, 2851–2855; (c) Wentzell, P. D.; Crouch, S. R. Anal. Chem. 1986, 58, 2855–2858]. Precision The precision of a chemical kinetic method is limited by the signal-to-noise ratio of the instrumentation used to monitor the reaction’s progress. When using an integral method, a precision of 1–2% is routinely possible. The precision for a differential method may be somewhat poorer, particularly if the signal is noisy.
Sensitivity We can improve the sensitivity of a one-point fixed-time integral method by making measurements under conditions where the concentration of the monitored species is as large as possible. When monitoring the analyte’s concentration—or the concentration of any other reactant—we want to take measurements early in the reaction before its concentration decreases. On the other hand, if we choose to monitor one of the reaction’s products, then it is better to take measurements at longer times. For a two-point fixed-time integral method, we can improve sensitivity by increasing the difference between times t1 and t2. As discussed earlier, the sensitivity of a rate method improves when we choose to measure the initial rate. Selectivity The analysis of closely related compounds, as discussed in earlier chapters, often is complicated by their tendency to interfere with each other. To overcome this problem we usually need to separate the analyte and the interferent before completing the analysis. One advantage of a chemical kinetic method is that it often is possible to adjust the reaction conditions so that the analyte and the interferent have different reaction rates. If the difference in their respective rates is large enough, then one species will react completely before the other species has a chance to react. The need to analyze multiple analytes in complex mixtures is, of course, one of the advantages of the separation techniques covered in Chapter 12. Kinetic techniques provide an alternative approach for simple mixtures. We can use the appropriate integrated rate laws to find the conditions necessary to separate a faster reacting species from a more slowly reacting species. Let’s consider a system that consists of an analyte, A, and an interferent, B, both of which show first-order kinetics with a common reagent. To avoid an interference, the relative magnitudes of their rate constants must be sufficiently different. The fractions, f, of A and B that remain at any point in time, t, are defined by the following equations $\left(f_{A}\right)_{t}=\frac{[A]_{t}}{[A]_{0}} \label{13.29}$ $\left(f_{B}\right)_{t}=\frac{[B]_{t}}{[B]_{0}} \label{13.30}$ where [A]0 and [B]0 are the initial concentrations of A and B, respectively. Rearranging Equation \ref{13.2} and substituting in Equation \ref{13.29} or Equation \ref{13.30} leaves us with the following two equations. $\ln \frac{[A]_{t}}{[A]_{0}}=\ln \left(f_{A}\right)_{t}=-k_{A} t \label{13.31}$ $\ln \frac{[B]_{t}}{[B]_{0}}=\ln \left(f_{B}\right)_{t}=-k_{B} t \label{13.32}$ where kA and kB are the rate constants for A and for B. Dividing Equation \ref{13.31} by Equation \ref{13.32} leave us with $\frac{k_{A}}{k_{B}}=\frac{\ln \left(f_{A}\right)_{t}}{\ln \left(f_{B}\right)_{t}} \nonumber$ Suppose we want 99% of A to react before 1% of B reacts. The fraction of A that remains is 0.01 and the fraction of B that remains is 0.99, which requires that $\frac{k_{A}}{k_{B}}=\frac{\ln \left(f_{A}\right)_{t}}{\ln \left(f_{B}\right)_{t}}=\frac{\ln (0.01)}{\ln (0.99)}=460 \nonumber$ that is, the rate constant for A must be at least 460 times larger than that for B. When this condition is met we can determine the analyte’s concentration before the interferent begins to react. If the analyte has the slower reaction, then we can determine its concentration after we allow the interferent to react to completion.
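A quick calculation, sketched below in Python (the helper function is ours, written only to mirror Equation \ref{13.31} and Equation \ref{13.32}), confirms the numbers used in this argument and in the discussion that follows.

```python
import math

def required_rate_constant_ratio(f_A, f_B):
    """Minimum k_A/k_B so that only a fraction f_A of A remains
    while a fraction f_B of B still remains (Equations 13.31 and 13.32)."""
    return math.log(f_A) / math.log(f_B)

ratio = required_rate_constant_ratio(0.01, 0.99)   # 99% of A reacted, only 1% of B reacted
print(round(ratio))                                # ~460

# if 99% of A is to react in 5 s, find k_A, k_B, and the time for 99% of B to react
k_A = -math.log(0.01) / 5.0      # ~0.92 s^-1
k_B = k_A / ratio
print(-math.log(0.01) / k_B)     # ~2300 s
```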
This method of adjusting reaction rates is useful if we need to analyze an analyte in the presence of an interferent, but is impractical if both A and B are analytes because the condition that favors the analysis of A will not favor the analysis of B. For example, if we adjust conditions so that 99% of A reacts in 5 s, then 99% of B must react within 0.01 s if it has the faster kinetics, or in 2300 s if it has the slower kinetics. The reaction of B is too fast or too slow to make this a useful analytical method. What do we do if the rate constants for A and B are not significantly different? We still can complete an analysis if we can simultaneously monitor both species. Because both A and B react at the same time, the integrated form of the first-order rate law becomes $C_{t}=[A]_{t}+[B]_{t}=[A]_{0} e^{-k_{A}t}+[B]_{0} e^{-k_{B}t} \label{13.33}$ where Ct is the total concentration of A and B at time, t. If we measure Ct at times t1 and t2, we can solve the resulting pair of simultaneous equations to determine values for [A]0 and [B]0. The rate constants kA and kB are determined in separate experiments using standard solutions of A and B. Equation \ref{13.33} can also serve as the basis for a curve-fitting method. As shown in Figure 13.2.15 , a plot of ln(Ct) as a function of time consists of two regions. At shorter times the plot is curved because A and B react simultaneously. At later times, however, the concentration of the faster reacting component, A, decreases to zero, and Equation \ref{13.33} simplifies to $C_{t} \approx[B]_{t}=[B]_{0} e^{-k_{B}t} \nonumber$ Under these conditions, a plot of ln(Ct) versus time is linear. Extrapolating the linear portion to t = 0 gives [B]0, with [A]0 determined by difference. Example 13.2.8 Use the data in Figure 13.2.15 to determine the concentrations of A and B in the original sample. Solution Extrapolating the linear part of the curve back to t = 0 gives ln[B]0 as –2.3, or a [B]0 of 0.10 M. At t = 0, ln[C]0 is –1.2, which corresponds to a [C]0 of 0.30 M. Because [C]0 = [A]0 + [B]0, the concentration of A in the original sample is 0.20 M. Time, Cost, and Equipment An automated chemical kinetic method of analysis provides a rapid means for analyzing samples, with throughputs ranging from several hundred to several thousand determinations per hour. The initial start-up costs may be fairly high because an automated analysis requires a dedicated instrument designed to meet the specific needs of the analysis. When measurements are handled manually, a chemical kinetic method requires routinely available equipment and instrumentation, although the sample throughput is much lower than with an automated method.
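To close this section, here is a brief sketch of the simultaneous analysis based on Equation \ref{13.33}. Because Equation \ref{13.33} is linear in [A]0 and [B]0 once kA and kB are known, two measurements of Ct define a pair of linear equations that we can solve directly. The rate constants below are arbitrary illustrative values; the initial concentrations are chosen to match Example 13.2.8 so the output can be checked against it.

```python
import numpy as np

def resolve_mixture(k_A, k_B, t1, C1, t2, C2):
    """Solve Equation 13.33 at two times for [A]0 and [B]0."""
    coeff = np.array([[np.exp(-k_A * t1), np.exp(-k_B * t1)],
                      [np.exp(-k_A * t2), np.exp(-k_B * t2)]])
    return np.linalg.solve(coeff, np.array([C1, C2]))

# illustrative rate constants; concentrations taken from Example 13.2.8
k_A, k_B = 1.0, 0.1            # s^-1
A0_true, B0_true = 0.20, 0.10  # M
Ct = lambda t: A0_true * np.exp(-k_A * t) + B0_true * np.exp(-k_B * t)

A0, B0 = resolve_mixture(k_A, k_B, 1.0, Ct(1.0), 10.0, Ct(10.0))
print(A0, B0)   # recovers 0.20 M and 0.10 M
```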
Atoms that have the same number of protons but a different number of neutrons are isotopes. To identify an isotope we use the notation ${}_Z^A E$, where E is the element’s atomic symbol, Z is the element’s atomic number, and A is the element’s atomic mass number. Although an element’s different isotopes have the same chemical properties, their nuclear properties are not identical. The most important difference between isotopes is their stability. The nuclear configuration of a stable isotope remains constant with time. Unstable isotopes, however, disintegrate spontaneously, emitting radioactive particles as they transform into a more stable form. An element’s atomic number, Z, is equal to the number of protons and its atomic mass number, A, is equal to the sum of the number of protons and neutrons. We represent an isotope of carbon-13 as $_{6}^{13} \text{C}$ because carbon has six protons and seven neutrons. Sometimes we omit Z from this notation—identifying the element and the atomic number is repetitive because all isotopes of carbon have six protons and any atom that has six protons is an isotope of carbon. Thus, 13C and C–13 are alternative notations for this isotope of carbon. The most important types of radioactive particles are alpha particles, beta particles, gamma rays, and X-rays. An alpha particle, $\alpha$, is equivalent to a helium nucleus, ${}_2^4 \text{He}$. When an atom emits an alpha particle, the product is a new atom whose atomic number and atomic mass number are, respectively, 2 and 4 less than its unstable parent. The decay of uranium to thorium is one example of alpha emission. $_{92}^{238} \text{U} \longrightarrow _{90}^{234} \text{Th}+\alpha \nonumber$ A beta particle, $\beta$, comes in one of two forms. A negatron, $_{-1}^0 \beta$, is produced when a neutron changes into a proton, increasing the atomic number by one, as shown here for lead. $_{82}^{214} \mathrm{Pb} \longrightarrow_{83}^{214} \mathrm{Bi} + _{-1}^{0} \beta \nonumber$ The conversion of a proton to a neutron results in the emission of a positron, $_{1}^0 \beta$. $_{15}^{30} \mathrm{P} \longrightarrow_{14}^{30} \mathrm{Si} + _{1}^{0} \beta \nonumber$ A negatron, which is the more common type of beta particle, is equivalent to an electron. The emission of an alpha or a beta particle often produces an isotope in an unstable, high energy state. This excess energy is released as a gamma ray, $\gamma$, or as an X-ray. Gamma ray and X-ray emission may also occur without the release of an alpha particle or a beta particle. Theory and Practice A radioactive isotope’s rate of decay, or activity, follows first-order kinetics $A=-\frac{d N}{d t}=\lambda N \label{13.1}$ where A is the isotope’s activity, N is the number of radioactive atoms present in the sample at time t, and $\lambda$ is the isotope’s decay constant. Activity is expressed as the number of disintegrations per unit time. As with any first-order process, we can rewrite Equation \ref{13.1} in an integrated form. $N_{t}=N_{0} e^{-\lambda t} \label{13.2}$ Substituting Equation \ref{13.2} into Equation \ref{13.1} gives $A=\lambda N_{0} e^{-\lambda t}=A_{0} e^{-\lambda t} \label{13.3}$ If we measure a sample’s activity at time t we can determine the sample’s initial activity, A0, or the number of radioactive atoms originally present in the sample, N0. An important characteristic property of a radioactive isotope is its half-life, t1/2, which is the amount of time required for half of the radioactive atoms to disintegrate.
For first-order kinetics the half-life is $t_{1 / 2}=\frac{0.693}{\lambda} \label{13.4}$ Because the half-life is independent of the number of radioactive atoms, it remains constant throughout the decay process. For example, if 50% of the radioactive atoms remain after one half-life, then 25% remain after two half-lives, and 12.5% remain after three half-lives. Suppose we begin with an N0 of 1200 atoms. During the first half-life, 600 atoms disintegrate and 600 remain. During the second half-life, 300 of the 600 remaining atoms disintegrate, leaving 300 atoms or 25% of the original 1200 atoms. Of the 300 remaining atoms, only 150 remain after the third half-life, or 12.5% of the original 1200 atoms. Kinetic information about a radioactive isotope usually is given in terms of its half-life because it provides a more intuitive sense of the isotope’s stability. Knowing, for example, that the decay constant for $_{38}^{90}\text{Sr}$ is 0.0247 yr–1 does not give an immediate sense of how fast it disintegrates. On the other hand, knowing that its half-life is 28.1 yr makes it clear that the concentration of $_{38}^{90}\text{Sr}$ in a sample remains essentially constant over a short period of time. Instrumentation Alpha particles, beta particles, gamma rays, and X-rays are measured by using the particle’s energy to produce an amplified pulse of electrical current in a detector. These pulses are counted to give the rate of disintegration. There are three common types of detectors: gas-filled detectors, scintillation counters, and semiconductor detectors. A gas-filled detector consists of a tube that contains an inert gas, such as Ar. When a radioactive particle enters the tube it ionizes the inert gas, producing an Ar+/e ion-pair. Movement of the electron toward the anode and of the Ar+ toward the cathode generates a measurable electrical current. A Geiger counter is one example of a gas-filled detector. A scintillation counter uses a fluorescent material to convert radioactive particles into easy-to-measure photons. For example, one solid-state scintillation counter consists of a NaI crystal that contains 0.2% TlI, which produces several thousand photons for each radioactive particle. Finally, in a semiconductor detector, absorption of a single radioactive particle promotes thousands of electrons to the semiconductor’s conduction band, increasing conductivity. Quantitative Applications In this section we consider three common quantitative radiochemical methods of analysis: the direct analysis of a radioactive isotope by measuring its rate of disintegration, neutron activation, and isotope dilution. Direct Analysis of Radioactive Analytes The concentration of a long-lived radioactive isotope remains essentially constant during the period of analysis. As shown in Example 13.3.1 , we can use the sample’s activity to calculate the number of radioactive particles in the sample. Example 13.3.1 The activity in a 10.00-mL sample of wastewater that contains $_{38}^{90}\text{Sr}$ is $9.07 \times 10^6$ disintegrations/s. What is the molar concentration of $_{38}^{90}\text{Sr}$ in the sample? The half-life for $_{38}^{90}\text{Sr}$ is 28.1 yr. Solution Solving Equation \ref{13.4} for $\lambda$, substituting into Equation \ref{13.1}, and solving for N gives $N=\frac{A \times t_{1 / 2}}{0.693} \nonumber$ Before we can determine the number of atoms of $_{38}^{90}\text{Sr}$ in the sample we must express its activity and its half-life using the same units.
Converting the half-life to seconds gives t1/2 as $8.86 \times 10^8$ s; thus, there are $\frac{\left(9.07 \times 10^{6} \text { disintegrations/s }\right)\left(8.86 \times 10^{8} \text{ s}\right)}{0.693} = 1.16 \times 10^{16} \text{ atoms} _{38}^{90}\text{Sr} \nonumber$ The concentration of $_{38}^{90}\text{Sr}$ in the sample is $\frac{1.16 \times 10^{16} \text { atoms } _{38}^{90} \text{Sr}}{\left(6.022 \times 10^{23} \text { atoms/mol }\right)(0.01000 \mathrm{L})} = 1.93 \times 10^{-6} \text{ M } _{38}^{90}\text{Sr} \nonumber$ The direct analysis of a short-lived radioactive isotope using the method outlined in Example 13.3.1 is less useful because it provides only a transient measure of the isotope’s concentration. Instead, we can measure its activity after an elapsed time, t, and use Equation \ref{13.3} to calculate N0. Neutron Activation Analysis Few analytes are naturally radioactive. For many analytes, however, we can induce radioactivity by irradiating the sample with neutrons in a process called neutron activation analysis (NAA). The radioactive element formed by neutron activation decays to a stable isotope by emitting a gamma ray, and, possibly, other nuclear particles. The rate of gamma-ray emission is proportional to the analyte’s initial concentration in the sample. For example, if we place a sample containing non-radioactive $_{13}^{27}\text{Al}$ in a nuclear reactor and irradiate it with neutrons, the following nuclear reaction takes place. $_{13}^{27} \mathrm{Al}+_{0}^{1} \mathrm{n} \longrightarrow_{13}^{28} \mathrm{Al} \nonumber$ The radioactive isotope $_{13}^{28}\text{Al}$ has a characteristic decay process that includes the release of a beta particle and a gamma ray. $_{13}^{28} \mathrm{Al} \longrightarrow_{14}^{28} \mathrm{Si}+_{-1}^{0} \beta + \gamma \nonumber$ When irradiation is complete, we remove the sample from the nuclear reactor, allow any short-lived radioactive interferences to decay into the background, and measure the rate of gamma-ray emission. The initial activity at the end of irradiation depends on the number of atoms that are present. This, in turn, is equal to the difference between the rate of formation for $_{13}^{28}\text{Al}$ and its rate of disintegration $\frac {dN_{_{13}^{28} \text{Al}}} {dt} = \Phi \sigma N_{_{13}^{27} \text{Al}} - \lambda N_{_{13}^{28} \text{Al}} \label{13.5}$ where $\Phi$ is the neutron flux and $\sigma$ is the reaction cross-section, or probability that a $_{13}^{27}\text{Al}$ nucleus captures a neutron. Integrating Equation \ref{13.5} over the time of irradiation, ti, and multiplying by $\lambda$ gives the initial activity, A0, at the end of irradiation as $A_0 = \lambda N_{_{13}^{28}\text{Al}} = \Phi \sigma N_{_{13}^{27}\text{Al}} (1-e^{-\lambda t_i}) \nonumber$ If we know the values for A0, $\Phi$, $\sigma$, $\lambda$, and ti, then we can calculate the number of atoms of $_{13}^{27}\text{Al}$ initially present in the sample. A simpler approach is to use one or more external standards. Letting $(A_0)_x$ and $(A_0)_s$ represent the analyte’s initial activity in an unknown and in an external standard, and letting $w_x$ and $w_s$ represent the analyte’s weight in the unknown and in the external standard, we obtain the following pair of equations $\left(A_{0}\right)_{x}=k w_{x} \label{13.6}$ $\left(A_{0}\right)_{s}=k w_{s} \label{13.7}$ that we can solve to determine the analyte’s mass in the sample. As noted earlier, gamma ray emission is measured following a period during which we allow short-lived interferents to decay into the background.
As shown in Figure 13.3.1 , we determine the sample’s or the standard’s initial activity by extrapolating a curve of activity versus time back to t = 0. Alternatively, if we irradiate the sample and the standard at the same time, and if we measure their activities at the same time, then we can substitute these activities for (A0)x and (A0)s. This is the strategy used in the following example. Example 13.3.2 The concentration of Mn in steel is determined by a neutron activation analysis using the method of external standards. A 1.000-g sample of an unknown steel sample and a 0.950-g sample of a standard steel known to contain 0.463% w/w Mn are irradiated with neutrons for 10 h in a nuclear reactor. After a 40-min delay the gamma ray emission is 2542 cpm (counts per minute) for the unknown and 1984 cpm for the external standard. What is the %w/w Mn in the unknown steel sample? Solution Combining equations \ref{13.6} and \ref{13.7} gives $w_{x}=\frac{A_{x}}{A_{s}} \times w_{s} \nonumber$ The weight of Mn in the external standard is $w_{s}=\frac{0.00463 \text{ g } \text{Mn}}{\text{ g } \text { steel }} \times 0.950 \text{ g} \text { steel }=0.00440 \text{ g} \text{ Mn} \nonumber$ Substituting into the above equation gives $w_{x}=\frac{2542 \text{ cpm}}{1984 \text{ cpm}} \times 0.00440 \text{ g} \text{ Mn}=0.00564 \text{ g} \text{ Mn} \nonumber$ Because the original mass of steel is 1.000 g, the %w/w Mn is 0.564%. Among the advantages of neutron activation are its applicability to almost all elements in the periodic table and that it is nondestructive to the sample. Consequently, NAA is an important technique for analyzing archeological and forensic samples, as well as works of art. Isotope Dilution Another important radiochemical method for the analysis of nonradioactive analytes is isotope dilution. An external source of analyte is prepared in a radioactive form with a known activity, AT, for its radioactive decay—we call this form of the analyte a tracer. To prepare a sample for analysis we add a known mass of the tracer, wT, to a portion of sample that contains an unknown mass, wx , of analyte. After homogenizing the sample and tracer, we isolate wA grams of analyte by using a series of appropriate chemical and physical treatments. Because these chemical and physical treatments cannot distinguish between radioactive and nonradioactive forms of the analyte, the isolated material contains both. Finally, we measure the activity of the isolated sample, AA. If we recover all the analyte—both the radioactive tracer and the nonradioactive analyte—then AA and AT are equal and wx = wA – wT. Normally, we fail to recover all the analyte. In this case AA is less than AT, and $A_{A}=A_{T} \times \frac{w_{A}}{w_{x}+w_{T}} \label{13.8}$ The ratio of weights in Equation \ref{13.8} accounts for any loss of activity that results from our failure to recover all the analyte. Solving Equation \ref{13.8} for wx gives $w_{x}=\frac{A_{T}}{A_{A}} w_{A}-w_{T} \label{13.9}$ How we process the sample depends on the analyte and the sample’s matrix. We might, for example, digest the sample to bring the analyte into solution. After filtering the sample to remove the residual solids, we might precipitate the analyte, isolate it by filtration, dry it in an oven, and obtain its weight. Given that the goal of an analysis is to determine the amount of nonradioactive analyte in our sample, the realization that we might not recover all the analyte might strike you as unsettling.
Recall from Chapter 7.7 that a single liquid–liquid extraction rarely has an extraction efficiency of 100%. One advantage of isotope dilution is that the extraction efficiency for the nonradioactive analyte and for the tracer are the same. If we recover 50% of the tracer, then we also recover 50% of the nonradioactive analyte. Because we know how much tracer we added to the sample, we can determine how much of the nonradioactive analyte is in the sample.

Example 13.3.3

The concentration of insulin in a production vat is determined by isotope dilution. A 1.00-mg sample of insulin labeled with 14C having an activity of 549 cpm is added to a 10.0-mL sample taken from the production vat. After homogenizing the sample, a portion of the insulin is separated and purified, yielding 18.3 mg of pure insulin. The activity for the isolated insulin is measured at 148 cpm. How many mg of insulin are in the original sample?

Solution

Substituting known values into Equation \ref{13.9} gives $w_{x}=\frac{549 \text{ cpm}}{148 \text{ cpm}} \times 18.3 \text{ mg}-1.00 \text{ mg}=66.9 \text{ mg} \text { insulin } \nonumber$

Equation \ref{13.8} and Equation \ref{13.9} are valid only if the tracer’s half-life is considerably longer than the time it takes to conduct the analysis. If this is not the case, then the decrease in activity is due both to the incomplete recovery and the natural decrease in the tracer’s activity. Table 13.3.1 provides a list of several common tracers for isotope dilution.

Table 13.3.1. Common Tracers for Isotope Dilution
isotope half-life
3H 12.5 years
14C 5730 years
32P 14.3 days
35S 87.1 days
45Ca 152 days
55Fe 2.91 years
60Co 5.3 years
131I 8 days

An important feature of isotope dilution is that it is not necessary to recover all the analyte to determine the amount of analyte present in the original sample. Isotope dilution, therefore, is useful for the analysis of samples with complex matrices, where a complete recovery of the analyte is difficult.

Characterization Applications

One example of a characterization application is the determination of a sample’s age based on the decay of a radioactive isotope naturally present in the sample. The most common example is carbon-14 dating, which is used to determine the age of natural organic materials. As cosmic rays pass through the upper atmosphere, some $_7^{14}\text{N}$ atoms in the atmosphere capture high energy neutrons, converting them into $_6^{14}\text{C}$. The $_6^{14}\text{C}$ then migrates into the lower atmosphere where it oxidizes to form C-14 labeled CO2. Animals and plants subsequently incorporate this labeled CO2 into their tissues. Because this is a steady-state process, all plants and animals have the same ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ in their tissues. When an organism dies, the radioactive decay of $_6^{14}\text{C}$ to $_7^{14}\text{N}$ by $_{-1}^0 \beta$ emission (t1/2 = 5730 years) leads to a predictable reduction in the $_6^{14}\text{C}$ to $_6^{12}\text{C}$ ratio. We can use the change in this ratio to date samples that are as much as 30 000 years old, although the precision of the analysis is best when the sample’s age is less than 7000 years. The accuracy of carbon-14 dating depends upon our assumption that the natural $_6^{14}\text{C}$ to $_6^{12}\text{C}$ ratio in the atmosphere is constant over time. Some variation in the ratio has occurred as the result of the increased consumption of fossil fuels and the production of $_6^{14}\text{C}$ during the testing of nuclear weapons.
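Equation \ref{13.9} is equally simple to automate. Here is a minimal Python sketch (names are illustrative) that reproduces the result of Example 13.3.3.

```python
def isotope_dilution_mass(tracer_activity, isolated_activity,
                          isolated_mass, tracer_mass):
    """Mass of nonradioactive analyte in the original sample via
    Equation 13.9: w_x = (A_T / A_A) * w_A - w_T."""
    return (tracer_activity / isolated_activity) * isolated_mass - tracer_mass

# Example 13.3.3: insulin (activities in cpm, masses in mg)
print(isotope_dilution_mass(549, 148, 18.3, 1.00))  # ~66.9 mg insulin
```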
A calibration curve prepared using samples of known age—examples of samples include tree rings, deep ocean sediments, coral samples, and cave deposits—limits this source of uncertainty. There is no need to prepare a calibration curve for each analysis. Instead, there is a universal calibration curve known as IntCal. The most recent such curve, IntCal13, is described in the following paper: Reimer, P. J., et al. “IntCal13 and Marine13 Radiocarbon Age Calibration Curves 0–50,000 Years Cal BP,” Radiocarbon 2013, 55, 1869–1887. This calibration spans 50 000 years before the present (BP).

Example 13.3.4

To determine the age of a fabric sample, the relative ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ was measured yielding a result of 80.9% of that found in modern fibers. How old is the fabric?

Solution

Equation \ref{13.3} and Equation \ref{13.4} provide us with a method to convert a change in the ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ to the fabric’s age. Letting A0 be the ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ in modern fibers, we assign it a value of 1.00. The ratio of $_6^{14}\text{C}$ to $_6^{12}\text{C}$ in the sample, A, is 0.809. Solving gives $t=\ln \frac{A_{0}}{A} \times \frac{t_{1 / 2}}{0.693}=\ln \frac{1.00}{0.809} \times \frac{5730 \text { yr }}{0.693}=1750 \text { yr } \nonumber$

Other isotopes can be used to determine a sample’s age. The age of rocks, for example, has been determined from the ratio of the number of $_{92}^{238}\text{U}$ to the number of stable $_{82}^{206}\text{Pb}$ atoms produced by radioactive decay. For rocks that do not contain uranium, dating is accomplished by comparing the ratio of radioactive $_{19}^{40}\text{K}$ to the stable $_{18}^{40}\text{Ar}$. Another example is the dating of sediments collected from lakes by measuring the amount of $_{82}^{210}\text{Pb}$ that is present.

Evaluation

Radiochemical methods routinely are used for the analysis of trace analytes in macro and meso samples. The accuracy and precision of radiochemical methods generally are within the range of 1–5%. We can improve the precision—which is limited by the random nature of radioactive decay—by counting the emission of radioactive particles for as long a time as is practical. If the number of counts, M, is reasonably large (M ≥ 100), and the counting period is significantly less than the isotope’s half-life, then the percent relative standard deviation for the activity, $(\sigma_A)_{rel}$, is approximately $\left(\sigma_{A}\right)_{\mathrm{rel}}=\frac{1}{\sqrt{M}} \times 100 \nonumber$ For example, if we determine the activity by counting 10 000 radioactive particles, then the relative standard deviation is 1%. A radiochemical method’s sensitivity is inversely proportional to $(\sigma_A)_{rel}$, which means we can improve the sensitivity by counting more particles.

Selectivity rarely is of concern when using a radiochemical method because most samples have only a single radioactive isotope. When several radioactive isotopes are present, we can determine each isotope’s activity by taking advantage of differences in the energies of their respective radioactive particles or differences in their respective decay rates. In comparison to most other analytical techniques, radiochemical methods usually are more expensive and require more time to complete an analysis. Radiochemical methods also are subject to significant safety concerns due to the analyst’s potential exposure to high energy radiation and the need to safely dispose of radioactive waste.
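Both the carbon-14 age in Example 13.3.4 and the counting-statistics estimate above are single-line formulas. A small Python sketch (function names are ours, purely illustrative) that reproduces them:

```python
from math import log, sqrt

def c14_age_years(ratio_to_modern, half_life_yr=5730):
    """Age from the surviving fraction of 14C: t = ln(A0/A) * t_half / 0.693."""
    return log(1.0 / ratio_to_modern) * half_life_yr / 0.693

def relative_sigma_percent(counts):
    """Percent relative standard deviation of an activity measured by
    accumulating `counts` decay events: 100 / sqrt(M)."""
    return 100.0 / sqrt(counts)

print(c14_age_years(0.809))            # ~1750 yr (Example 13.3.4)
print(relative_sigma_percent(10_000))  # 1.0 %
```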
The focus of this chapter is on methods in which we measure a time-dependent signal. Chemical kinetic methods and radiochemical methods are two examples. In this section we consider the technique of flow injection analysis in which we inject the sample into a flowing carrier stream that gives rise to a transient signal at the detector. Because the shape of this transient signal depends on the physical and chemical kinetic processes that take place in the carrier stream during the time between injection and detection, we include flow injection analysis in this chapter.

Theory and Practice

Flow injection analysis (FIA) was developed in the mid-1970s as a highly efficient technique for the automated analyses of samples [see, for example, (a) Ruzicka, J.; Hansen, E. H. Anal. Chim. Acta 1975, 78, 145–157; (b) Stewart, K. K.; Beecher, G. R.; Hare, P. E. Anal. Biochem. 1976, 70, 167–173; (c) Valcárcel, M.; Luque de Castro, M. D. Flow Injection Analysis: Principles and Applications, Ellis Horwood: Chichester, England, 1987]. Unlike the centrifugal analyzer described earlier in this chapter (see Figure 13.2.8), in which the number of samples is limited by the transfer disk’s size, FIA allows for the rapid, sequential analysis of an unlimited number of samples. FIA is one example of a continuous-flow analyzer, in which we sequentially introduce samples at regular intervals into a liquid carrier stream that transports them to the detector.

A schematic diagram detailing the basic components of a flow injection analyzer is shown in Figure 13.4.1. The reagent that serves as the carrier is stored in a reservoir, and a propelling unit maintains a constant flow of the carrier through a system of tubing that comprises the transport system. We inject the sample directly into the flowing carrier stream, where it travels through one or more mixing and reaction zones before it reaches the detector’s flow-cell. The instrument in Figure 13.4.1 is the simplest design for a flow injection analyzer, consisting of a single channel and a single reagent reservoir. Multiple channel instruments that merge together separate channels, each of which introduces a new reagent into the carrier stream, also are possible. A more detailed discussion of FIA instrumentation is found in the next section.

When we first inject a sample into the carrier stream it has the rectangular flow profile of width w shown in Figure 13.4.2a. As the sample moves through the mixing zone and the reaction zone, the width of its flow profile increases as the sample disperses into the carrier stream. Dispersion results from two processes: convection due to the flow of the carrier stream and diffusion due to the concentration gradient between the sample and the carrier stream. Convection occurs by laminar flow. The linear velocity of the sample at the tube’s walls is zero, but the sample at the center of the tube moves with a linear velocity twice that of the carrier stream. The result is the parabolic flow profile shown in Figure 13.4.2b. Convection is the primary means of dispersion in the first 100 ms following the sample’s injection. The second contribution to the sample’s dispersion is diffusion due to the concentration gradient that exists between the sample and the carrier stream. As shown in Figure 13.4.3, diffusion occurs parallel (axially) and perpendicular (radially) to the direction in which the carrier stream is moving. Only radial diffusion is important in a flow injection analysis.
Radial diffusion decreases the sample’s linear velocity at the center of the tubing, while the sample at the edge of the tubing experiences an increase in its linear velocity. Diffusion helps to maintain the integrity of the sample’s flow profile (Figure 13.4.2c) and prevents adjacent samples in the carrier stream from dispersing into one another. Both convection and diffusion make significant contributions to dispersion from approximately 3–20 s after the sample’s injection. This is the normal time scale for a flow injection analysis. After approximately 25 s, diffusion is the only significant contributor to dispersion, resulting in a flow profile similar to that shown in Figure 13.4.2d.

An FIA curve, or fiagram, is a plot of the detector’s signal as a function of time. Figure 13.4.4 shows a typical fiagram for conditions in which both convection and diffusion contribute to the sample’s dispersion. Also shown on the figure are several parameters that characterize a sample’s fiagram. Two parameters define the time for a sample to move from the injector to the detector. Travel time, ta, is the time between the sample’s injection and the arrival of its leading edge at the detector. Residence time, T, on the other hand, is the time required to obtain the maximum signal. The difference between the residence time and the travel time is $t^{\prime}$, which approaches zero when convection is the primary means of dispersion, and increases in value as the contribution from diffusion becomes more important.

The time required for the sample to pass through the detector’s flow cell—and for the signal to return to the baseline—is also described by two parameters. The baseline-to-baseline time, $\Delta t$, is the time between the arrival of the sample’s leading edge and the departure of its trailing edge. The elapsed time between the maximum signal and its return to the baseline is the return time, $T^{\prime}$. The final characteristic parameter of a fiagram is the sample’s peak height, h.

Of the six parameters shown in Figure 13.4.4, the most important are peak height and the return time. Peak height is important because it is directly or indirectly related to the analyte’s concentration. The sensitivity of an FIA method, therefore, is determined by the peak height. The return time is important because it determines the frequency with which we may inject samples. Figure 13.4.5 shows that if we inject a second sample at a time $T^{\prime}$ after we inject the first sample, there is little overlap of the two FIA curves. By injecting samples at intervals of $T^{\prime}$, we obtain the maximum possible sampling rate.

Peak heights and return times are influenced by the dispersion of the sample’s flow profile and by the physical and chemical properties of the flow injection system. Physical parameters that affect h and $T^{\prime}$ include the volume of sample we inject, the flow rate, the length, diameter and geometry of the mixing zone and the reaction zone, and the presence of junctions where separate channels merge together. The kinetics of any chemical reactions between the sample and the reagents in the carrier stream also influence the peak height and return time. Unfortunately, there is no good theory that we can use to consistently predict the peak height and the return time for a given set of physical and chemical parameters. The design of a flow injection analyzer for a particular analytical problem still occurs largely by a process of experimentation.
Nevertheless, we can make some general observations about the effects of physical and chemical parameters. In the absence of chemical effects, we can improve sensitivity—that is, obtain larger peak heights—by injecting larger samples, by increasing the flow rate, by decreasing the length and diameter of the tubing in the mixing zone and the reaction zone, and by merging separate channels before the point where the sample is injected. With the exception of sample volume, we can increase the sampling rate—that is, decrease the return time—by using the same combination of physical parameters. Larger sample volumes, however, lead to longer return times and a decrease in sample throughput. The effect of chemical reactivity depends on whether the species we are monitoring is a reactant or a product. For example, if we are monitoring a reactant, we can improve sensitivity by choosing conditions that decrease the residence time, T, or by adjusting the carrier stream’s composition so that the reaction occurs more slowly. Instrumentation The basic components of a flow injection analyzer are shown in Figure 13.4.6 and include a pump to propel the carrier stream and the reagent streams, a means to inject the sample into the carrier stream, and a detector to monitor the composition of the carrier stream. Connecting these units is a transport system that brings together separate channels and provides time for the sample to mix with the carrier stream and to react with the reagent streams. We also can incorporate separation modules into the transport system. Each of these components is considered in greater detail in this section. Propelling Unit The propelling unit moves the carrier stream through the flow injection analyzer. Although several different propelling units have been used, the most common is a peristaltic pump, which, as shown in Figure 13.4.7 , consists of a set of rollers attached to the outside of a rotating drum. Tubing from the reagent reservoirs fits between the rollers and a fixed plate. As the drum rotates the rollers squeeze the tubing, forcing the contents of the tubing to move in the direction of the rotation. Peristaltic pumps provide a constant flow rate, which is controlled by the drum’s speed of rotation and the inner diameter of the tubing. Flow rates from 0.0005–40 mL/min are possible, which is more than adequate to meet the needs of FIA where flow rates of 0.5–2.5 mL/min are common. One limitation to a peristaltic pump is that it produces a pulsed flow—particularly at higher flow rates—that may lead to oscillations in the signal. Injector The sample, typically 5–200 μL, is injected into the carrier stream. Although syringe injections through a rubber septum are possible, the more common method—as seen in Figure 13.4.6 —is to use a rotary, or loop injector similar to that used in an HPLC. This type of injector provides for a reproducible sample volume and is easily adaptable to automation, an important feature when high sampling rates are needed. Detector The most common detectors for flow injection analysis are the electrochemical and optical detectors used in HPLC. These detectors are discussed in Chapter 12 and are not considered further in this section. FIA detectors also have been designed around the use of ion selective electrodes and atomic absorption spectroscopy. Transport System The heart of a flow injection analyzer is the transport system that brings together the carrier stream, the sample, and any reagents that react with the sample. 
Each reagent stream is considered a separate channel, and all channels must merge before the carrier stream reaches the detector. The complete transport system is called a manifold. The simplest manifold has a single channel, the basic outline of which is shown in Figure 13.4.8. This type of manifold is used for the direct analysis of an analyte that does not require a chemical reaction. In this case the carrier stream serves only as a means for rapidly and reproducibly transporting the sample to the detector. For example, this manifold design has been used for sample introduction in atomic absorption spectroscopy, achieving sampling rates as high as 700 samples/h. A single-channel manifold also is used for determining a sample’s pH or determining the concentration of metal ions using an ion selective electrode.

We can also use the single-channel manifold in Figure 13.4.8 for an analysis in which we monitor the product of a chemical reaction between the sample and a reactant. In this case the carrier stream both transports the sample to the detector and reacts with the sample. Because the sample must mix with the carrier stream, a lower flow rate is used. One example is the determination of chloride in water, which is based on the following sequence of reactions. $\mathrm{Hg}(\mathrm{SCN})_{2}(a q)+2 \mathrm{Cl}^{-}(a q) \rightleftharpoons \: \mathrm{HgCl}_{2}(a q)+2 \mathrm{SCN}^{-}(a q) \nonumber$ $\mathrm{Fe}^{3+}(a q)+\mathrm{SCN}^{-}(a q) \rightleftharpoons \mathrm{Fe}(\mathrm{SCN})^{2+}(a q) \nonumber$ The carrier stream consists of an acidic solution of Hg(SCN)2 and Fe3+. Injecting a sample that contains chloride into the carrier stream displaces thiocyanate from Hg(SCN)2. The displaced thiocyanate then reacts with Fe3+ to form the red-colored Fe(SCN)2+ complex, the absorbance of which is monitored at a wavelength of 480 nm. Sampling rates of approximately 120 samples per hour have been achieved with this system [Hansen, E. H.; Ruzicka, J. J. Chem. Educ. 1979, 56, 677–680].

Most flow injection analyses that include a chemical reaction use a manifold with two or more channels. Including additional channels provides more control over the mixing of reagents and the interaction between the reagents and the sample. Two configurations are possible for a dual-channel system. A dual-channel manifold, such as the one shown in Figure 13.4.9a, is used when the reagents cannot be premixed because of their reactivity. For example, in acidic solutions phosphate reacts with molybdate to form the heteropoly acid H3P(Mo12O40). In the presence of ascorbic acid the molybdenum in the heteropoly acid is reduced from Mo(VI) to Mo(V), forming a blue-colored complex that is monitored spectrophotometrically at 660 nm [Hansen, E. H.; Ruzicka, J. J. Chem. Educ. 1979, 56, 677–680]. Because ascorbic acid reduces molybdate, the two reagents are placed in separate channels that merge just before the loop injector.

A dual-channel manifold also is used to add a second reagent after injecting the sample into a carrier stream, as shown in Figure 13.4.9b. This style of manifold is used for the quantitative analysis of many analytes, including the determination of a wastewater’s chemical oxygen demand (COD) [Korenaga, T.; Ikatsu, H. Anal. Chim. Acta 1982, 141, 301–309]. Chemical oxygen demand is a measure of the amount of organic matter in the wastewater sample.
In the conventional method of analysis, COD is determined by refluxing the sample for 2 h in the presence of acid and a strong oxidizing agent, such as K2Cr2O7 or KMnO4. When refluxing is complete, the amount of oxidant consumed in the reaction is determined by a redox titration. In the flow injection version of this analysis, the sample is injected into a carrier stream of aqueous H2SO4, which merges with a solution of the oxidant from a secondary channel. The oxidation reaction is kinetically slow and, as a result, the mixing coil and the reaction coil are very long—typically 40 m—and submerged in a thermostated bath. The sampling rate is lower than that for most flow injection analyses, but at 10–30 samples/h it is substantially greater than the redox titrimetric method. More complex manifolds involving three or more channels are common, but the possible combination of designs is too numerous to discuss. One example of a four-channel manifold is shown in Figure 13.4.10 . Separation Modules By incorporating a separation module into the flow injection manifold we can include a separation—dialysis, gaseous diffusion and liquid-liquid extractions are examples—in a flow injection analysis. Although these separations are never complete, they are reproducible if we carefully control the experimental conditions. Dialysis and gaseous diffusion are accomplished by placing a semipermeable membrane between the carrier stream containing the sample and an acceptor stream, as shown in Figure 13.4.11 . As the sample stream passes through the separation module, a portion of those species that can cross the semipermeable membrane do so, entering the acceptor stream. This type of separation module is common for the analysis of clinical samples, such as serum and urine, where a dialysis membrane separates the analyte from its complex matrix. Semipermeable gaseous diffusion membranes are used for the determination of ammonia and carbon dioxide in blood. For example, ammonia is determined by injecting the sample into a carrier stream of aqueous NaOH. Ammonia diffuses across the semipermeable membrane into an acceptor stream that contains an acid–base indicator. The resulting acid–base reaction between ammonia and the indicator is monitored spectrophotometrically. Liquid–liquid extractions are accomplished by merging together two immiscible fluids, each carried in a separate channel. The result is a segmented flow through the separation module, consisting of alternating portions of the two phases. At the outlet of the separation module the two fluids are separated by taking advantage of the difference in their densities. Figure 13.4.12 shows a typical configuration for a separation module in which the sample is injected into an aqueous phase and extracted into a less dense organic phase that passes through the detector. Quantitative Applications In a quantitative flow injection method a calibration curve is determined by injecting a series of external standards that contain known concentrations of analyte. The calibration curve’s format—examples include plots of absorbance versus concentration and of potential versus concentration—depends on the method of detection. Calibration curves for standard spectroscopic and electrochemical methods are discussed in Chapter 10 and in Chapter 11, respectively and are not considered further in this chapter. Flow injection analysis has been used to analyze a wide variety of samples, including environmental, clinical, agricultural, industrial, and pharmaceutical samples. 
The majority of analyses involve environmental and clinical samples, which is the focus of this section. Quantitative flow injection methods have been developed for cationic, anionic, and molecular pollutants in wastewater, freshwaters, groundwaters, and marine waters, three examples of which were described in the previous section. Table 13.4.1 provides a partial listing of other analytes that have been determined using FIA, many of which are modifications of standard spectrophotometric and potentiometric methods. An additional advantage of FIA for environmental analysis is the ability to provide for the continuous, in situ monitoring of pollutants in the field [Andrew, K. N.; Blundell, N. J.; Price, D.; Worsfold, P. J. Anal. Chem. 1994, 66, 916A–922A].

Table 13.4.1. Selected Flow Injection Analysis Methods for Environmental Samples
analyte sample sample volume (µL) concentration range sampling frequency (h–1)
Ca2+ freshwater 20 0.8–7.2 ppm 80
Cu2+ groundwater 70–700 100–400 ppb 20
Pb2+ groundwater 70–700 0–40 ppb 20
Zn2+ seawater 1000 1–100 ppb 30–60
$\text{NH}_4^+$ seawater 60 0.18–18.1 ppb 288
$\text{NO}_3^-$ rainwater 1000 1–10 ppb 40
$\text{SO}_4^{2-}$ freshwater 400 4–140 ppb 180
CN– industrial 10 0.3–100 ppm 40
Source: Adapted from Valcárcel, M.; Luque de Castro, M. D. Flow-Injection Analysis: Principles and Practice, Ellis Horwood: Chichester, England, 1987.

As noted in Chapter 9, several standard methods for the analysis of water involve an acid–base, complexation, or redox titration. It is easy to adapt these titrations to FIA using a single-channel manifold similar to that shown in Figure 13.4.8 [Ramsing, A. U.; Ruzicka, J.; Hansen, E. H. Anal. Chim. Acta 1981, 129, 1–17]. The titrant—whose concentration must be stoichiometrically less than that of the analyte—and a visual indicator are placed in the reagent reservoir and pumped continuously through the manifold. When we inject the sample it mixes thoroughly with the titrant in the carrier stream. The reaction between the analyte, which is in excess, and the titrant produces a relatively broad rectangular flow profile for the sample. As the sample moves toward the detector, additional mixing occurs and the width of the sample’s flow profile decreases. When the sample passes through the detector, we determine the width of its flow profile, $\Delta T$, by monitoring the indicator’s absorbance. A calibration curve of $\Delta T$ versus log[analyte] is prepared using standard solutions of analyte.

Flow injection analysis has also found numerous applications in the analysis of clinical samples, using both enzymatic and nonenzymatic methods. Table 13.4.2 summarizes several examples.

Table 13.4.2. Selected Flow Injection Analysis Methods for Clinical Samples
analyte sample sample volume (µL) concentration range sampling frequency (h–1)
nonenzymatic methods
Cu2+ serum 20 0.7–1.5 ppm 70
Cl– serum 60 50–150 meq/L 125
$\text{PO}_4^{3-}$ serum 200 10–60 ppm 130
total CO2 serum 50 10–50 mM 70
chlorpromazine blood plasma 200 1.5–9 $\mu \text{M}$ 24
enzymatic methods
glucose blood serum 26.5 0.5–15 mM 60
urea blood serum 30 4–20 mM 60
ethanol blood 30 5–30 ppm 50
Source: Adapted from Valcárcel, M.; Luque de Castro, M. D. Flow-Injection Analysis: Principles and Practice, Ellis Horwood: Chichester, England, 1987.

The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method.
Although each method is unique, the following description of the determination of phosphate provides an instructive example of a typical procedure. The description here is based on Guy, R. D.; Ramaley, L.; Wentzell, P. D. “An Experiment in the Sampling of Solids for Chemical Analysis,” J. Chem. Educ. 1998, 75, 1028–1033. As the title suggests, the primary focus of the article is on sampling. A flow injection analysis, however, is used to analyze samples.

Representative Method 13.4.1: Determination of Phosphate by FIA

Description of Method

The FIA determination of phosphate is an adaptation of a standard spectrophotometric analysis for phosphate. In the presence of acid, phosphate reacts with ammonium molybdate to form a yellow-colored complex in which molybdenum is present as Mo(VI). $\mathrm{H}_{3} \mathrm{PO}_{4}(a q)+12 \mathrm{H}_{2} \mathrm{MoO}_{4}(a q) \leftrightharpoons \: \mathrm{H}_{3} \mathrm{P}\left(\mathrm{Mo}_{12} \mathrm{O}_{40}\right)(a q)+12 \mathrm{H}_{2} \mathrm{O}(\mathrm{l}) \nonumber$ In the presence of a reducing agent, such as ascorbic acid, the yellow-colored complex is reduced to a blue-colored complex of Mo(V).

Procedure

Prepare the following three solutions: (a) 5.0 mM ammonium molybdate in 0.40 M HNO3; (b) 0.7% w/v ascorbic acid in 1% v/v glycerin; and (c) a 100.0 ppm phosphate standard prepared using KH2PO4. Using the phosphate standard, prepare a set of external standards with phosphate concentrations of 10, 20, 30, 40, 50 and 60 ppm. Use a manifold similar to that shown in Figure 13.4.9a, placing a 50-cm mixing coil between the pump and the loop injector and a 50-cm reaction coil between the loop injector and the detector. For both coils, use PTFE tubing with an internal diameter of 0.8 mm. Set the flow rate to 0.5 mL/min. Prepare a calibration curve by injecting 50 μL of each standard, measuring the absorbance at 650 nm. Samples are analyzed in the same manner.

Questions

1. How long does it take a sample to move from the loop injector to the detector?

The reaction coil is 50-cm long with an internal diameter of 0.8 mm. The volume of this tubing is $V=l \pi r^{2}=50 \mathrm{cm} \times 3.14 \times\left(\frac{0.08 \mathrm{cm}}{2}\right)^{2}=0.25 \mathrm{cm}^{3}=0.25 \mathrm{mL} \nonumber$ With a flow rate of 0.5 mL/min, it takes about 30 s for a sample to pass through the system.

2. The instructions for the standard spectrophotometric method indicate that the absorbance is measured 5–10 min after adding the ascorbic acid. Why is this waiting period necessary in the spectrophotometric method, but not necessary in the FIA method?

The reduction of the yellow-colored Mo(VI) complex to the blue-colored Mo(V) complex is a slow reaction. In the standard spectrophotometric method it is difficult to reproducibly control the time between adding the reagents to the sample and measuring the sample’s absorbance. To achieve good precision we allow the reaction to proceed to completion before we measure the absorbance. As seen by the answer to the previous question, in the FIA method the flow rate and the dimensions of the reaction coil determine the reaction time. Because this time is controlled precisely, the reaction occurs to the same extent for all standards and samples. A shorter reaction time has the advantage of allowing for a higher throughput of samples.

3. The spectrophotometric method recommends using phosphate standards of 2–10 ppm. Explain why the FIA method uses a different range of standards.
In the FIA method we measure the absorbance before the formation of the blue-colored Mo(V) complex is complete. Because the absorbance for any standard solution of phosphate is always smaller when using the FIA method, the FIA method is less sensitive and higher concentrations of phosphate are necessary.

4. How would you incorporate a reagent blank into the FIA analysis?

A reagent blank is obtained by injecting a sample of distilled water in place of the external standard or the sample. The reagent blank’s absorbance is subtracted from the absorbances obtained for the standards and samples.

Example 13.4.1

The following data were obtained for a set of external standards when using Representative Method 13.4.1 to analyze phosphate in a wastewater sample.
[$\text{PO}_4^{3-}$] (ppm) absorbance
10.00 0.079
20.00 0.160
30.00 0.233
40.00 0.316
60.00 0.482
What is the concentration of phosphate in a sample if it gives an absorbance of 0.287?

Solution

Figure 13.4.13 shows the external standards calibration curve and the calibration equation. Substituting in the sample’s absorbance gives the concentration of phosphate in the sample as 36.1 ppm.

Evaluation

The majority of flow injection analysis applications are modifications of conventional titrimetric, spectrophotometric, and electrochemical methods of analysis; thus, it is appropriate to compare FIA methods to these conventional methods. The scale of operations for FIA allows for the routine analysis of minor and trace analytes, and for macro, meso, and micro samples. The ability to work with microliter injection volumes is useful when the sample is scarce. Conventional methods of analysis usually have smaller detection limits.

The accuracy and precision of FIA methods are comparable to conventional methods of analysis; however, the precision of FIA is influenced by several variables that do not affect conventional methods, including the stability of the flow rate and the reproducibility of the sample’s injection. In addition, results from FIA are more susceptible to temperature variations.

In general, the sensitivity of FIA is less than that for conventional methods of analysis for at least two reasons. First, as with chemical kinetic methods, measurements in FIA are made under nonequilibrium conditions when the signal has yet to reach its maximum value. Second, dispersion dilutes the sample as it moves through the manifold. Because the variables that affect sensitivity are known, we can design the FIA manifold to optimize the method’s sensitivity.

Selectivity for an FIA method often is better than that for the corresponding conventional method of analysis. In many cases this is due to the kinetic nature of the measurement process, in which potential interferents may react more slowly than the analyte. Contamination from external sources also is less of a problem because reagents are stored in closed reservoirs and are pumped through a system of transport tubing that is closed to the environment.

Finally, FIA is an attractive technique when considering time, cost, and equipment. When using an autosampler, a flow injection method can achieve very high sampling rates. A sampling rate of 20–120 samples/h is not unusual and sampling rates as high as 1700 samples/h are possible. Because the volume of the flow injection manifold is small, typically less than 2 mL, the consumption of reagents is substantially smaller than that for a conventional method. This can lead to a significant decrease in the cost per analysis.
Flow injection analysis does require additional equipment—a pump, a loop injector, and a manifold—which adds to the cost of an analysis. For a review of the importance of flow injection analysis, see Hansen, E. H.; Miró, M. “How Flow-Injection Analysis (FIA) Over the Past 25 Years has Changed Our Way of Performing Chemical Analyses,” TRAC, Trends Anal. Chem. 2007, 26, 18–26.
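As a worked illustration of the external-standards calibration in Example 13.4.1, the following Python sketch (assuming numpy is available; all names are ours, not part of the method) fits the standards with an unweighted linear least-squares regression and inverts the fit to recover the phosphate concentration reported there.

```python
import numpy as np

# External standards from Example 13.4.1 (ppm phosphate, absorbance)
conc = np.array([10.00, 20.00, 30.00, 40.00, 60.00])
absorbance = np.array([0.079, 0.160, 0.233, 0.316, 0.482])

# Unweighted linear least-squares fit of A = slope*C + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)

def concentration(a_sample):
    """Invert the calibration line to return ppm phosphate."""
    return (a_sample - intercept) / slope

print(round(concentration(0.287), 1))  # ~36.1 ppm
```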
1. Equation 13.2.18 shows how [A]0 is determined using a two-point fixed-time integral method in which the concentration of A for the pseudo-first-order reaction $A+R \longrightarrow P \nonumber$ is measured at times t1 and t2. Derive a similar equation for the case where the product is monitored under pseudo-first order conditions.

2. The concentration of phenylacetate is determined from the kinetics of its pseudo-first order hydrolysis reaction in an ethylamine buffer. When a standard solution of 0.55 mM phenylacetate is analyzed, the concentration of phenylacetate after 60 s is 0.17 mM. When a sample is analyzed the concentration of phenylacetate that remains after 60 s is 0.23 mM. What is the concentration of phenylacetate in the sample?

3. In the presence of acid, iodide is oxidized by hydrogen peroxide $2 \mathrm{I}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}_{2}(a q)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q) \longrightarrow 4 \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{I}_{2}(a q) \nonumber$ When I– and H3O+ are present in excess, we can use the kinetics of the reaction, which is pseudo-first order in H2O2, to determine the concentration of H2O2 by following the production of I2 with time. In one analysis the solution’s absorbance at 348 nm was measured after 240 s. Analysis of a set of standards gives the results shown below.
[H2O2] (µM) absorbance
100.0 0.236
200.0 0.471
400.0 0.933
800.0 1.872
What is the concentration of H2O2 in a sample if its absorbance is 0.669 after 240 s?

4. The concentration of chromic acid is determined by reducing it under conditions that are pseudo-first order in analyte. One approach is to monitor the reaction absorbance at a wavelength of 355 nm. A standard of $5.1 \times 10^{-4}$ M chromic acid yields absorbances of 0.855 and 0.709 at 100 s and 300 s after the reaction’s initiation. When a sample is analyzed under identical conditions, the absorbances are 0.883 and 0.706. What is the concentration of chromic acid in the sample?

5. Malmstadt and Pardue developed a variable time method for the determination of glucose based on its oxidation by the enzyme glucose oxidase [Malmstadt, H. V.; Pardue, H. L. Anal. Chem. 1961, 33, 1040–1047]. To monitor the reaction’s progress, iodide is added to the samples and standards. The H2O2 produced by the oxidation of glucose reacts with I–, forming I2 as a product. The time required to produce a fixed amount of I2 is determined spectrophotometrically. The following data was reported for a set of calibration standards
[glucose] (ppm) time (s)
5.0 146.5 150.0 149.6
10.0 69.2 67.1 66.0
20.0 34.8 35.0 34.0
30.0 22.3 22.7 22.6
40.0 16.7 16.5 17.0
50.0 13.3 13.3 13.8
To verify the method a standard solution of 20.0 ppm glucose was analyzed in the same way as the standards, requiring 34.6 s to produce the same extent of reaction. Determine the concentration of glucose in the standard and the percent error for the analysis.

6. Deming and Pardue studied the kinetics for the hydrolysis of p-nitrophenyl phosphate by the enzyme alkaline phosphatase [Deming, S. N.; Pardue, H. L. Anal. Chem. 1971, 43, 192–200]. The reaction’s progress was monitored by measuring the absorbance of p-nitrophenol, which is one of the reaction’s products. A plot of the reaction’s rate (with units of μmol mL–1 sec–1) versus the volume, V, in milliliters of a serum calibration standard that contained the enzyme, yielded a straight line with the following equation.
$\text { rate } = 2.7 \times 10^{-7} \mu \text{mol } \mathrm{mL}^{-1} \text{ s}^{-1}+\left(3.485 \times 10^{-5} \mu \text{mol } \mathrm{mL}^{-2} \text{ s}^{-1}\right) V \nonumber$ A 10.00-mL sample of serum is analyzed, yielding a rate of $6.84 \times 10^{-5}$ μmol mL–1 sec–1. How much more dilute is the enzyme in the serum sample than in the serum calibration standard?

7. The following data were collected for a reaction known to be pseudo-first order in analyte, A, during the time in which the reaction is monitored.
time (s) [A]t (mM)
2 1.36
4 1.24
6 1.12
8 1.02
10 0.924
12 0.838
14 0.760
16 0.690
18 0.626
20 0.568
What are the rate constant and the initial concentration of analyte in the sample?

8. The enzyme acetylcholinesterase catalyzes the decomposition of acetylcholine to choline and acetic acid. Under a given set of conditions the enzyme has a Km of $9 \times 10^{-5}$ M and a k2 of $1.4 \times 10^4$ s–1. What is the concentration of acetylcholine in a sample if the reaction’s rate is 12.33 μM s–1 in the presence of $6.61 \times 10^{-7}$ M enzyme? You may assume the concentration of acetylcholine is significantly smaller than Km.

9. The enzyme fumarase catalyzes the stereospecific addition of water to fumarate to form l-malate. A standard 0.150 μM solution of fumarase has a rate of reaction of 2.00 μM min–1 under conditions in which the substrate’s concentration is significantly greater than Km. The rate of reaction for a sample under identical conditions is 1.15 mM min–1. What is the concentration of fumarase in the sample?

10. The enzyme urease catalyzes the hydrolysis of urea. The rate of this reaction is determined for a series of solutions in which the concentration of urea is changed while maintaining a fixed urease concentration of 5.0 mM. The following data are obtained.
[urea] (µM) rate (µM s–1)
0.100 6.25
0.200 12.5
0.300 18.8
0.400 25.0
0.500 31.2
0.600 37.5
0.700 43.7
0.800 50.0
0.900 56.2
1.00 62.5
Determine the values of Vmax, k2, and Km for urease.

11. To study the effect of an enzyme inhibitor Vmax and Km are measured for several concentrations of inhibitor. As the concentration of the inhibitor increases Vmax remains essentially constant, but the value of Km increases. Which mechanism for enzyme inhibition is in effect?

12. In the case of competitive inhibition, the equilibrium between the enzyme, E, the inhibitor, I, and the enzyme–inhibitor complex, EI, is described by the equilibrium constant KEI. Show that for competitive inhibition the equation for the rate of reaction is $\frac{d[P]}{d t}=\frac{V_{\max }[S]}{K_{m}\left\{1+\left([I] / K_{E I}\right)\right\}+[S]} \nonumber$ where KEI is the formation constant for the EI complex $E+I \rightleftharpoons E I \nonumber$ You may assume that k2 << k–1.

13. Analytes A and B react with a common reagent R with first-order kinetics. If 99.9% of A must react before 0.1% of B has reacted, what is the minimum acceptable ratio for their respective rate constants?

14. A mixture of two analytes, A and B, is analyzed simultaneously by monitoring their combined concentration, C = [A] + [B], as a function of time when they react with a common reagent. Both A and B are known to follow first-order kinetics with the reagent, and A is known to react faster than B. Given the data in the following table, determine the initial concentrations of A and B, and the first-order rate constants, kA and kB.
time (min) [C] (mM) 1 0.313 6 0.200 11 0.136 16 0.098 21 0.074 26 0.058 31 0.047 36 0.038 41 0.032 46 0.027 51 0.023 56 0.019 61 0.016 66 0.014 71 0.012 15. Table 13.3.1 provides a list of several isotopes used as tracers. The half-lives for these isotopes also are listed. What is the rate constant for the radioactive decay of each isotope? 16. 60Co is a long-lived isotope (t1/2 = 5.3 yr) frequently used as a radiotracer. The activity in a 5.00-mL sample of a solution of 60Co is $2.1 \times 10^7$ disintegrations/sec. What is the molar concentration of 60Co in the sample? 17. The concentration of Ni in a new alloy is determined by a neutron activation analysis. A 0.500-g sample of the alloy and a 1.000-g sample of a standard alloy that is 5.93% w/w Ni are irradiated with neutrons in a nuclear reactor. When irradiation is complete, the sample and the standard are allowed to cool and their gamma ray activities measured. Given that the activity is 1020 cpm for the sample and 3540 cpm for the standard, determine the %w/w Ni in the alloy. 18. The vitamin B12 content of a multivitamin tablet is determined by the following procedure. A sample of 10 tablets is dissolved in water and diluted to volume in a 100-mL volumetric flask. A 50.00-mL portion is removed and 0.500 mg of radioactive vitamin B12 having an activity of 572 cpm is added as a tracer. The sample and tracer are homogenized and the vitamin B12 isolated and purified, producing 18.6 mg with an activity of 361 cpm. Calculate the milligrams of vitamin B12 in a multivitamin tablet. 19. The oldest sample that can be dated by 14C is approximately 30 000 yr. What percentage of the 14C remains after this time span? 20. Potassium–argon dating is based on the nuclear decay of 40K to 40Ar (t1/2 = $1.3 \times 10^9$ yr). If no 40Ar is originally present in the rock, and if 40Ar cannot escape to the atmosphere, then the relative amounts of 40K and 40Ar can be used to determine the age of the rock. When a 100.0-mg rock sample is analyzed it is found to contain $4.63 \times 10^{-6}$ mol of 40K and $2.09 \times 10^{-6}$ mol 40Ar. How old is the rock sample? 21. The steady state activity for 14C in a sample is 13 cpm per gram of carbon. If counting is limited to 1 hr, what mass of carbon is needed to give a percent relative standard deviation of 1% for the sample’s activity? How long must we monitor the radioactive decay from a 0.50-g sample of carbon to give a percent relative standard deviation of 1.0% for the activity? 22. To improve the sensitivity of a FIA analysis you might do any of the following: inject a larger volume of sample, increase the flow rate, decrease the length and the diameter of the manifold’s tubing, or merge separate channels before injecting the sample. For each action, explain why it leads to an improvement in sensitivity. 23. The figure below shows a fiagram for a solution of 50.0-ppm $\text{PO}_4^{3-}$ using the method in Representative Method 13.4.1. Determine values for h, ta, T, t ′, $\Delta t$, and T ′. What is the sensitivity of this FIA method, assuming a linear relationship between absorbance and concentration? How many samples can be analyzed per hour? 24. A sensitive method for the flow injection analysis of Cu2+ is based on its ability to catalyze the oxidation of di-2-pyridyl ketone hydrazone (DPKH) [Lazaro, F.; Luque de Castro, M. D.; Valcárcel, M. Analyst, 1984, 109, 333–337]. The product of the reaction is fluorescent and is used to generate a signal when using a fluorimeter as a detector. 
The yield of the reaction is at a maximum when the solution is made basic with NaOH. The fluorescence, however, is greatest in the presence of HCl. Sketch an appropriate FIA manifold for this analysis.

25. The concentration of chloride in seawater is determined by a flow injection analysis. The analysis of a set of calibration standards gives the following results.
[Cl–] (ppm) absorbance
5.00 0.057
10.00 0.099
20.00 0.230
30.00 0.354
40.00 0.478
50.00 0.594
75.00 0.840
A 1.00-mL sample of seawater is placed in a 500-mL volumetric flask and diluted to volume with distilled water. When injected into the flow injection analyzer an absorbance of 0.317 is measured. What is the concentration of Cl– in the sample?

26. Ramsing and co-workers developed an FIA method for acid–base titrations using a carrier stream that is $2.0 \times 10^{-3}$ M NaOH and that contains the acid–base indicator bromothymol blue [Ramsing, A. U.; Ruzicka, J.; Hansen, E. H. Anal. Chim. Acta 1981, 129, 1–17]. Standard solutions of HCl were injected, and the following values of $\Delta t$ were measured from the resulting fiagrams.
[HCl] (M) $\Delta t$ (s)
0.008 3.13
0.010 3.59
0.020 5.11
0.040 6.39
0.060 7.06
0.080 7.71
0.100 8.13
0.200 9.27
0.400 10.45
0.600 11.40
A sample with an unknown concentration of HCl is analyzed five times, giving values of 7.43, 7.28, 7.41, 7.37, and 7.33 s for $\Delta t$. Determine the concentration of HCl in the sample.

27. Milardović and colleagues used a flow injection analysis method with an amperometric biosensor to determine the concentration of glucose in blood [Milardović, S.; Kruhak, I.; Ivekovic, D.; Rumenjak, V.; Tkalčec, M.; Grabaric, B. S. Anal. Chim. Acta 1997, 350, 91–96]. Given that a blood sample that is 6.93 mM in glucose has a signal of 7.13 nA, what is the concentration of glucose in a sample of blood if its signal is 11.50 nA?

28. Fernández-Abedul and Costa-García developed an FIA method to determine cocaine in samples using an amperometric detector [Fernández-Abedul, M.; Costa-García, A. Anal. Chim. Acta 1996, 328, 67–71]. The following signals (arbitrary units) were collected for 12 replicate injections of a $6.2 \times 10^{-6}$ M sample of cocaine, C17H21NO4.
24.5 24.1 24.1 23.8 23.9 25.1 23.9 24.8 23.7 23.3 23.2 23.2
(a) What is the relative standard deviation for this sample?
(b) The following calibration data are available
[cocaine] (µM) signal (arb. units)
0.18 0.8
0.36 2.1
0.60 2.4
0.81 3.2
1.0 4.5
2.0 8.1
4.0 14.4
6.0 21.6
8.0 27.1
10.0 32.9
In a typical analysis a 10.0-mg sample is dissolved in water and diluted to volume in a 25-mL volumetric flask. A 125-μL aliquot is transferred to a 25-mL volumetric flask and diluted to volume with a pH 9 buffer. When injected into the flow injection apparatus a signal of 21.4 (arb. units) is obtained. What is the %w/w cocaine in the sample?

29. Holman, Christian, and Ruzicka described an FIA method to determine the concentration of H2SO4 in nonaqueous solvents [Holman, D. A.; Christian, G. D.; Ruzicka, J. Anal. Chem. 1997, 69, 1763–1765]. Agarose beads (22–45 μm diameter) with a bonded acid–base indicator are soaked in NaOH and immobilized in the detector’s flow cell. Samples of H2SO4 in n-butanol are injected into the carrier stream. As a sample passes through the flow cell, an acid–base reaction takes place between H2SO4 and NaOH. The endpoint of the neutralization reaction is signaled by a change in the bound indicator’s color and is detected spectrophotometrically.
The elution volume needed to reach the titration’s endpoint is inversely proportional to the concentration of H2SO4; thus, a plot of endpoint volume versus [H2SO4]–1 is linear. The following data is typical of that obtained using a set of external standards.
[H2SO4] (mM) endpoint volume (mL)
0.358 0.266
0.436 0.227
0.560 0.176
0.752 0.136
1.38 0.075
2.98 0.037
5.62 0.017
What is the concentration of H2SO4 in a sample if its endpoint volume is 0.157 mL?
The following set of experiments introduces students to the applications of chemical kinetic methods, including enzyme kinetic methods, and flow injection analysis.

Chemical Kinetic Methods

• Abramovitch, D. A.; Cunningham, L. K.; Litwer, M. R. “Decomposition Kinetics of Hydrogen Peroxide: Novel Lab Experiments Employing Computer Technology,” J. Chem. Educ. 2003, 80, 790–792.
• Antuch, M.; Ramos, Y.; Álvarez, R. “Simulated Analysis of Linear Reversible Enzyme Inhibition with SCILAB,” J. Chem. Educ. 2014, 91, 1203–1206.
• Bateman, Jr. R. C.; Evans, J. A. “Using the Glucose Oxidase/Peroxidase Systems in Enzyme Kinetics,” J. Chem. Educ. 1995, 72, A240–A241.
• Bendinskas, K.; DiJacomo, C.; Krill, A.; Vitz, E. “Kinetics of Alcohol Dehydrogenase-Catalyzed Oxidation of Ethanol Followed by Visible Spectroscopy,” J. Chem. Educ. 2005, 82, 1068–1070.
• Clark, C. R. “A Stopped-Flow Kinetics Experiment for Advanced Undergraduate Laboratories: Formation of Iron(III) Thiocyanate,” J. Chem. Educ. 1997, 74, 1214–1217.
• Diamandis, E. P.; Koupparis, M. A.; Hadjiionnou, T. P. “Kinetic Studies with Ion-Selective Electrodes: Determination of Creatinine in Urine with a Picrate Ion-Selective Electrode,” J. Chem. Educ. 1983, 60, 74–76.
• Dias, A. A.; Pinto, P. A.; Fraga, I.; Bezerra, R. M. F. “Diagnosis of Enzyme Inhibition Using Excel Solver: A Combined Dry and Wet Laboratory Exercise,” J. Chem. Educ. 2014, 91, 1017–1021.
• El Seoud, O. A.; Galgano, P. D.; Arêas, E. P. G.; Moraes, J. M. “Learning Chemistry from Good and (Why Not?) Problematic Results: Kinetics of the pH-Independent Hydrolysis of 4-Nitrophenyl Chloroformate,” J. Chem. Educ. 2015, 92, 752–756.
• Frey, M. W.; Frey, S. T.; Soltau, S. R. “Exploring the pH Dependence of L-leucine-p-nitroanilide Cleavage by Aminopeptidase Aeromonas Proteolytica: A Combined Buffer-Enzyme Kinetics Experiment for the General Chemistry Laboratory,” Chem. Educator 2010, 15, 117–120.
• Gooding, J. J.; Yang, W.; Situmorang, M. “Bioanalytical Experiments for the Undergraduate Laboratory: Monitoring Glucose in Sport Drinks,” J. Chem. Educ. 2001, 78, 788–790.
• Hamilton, T. M.; Dobie-Galuska, A. A.; Wietstock, S. M. “The o-Phenylenediamine-Horseradish Peroxidase System: Enzyme Kinetics in the General Chemistry Lab,” J. Chem. Educ. 1999, 76, 642–644.
• Johnson, K. A. “Factors Affecting Reaction Kinetics of Glucose Oxidase,” J. Chem. Educ. 2002, 79, 74–76.
• Mowry, S.; Ogren, P. J. “Kinetics of Methylene Blue Reduction by Ascorbic Acid,” J. Chem. Educ. 1999, 76, 970–974.
• Nyasulu, F. W.; Barlag, R. “Gas Pressure Sensor Monitored Iodide-Catalyzed Decomposition Kinetics of Hydrogen Peroxide: An Initial Rate Approach,” Chem. Educator 2008, 13, 227–230.
• Nyasulu, F. W.; Barlag, R. “Thermokinetics: Iodide-Catalyzed Decomposition Kinetics of Hydrogen Peroxide; An Integrated Rate Approach,” Chem. Educator 2010, 15, 168–170.
• Pandey, S.; McHale, M. E. R.; Horton, A. M.; Padilla, S. A.; Trufant, A. L.; De La Sancha, N. U.; Vela, E.; Acree, Jr., W. E. “Kinetics-Based Indirect Spectrophotometric Method for the Simultaneous Determination of $\text{MnO}_4^-$ and $\text{Cr}_2 \text{O}_7^{2-}$,” J. Chem. Educ. 1998, 75, 450–452.
• Stock, E.; Morgan, M. “A Spectroscopic Analysis of the Kinetics of the Iodine Clock Reaction without Starch,” Chem. Educator 2010, 15, 158–161.
• Vasilarou, A.-M. G.; Georgiou, C. A. “Enzymatic Spectrophotometric Reaction Rate Determination of Glucose in Fruit Drinks and Carbonated Beverages,” J. Chem. Educ. 2000, 77, 1327–1329.
• Williams, K.
R.; Adhyaru, B.; Timofeev, J.; Blankenship, M. K. “Decomposition of Aspartame. A Kinetics Experiment for Upper-Level Chemistry Laboratories,” J. Chem. Educ. 2005, 82, 924–925. Flow Injection Methods • Carroll, M. K.; Tyson, J. F. “An Experiment Using Time-Based Detection in Flow Injection Analysis,” J. Chem. Educ. 1993, 70, A210–A216. • ConceiÇão, A. C. L.; Minas da Piedade, M. E. “Determination of Acidity Constants by Gradient Flow-Injection Titration,” J. Chem. Educ. 2006, 83, 1853–1856. • Hansen, E. H.; Ruzicka, J. “The Principles of Flow Injection Analysis as Demonstrated by Three Lab Exercises,” J. Chem. Educ. 1979, 56, 677–680. • McKelvie, I. D.; Cardwell, T. J.; Cattrall, R. W. “A Microconduit Flow Injection Analysis Demonstration using a 35-mm Slide Projector,” J. Chem. Educ. 1990, 67, 262–263. • Meyerhoff, M. E.; Kovach, P. M. “An Ion-Selective Electrode/Flow Injection Analysis Experiment: Determination of Potassium in Serum,” J. Chem. Educ. 1983, 60, 766–768. • Nóbrega, J. A.; Rocha, F. R. P. “Ionic Strength Effect on the Rate of Reduction of Hexacyanoferrate(II) by Ascorbic Acid,” J. Chem. Educ. 1997, 74, 560–562. • Ríos, A.; Luque de Castro, M.; Valcárcel, M. “Determination of Reaction Stoichiometries by Flow Injection Analysis,” J. Chem. Educ. 1986, 63, 552–553. • Stults, C. L. M.; Wade, A. P.; Crouch, S. R. “Investigation of Temperature Effects on Dispersion in a Flow Injection Analyzer,” J. Chem. Educ. 1988, 65, 645–647. • Wolfe, C. A. C.; Oates, M. R.; Hage, D. S. “Automated Protein Assay Using Flow Injection Analysis,” J. Chem. Educ. 1998, 75, 1025–1028. The following sources provides a general review of the importance of chemical kinetics in analytical chemistry. • Bergmyer, H. U.; Grassl, M. Methods of Enzymatic Analysis, Verlag Chemie: Deerfield Beach, FL, 3rd Ed., 1983. • Doménech-Carbó, A. “Dating: An Analytical Task,” ChemTexts 2015, 1:5. • Laitinen, H. A.; Ewing, G. W., eds., A History of Analytical Chemistry, The Division of Analytical Chemistry of the American Chemical Society: Washington, D. C., 1977, pp. 97–102. • Malmstadt, H. V.; Delaney, C. J.; Cordos, E. A. “Reaction-Rate Methods of Chemical Analysis,” Crit. Rev. Anal. Chem. 1972, 2, 559–619. • Mark, H. B.; Rechnitz, G. A. Kinetics in Analytical Chemistry, Wiley: New York, 1968. • Mottola, H. A. “Catalytic and Differential Reaction-Rate Methods of Chemical Analysis,” Crit. Rev. Anal. Chem. 1974, 4, 229–280. • Mottola, H. A. “Some Kinetic Aspects Relevant to Contemporary Analytical Chemistry,” J. Chem. Educ. 1981, 58, 399–403. • Mottola, H. A. Kinetic Aspects of Analytical Chemistry, Wiley: New York, 1988. • Pardue, H. L. “A Comprehensive Classification of Kinetic Methods of Analysis Used in Clinical Chemistry,” Clin. Chem. 1977, 23, 2189–2201. • Pardue, H. L. “Kinetic Aspects of Analytical Chemistry,” Anal. Chim. Acta, 1989, 216, 69–107. • Perez-Bendito, D.; Silva, M. Kinetic Methods in Analytical Chemistry, Ellis Horwood: Chichester, 1988. • Pisakiewicz, D. Kinetics of Chemical and Enzyme-Catalyzed Reactions, Oxford University Press: New York, 1977. The following instrumental analysis textbooks may be consulted for further information on the detectors and signal analyzers used in radiochemical methods of analysis. • Skoog, D. A.; Holler, F. J.; Nieman, T. A. Principles of Instrumental Analysis, 5th Ed., Saunders College Publishing/Harcourt Brace and Co.: Philadelphia., 1998, Chapter 32. • Strobel, H. A.; Heineman, W. R. 
Chemical Instrumentation: A Systematic Approach, 3rd Ed., Wiley-Interscience: New York, 1989. The following resources provide additional information on the theory and application of flow injection analysis. • Andrew, K. N.; Blundell, N. J.; Price, D.; Worsfold, P. J. “Flow Injection Techniques for Water Monitoring,” Anal. Chem. 1994, 66, 916A–922A. • Betteridge, D. “Flow Injection Analysis,” Anal. Chem. 1978, 50, 832A–846A. • Kowalski, B. R.; Ruzicka, J.; Christian, G. D. “Flow Chemography - The Future of Chemical Education,” Trends Anal. Chem. 1990, 9, 8–13. • Mottola, H. A. “Continuous Flow Analysis Revisited,” Anal. Chem. 1981, 53, 1312A–1316A. • Ruzicka, J. “Flow Injection Analysis: From Test Tube to Integrated Microconduits,” Anal. Chem. 1983, 55, 1040A–1053A. • Ruzicka, J.; Hansen, E. H. Flow-Injection Analysis, Wiley-Interscience: New York, 1989. • Ruzicka, J.; Hansen, E. H. “Retro-Review of Flow-Injection Analysis,” Trends Anal. Chem. 2008, 27, 390–393. • Silvestre, C. I. C.; Santos, J. L. M.; Lima, J. L. F. C.; Zagatto, E. A. G. “Liquid-Liquid Extraction in Flow Analysis: A Critical Review,” Anal. Chim. Acta 2009, 652, 54–65. • Stewart, K. K. “Flow Injection Analysis: New Tools for Old Assays, New Approaches to Analytical Measurements,” Anal. Chem. 1983, 55, 931A–940A. • Tyson, J. F. “Atomic Spectrometry and Flow Injection Analysis: A Synergic Combination,” Anal. Chim. Acta, 1988, 214, 57–75. • Valcarcel, M.; Luque de Castro, M. D. Flow-Injection Analysis: Principles and Applications, Ellis Horwood: Chichester, England, 1987. 13.07: Chapter Summary and Key Terms Chapter Summary Kinetic methods of analysis use the rate of a chemical or physical process to determine an analyte’s concentration. Three types of kinetic methods are discussed in this chapter: chemical kinetic methods, radiochemical methods, and flow injection methods. Chemical kinetic methods use the rate of a chemical reaction and either its integrated or its differential rate law. For an integral method, we determine the concentration of analyte—or the concentration of a reactant or product that is related stoichiometrically to the analyte—at one or more points in time following the reaction’s initiation. The initial concentration of analyte is then determined using the integrated form of the reaction’s rate law. Alternatively, we can measure the time required to effect a given change in concentration. In a differential kinetic method we measure the rate of the reaction at a time t, and use the differential form of the rate law to determine the analyte’s concentration. Chemical kinetic methods are particularly useful for reactions that are too slow for other analytical methods. For reactions with fast kinetics, automation allows for sampling rates of more than 100 samples/h. Another important application of chemical kinetic methods is the quantitative analysis of enzymes and their substrates, and the characterization of enzyme catalysis. Radiochemical methods of analysis take advantage of the decay of radioactive isotopes. A direct measurement of the rate at which a radioactive isotope decays is used to determine its concentration. For an analyte that is not naturally radioactive, neutron activation can be used to induce radioactivity. Isotope dilution, in which we spike a radioactively-labeled form of analyte into the sample, is used as an internal standard for quantitative work. In flow injection analysis we inject the sample into a flowing carrier stream that usually merges with additional streams of reagents.
As the sample moves with the carrier stream it both reacts with the contents of the carrier stream and with any additional reagent streams, and undergoes dispersion. The resulting fiagram of signal versus time bears some resemblance to a chromatogram. Unlike chromatography, however, flow injection analysis is not a separation technique. Because all components in a sample move with the carrier stream’s flow rate, it is possible to introduce a second sample before the first sample reaches the detector. As a result, flow injection analysis is ideally suited for the rapid throughput of samples. Key Terms alpha particle, beta particle, centrifugal analyzer, competitive inhibitor, curve-fitting method, enzyme, equilibrium method, fiagram, flow injection analysis, gamma ray, Geiger counter, half-life, inhibitor, initial rate, integrated rate law, intermediate rate, isotope, isotope dilution, kinetic method, Lineweaver-Burk plot, manifold, Michaelis constant, negatron, neutron activation, noncompetitive inhibitor, one-point fixed-time integral method, peristaltic pump, positron, quench, rate, rate constant, rate law, rate method, scintillation counter, steady-state approximation, stopped-flow analyzer, substrate, tracer, two-point fixed-time integral method, uncompetitive inhibitor, variable time integral method
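As a brief numerical recap of the one-point fixed-time integral method summarized above, the following R sketch recovers an analyte’s initial concentration from a single measurement made at a fixed time after the reaction begins; it assumes a reaction that is first-order in the analyte, and the rate constant, measurement time, and measured concentration are illustrative values only, not data from this chapter.
# one-point fixed-time integral method for a reaction first-order in analyte A:
# [A]_t = [A]_0 * exp(-k*t), which rearranges to [A]_0 = [A]_t * exp(k*t)
k   <- 0.0350     # rate constant, 1/s (illustrative value)
t   <- 60         # fixed measurement time, s (illustrative value)
A_t <- 1.2e-4     # analyte concentration measured at time t, mol/L (illustrative value)
A_0 <- A_t * exp(k * t)
A_0               # the analyte's initial concentration, approximately 9.8e-4 mol/L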
In Chapter 1 we made a distinction between analytical chemistry and chemical analysis. Among the goals of analytical chemistry are improving established methods of analysis, extending existing methods of analysis to new types of samples, and developing new analytical methods. Once we develop a new method, its routine application is best described as chemical analysis. We recognize the status of these established methods by calling them standard methods. Numerous examples of standard methods are presented and discussed in Chapters 8–13. What we have yet to consider is what constitutes a standard method. In this chapter we discuss how we develop a standard method, including optimizing the experimental procedure, verifying that the method produces acceptable precision and accuracy in the hands of a single analyst, and validating the method for general use. • 14.1: Optimizing the Experimental Procedure One of the most effective ways to think about an optimization is to visualize how a system’s response changes when we increase or decrease the levels of one or more of its factors. We call a plot of the system’s response as a function of the factor levels a response surface. • 14.2: Verifying the Method After developing and optimizing a method, the next step is to determine how well it works in the hands of a single analyst. Three steps make up this process: determining single-operator characteristics, completing a blind analysis of standards, and determining the method’s ruggedness. • 14.3: Validating the Method as a Standard Method For an analytical method to be useful, an analyst must be able to achieve results of acceptable accuracy and precision. Verifying a method, as described in the previous section, establishes this goal for a single analyst. The process by which we approve a method for general use is known as validation and it involves a collaborative test of the method by analysts in several laboratories. • 14.4: Using Excel and R for an Analysis of Variance Although the calculations for an analysis of variance are relatively straightforward, they become tedious when working with large data sets. Both Excel and R include functions for completing an analysis of variance. In addition, R provides a function for identifying the source(s) of significant differences within the data set. • 14.5: Problems End-of-chapter problems to test your understanding of topics in this chapter. • 14.6: Additional Resources A compendium of resources to accompany topics in this chapter. • 14.7: Chapter Summary and Key Terms Summary of chapter's main topics and a list of key terms introduced in the chapter. 14: Developing a Standard Method In the presence of H2O2 and H2SO4, a solution of vanadium forms a reddish brown color that is believed to be a compound with the general formula (VO)2(SO4)3. The intensity of the solution’s color depends on the concentration of vanadium, which means we can use its absorbance at a wavelength of 450 nm to develop a quantitative method for vanadium. The intensity of the solution’s color also depends on the amounts of H2O2 and H2SO4 that we add to the sample—in particular, a large excess of H2O2 decreases the solution’s absorbance as it changes from a reddish brown color to a yellowish color [Vogel’s Textbook of Quantitative Inorganic Analysis, Longman: London, 1978, p. 752.]. Developing a standard method for vanadium based on this reaction requires that we optimize the amount of H2O2 and H2SO4 added to maximize the absorbance at 450 nm.
Using the terminology of statisticians, we call the solution’s absorbance the system’s response. Hydrogen peroxide and sulfuric acid are factors whose concentrations, or factor levels, determine the system’s response. To optimize the method we need to find the best combination of factor levels. Usually we seek a maximum response, as is the case for the quantitative analysis of vanadium as (VO)2(SO4)3. In other situations, such as minimizing an analysis’s percent error, we seek a minimum response. We will return to this analytical method for vanadium in Example 14.1.4 and Problem 11 from the end-of-chapter problems. Response Surfaces One of the most effective ways to think about an optimization is to visualize how a system’s response changes when we increase or decrease the levels of one or more of its factors. We call a plot of the system’s response as a function of the factor levels a response surface. The simplest response surface has one factor and is drawn in two dimensions by placing the responses on the y-axis and the factor’s levels on the x-axis. The calibration curve in Figure 14.1.1 is an example of a one-factor response surface. We also can define the response surface mathematically. The response surface in Figure 14.1.1 , for example, is $A = 0.008 + 0.0896C_A \nonumber$ where A is the absorbance and CA is the analyte’s concentration in ppm. For a two-factor system, such as the quantitative analysis for vanadium described earlier, the response surface is a flat or curved plane in three dimensions. As shown in Figure 14.1.2 a, we place the response on the z-axis and the factor levels on the x-axis and the y-axis. Figure 14.1.2 a shows a pseudo-three dimensional wireframe plot for a system that obeys the equation $R = 3.0 - 0.30A + 0.020AB \nonumber$ where R is the response, and A and B are the factors. We also can represent a two-factor response surface using the two-dimensional level plot in Figure 14.1.2 b, which uses a color gradient to show the response on a two-dimensional grid, or using the two-dimensional contour plot in Figure 14.1.2 c, which uses contour lines to display the response surface. We also can overlay a level plot and a contour plot. See Figure 14.1.7 b for a typical example. The response surfaces in Figure 14.1.2 cover a limited range of factor levels (0 ≤ A ≤ 10, 0 ≤ B ≤ 10), but we can extend each to more positive or to more negative values because there are no constraints on the factors. Most response surfaces of interest to an analytical chemist have natural constraints imposed by the factors, or have practical limits set by the analyst. The response surface in Figure 14.1.1 , for example, has a natural constraint on its factor because the analyte’s concentration cannot be less than zero. We express this constraint as CA ≥ 0. If we have an equation for the response surface, then it is relatively easy to find the optimum response. Unfortunately, when developing a new analytical method, we rarely know any useful details about the response surface. Instead, we must determine the response surface’s shape and locate its optimum response by running appropriate experiments. The focus of this section is on useful experimental methods for characterizing a response surface. 
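Before turning to those experimental methods, note that when we do know the equation for a response surface we can examine it directly. The following R sketch evaluates the two-factor surface R = 3.0 - 0.30A + 0.020AB described above over a grid of factor levels and displays it as a contour plot and as a level plot; the grid limits and step size are arbitrary choices made here for illustration.
# evaluate the two-factor response surface R = 3.0 - 0.30*A + 0.020*A*B
# over a grid of factor levels and display it graphically
response <- function(A, B) 3.0 - 0.30 * A + 0.020 * A * B
A <- seq(0, 10, by = 0.25)                     # levels for factor A
B <- seq(0, 10, by = 0.25)                     # levels for factor B
R <- outer(A, B, response)                     # matrix of responses
contour(A, B, R, xlab = "factor A", ylab = "factor B")         # contour plot
filled.contour(A, B, R, xlab = "factor A", ylab = "factor B")  # level plot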
These experimental methods are divided into two broad categories: searching methods, in which an algorithm guides a systematic search for the optimum response, and modeling methods, in which we use a theoretical model or an empirical model of the response surface to predict the optimum response. Searching Algorithms for Response Surfaces Figure 14.1.3 shows a portion of the South Dakota Badlands, a barren landscape that includes many narrow ridges formed through erosion. Suppose you wish to climb to the highest point on this ridge. Because the shortest path to the summit is not obvious, you might adopt the following simple rule: look around you and take one step in the direction that has the greatest change in elevation, and then repeat until no further step is possible. The route you follow is the result of a systematic search that uses a searching algorithm. Of course there are as many possible routes as there are starting points, three examples of which are shown in Figure 14.1.3 . Note that some routes do not reach the highest point—what we call the global optimum. Instead, many routes reach a local optimum from which further movement is impossible. We can use a systematic searching algorithm to locate the optimum response for an analytical method. We begin by selecting an initial set of factor levels and measure the response. Next, we apply the rules of our searching algorithm to determine a new set of factor levels and measure its response, continuing this process until we reach an optimum response. Before we consider two common searching algorithms, let’s consider how we evaluate a searching algorithm. Effectiveness and Efficiency A searching algorithm is characterized by its effectiveness and its efficiency. To be effective, a searching algorithm must find the response surface’s global optimum, or at least reach a point near the global optimum. A searching algorithm may fail to find the global optimum for several reasons, including a poorly designed algorithm, uncertainty in measuring the response, and the presence of local optima. Let’s consider each of these potential problems. A poorly designed algorithm may prematurely end the search before it reaches the response surface’s global optimum. As shown in Figure 14.1.4 , when climbing a ridge that slopes up to the northeast, an algorithm is likely to fail if it limits your steps only to the north, south, east, or west. An algorithm that cannot respond to a change in the direction of steepest ascent is not an effective algorithm. All measurements contain uncertainty, or noise, that affects our ability to characterize the underlying signal. When the noise is greater than the local change in the signal, then a searching algorithm is likely to end before it reaches the global optimum. Figure 14.1.5 provides a different view of Figure 14.1.3 , which shows us that the relatively flat terrain leading up to the ridge is heavily weathered and very uneven. Because the variation in local height (the noise) exceeds the slope (the signal), our searching algorithm ends the first time we step up onto a less weathered local surface. Finally, a response surface may contain several local optima, only one of which is the global optimum. If we begin the search near a local optimum, our searching algorithm may never reach the global optimum. The ridge in Figure 14.1.3 , for example, has many peaks. Only those searches that begin at the far right will reach the highest point on the ridge.
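The step-by-step climbing rule described above also is easy to express in code. The following R sketch performs a simple steepest-ascent search on a smooth two-factor surface (the same one that appears later in this section as Equation \ref{14.1}, with its optimum at (2, 8)). The step size, the starting point, and the restriction to four possible directions are arbitrary choices made here for illustration; as discussed above, such a simple rule can stall at a local optimum or when the noise in the response exceeds the local change in the signal.
# steepest-ascent search: evaluate the response one step away in each of four
# directions, move to the best neighbor, and stop when no neighbor improves R
response <- function(A, B) 2.0 + 0.12 * A + 0.48 * B - 0.03 * A^2 - 0.03 * B^2
step  <- 0.5                                  # step size (arbitrary choice)
point <- c(0, 0)                              # starting factor levels (arbitrary choice)
repeat {
  moves  <- rbind(c(step, 0), c(-step, 0), c(0, step), c(0, -step))
  trials <- sweep(moves, 2, point, "+")       # the four neighboring points
  R_new  <- apply(trials, 1, function(p) response(p[1], p[2]))
  if (max(R_new) <= response(point[1], point[2])) break
  point <- trials[which.max(R_new), ]
}
point                                         # ends at the optimum, (2, 8)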
Ideally, a searching algorithm should reach the global optimum regardless of where it starts. A searching algorithm always reaches an optimum. Our problem, of course, is that we do not know if it is the global optimum. One method for evaluating a searching algorithm’s effectiveness is to use several sets of initial factor levels, find the optimum response for each, and compare the results. If we arrive at or near the same optimum response after starting from very different locations on the response surface, then we are more confident that it is the global optimum. Efficiency is a searching algorithm’s second desirable characteristic. An efficient algorithm moves from the initial set of factor levels to the optimum response in as few steps as possible. In seeking the highest point on the ridge in Figure 14.1.5 , we can increase the rate at which we approach the optimum by taking larger steps. If the step size is too large, however, the difference between the experimental optimum and the true optimum may be unacceptably large. One solution is to adjust the step size during the search, using larger steps at the beginning and smaller steps as we approach the global optimum. One-Factor-at-a-Time Optimization A simple algorithm for optimizing the quantitative method for vanadium described earlier is to select initial concentrations for H2O2 and H2SO4 and measure the absorbance. Next, we optimize one reagent by increasing or decreasing its concentration—holding constant the second reagent’s concentration—until the absorbance decreases. We then vary the concentration of the second reagent—maintaining the first reagent’s optimum concentration—until we no longer see an increase in the absorbance. We can stop this process, which we call a one-factor-at-a-time optimization, after one cycle or repeat the steps until the absorbance reaches a maximum value or it exceeds an acceptable threshold value. A one-factor-at-a-time optimization is consistent with a notion that to determine the influence of one factor we must hold constant all other factors. This is an effective, although not necessarily an efficient experimental design when the factors are independent [Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986]. Two factors are independent when a change in the level of one factor does not influence the effect of a change in the other factor’s level. Table 14.1.1 provides an example of two independent factors. Table 14.1.1 . Example of Two Independent Factors factor A factor B response $A_1$ $B_1$ 40 $A_2$ $B_1$ 80 $A_1$ $B_2$ 60 $A_2$ $B_2$ 100 If we hold factor B at level B1, changing factor A from level A1 to level A2 increases the response from 40 to 80, or a change in response, $\Delta R$, of $\Delta R = 80 - 40 = 40 \nonumber$ If we hold factor B at level B2, we find that we have the same change in response when the level of factor A changes from A1 to A2. $\Delta R = 100 - 60 = 40 \nonumber$ We can see this independence visually if we plot the response as a function of factor A’s level, as shown in Figure 14.1.6 . The parallel lines show that the level of factor B does not influence factor A’s effect on the response. Exercise 14.1.1 Using the data in Table 14.1.1 , show that the effect of factor B on the response is independent of factor A.
Answer If we hold factor A at level A1, changing factor B from level B1 to level B2 increases the response from 40 to 60, or a change, $\Delta R$, of $\Delta R = 60 - 40 = 20 \nonumber$ If we hold factor A at level A2, we find that we have the same change in response when the level of factor B changes from B1 to B2. $\Delta R = 100 - 80 = 20 \nonumber$ Mathematically, two factors are independent if they do not appear in the same term in the equation that describes the response surface. Equation \ref{14.1}, for example, describes a response surface with independent factors because no term in the equation includes both factor A and factor B. $R = 2.0 + 0.12 A + 0.48 B - 0.03A^2 - 0.03 B^2 \label{14.1}$ Figure 14.1.7 shows the resulting pseudo-three-dimensional surface and a contour map for Equation \ref{14.1}. The easiest way to follow the progress of a searching algorithm is to map its path on a contour plot of the response surface. Positions on the response surface are identified as (a, b) where a and b are the levels for factor A and for factor B. The contour plot in Figure 14.1.7 b, for example, shows four one-factor-at-a-time optimizations of the response surface for Equation \ref{14.1}. The effectiveness and efficiency of this algorithm when optimizing independent factors is clear—each trial reaches the optimum response at (2, 8) in a single cycle. Unfortunately, factors often are not independent. Consider, for example, the data in Table 14.1.2 Table 14.1.2 . Example of Two Dependent Factors factor A factor B response $A_1$ $B_1$ 20 $A_2$ $B_1$ 80 $A_1$ $B_2$ 60 $A_2$ $B_2$ 80 where a change in the level of factor B from level B1 to level B2 has a significant effect on the response when factor A is at level A1 $\Delta R = 60 - 20 = 40 \nonumber$ but no effect when factor A is at level A2. $\Delta R = 80 - 80 = 0 \nonumber$ Figure 14.1.8 shows this dependent relationship between the two factors. Factors that are dependent are said to interact and the equation for the response surface includes an interaction term that contains both factor A and factor B. The final term in Equation \ref{14.2}, for example, accounts for the interaction between factor A and factor B. $R = 5.5 + 1.5 A + 0.6 B - 0.15 A^2 - 0.0245 B^2 - 0.0857 AB \label{14.2}$ Figure 14.1.9 shows the resulting pseudo-three-dimensional surface and a contour map for Equation \ref{14.2}. Exercise 14.1.2 Using the data in Table 14.1.2 , show that the effect of factor A on the response is dependent on factor B. Answer If we hold factor B at level B1, changing factor A from level A1 to level A2 increases the response from 20 to 80, or a change, $\Delta R$, of $\Delta R = 80 - 20 = 60 \nonumber$ If we hold factor B at level B2, we find that the change in response when the level of factor A changes from A1 to A2 is now 20. $\Delta R = 80 - 60 = 20 \nonumber$ The progress of a one-factor-at-a-time optimization for Equation \ref{14.2} is shown in Figure 14.1.9 b. Although the optimization for dependent factors is effective, it is less efficient than that for independent factors. In this case it takes four cycles to reach the optimum response of (3, 7) if we begin at (0, 0). Simplex Optimization One strategy for improving the efficiency of a searching algorithm is to change more than one factor at a time. A convenient way to accomplish this when there are two factors is to begin with three sets of initial factor levels as the vertices of a triangle.
After measuring the response for each set of factor levels, we identify the combination that gives the worst response and replace it with a new set of factor levels using a set of rules (Figure 14.1.10 ). This process continues until we reach the global optimum or until no further optimization is possible. The set of factor levels is called a simplex. In general, for k factors a simplex is a geometric figure with $k + 1$ vertices [(a) Spendley, W.; Hext, G. R.; Himsworth, F. R. Technometrics 1962, 4, 441–461; (b) Deming, S. N.; Parker, L. R. CRC Crit. Rev. Anal. Chem. 1978 7(3), 187–202]. Thus, for two factors the simplex is a triangle. For three factors the simplex is a tetrahedron. To place the initial two-factor simplex on the response surface, we choose a starting point (a, b) for the first vertex and place the remaining two vertices at (a + sa, b) and (a + 0.5sa, b + 0.87sb) where sa and sb are step sizes for factor A and for factor B [Long, D. E. Anal. Chim. Acta 1969, 46, 193–206]. The following set of rules moves the simplex across the response surface in search of the optimum response: Rule 1. Rank the vertices from best (vb) to worst (vw). Rule 2. Reject the worst vertex (vw) and replace it with a new vertex (vn) by reflecting the worst vertex through the midpoint of the remaining vertices. The new vertex’s factor levels are twice the average factor levels for the retained vertices minus the factor levels for the worst vertex. For a two-factor optimization, the equations are shown here where vs is the third vertex. $a_{v_n} = 2 \left( \frac {a_{v_b} + a_{v_s}} {2} \right) - a_{v_w} \label{14.3}$ $b_{v_n} = 2 \left( \frac {b_{v_b} + b_{v_s}} {2} \right) - b_{v_w} \label{14.4}$ Rule 3. If the new vertex has the worst response, then return to the previous vertex and reject the vertex with the second-worst response, vs, calculating the new vertex’s factor levels using rule 2. This rule ensures that the simplex does not return to the previous simplex. Rule 4. Boundary conditions are a useful way to limit the range of possible factor levels. For example, it may be necessary to limit a factor’s concentration for solubility reasons, or to limit the temperature because a reagent is thermally unstable. If the new vertex exceeds a boundary condition, then assign it the worst response and follow rule 3. The variables a and b in Equation \ref{14.3} and Equation \ref{14.4} are the factor levels for factor A and for factor B, respectively. Problem 3 in the end-of-chapter problems asks you to derive these equations. Because the size of the simplex remains constant during the search, this algorithm is called a fixed-sized simplex optimization. Example 14.1.1 illustrates the application of these rules. Example 14.1.1 Find the optimum for the response surface in Figure 14.1.9 using the fixed-sized simplex searching algorithm. Use (0, 0) for the initial factor levels and set each factor’s step size to 1.00. Solution Letting a = 0, b = 0, sa = 1.00, and sb = 1.00 gives the vertices for the initial simplex as $\text{vertex 1:} (a, b) = (0, 0) \nonumber$ $\text{vertex 2:} (a + s_a, b) = (1.00, 0) \nonumber$ $\text{vertex 3:} (a + 0.5s_a, b + 0.87s_b) = (0.50, 0.87) \nonumber$ The responses, from Equation \ref{14.2}, for the three vertices are shown in the following table vertex a b response $v_1$ 0 0 5.50 $v_2$ 1.00 0 6.85 $v_3$ 0.50 0.87 6.68 with $v_1$ giving the worst response and $v_2$ the best response.
Following Rule 1, we reject $v_1$ and replace it with a new vertex using Equation \ref{14.3} and Equation \ref{14.4}; thus $a_{v_4} = 2 \left( \frac {1.00 + 0.50} {2} \right) - 0 = 1.50 \nonumber$ $b_{v_4} = 2 \left( \frac {0 + 0.87} {2} \right) - 0 = 0.87 \nonumber$ The following table gives the vertices of the second simplex. vertex a b response $v_2$ 1.00 0 6.85 $v_3$ 0.50 0.87 6.68 $v_4$ 1.50 0.87 7.80 with $v_3$ giving the worst response and $v_4$ the best response. Following Rule 1, we reject $v_3$ and replace it with a new vertex using Equation \ref{14.3} and Equation \ref{14.4}; thus $a_{v_5} = 2 \left( \frac {1.00 + 1.50} {2} \right) - 0.50 = 2.00 \nonumber$ $b_{v_5} = 2 \left( \frac {0 + 0.87} {2} \right) - 0.87 = 0 \nonumber$ The following table gives the vertices of the third simplex. vertex a b response $v_2$ 1.00 0 6.85 $v_4$ 1.50 0.87 7.80 $v_5$ 2.00 0 7.90 The calculation of the remaining vertices is left as an exercise. Figure 14.1.11 shows the progress of the complete optimization. After 29 steps the simplex begins to repeat itself, circling around the optimum response of (3, 7). The size of the initial simplex ultimately limits the effectiveness and the efficiency of a fixed-size simplex searching algorithm. We can increase its efficiency by allowing the size of the simplex to expand or to contract in response to the rate at which we approach the optimum. For example, if we find that a new vertex is better than any of the vertices in the preceding simplex, then we expand the simplex further in this direction on the assumption that we are moving directly toward the optimum. Other conditions might cause us to contract the simplex—to make it smaller—to encourage the optimization to move in a different direction. We call this a variable-sized simplex optimization. Consult this chapter’s additional resources for further details of the variable-sized simplex optimization. Mathematical Models of Response Surfaces A response surface is described mathematically by an equation that relates the response to its factors. Equation \ref{14.1} and Equation \ref{14.2} provide two examples of such mathematical models. If we measure the response for several combinations of factor levels, then we can model the response surface by using a regression analysis to fit an appropriate equation to the data. There are two broad categories of models that we can use for a regression analysis: theoretical models and empirical models. Theoretical Models of the Response Surface A theoretical model is derived from the known chemical and physical relationships between the response and its factors. In spectrophotometry, for example, Beer’s law is a theoretical model that relates an analyte’s absorbance, A, to its concentration, CA $A = \epsilon b C_A \nonumber$ where $\epsilon$ is the molar absorptivity and b is the pathlength of the electromagnetic radiation passing through the sample. A Beer’s law calibration curve, therefore, is a theoretical model of a response surface. For a review of Beer’s law, see Chapter 10.2. Figure 14.1.1 in this chapter is an example of a Beer’s law calibration curve. Empirical Models of the Response Surface In many cases the underlying theoretical relationship between the response and its factors is unknown. We still can develop a model of the response surface if we make some reasonable assumptions about the underlying relationship between the factors and the response.
For example, if we believe that the factors A and B are independent and that each has only a first-order effect on the response, then the following equation is a suitable model. $R = \beta_0 + \beta_a A + \beta_b B \nonumber$ where R is the response, A and B are the factor levels, and $\beta_0$, $\beta_a$, and $\beta_b$ are adjustable parameters whose values are determined by a linear regression analysis. Other examples of equations include those for dependent factors $R = \beta_0 + \beta_a A + \beta_b B + \beta_{ab} AB \nonumber$ and those with higher-order terms. $R = \beta_0 + \beta_a A + \beta_b B + \beta_{aa} A^2 + \beta_{bb} B^2 \nonumber$ Each of these equations provides an empirical model of the response surface because it has no basis in a theoretical understanding of the relationship between the response and its factors. Although an empirical model may provide an excellent description of the response surface over a limited range of factor levels, it has no basis in theory and we cannot reliably extend it to unexplored parts of the response surface. The calculations for a linear regression when the model is first-order in one factor (a straight line) are described in Chapter 5.4. A complete mathematical treatment of linear regression for models that are second-order in one factor or which contain more than one factor is beyond the scope of this text. The computations for a few special cases, however, are straightforward and are considered in this section. A more comprehensive treatment of linear regression is available in several of this chapter’s additional resources. Factorial Designs To build an empirical model we measure the response for at least two levels for each factor. For convenience we label these levels as high, Hf, and low, Lf, where f is the factor; thus HA is the high level for factor A and LB is the low level for factor B. If our empirical model contains more than one factor, then each factor’s high level is paired with both the high level and the low level for all other factors. In the same way, the low level for each factor is paired with the high level and the low level for all other factors. As shown in Figure 14.1.12 , this requires 2k experiments where k is the number of factors. This experimental design is known as a 2k factorial design. Another system of notation is to use a plus sign (+) to indicate a factor’s high level and a minus sign (–) to indicate its low level. We will use H or L when writing an equation and a plus sign or a minus sign in tables. Coded Factor Levels The calculations for a 2k factorial design are straightforward and easy to complete with a calculator or a spreadsheet. To simplify the calculations, we code the factor levels using $+1$ for a high level and $-1$ for a low level. Coding has two additional advantages: scaling the factors to the same magnitude makes it easier to evaluate each factor’s relative importance, and it places the model’s intercept, $\beta_0$, at the center of the experimental design. As shown in Example 14.1.2 , it is easy to convert between coded and uncoded factor levels. Example 14.1.2 To explore the effect of temperature on a reaction, we assign 30oC to a coded factor level of $-1$ and assign a coded level $+1$ to a temperature of 50oC. What temperature corresponds to a coded level of $-0.5$ and what is the coded level for a temperature of 60oC? 
Solution The difference between $-1$ and $+1$ is 2, and the difference between 30oC and 50oC is 20oC; thus, each unit in coded form is equivalent to 10oC in uncoded form. With this information, it is easy to create a simple scale between the coded and the uncoded values, as shown in Figure 14.1.13 . A temperature of 35oC corresponds to a coded level of $-0.5$ and a coded level of $+2$ corresponds to a temperature of 60oC. Determining the Empirical Model Let’s begin by considering a simple example that involves two factors, A and B, and the following empirical model. $R = \beta_0 + \beta_a A + \beta_b B + \beta_{ab} AB \label{14.5}$ A 2k factorial design with two factors requires four runs. Table 14.1.3 provides the uncoded levels (A and B), the coded levels (A* and B*), and the responses (R) for these experiments. The terms $\beta_0$, $\beta_a$, $\beta_b$, and $\beta_{ab}$ in Equation \ref{14.5} account for, respectively, the mean effect (which is the average response), the first-order effects due to factor A and to factor B, and the interaction between the two factors. Table 14.1.3 . Example of Uncoded and Coded Factor Levels and Responses for a 2k Factorial Design run A B A* B* R 1 15 30 $+1$ $+1$ 22.5 2 15 10 $+1$ $-1$ 11.5 3 5 30 $-1$ $+1$ 17.5 4 5 10 $-1$ $-1$ 8.5 Equation \ref{14.5} has four unknowns—the four beta terms—and Table 14.1.3 describes the four experiments. We have just enough information to calculate values for $\beta_0$, $\beta_a$, $\beta_b$, and $\beta_{ab}$. When working with the coded factor levels, the values of these parameters are easy to calculate using the following equations, where n is the number of runs. $\beta_{0} \approx b_{0}=\frac{1}{n} \sum_{i=1}^{n} R_{i} \label{14.6}$ $\beta_{a} \approx b_{a}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} R_{i} \label{14.7}$ $\beta_{b} \approx b_{b}=\frac{1}{n} \sum_{i=1}^{n} B^*_{i} R_{i} \label{14.8}$ $\beta_{ab} \approx b_{ab}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} B^*_{i} R_{i} \label{14.9}$ Solving for the estimated parameters using the data in Table 14.1.3 $b_{0}=\frac{22.5+11.5+17.5+8.5}{4}=15.0 \nonumber$ $b_{a}=\frac{22.5+11.5-17.5-8.5}{4}=2.0 \nonumber$ $b_{b}=\frac{22.5-11.5+17.5-8.5}{4}=5.0 \nonumber$ $b_{ab}=\frac{22.5-11.5-17.5+8.5}{4}=0.5 \nonumber$ leaves us with the coded empirical model for the response surface. $R = 15.0 + 2.0 A^* + 5.0 B^* + 0.5 A^* B^* \label{14.10}$ Recall that we introduced coded factor levels with the promise that they simplify calculations. Although we can convert this coded model into its uncoded form, there is no need to do so. If we need to know the response for a new set of factor levels, we just convert them into coded form and calculate the response. For example, if A is 10 and B is 15, then A* is 0 and B* is –0.5. Substituting these values into Equation \ref{14.10} gives a response of 12.5. We can extend this approach to any number of factors. For a system with three factors—A, B, and C—we can use a 23 factorial design to determine the parameters in the following empirical model $R = \beta_0 + \beta_a A + \beta_b B + \beta_c C + \beta_{ab} AB + \beta_{ac} AC + \beta_{bc} BC + \beta_{abc} ABC \label{14.11}$ where A, B, and C are the factor levels. The terms $\beta_0$, $\beta_a$, $\beta_b$, and $\beta_{ab}$ are estimated using Equation \ref{14.6}, Equation \ref{14.7}, Equation \ref{14.8}, and Equation \ref{14.9}, respectively. To find estimates for the remaining parameters we use the following equations.
$\beta_{c} \approx b_{c}=\frac{1}{n} \sum_{i=1}^{n} C^*_{i} R_{i} \label{14.12}$ $\beta_{ac} \approx b_{ac}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} C^*_{i} R_{i} \label{14.13}$ $\beta_{bc} \approx b_{bc}=\frac{1}{n} \sum_{i=1}^{n} B^*_{i} C^*_{i} R_{i} \label{14.14}$ $\beta_{abc} \approx b_{abc}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} B^*_{i} C^*_{i} R_{i} \label{14.15}$ Example 14.1.3 Table 14.1.4 lists the uncoded factor levels, the coded factor levels, and the responses for a 23 factorial design. Determine the coded empirical model for the response surface based on Equation \ref{14.11}. What is the expected response when A is 10, B is 15, and C is 50? Solution Equation \ref{14.11} has eight unknowns—the eight beta terms—and Table 14.1.4 describes eight experiments. We have just enough information to calculate values for $\beta_0$, $\beta_a$, $\beta_b$, $\beta_c$, $\beta_{ab}$, $\beta_{ac}$, $\beta_{bc}$, and $\beta_{abc}$; these values are $b_{0}=\frac{1}{8} \times(137.25+54.75+73.75+30.25+61.75+30.25+41.25+18.75 )=56.0 \nonumber$ $b_{a}=\frac{1}{8} \times(137.25+54.75+73.75+30.25-61.75-30.25-41.25-18.75 )=18.0 \nonumber$ $b_{b}=\frac{1}{8} \times(137.25+54.75-73.75-30.25+61.75+30.25-41.25-18.75 )=15.0 \nonumber$ $b_{c}=\frac{1}{8} \times(137.25-54.75+73.75-30.25+61.75-30.25+41.25-18.75 )=22.5 \nonumber$ $b_{ab}=\frac{1}{8} \times(137.25+54.75-73.75-30.25-61.75-30.25+41.25+18.75 )=7.0 \nonumber$ $b_{ac}=\frac{1}{8} \times(137.25-54.75+73.75-30.25-61.75+30.25-41.25+18.75 )=9.0 \nonumber$ $b_{bc}=\frac{1}{8} \times(137.25-54.75-73.75+30.25+61.75-30.25-41.25+18.75 )=6.0 \nonumber$ $b_{abc}=\frac{1}{8} \times(137.25-54.75-73.75+30.25-61.75+30.25+41.25-18.75 )=3.75 \nonumber$ The coded empirical model, therefore, is $R = 56.0 + 18.0 A^* + 15.0 B^* + 22.5 C^* + 7.0 A^* B^* + 9.0 A^* C^* + 6.0 B^* C^* + 3.75 A^* B^* C^* \nonumber$ To find the response when A is 10, B is 15, and C is 50, we first convert these values into their coded form. Figure 14.1.14 helps us make the appropriate conversions; thus, A* is 0, B* is $-0.5$, and C* is $+1.33$. Substituting back into the empirical model gives a response of $R = 56.0 + 18.0 (0) + 15.0 (-0.5) + 22.5 (+1.33) + 7.0 (0) (-0.5) + 9.0 (0) (+1.33) + 6.0 (-0.5) (+1.33) + 3.75 (0) (-0.5) (+1.33) = 74.435 \approx 74.4 \nonumber$ Table 14.1.4 . Example of Uncoded and Coded Factor Levels and Responses for the 23 Factorial Design in Example 14.1.3 . run A B C A* B* C* R 1 15 30 45 $+1$ $+1$ $+1$ 137.25 2 15 30 15 $+1$ $+1$ $-1$ 54.75 3 15 10 45 $+1$ $-1$ $+1$ 73.75 4 15 10 15 $+1$ $-1$ $-1$ 30.25 5 5 30 45 $-1$ $+1$ $+1$ 61.75 6 5 30 15 $-1$ $+1$ $-1$ 30.25 7 5 10 45 $-1$ $-1$ $+1$ 41.25 8 5 10 15 $-1$ $-1$ $-1$ 18.75 A 2k factorial design can model only a factor’s first-order effect, including first-order interactions, on the response. A 22 factorial design, for example, includes each factor’s first-order effect ($\beta_a$ and $\beta_b$) and a first-order interaction between the factors ($\beta_{ab}$). A 2k factorial design cannot model higher-order effects because there is insufficient information. Here is a simple example that illustrates the problem. Suppose we need to model a system in which the response is a function of a single factor, A. Figure 14.1.15 a shows the result of an experiment using a 21 factorial design. The only empirical model we can fit to the data is a straight line. $R = \beta_0 + \beta_a A \nonumber$ If the actual response is a curve instead of a straight line, then the empirical model is in error.
To see evidence of curvature we must measure the response for at least three levels for each factor. We can fit the 31 factorial design in Figure 14.1.15 b to an empirical model that includes second-order factor effects. $R = \beta_0 + \beta_a A + \beta_{aa} A^2 \nonumber$ In general, an n-level factorial design can model single-factor and interaction terms up to the (n – 1)th order. We can judge the effectiveness of a first-order empirical model by measuring the response at the center of the factorial design. If there are no higher-order effects, then the average response of the trials in a 2k factorial design should equal the measured response at the center of the factorial design. To account for the influence of random errors we make several determinations of the response at the center of the factorial design and establish a suitable confidence interval. If the difference between the two responses is significant, then a first-order empirical model probably is inappropriate. One of the advantages of working with a coded empirical model is that b0 is the average response of the $2^k$ trials in a 2k factorial design. Example 14.1.4 One method for the quantitative analysis of vanadium is to acidify the solution by adding H2SO4 and oxidizing the vanadium with H2O2 to form a red-brown soluble compound with the general formula (VO)2(SO4)3. Palasota and Deming studied the effect of the relative amounts of H2SO4 and H2O2 on the solution’s absorbance, reporting the following results for a 22 factorial design [Palasota, J. A.; Deming, S. N. J. Chem. Educ. 1992, 69, 560–563]. H2SO4 H2O2 absorbance $+1$ $+1$ 0.330 $+1$ $-1$ 0.359 $-1$ $+1$ 0.293 $-1$ $-1$ 0.420 Four replicate measurements at the center of the factorial design give absorbances of 0.334, 0.336, 0.346, and 0.323. Determine if a first-order empirical model is appropriate for this system. Use a 90% confidence interval when accounting for the effect of random error. Solution We begin by determining the confidence interval for the response at the center of the factorial design. The mean response is 0.335 with a standard deviation of 0.0094, which gives a 90% confidence interval of $\mu=\overline{X} \pm \frac{t s}{\sqrt{n}}=0.335 \pm \frac{(2.35)(0.0094)}{\sqrt{4}}=0.335 \pm 0.011 \nonumber$ The average response, $\overline{R}$, from the factorial design is $\overline{R}=\frac{0.330+0.359+0.293+0.420}{4}=0.350 \nonumber$ Because $\overline{R}$ exceeds the confidence interval’s upper limit of 0.346, we can reasonably assume that a 22 factorial design and a first-order empirical model are inappropriate for this system at the 90% confidence level. Central Composite Designs One limitation to a 3k factorial design is the number of trials we need to run. As shown in Figure 14.1.16 , a 32 factorial design requires 9 trials. This number increases to 27 for three factors and to 81 for 4 factors. A more efficient experimental design for a system that contains more than two factors is a central composite design, two examples of which are shown in Figure 14.1.17 . The central composite design consists of a 2k factorial design, which provides data to estimate each factor’s first-order effect and interactions between the factors, and a star design that has $2k + 1$ points, which provides data to estimate second-order effects. Although a central composite design for two factors requires the same number of trials, nine, as a 32 factorial design, it requires only 15 trials and 25 trials when using three factors or four factors.
See this chapter’s additional resources for details about the central composite designs.
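The coded calculations in Equation \ref{14.6} through Equation \ref{14.15} are easy to carry out in software as well as by hand. The following R sketch is a minimal recap of Example 14.1.3: it uses the coded factor levels and responses from Table 14.1.4 to estimate the model’s parameters and then predicts the response for A = 10, B = 15, and C = 50 using their coded equivalents. The variable names are choices made here for illustration.
# coded factor levels (A*, B*, C*) and responses from Table 14.1.4
Astar <- c(+1, +1, +1, +1, -1, -1, -1, -1)
Bstar <- c(+1, +1, -1, -1, +1, +1, -1, -1)
Cstar <- c(+1, -1, +1, -1, +1, -1, +1, -1)
R     <- c(137.25, 54.75, 73.75, 30.25, 61.75, 30.25, 41.25, 18.75)
n     <- length(R)
b0   <- sum(R) / n                           # 56.0
ba   <- sum(Astar * R) / n                   # 18.0
bb   <- sum(Bstar * R) / n                   # 15.0
bc   <- sum(Cstar * R) / n                   # 22.5
bab  <- sum(Astar * Bstar * R) / n           #  7.0
bac  <- sum(Astar * Cstar * R) / n           #  9.0
bbc  <- sum(Bstar * Cstar * R) / n           #  6.0
babc <- sum(Astar * Bstar * Cstar * R) / n   #  3.75
# predict the response for A = 10, B = 15, and C = 50; in coded form these
# levels are A* = 0, B* = -0.5, and C* = +1.33
As <- 0; Bs <- -0.5; Cs <- 1.33
b0 + ba * As + bb * Bs + bc * Cs + bab * As * Bs + bac * As * Cs +
  bbc * Bs * Cs + babc * As * Bs * Cs        # approximately 74.4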
After developing and optimizing a method, the next step is to determine how well it works in the hands of a single analyst. Three steps make up this process: determining single-operator characteristics, completing a blind analysis of standards, and determining the method’s ruggedness. If another standard method is available, then we can analyze the same sample using both the standard method and the new method, and compare the results. If the result for any single test is unacceptable, then the method is not a suitable standard method. Single Operator Characteristics The first step in verifying a method is to determine the precision, accuracy, and detection limit when a single analyst uses the method to analyze a standard sample. The detection limit is determined by analyzing an appropriate reagent blank. Precision is determined by analyzing replicate portions of the sample, preferably more than ten. Accuracy is evaluated using a t-test to compare the experimental results to the known amount of analyte in the standard. Precision and accuracy are evaluated for several different concentrations of analyte, including at least one concentration near the detection limit, and for each different sample matrix. Including different concentrations of analyte helps to identify constant sources of determinate error and to establish the range of concentrations for which the method is applicable. Blind Analysis of Standard Samples Single-operator characteristics are determined by analyzing a standard sample that has a concentration of analyte known to the analyst. The second step in verifying a method is a blind analysis of standard samples. Although the concentration of analyte in the standard is known to a supervisor, the information is withheld from the analyst. After analyzing the standard sample several times, the analyte’s average concentration is reported to the test’s supervisor. To be accepted, the experimental mean must be within three standard deviations—as determined from the single-operator characteristics—of the analyte’s known concentration. An even more stringent requirement is that the experimental mean be within two standard deviations of the analyte’s known concentration. Ruggedness Testing An optimized method may produce excellent results in the laboratory that develops a method, but poor results in other laboratories. This is not particularly surprising because a method typically is optimized by a single analyst using the same reagents, equipment, and instrumentation for each trial. Any variability introduced by different analysts, reagents, equipment, and instrumentation is not included in the single-operator characteristics. Other less obvious factors may affect an analysis, including environmental factors, such as the temperature or relative humidity in the laboratory; if the procedure does not require control of these conditions, then they may contribute to variability. Finally, the analyst who optimizes the method usually takes particular care to perform the analysis in exactly the same way during every trial, which may minimize the run-to-run variability. An important step in developing a standard method is to determine which factors have a pronounced effect on the quality of the results. Once we identify these factors, we can write specific instructions that specify how these factors must be controlled. A procedure that, when carefully followed, produces results of high quality in different laboratories is considered rugged.
The method by which the critical factors are discovered is called ruggedness testing [Youden, W. J. Anal. Chem. 1960, 32(13), 23A–37A]. For example, if temperature is a concern, we might specify that it be held at $25 \pm 2$oC. Ruggedness testing usually is performed by the laboratory that develops the standard method. After identifying potential factors, their effects on the response are evaluated by performing the analysis at two levels for each factor. Normally one level is that specified in the procedure, and the other is a level likely encountered when the procedure is used by other laboratories. This approach to ruggedness testing can be time consuming. If there are seven potential factors, for example, a 27 factorial design can evaluate each factor’s first-order effect. Unfortunately, this requires a total of 128 trials—too many trials to be a practical solution. A simpler experimental design is shown in Table 14.2.1 , in which the two factor levels are identified by upper case and lower case letters. This design, which is similar to a 23 factorial design, is called a fractional factorial design. Because it includes only eight runs, the design provides information about only the average response and the seven first-order factor effects. It does not provide sufficient information to evaluate higher-order effects or interactions between factors, both of which are probably less important than the first-order effects. Table 14.2.1 . Experimental Design for a Ruggedness Test Involving Seven Factors run A B C D E F G response 1 A B C D E F G R1 2 A B c D e f g R2 3 A b C d E f g R3 4 A b c d e F G R4 5 a B C d e F g R5 6 a B c d E f G R6 7 a b C D e f G R7 8 a b c D E F g R8 The experimental design in Table 14.2.1 is balanced in that each of a factor’s two levels is paired an equal number of times with the upper case and lower case levels for every other factor. To determine the effect, E, of changing a factor’s level, we subtract the average response when the factor is at its lower case level from the average response when it is at its upper case level. $E = \frac {\left( \sum R_i \right)_\text{upper case}} {4} - \frac {\left( \sum R_i \right)_\text{lower case}} {4} \label{14.1}$ Because the design is balanced, the levels for the remaining factors appear an equal number of times in both summation terms, canceling their effect on E. For example, to determine the effect of factor A, EA, we subtract the average response for runs 5–8 from the average response for runs 1–4. Factor B does not affect E because its upper case levels in runs 1 and 2 are canceled by the upper case levels in runs 5 and 6, and its lower case levels in runs 3 and 4 are canceled by the lower case levels in runs 7 and 8. After we calculate each of the factor effects we rank them from largest to smallest without regard to sign, identifying those factors whose effects are substantially larger than the other factors. To see that this design is balanced, look closely at the last four runs. Factor A is present at its level a for all four of these runs. For each of the remaining factors, two levels are upper case and two levels are lower case. Runs 5–8 provide information about the effect of a on the response, but do not provide information about the effect of any other factor. Runs 1, 2, 5, and 6 provide information about the effect of B, but not of the remaining factors. Try a few other examples to convince yourself that this relationship is general.
We also can use this experimental design to estimate the method’s expected standard deviation due to the effects of small changes in uncontrolled or poorly controlled factors [Youden, W. J. “Statistical Techniques for Collaborative Tests,” in Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists: Washington, D. C., 1975, p. 35]. $s=\sqrt{\frac{2}{7} \sum_{i=1}^{n} E_{i}^{2}} \label{14.2}$ If this standard deviation is too large, then the procedure is modified to bring under control the factors that have the greatest effect on the response. Why does this model estimate the seven first-order factor effects, E, and not seven of the 21 possible first-order interactions? With eight experiments, we can only choose to calculate seven parameters (plus the average response). The calculation of ED, for example, also gives the value for EAB. You can convince yourself of this by replacing each upper case letter with a $+1$ and each lower case letter with a $-1$ and noting that $A \times B = D$. We choose to report the first-order factor effects because they likely are more important than interactions between factors. Example 14.2.1 The concentration of trace metals in sediment samples collected from rivers and lakes is determined by extracting with acid and analyzing the extract by atomic absorption spectrophotometry. One procedure calls for an overnight extraction using dilute HCl or HNO3. The samples are placed in plastic bottles with 25 mL of acid and then placed on a shaker operated at a moderate speed and at ambient temperature. To determine the method’s ruggedness, the effect of the following factors was studied using the experimental design in Table 14.2.1 . Factor A: extraction time A = 24 h a = 12 h Factor B: shaking speed B = medium b = high Factor C: acid type C = HCl c = HNO3 Factor D: acid concentration D = 0.1 M d = 0.05 M Factor E: volume of acid E = 25 mL e = 35 mL Factor F: type of container F = plastic f = glass Factor G: temperature G = ambient g = 25oC Eight replicates of a standard sample that contains a known amount of analyte are carried through the procedure. The percentage of analyte recovered in the eight samples are as follows: R1 = 98.9, R2 = 99.0, R3 = 97.5, R4 = 97.7, R5 = 97.4, R6 = 97.3, R7 = 98.6, and R8 = 98.6. Identify the factors that have a significant effect on the response and estimate the method’s expected standard deviation. Solution To calculate the effect of changing each factor’s level we use Equation \ref{14.1} and substitute in appropriate values. For example, EA is $E_{A}=\frac{98.9+99.0+97.5+97.7}{4} - \frac{97.4+97.3+98.6+98.6}{4}=0.30 \nonumber$ Completing the remaining calculations and ordering the factors by the absolute values of their effects Factor D = 1.30, Factor A = 0.30, Factor E = –0.10, Factor B = 0.05, Factor C = –0.05, Factor F = 0.05, Factor G = 0.00 shows us that the concentration of acid (Factor D) has a substantial effect on the response, with a concentration of 0.05 M providing a much lower percent recovery. The extraction time (Factor A) also appears significant, but its effect is not as important as the acid’s concentration. All other factors appear insignificant. The method’s estimated standard deviation is $s = \sqrt{\frac {2} {7} \times \left[ (1.30)^2 + (0.30)^2 + (-0.10)^2 + (0.05)^2 + (-0.05)^2 + (0.05)^2 + (0.00)^2 \right]} = 0.72 \nonumber$ which, for an average recovery of 98.1% gives a relative standard deviation of approximately 0.7%.
If we control the acid’s concentration so that its effect approaches that for factors B, C, and F, then the relative standard deviation becomes 0.18, or approximately 0.2%. Equivalency Testing If an approved standard method is available, then a new method should be evaluated by comparing results to those obtained when using the standard method. Normally this comparison is made at a minimum of three concentrations of analyte to evaluate the new method over a wide dynamic range. Alternatively, we can plot the results obtained using the new method against results obtained using the approved standard method. A slope of 1.00 and a y-intercept of 0.0 provide evidence that the two methods are equivalent.
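The calculations in Example 14.2.1 also are easy to script. The following R sketch encodes the design in Table 14.2.1 using +1 for a factor’s upper case level and -1 for its lower case level, calculates the seven factor effects with Equation \ref{14.1}, and estimates the method’s expected standard deviation with Equation \ref{14.2}, reproducing the values found above; the variable names are choices made here for illustration.
# percent recoveries R1-R8 from Example 14.2.1
R <- c(98.9, 99.0, 97.5, 97.7, 97.4, 97.3, 98.6, 98.6)
# design of Table 14.2.1: +1 = upper case level, -1 = lower case level,
# one column for each of the seven factors A-G
design <- rbind(
  c( 1,  1,  1,  1,  1,  1,  1),
  c( 1,  1, -1,  1, -1, -1, -1),
  c( 1, -1,  1, -1,  1, -1, -1),
  c( 1, -1, -1, -1, -1,  1,  1),
  c(-1,  1,  1, -1, -1,  1, -1),
  c(-1,  1, -1, -1,  1, -1,  1),
  c(-1, -1,  1,  1, -1, -1,  1),
  c(-1, -1, -1,  1,  1,  1, -1))
colnames(design) <- LETTERS[1:7]
effects <- colSums(design * R) / 4     # upper-case mean minus lower-case mean
round(effects, 2)                      # D = 1.30 and A = 0.30 stand out
s <- sqrt((2 / 7) * sum(effects^2))    # expected standard deviation
s                                      # approximately 0.72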
For an analytical method to be useful, an analyst must be able to achieve results of acceptable accuracy and precision. Verifying a method, as described in the previous section, establishes this goal for a single analyst. Another requirement for a useful analytical method is that an analyst should obtain the same result from day to day, and different labs should obtain the same result when analyzing the same sample. The process by which we approve a method for general use is known as validation and it involves a collaborative test of the method by analysts in several laboratories. Collaborative testing is used routinely by regulatory agencies and professional organizations, such as the U. S. Environmental Protection Agency, the American Society for Testing and Materials, the Association of Official Analytical Chemists, and the American Public Health Association. Many of the representative methods in earlier chapters are identified by these agencies as validated methods. When an analyst performs a single analysis on a single sample the difference between the experimentally determined value and the expected value is influenced by three sources of error: random errors, systematic errors inherent to the method, and systematic errors unique to the analyst. If the analyst performs enough replicate analyses, then we can plot a distribution of results, as shown in Figure 14.3.1 a. The width of this distribution is described by a standard deviation that provides an estimate of the random errors affecting the analysis. The position of the distribution’s mean, $\overline{X}$, relative to the sample’s true value, $\mu$, is determined both by systematic errors inherent to the method and those systematic errors unique to the analyst. For a single analyst there is no way to separate the total systematic error into its component parts. The goal of a collaborative test is to determine the magnitude of all three sources of error. If several analysts each analyze the same sample one time, the variation in their collective results (see Figure 14.3.1 b) includes contributions from random errors and systematic errors (biases) unique to the analysts. Without additional information, we cannot separate the standard deviation for this pooled data into the precision of the analysis and the systematic errors introduced by the analysts. We can use the position of the distribution to detect the presence of a systematic error in the method. Two-Sample Collaborative Testing The design of a collaborative test must provide the additional information needed to separate random errors from the systematic errors introduced by the analysts. One simple approach—accepted by the Association of Official Analytical Chemists—is to have each analyst analyze two samples that are similar in both their matrix and in their concentration of analyte. To analyze the results we represent each analyst as a single point on a two-sample scatterplot, using the result for one sample as the x-coordinate and the result for the other sample as the y-coordinate [Youden, W. J. “Statistical Techniques for Collaborative Tests,” in Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists: Washington, D. C., 1975, pp 10–11]. As shown in Figure 14.3.2 , a two-sample chart places each analyst into one of four quadrants, which we identify as (+, +), (–, +), (–, –) and (+, –).
A plus sign indicates the analyst’s result for a sample is greater than the mean for all analysts and a minus sign indicates the analyst’s result is less than the mean for all analysts. The quadrant (+, –), for example, contains those analysts that exceeded the mean for sample X and that undershot the mean for sample Y. If the variation in results is dominated by random errors, then we expect the points to be distributed randomly in all four quadrants, with an equal number of points in each quadrant. Furthermore, as shown in Figure 14.3.2 a, the points will cluster in a circular pattern whose center is the mean values for the two samples. When systematic errors are significantly larger than random errors, then the points fall primarily in the (+, +) and the (–, –) quadrants, forming an elliptical pattern around a line that bisects these quadrants at a 45o angle, as seen in Figure 14.3.2 b. A visual inspection of a two-sample chart is an effective method for qualitatively evaluating the capabilities of a proposed standard method, as shown in Figure 14.3.3 . The length of a perpendicular line from any point to the 45o line is proportional to the effect of random error on that analyst’s results. The distance from the intersection of the axes—which corresponds to the mean values for samples X and Y—to the perpendicular projection of a point on the 45o line is proportional to the analyst’s systematic error. An ideal standard method has small random errors and small systematic errors due to the analysts, and has a compact clustering of points that is more circular than elliptical. We also can use the data in a two-sample chart to separate the total variation in the data, $\sigma_\text{tot}$, into contributions from random error, $\sigma_\text{rand}$, and from systematic errors due to the analysts, $\sigma_\text{syst}$ [Youden, W. J. “Statistical Techniques for Collaborative Tests,” in Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists: Washington, D. C., 1975, pp 22–24]. Because an analyst’s systematic errors are present in his or her analysis of both samples, the difference, D, between the results estimates the contribution of random error. $D = X_i - Y_i \nonumber$ To estimate the total contribution from random error we use the standard deviation of these differences, sD, for all analysts $s_D = \sqrt{\frac {\sum_{i = 1}^n (D_i - \overline{D})^2} {2(n-1)}} = s_\text{rand} \approx \sigma_\text{rand} \label{14.1}$ where n is the number of analysts. The factor of 2 in the denominator of Equation \ref{14.1} is the result of using two values to determine Di. The total, T, of each analyst’s results $T_i = X_i + Y_i \nonumber$ contains contributions from both random error and twice the analyst’s systematic error. $\sigma_{\mathrm{tot}}^{2}=\sigma_{\mathrm{rand}}^{2}+2 \sigma_{\mathrm{syst}}^{2} \label{14.2}$ The standard deviation of the totals, sT, provides an estimate for $\sigma_\text{tot}$. $s_{T}=\sqrt{\frac{\sum_{i=1}^{n}\left(T_{i}-\overline{T}\right)^{2}}{2(n-1)}}=s_{tot} \approx \sigma_{tot} \label{14.3}$ Again, the factor of 2 in the denominator is the result of using two values to determine Ti. If the systematic errors are significantly larger than the random errors, then sT is larger than sD, a hypothesis we can evaluate using a one-tailed F-test $F=\frac{s_{T}^{2}}{s_{D}^{2}} \nonumber$ where the degrees of freedom for both the numerator and the denominator are n – 1. 
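These calculations are easy to script. The following R sketch, offered as a minimal illustration, uses a small set of hypothetical paired results (the vectors x and y contain invented values, not data from this chapter) to show how sD, sT, and the F-ratio might be computed.

x <- c(10.2, 10.5, 9.8, 10.9, 10.1, 10.4, 9.9, 10.6)    # sample X (hypothetical results)
y <- c(10.0, 10.6, 9.5, 11.0, 10.3, 10.2, 9.7, 10.8)    # sample Y (hypothetical results)
n <- length(x)                                     # number of analysts
D <- x - y                                         # differences: random error only
Tot <- x + y                                       # totals: random error plus systematic error
s_D <- sqrt(sum((D - mean(D))^2)/(2*(n - 1)))      # Equation 14.1
s_T <- sqrt(sum((Tot - mean(Tot))^2)/(2*(n - 1)))  # Equation 14.3
F_exp <- s_T^2/s_D^2                               # one-tailed F-test with n - 1 and n - 1 degrees of freedom
F_exp > qf(0.95, n - 1, n - 1)                     # TRUE suggests significant analyst systematic errors
s_syst <- sqrt((s_T^2 - s_D^2)/2)                  # Equation 14.2; meaningful only if the F-test is significant

Substituting the cholesterol results from Example 14.3.1, which follows, for x and y should reproduce the values of sD = 5.95 and sT = 13.3 reported there.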
As shown in the following example, if sT is significantly larger than sD we can use Equation \ref{14.2} to separate $\sigma_\text{tot}^2$ into components that represent the random error and the systematic error. Example 14.3.1 As part of a collaborative study of a new method for determining the amount of total cholesterol in blood, you send two samples to 10 analysts with instructions that they analyze each sample one time. The following results, in mg total cholesterol per 100 mL of serum, are returned to you. analyst sample 1 sample 2 1 245.0 229.4 2 247.4 249.7 3 246.0 240.4 4 244.9 235.5 5 255.7 261.7 6 248.0 239.4 7 249.2 255.5 8 225.1 224.3 9 255.0 246.3 10 243.1 253.1 Use this data to estimate $\sigma_\text{rand}$ and $\sigma_\text{syst}$ for the method. Solution Figure 14.3.4 provides a two-sample plot of the results. The clustering of points suggests that the systematic errors of the analysts are significant. The vertical line at 245.9 mg/100 mL is the average value for sample 1 and the average value for sample 2 is indicated by the horizontal line at 243.5 mg/100 mL. To estimate $\sigma_\text{rand}$ and $\sigma_\text{syst}$ we first calculate values for Di and Ti. analyst Di Ti 1 15.6 474.4 2 –2.3 497.1 3 5.6 486.4 4 9.4 480.4 5 –6.0 517.4 6 8.6 487.4 7 –6.3 504.7 8 0.8 449.4 9 8.7 501.3 10 –10.0 496.2 Next, we calculate the standard deviations for the differences, sD, and the totals, sT, using Equation \ref{14.1} and Equation \ref{14.3}, obtaining sD = 5.95 and sT = 13.3. To determine if the systematic errors between the analysts are significant, we use an F-test to compare sT and sD. $F=\frac{s_{T}^{2}}{s_{D}^{2}}=\frac{(13.3)^{2}}{(5.95)^{2}}=5.00 \nonumber$ Because the F-ratio is larger than F(0.05,9,9), which is 3.179, we conclude that the systematic errors between the analysts are significant at the 95% confidence level. The estimated precision for a single analyst is $\sigma_{\mathrm{rand}} \approx s_{\mathrm{rand}}=s_{D}=5.95 \nonumber$ The estimated standard deviation due to systematic errors between analysts is calculated from Equation \ref{14.2}. $\sigma_\text{syst} = \sqrt{\frac {\sigma_\text{tot}^2 - \sigma_\text{rand}^2} {2}} \approx \sqrt{\frac {s_T^2 - s_D^2} {2}} = \sqrt{\frac {(13.3)^2-(5.95)^2} {2}} = 8.41 \nonumber$ If the true values for the two samples are known, we also can test for the presence of a systematic error in the method. If there are no systematic method errors, then the sum of the true values, $\mu_\text{tot}$, for samples X and Y $\mu_{\mathrm{tot}}=\mu_{X}+\mu_{Y} \nonumber$ should fall within the confidence interval around $\overline{T}$. We can use a two-tailed t-test of the following null and alternate hypotheses $H_{0} : \overline{T}=\mu_{\mathrm{tot}} \quad H_{\mathrm{A}} : \overline{T} \neq \mu_{\mathrm{tot}} \nonumber$ to determine if there is evidence for a systematic error in the method. The test statistic, texp, is $t_\text{exp} = \frac {|\overline{T} - \mu_\text{tot}|\sqrt{n}} {s_T\sqrt{2}} \label{14.4}$ with n – 1 degrees of freedom. We include the 2 in the denominator because sT (see Equation \ref{14.3}) underestimates the standard deviation when comparing $\overline{T}$ to $\mu_\text{tot}$. Example 14.3.2 The two samples analyzed in Example 14.3.1 are known to contain the following concentrations of cholesterol: $\mu_\text{samp 1}$ = 248.3 mg/100 mL and $\mu_\text{samp 2}$ = 247.6 mg/100 mL. Determine if there is any evidence for a systematic error in the method at the 95% confidence level.
Solution Using the data from Example 14.3.1 and the true values for the samples, we know that sT is 13.3, and that $\overline{T} = \overline{X}_\text{samp 1} + \overline{X}_\text{samp 2} = 245.9 + 243.5 = 489.4 \text{ mg/100 mL} \nonumber$ $\mu_\text{tot} = \mu_\text{samp 1} + \mu_\text{samp 2} = 248.3 + 247.6 = 495.9 \text{ mg/100 mL} \nonumber$ Substituting these values into Equation \ref{14.4} gives $t_{\mathrm{exp}}=\frac{|489.4-495.9| \sqrt{10}}{13.3 \sqrt{2}}=1.09 \nonumber$ Because this value for texp is smaller than the critical value of 2.26 for t(0.05, 9), there is no evidence for a systematic error in the method at the 95% confidence level. Example 14.3.1 and Example 14.3.2 illustrate how we can use a pair of similar samples in a collaborative test of a new method. Ideally, a collaborative test involves several pairs of samples that span the range of analyte concentrations for which we plan to use the method. In doing so, we evaluate the method for constant sources of error and establish the expected relative standard deviation and bias for different levels of analyte. Collaborative Testing and Analysis of Variance In a two-sample collaborative test we ask each analyst to perform a single determination on each of two separate samples. After reducing the data to a set of differences, D, and a set of totals, T, each characterized by a mean and a standard deviation, we extract values for the random errors that affect precision and the systematic differences between the analysts. The calculations are relatively simple and straightforward. An alternative approach to a collaborative test is to have each analyst perform several replicate determinations on a single, common sample. This approach generates a separate data set for each analyst and requires a different statistical treatment to provide estimates for $\sigma_\text{rand}$ and for $\sigma_\text{syst}$. There are several statistical methods for comparing three or more sets of data. The approach we consider in this section is an analysis of variance (ANOVA). In its simplest form, a one-way ANOVA allows us to explore the importance of a single variable—the identity of the analyst is one example—on the total variance. To evaluate the importance of this variable, we compare its variance to the variance explained by indeterminate sources of error. We first introduced variance in Chapter 4 as one measure of a data set’s spread around its central tendency. In the context of an analysis of variance, it is useful for us to understand that variance is simply a ratio of two terms: a sum of squares for the differences between individual values and their mean, and the degrees of freedom. For example, the variance, s2, of a data set consisting of n measurements is $s^{2}=\frac{\sum_{i=1}^{n}\left(X_{i}-\overline{X}\right)^{2}}{n-1} \nonumber$ where Xi is the value of a single measurement and $\overline{X}$ is the mean. The ability to partition the variance into a sum of squares and the degrees of freedom greatly simplifies the calculations in a one-way ANOVA. Let’s use a simple example to develop the rationale behind a one-way ANOVA calculation. The data in Table 14.3.1 are from four analysts, each asked to determine the purity of a single pharmaceutical preparation of sulfanilamide. Each column in Table 14.3.1 provides the results for an individual analyst. To help us keep track of this data, we will represent each result as Xij, where i identifies the analyst and j indicates the replicate.
For example, X3,5 is the fifth replicate for the third analyst, or 94.24%. Table 14.3.1 . Determination of the %Purity of a Sulfanilamide Preparation by Four Analysts replicate analyst A analyst B analyst C analyst D 1 94.09 99.55 95.14 93.88 2 94.64 98.24 94.62 94.23 3 95.08 101.1 95.28 96.05 4 94.54 100.4 94.59 93.89 5 95.38 100.1 94.24 94.95 6 93.62 95.49 $\overline{X}$ 94.56 99.88 94.77 94.75 s 0.641 1.073 0.428 0.899 The data in Table 14.3.1 show variability in the results obtained by each analyst and in the difference in the results between the analysts. There are two sources for this variability: indeterminate errors associated with the analytical procedure that are experienced equally by each analyst, and systematic or determinate errors introduced by the individual analysts. One way to view the data in Table 14.3.1 is to treat it as a single large sample, characterized by a global mean and a global variance $\overline{\overline{X}}=\frac{\sum_{i=1}^{h} \sum_{j=1}^{n_{i}} X_{ij}}{N} \label{14.5}$ $\overline{\overline{s^{2}}}=\frac{\sum_{i=1}^{h} \sum_{j=1}^{n_{i}}\left(X_{i j}-\overline{\overline{X}}\right)^{2}}{N-1} \label{14.6}$ where h is the number of samples (in this case the number of analysts), ni is the number of replicates for the ith sample (in this case the ith analyst), and N is the total number of data points (in this case 22). The global variance—which includes all sources of variability that affect the data—provides an estimate of the combined influence of indeterminate errors and systematic errors. A second way to work with the data in Table 14.3.1 is to treat the results for each analyst separately. If we assume that each analyst experiences the same indeterminate errors, then the variance, s2, for each analyst provides a separate estimate of $\sigma_\text{rand}^2$. To pool these individual variances, which we call the within-sample variance, $s_w^2$, we square the difference between each replicate and its corresponding mean, add them up, and divide by the degrees of freedom. $\sigma_{\mathrm{rand}}^{2} \approx s_{w}^{2}=\frac{\sum_{i=1}^{h} \sum_{j=1}^{n_{i}}\left(X_{i j}-\overline{X}_{i}\right)^{2}}{N-h} \label{14.7}$ Carefully compare our description of Equation \ref{14.7} to the equation itself. It is important that you understand why Equation \ref{14.7} provides our best estimate of the indeterminate errors that affect the data in Table 14.3.1 . Note that we lose one degree of freedom for each of the h means included in the calculation. To estimate the systematic errors, $\sigma_\text{syst}^2$, that affect the results in Table 14.3.1 we need to consider the differences between the analysts. The variance of the individual mean values about the global mean, which we call the between-sample variance, $s_b^2$, is $s_{b}^{2}=\frac{\sum_{i=1}^{h} n_{i}\left(\overline{X}_{i}-\overline{\overline{X}}\right)^{2}}{h-1} \label{14.8}$ where we lose one degree of freedom for the global mean. The between-sample variance includes contributions from both indeterminate errors and systematic errors; thus $s_b^2 = \sigma_\text{rand}^2 + \overline{n}\sigma_\text{syst}^2 \label{14.9}$ where $\overline{n}$ is the average number of replicates per analyst. $\overline{n}=\frac{\sum_{i=1}^{h} n_{i}}{h} \nonumber$ Note the similarity between Equation \ref{14.9} and Equation \ref{14.2}. The analysis of the data in a two-sample plot is the same as a one-way analysis of variance with h = 2.
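Equation \ref{14.7} and Equation \ref{14.8} translate directly into a few lines of R. The following sketch, which simply re-enters the results from Table 14.3.1 under object names of our own choosing, computes the within-sample and between-sample variances that Example 14.3.3 later works out by hand.

purity <- list(A = c(94.09, 94.64, 95.08, 94.54, 95.38, 93.62),   # Table 14.3.1, analyst A
               B = c(99.55, 98.24, 101.1, 100.4, 100.1),          # analyst B
               C = c(95.14, 94.62, 95.28, 94.59, 94.24),          # analyst C
               D = c(93.88, 94.23, 96.05, 93.89, 94.95, 95.49))   # analyst D
N <- sum(lengths(purity))        # total number of results (22)
h <- length(purity)              # number of analysts (4)
global_mean <- mean(unlist(purity))
SS_w <- sum(sapply(purity, function(x) sum((x - mean(x))^2)))                 # within-sample sum-of-squares
SS_b <- sum(sapply(purity, function(x) length(x)*(mean(x) - global_mean)^2))  # between-sample sum-of-squares
s2_w <- SS_w/(N - h)             # within-sample variance, Equation 14.7
s2_b <- SS_b/(h - 1)             # between-sample variance, Equation 14.8
c(s2_w = s2_w, s2_b = s2_b, F_exp = s2_b/s2_w)

The values returned should agree, to within rounding, with those calculated in Example 14.3.3.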
In a one-way ANOVA of the data in Table 14.3.1 we make the null hypothesis that there are no significant differences between the mean values for the analysts. The alternative hypothesis is that at least one of the mean values is significantly different. If the null hypothesis is true, then $\sigma_\text{syst}^2$ must be zero and $s_w^2$ and $s_b^2$ should have similar values. If $s_b^2$ is significantly greater than $s_w^2$ , then $\sigma_\text{syst}^2$ is greater than zero. In this case we must accept the alternative hypothesis that there is a significant difference between the means for the analysts. The test statistic is the F-ratio $F_{\mathrm{exp}}=\frac{s_{b}^{2}}{s_{w}^{2}} \nonumber$ which is compared to the critical value $F(\alpha, h - 1, N - h)$. This is a one-tailed significance test because we are interested only in whether $s_b^2$ is significantly greater than $s_w^2$. Both $s_b^2$ and $s_w^2$ are easy to calculate for small data sets. For larger data sets, calculating $s_w^2$ is tedious. We can simplify the calculations by taking advantage of the relationship between the sum-of-squares terms for the global variance (Equation \ref{14.6}), the within-sample variance (Equation \ref{14.7}), and the between-sample variance (Equation \ref{14.8}). We can split the numerator of Equation \ref{14.6}, which is the total sum-of-squares, SSt, into two terms $SS_t = SS_w + SS_b \nonumber$ where SSw is the sum-of-squares for the within-sample variance and SSb is the sum-of-squares for the between-sample variance. Calculating SSt and SSb gives SSw by difference. Finally, dividing SSw and SSb by their respective degrees of freedom gives $s_w^2$ and $s_b^2$. Table 14.3.2 summarizes the equations for a one-way ANOVA calculation. Example 14.3.3 walks you through the calculations, using the data in Table 14.3.1 . Table 14.3.2 . Summary of Calculations for a One-Way Analysis of Variance source sum-of-squares degrees of freedom variance expected variance F-ratio between samples $SS_b = \sum_{i = 1}^h n_i (\overline{X}_i - \overline{\overline{X}})^2$ $h - 1$ $s_b^2 = \frac {SS_b} {h - 1}$ $s_b^2 = \sigma_\text{rand}^2 + \overline{n}\sigma_\text{syst}^2$ $F_\text{exp} = \frac {s_b^2} {s_w^2}$ within samples $SS_w = SS_t - SS_b$ $N - h$ $s_w^2 = \frac {SS_w} {N - h}$ $s_w^2 = \sigma_\text{rand}^2$ total $SS_t = \sum_{i = 1}^h \sum_{j = 1}^{n_i}(X_{ij} - \overline{\overline{X}})^2$ $SS_t = \overline{\overline{s^2}}(N - 1)$ $N - 1$ Example 14.3.3 The data in Table 14.3.1 are from four analysts, each asked to determine the purity of a single pharmaceutical preparation of sulfanilamide. Determine if the difference in their results is significant at $\alpha = 0.05$. If such a difference exists, estimate values for $\sigma_\text{rand}^2$ and $\sigma_\text{syst}^2$. Solution To begin we calculate the global mean (Equation \ref{14.5}) and the global variance (Equation \ref{14.6}) for the pooled data, and the means for each analyst; these values are summarized here.
$\overline{\overline{X}} = 95.87 \quad \quad \overline{\overline{s^2}} = 5.506 \nonumber$ $\overline{X}_A = 94.56 \quad \overline{X}_B = 99.88 \quad \overline{X}_C = 94.77 \quad \overline{X}_D = 94.75 \nonumber$ Using these values we calculate the total sum of squares $S S_{t}=\overline{\overline{s^{2}}}(N-1)=(5.506)(22-1)=115.63 \nonumber$ the between sample sum of squares $S S_{b}=\sum_{i=1}^{h} n_{i}\left(\overline{X}_{i}-\overline{\overline{X}}\right)^{2}=6(94.56-95.87)^{2}+5(99.88-95.87)^{2}+5(94.77-95.87)^{2}+6(94.75-95.87)^{2}=104.27 \nonumber$ and the within sample sum of squares $S S_{w}=S S_{t}-S S_{b}=115.63-104.27=11.36 \nonumber$ The remainder of the necessary calculations is summarized in the following table. source sum of squares degrees of freedom variance between samples 104.27 $h - 1 = 4 - 1 = 3$ 34.76 within samples 11.36 $N - h = 22 - 4 = 18$ 0.631 Comparing the variances we find that $F_{\mathrm{exp}}=\frac{s_{b}^{2}}{s_{w}^{2}}=\frac{34.76}{0.631}=55.09 \nonumber$ Because Fexp is greater than F(0.05, 3, 18), which is 3.16, we reject the null hypothesis and accept the alternative hypothesis that the work of at least one analyst is significantly different from the remaining analysts. Our best estimate of the within sample variance is $\sigma_{\text {rand}}^{2} \approx s_{w}^{2}=0.631 \nonumber$ and our best estimate of the variance due to systematic differences between the analysts, obtained by rearranging Equation \ref{14.9}, is $\sigma_\text{syst}^2 = \frac {s_b^2 - s_w^2} {\overline{n}} = \frac {34.76 - 0.631} {22/4} = 6.205 \nonumber$ In this example the variance due to systematic differences between the analysts is almost an order of magnitude greater than the variance due to the method’s precision. Having demonstrated that there is a significant difference between the analysts, we can use a modified version of the t-test—known as Fisher’s least significant difference—to determine the source of the difference. The test statistic for comparing two mean values is the t-test from Chapter 4, except we replace the pooled standard deviation, spool, by the square root of the within-sample variance from the analysis of variance. $t_{\mathrm{exp}}=\frac{\left|\overline{X}_{1}-\overline{X}_{2}\right|}{\sqrt{s_{w}^{2}}} \times \sqrt{\frac{n_{1} n_{2}}{n_{1}+n_{2}}} \label{14.10}$ We compare texp to its critical value $t(\alpha, \nu)$ using the same significance level as the ANOVA calculation. The degrees of freedom are the same as that for the within sample variance. Since we are interested in whether the larger of the two means is significantly greater than the other mean, the value of $t(\alpha, \nu)$ is that for a one-tailed significance test. You might ask why we bother with the analysis of variance if we are planning to use a t-test to compare pairs of analysts. Each t-test carries a probability, $\alpha$, of claiming that a difference is significant even though it is not (a type 1 error). If we set $\alpha$ to 0.05 and complete six t-tests, the probability of a type 1 error increases to 0.265. Knowing that there is a significant difference within a data set—what we gain from the analysis of variance—protects the t-test. Example 14.3.4 In Example 14.3.3 we showed that there is a significant difference between the work of the four analysts in Table 14.3.1 . Determine the source of this significant difference. Solution Individual comparisons using Fisher’s least significant difference test are based on the following null hypothesis and the appropriate one-tailed alternative hypothesis.
$H_0: \overline{X}_1 = \overline{X}_2 \quad \quad H_A: \overline{X}_1 > \overline{X}_2 \quad \text{or} \quad H_A: \overline{X}_1 < \overline{X}_2 \nonumber$ Using Equation \ref{14.10} we calculate values of texp for each possible comparison and compare them to the one-tailed critical value of 1.73 for t(0.05, 18). For example, texp for analysts A and B is $\left(t_{exp}\right)_{AB}=\frac{|94.56-99.88|}{\sqrt{0.631}} \times \sqrt{\frac{6 \times 5}{6+5}}=11.06 \nonumber$ Because (texp)AB is greater than t(0.05,18) we reject the null hypothesis and accept the alternative hypothesis that the results for analyst B are significantly greater than those for analyst A. Continuing with the other pairs it is easy to show that (texp)AC is 0.437, (texp)AD is 0.414, (texp)BC is 10.17, (texp)BD is 10.67, and (texp)CD is 0.04. Collectively, these results suggest that there is a significant systematic difference between the work of analyst B and the work of the other analysts. There is, of course, no way to decide whether any of the four analysts has done accurate work. We have evidence that analyst B’s result is significantly different than the results for analysts A, C, and D, and we have no evidence that there is any significant difference between the results of analysts A, C, and D. We do not know if analyst B’s results are accurate, or if the results of analysts A, C, and D are accurate. In fact, it is possible that none of the results in Table 14.3.1 are accurate. We can extend an analysis of variance to systems that involve more than a single variable. For example, we can use a two-way ANOVA to determine the effect on an analytical method of both the analyst and the instrumentation. The treatment of multivariate ANOVA is beyond the scope of this text, but is covered in several of the texts listed in this chapter’s additional resources. What is a Reasonable Result for a Collaborative Study? Collaborative testing provides us with a method for estimating the variability (or reproducibility) between analysts in different labs. If the variability is significant, we can determine what portion is due to indeterminate method errors, $\sigma_\text{rand}^2$, and what portion is due to systematic differences between the analysts, $\sigma_\text{syst}^2$. What is left unanswered is the following important question: What is a reasonable value for a method’s reproducibility? An analysis of nearly 10 000 collaborative studies suggests that a reasonable estimate for a method’s reproducibility is $R=2^{(1-0.5 \log C)} \label{14.11}$ where R is the percent relative standard deviation for the results included in the collaborative study and C is the fractional amount of analyte in the sample on a weight-to-weight basis. Equation \ref{14.11} is thought to be independent of the type of analyte, the type of matrix, and the method of analysis. For example, when a sample in a collaborative study contains 1 microgram of analyte per gram of sample, C is $10^{-6}$ and the estimated relative standard deviation is $R=2^{\left(1-0.5 \log 10^{-6}\right)}=16 \% \nonumber$ Example 14.3.5 What is the estimated relative standard deviation for the results of a collaborative study when the sample is pure analyte (100% w/w analyte)? Repeat for the case where the analyte’s concentration is 0.1% w/w.
Solution When the sample is 100% w/w analyte (C = 1) the estimated relative standard deviation is $R=2^{(1-0.5 \log 1)}=2 \% \nonumber$ We expect that approximately two-thirds of the participants in the collaborative study ($\pm 1 \sigma$) will report the analyte’s concentration within the range of 98% w/w to 102% w/w. If the analyte’s concentration is 0.1% w/w (C = 0.001), the estimated relative standard deviation is $R=2^{(1-0.5 \log 0.001)}=5.7 \% \nonumber$ and we expect that approximately two-thirds of the analysts will report the analyte’s concentration within the range of 0.094% w/w to 0.106% w/w. Of course, Equation \ref{14.11} only estimates the expected relative standard deviation. If the method’s relative standard deviation falls within a range of one-half to twice the estimated value, then it is acceptable for use by analysts in different laboratories. The percent relative standard deviation for a single analyst should be one-half to two-thirds of that for the variability between analysts. For details on Equation \ref{14.11}, see (a) Horwitz, W. Anal. Chem. 1982, 54, 67A–76A; (b) Hall, P.; Selinger, B. Anal. Chem. 1989, 61, 1465–1466; (c) Albert, R.; Horwitz, W. Anal. Chem. 1997, 69, 789–790; (d) “The Amazing Horwitz Function,” AMC Technical Brief 17, July 2004. For a discussion of the equation's limitations, see Linsinger, T. P. J.; Josephs, R. D. “Limitations of the Application of the Horwitz Equation,” Trends Anal. Chem. 2006, 25, 1125–1130, as well as a rebuttal (Thompson, M. “Limitations of the Application of the Horwitz Equation: A Rebuttal,” Trends Anal. Chem. 2007, 26, 659–661) and a response to the rebuttal (Linsinger, T. P. J.; Josephs, R. D. “Reply to Professor Michael Thompson’s Rebuttal,” Trends Anal. Chem. 2007, 26, 662–663).
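Because Equation \ref{14.11} is a simple function of C, it takes only one line of R to define; note that the logarithm is a base-10 logarithm. The calls below reproduce the estimates discussed above.

horwitz <- function(C) 2^(1 - 0.5*log10(C))   # percent relative standard deviation, Equation 14.11
horwitz(1)       # pure analyte (100% w/w): approximately 2%
horwitz(0.001)   # 0.1% w/w analyte: approximately 5.7%
horwitz(1e-6)    # 1 ppm analyte: approximately 16%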
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/14%3A_Developing_a_Standard_Method/14.03%3A_Validating_the_Method_as_a_Standard_Method.txt
Although the calculations for an analysis of variance are relatively straight-forward, they become tedious when working with large data sets. Both Excel and R include functions for completing an analysis of variance. In addition, R provides a function for identifying the source(s) of significant differences within the data set. Excel Excel’s Analysis ToolPak includes a tool to help you complete an analysis of variance. Let’s use the ToolPak to complete an analysis of variance on the data in Table 14.3.1. Enter the data from Table 14.3.1 into a spreadsheet as shown in Figure 14.4.1 . A B C D E 1 replicate analyst A analyst B analyst C analyst D 2 1 94.09 99.55 95.14 93.88 3 2 94.64 98.24 94.62 94.23 4 3 95.08 101.1 95.28 96.05 5 4 94.54 100.4 94.59 93.89 6 5 95.38 100.1 94.24 94.95 7 6 93.62 95.49 Figure 14.4.1 . Portion of a spreadsheet containing the data from Table 14.3.1. To complete the analysis of variance select Data Analysis... from the Tools menu, which opens a window entitled “Data Analysis.” Scroll through the window, select Anova: Single Factor from the available options and click OK. Place the cursor in the box for the “Input range” and then click and drag over the cells B1:E7. Select the radio button for “Grouped by: columns” and check the box for “Labels in the first row.” In the box for “Alpha” enter 0.05 for $\alpha$. Select the radio button for “Output range,” place the cursor in the box and click on an empty cell; this is where Excel will place the results. Clicking OK generates the information shown in Figure 14.4.2 . The small probability ($3.05 \times 10^{-9}$) of falsely rejecting the null hypothesis indicates that there is a significant source of variation between the analysts. R To complete an analysis of variance for the data in Table 14.3.1 using R, we first need to create several objects. The first object contains each result from Table 14.3.1. > results = c(94.090, 94.640, 95.080, 94.540, 95.380, 93.620, 99.550, 98.240, 101.100, 100.400, 100.100, 95.140, 94.620, 95.280, 94.590, 94.240, 93.880, 94.230, 96.050, 93.890, 94.950, 95.490) The second object contains labels that identify the source of each entry in the first object. The following code creates this object. > analyst = c(rep("a", 6), rep("b", 5), rep("c", 5), rep("d", 6)) Next, we combine the two objects into a table with two columns, one that contains the data (results) and one that contains the labels (analyst). > df = data.frame(results, labels = factor(analyst)) The command factor indicates that the object analyst contains the categorical factors for the analysis of variance. The command for an analysis of variance takes the following form anova(lm(data ~ factors), data = data.frame) where data and factors are the columns that contain the data and the categorical factors, and data.frame is the name we assigned to the data table. Figure 14.4.3 shows the resulting output. The small probability ($3.05 \times 10^{-9}$) of falsely rejecting the null hypothesis indicates that there is a significant source of variation between the analysts. Having found a significant difference between the analysts, we want to identify the source of this difference. R does not include Fisher’s least significant difference test, but it does include a function for a related method called Tukey’s honest significant difference test. The command for this test takes the following form > TukeyHSD(aov(lm(data ~ factors), data = data.frame), conf.level = 0.95) where data and factors are the columns that contain the data and the categorical factors, and data.frame is the name we assigned to the data table. Figure 14.4.4 shows the output of this command and its interpretation. The small probability values when comparing analyst B to each of the other analysts indicate that this is the source of the significant difference identified in the analysis of variance.
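Because R does not provide Fisher's least significant difference test directly, we can evaluate Equation 14.10 ourselves. The sketch below compares analysts A and B; it reuses the results and analyst objects created above and takes the within-sample variance of 0.631 from Example 14.3.3. The object names here are ours and are not part of any R package.

s2w <- 0.631                                   # within-sample variance from the ANOVA
xbarA <- mean(results[analyst == "a"])         # mean for analyst A
xbarB <- mean(results[analyst == "b"])         # mean for analyst B
nA <- sum(analyst == "a")                      # number of replicates for analyst A
nB <- sum(analyst == "b")                      # number of replicates for analyst B
t_exp <- abs(xbarA - xbarB)/sqrt(s2w) * sqrt(nA*nB/(nA + nB))   # Equation 14.10
t_exp > qt(0.95, df = 18)                      # one-tailed test; TRUE means a significant difference

Repeating the comparison for the other pairs of analysts reproduces the conclusions of Example 14.3.4.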
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/14%3A_Developing_a_Standard_Method/14.04%3A_Using_Excel_and_R_for_an_Analysis_of_Variance.txt
1. For each of the following equations determine the optimum response using a one-factor-at-a-time searching algorithm. Begin the search at (0,0) by first changing factor A, using a step-size of 1 for both factors. The boundary conditions for each response surface are 0 ≤ A ≤ 10 and 0≤ B ≤ 10. Continue the search through as many cycles as necessary until you find the optimum response. Compare your optimum response for each equation to the true optimum. Note: These equations are from Deming, S. N.; Morgan, S. L. Experimental Design: A Chemometric Approach, Elsevier: Amsterdam, 1987, and pseudo-three dimensional plots of the response surfaces can be found in their Figures 11.4, 11.5 and 11.14. (a) R = 1.68 + 0.24A + 0.56B – 0.04A2 – 0.04B2 $\mu_\text{opt} = (3, 7)$ (b) R = 4.0 – 0.4A + 0.08AB $\mu_\text{opt} = (10, 10)$ (c) R = 3.264 + 1.537A + 0.5664B – 0.1505A2 – 0.02734B2 – 0.05785AB $\mu_\text{opt} = (3.91, 6.22)$ 2. Use a fixed-sized simplex searching algorithm to find the optimum response for the equation in Problem 1c. For the first simplex, set one vertex at (0,0) with step sizes of one. Compare your optimum response to the true optimum. 3. Show that equation 14.1.3 and equation 14.1.4 are correct. 4. A 2k factorial design was used to determine the equation for the response surface in Problem 1b. The uncoded levels, coded levels, and the responses are shown in the following table. Determine the uncoded equation for the response surface. A B A* B* response 8 8 +1 +1 5.92 8 2 +1 –1 2.08 2 8 –1 +1 4.48 2 2 –1 –1 3.52 5. Koscielniak and Parczewski investigated the influence of Al on the determination of Ca by atomic absorption spectrophotometry using the 2k factorial design shown in the following table [Koscielniak, P.; Parczewski, A. Anal. Chim. Acta 1983, 153, 111–119]. [Ca2+] (ppm) [Al3+] (ppm) Ca* Al* response 10 160 +1 +1 54.92 10 0 +1 –1 98.44 4 16 –1 +1 19.18 4 0 –1 –1 38.52 (a) Determine the uncoded equation for the response surface. (b) If you wish to analyze a sample that is 6.0 ppm Ca2+, what is the maximum concentration of Al3+ that can be present if the error in the response must be less than 5.0%? 6. Strange studied a chemical reaction using a 23 factorial design [Strange, R. S. J. Chem. Educ. 1990, 67, 113–115]. factor high (+1) level low (–1) level X: temperature 140oC 120oC Y: catalyst type B type A Z: [reactant] 0.50 M 0.25 M run X* Y* Z* % yield 1 –1 –1 –1 28 2 +1 –1 –1 17 3 –1 +1 –1 41 4 +1 +1 –1 34 5 –1 –1 +1 56 6 +1 –1 +1 51 7 –1 +1 +1 42 8 +1 +1 +1 36 (a) Determine the coded equation for this data. (b) If $\beta$ terms of less than $\pm 1$ are insignificant, what main effects and what interaction terms in the coded equation are important? Write down this simpler form for the coded equation. (c) Explain why the coded equation for this data can not be transformed into an uncoded form. (d) Which is the better catalyst, A or B? (e) What is the yield if the temperature is set to 125oC, the concentration of the reactant is 0.45 M, and we use the appropriate catalyst? 7. Pharmaceutical tablets coated with lactose often develop a brown discoloration. The primary factors that affect the discoloration are temperature, relative humidity, and the presence of a base acting as a catalyst. The following data have been reported for a 23 factorial design [Armstrong, N. A.; James, K. C. Pharmaceutical Experimental Design and Interpretation, Taylor and Francis: London, 1996 as cited in Gonzalez, A. G. Anal. Chim. Acta 1998, 360, 227–241]. 
factor high (+1) level low (–1) level X: benzocaine present absent Y: temperature 40oC 25oC Z: relative humidity 75% 50% run X* Y* Z* color (arb. unit) 1 –1 –1 –1 1.55 2 +1 –1 –1 5.40 3 –1 +1 –1 3.50 4 +1 +1 –1 6.75 5 –1 –1 +1 2.45 6 +1 –1 +1 3.60 7 –1 +1 +1 3.05 8 +1 +1 +1 7.10 (a) Determine the coded equation for this data. (b) If $\beta$ terms of less than 0.5 are insignificant, what main effects and what interaction terms in the coded equation are important? Write down this simpler form for the coded equation. 8. The following data for a 23 factorial design were collected during a study of the effect of temperature, pressure, and residence time on the % yield of a reaction [Akhnazarova, S.; Kafarov, V. Experimental Optimization in Chemistry and Chemical Engineering, MIR Publishers: Moscow, 1982 as cited in Gonzalez, A. G. Anal. Chim. Acta 1998, 360, 227–241]. factor high (+1) level low (–1) level X: temperature 200oC 100oC Y: pressure 0.6 MPa 0.2 MPa Z: residence time 20 min 10 min run X* Y* Z* % yield 1 –1 –1 –1 2 2 +1 –1 –1 6 3 –1 +1 –1 4 4 +1 +1 –1 8 5 –1 –1 +1 10 6 +1 –1 +1 18 7 –1 +1 +1 8 8 +1 +1 +1 12 (a) Determine the coded equation for this data. (b) If $\beta$ terms of less than 0.5 are insignificant, what main effects and what interaction terms in the coded equation are important? Write down this simpler form for the coded equation. (c) Three runs at the center of the factorial design—a temperature of 150oC, a pressure of 0.4 MPa, and a residence time of 15 min—give percent yields of 8%, 9%, and 8.8%. Determine if a first-order empirical model is appropriate for this system at $\alpha = 0.05$. 9. Duarte and colleagues used a factorial design to optimize a flow-injection analysis method for determining penicillin [Duarte, M. M. M. B.; de O. Netro, G.; Kubota, L. T.; Filho, J. L. L.; Pimentel, M. F.; Lima, F.; Lins, V. Anal. Chim. Acta 1997, 350, 353–357]. Three factors were studied: reactor length, carrier flow rate, and sample volume, with the high and low values summarized in the following table. factor high (+1) level low (–1) level X: reactor length 1.3 cm 2.0 cm Y: carrier flow rate 1.6 mL/min 2.2 mL/min Z: sample volume 100 $\mu$L 150 $\mu$L The authors determined the optimum response using two criteria: the greatest sensitivity, as determined by the change in potential for the potentiometric detector, and the largest sampling rate. The following table summarizes their optimization results. run X* Y* Z* $\Delta E$ (mV) sample/h 1 –1 –1 –1 37.45 21.5 2 +1 –1 –1 31.70 26.0 3 –1 +1 –1 32.10 30.0 4 +1 +1 –1 27.30 33.0 5 –1 –1 +1 39.85 21.0 6 +1 –1 +1 32.85 19.5 7 –1 +1 +1 35.00 30.0 8 +1 +1 +1 32.15 34.0 (a) Determine the coded equation for the response surface where $\Delta E$ is the response. (b) Determine the coded equation for the response surface where sample/h is the response. (c) Based on the coded equations in (a) and in (b), do conditions that favor sensitivity also improve the sampling rate? (d) What conditions would you choose if your goal is to optimize both sensitivity and sampling rate? 10. Here is a challenge! McMinn, Eatherton, and Hill investigated the effect of five factors for optimizing an H2-atmosphere flame ionization detector using a 25 factorial design [McMinn, D. G.; Eatherton, R. L.; Hill, H. H. Anal. Chem. 1984, 56, 1293–1298]. 
The factors and their levels were factor high (+1) level low (–1) level A: H2 flow rate 1460 mL/min 1382 mL/min B: SiH4 20.0 ppm 12.2 ppm C: O2 + N2 flow rate 255 mL/min 210 mL/min D: O2/N2 ratio 1.36 1.19 E: electrode height 75 (arb. unit) 55 (arb. unit) The coded (“+” = +1, “–” = –1) factor levels and responses, R, for the 32 experiments are shown in the following table run A* B* C* D* E* R run A* B* C* D* E* R 1 0.36 17 + 0.39 2 + 0.51 18 + + 0.45 3 + 0.15 19 + + 0.32 4 + + 0.39 20 + + + 0.25 5 + 0.79 21 + + 0.18 6 + + 0.83 22 + + + 0.29 7 + + 0.74 23 + + + 0.07 8 + + + 0.69 24 + + + + 0.19 9 + 0.60 25 + + 0.53 10 + + 0.82 26 + + + 0.60 11 + + 0.42 27 + + + 0.36 12 + + + 0.59 28 + + + + 0.43 13 + + 0.96 29 + + + 0.23 14 + + + 0.87 30 + + + + 0.51 15 + + + 0.76 31 + + + + 0.13 16 + + + + 0.74 32 + + + + + 0.43 (a) Determine the coded equation for this response surface, ignoring $\beta$ terms less than $\pm 0.03$. (b) A simplex optimization of this system finds optimal values for the factors of A = 2278 mL/min, B = 9.90 ppm, C = 260.6 mL/min, and D = 1.71. The value of E was maintained at its high level. Are these values consistent with your analysis of the factorial design. 11. A good empirical model provides an accurate picture of the response surface over the range of factor levels within the experimental design. The same model, however, may yield an inaccurate prediction for the response at other factor levels. For this reason, an empirical model, is tested before it is extrapolated to conditions other than those used in determining the model. For example, Palasota and Deming studied the effect of the relative amounts of H2SO4 and H2O2 on the absorbance of solutions of vanadium using the following central composite design [Palasota, J. A.; Deming, S. N. J. Chem. Educ. 1992, 62, 560–563]. run drops of 1% H2SO4 drops of 20% H2O2 1 15 22 2 10 20 3 20 20 4 8 15 5 15 15 6 15 15 7 15 15 8 15 15 9 22 15 10 10 10 11 20 10 12 15 8 The reaction of H2SO4 and H2O2 generates a red-brown solution whose absorbance is measured at a wavelength of 450 nm. A regression analysis on their data yields the following uncoded equation for the response (absorbance $\times$ 1000). $R = 835.90 - 36.82 X_1 - 21.34 X_2 + 0.52 X_1^2 + 0.15 X_2^2 + 0.98 X_1 X_2 \nonumber$ where X1 is the drops of H2O2, and X2 is the drops of H2SO4. Calculate the predicted absorbances for 10 drops of H2O2 and 0 drops of H2SO4, 0 drops of H2O2 and 10 drops of H2SO4, and for 0 drops of each reagent. Are these results reasonable? Explain. What does your answer tell you about this empirical model? 12. A newly proposed method is tested for its single-operator characteristics. To be competitive with the standard method, the new method must have a relative standard deviation of less than 10%, with a bias of less than 10%. To test the method, an analyst performs 10 replicate analyses on a standard sample known to contain 1.30 ppm of analyte. The results for the 10 trials are 1.25 ppm, 1.26 ppm, 1.29 ppm, 1.56 ppm, 1.46 ppm, 1.23 ppm, 1.49 ppm, 1.27 ppm, 1.31 ppm, and 1.43 ppm. Are the single operator characteristics for this method acceptable? 13. A proposed gravimetric method was evaluated for its ruggedness by varying the following factors. 
Factor A: sample size A = 1 g a = 1.1 g Factor B: pH B = 6.5 b = 6.0 Factor C: digestion time C = 3 h c = 1 h Factor D: number of rinses D = 3 d = 5 Factor E: precipitant E = reagent 1 e = reagent 2 Factor F: digestion temperature F = 50oC f = 60oC Factor G: drying temperature G = 100oC g = 140oC A standard sample that contains a known amount of analyte is carried through the procedure using the experimental design in Table 14.3.1. The percentage of analyte actually found in the eight trials are as follows: R1 = 98.9, R2 = 98.5, R3 = 97.7, R4 = 97.0, R5 = 98.8, R6 = 98.5, R7 = 97.7, and R8 = 97.3. Determine which factors, if any, appear to have a significant affect on the response, and estimate the expected standard deviation for the method. 14. The two-sample plot for the data in Example 14.3.1 is shown in Figure 14.3.4. Identify the analyst whose work is (a) the most accurate, (b) the most precise, (c) the least accurate, and (d) the least precise. 15. Chichilo reports the following data for the determination of the %w/w Al in two samples of limestone [Chichilo, P. J. J. Assoc. Offc. Agr. Chemists 1964, 47, 1019 as reported in Youden, W. J. “Statistical Techniques for Collaborative Tests,” in Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists: Washington, D. C., 1975]. analyst sample 1 sample 2 1 1.35 1.57 2 1.35 1.33 3 1.34 1.47 4 1.50 1.60 5 1.52 1.62 6 1.39 1.52 7 1.30 1.36 8 1.32 1.33 Construct a two-sample plot for this data and estimate values for $\sigma_\text{rand}$ and for $\sigma_\text{syst}$. 16. The importance of between-laboratory variability on the results of an analytical method are determined by having several laboratories analyze the same sample. In one such study, seven laboratories analyzed a sample of homogenized milk for a selected aflatoxin [Massart, D. L.; Vandeginste, B. G. M; Deming, S. N.; Michotte, Y.; Kaufman, L. Chemometrics: A Textbook, Elsevier: Amsterdam, 1988]. The results, in ppb, are summarized below. lab A lab B lab C lab D lab E lab F lab G 1.6 4.6 1.2 1.5 6.0 6.2 3.3 2.9 2.8 1.9 2.7 3.9 3.8 3.8 3.5 3.0 2.9 3.4 4.3 5.5 5.5 4.5 4.5 1.1 2.0 5.8 4.2 4.9 2.2 3.1 2.9 3.4 4.0 5.3 4.5 (a) Determine if the between-laboratory variability is significantly greater than the within-laboratory variability at $\alpha = 0.05$. If the between-laboratory variability is significant, then determine the source(s) of that variability. (b) Estimate values for $\sigma_\text{rand}^2$ and for $\sigma_\text{syst}^2$. 17. Show that the total sum-of-squares (SSt) is the sum of the within-sample sum-of-squares (SSw) and the between-sample sum-of-squares (SSb). See Table 14.3.2 for the relevant equations. 18. Eighteen analytical students are asked to determine the %w/w Mn in a sample of steel, with the results shown here. 0.26% 0.28% 0.27% 0.24% 0.26% 0.25% 0.26% 0.28% 0.25% 0.24% 0.26% 0.25% 0.29% 0.24% 0.27% 0.23% 0.26% 0.24% (a) Given that the steel sample is 0.26% w/w Mn, estimate the expected relative standard deviation for the class’ results. (b) Are the actual results consistent with the estimated relative standard deviation?
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/14%3A_Developing_a_Standard_Method/14.05%3A_Problems.txt
The following set of experiments provide practical examples of the optimization of experimental conditions. Examples include simplex optimization, factorial designs for developing empirical models of response surfaces, and fitting experimental data to theoretical models of the response surface. • Amenta, D. S.; Lamb, C. E.; Leary, J. J. “Simplex Optimization of Yield of sec-Butylbenzene in a Friedel-Crafts Alkylation,” J. Chem. Educ. 1979, 56, 557–558. • Gozálvez, J. M.; García-Diaz, J. C. “Mixture Design Experiments Applied to the Formulation of Colo- rant Solutions,” J. Chem. Educ. 2006, 83, 647–650. • Harvey, D. T.; Byerly, S.; Bowman, A.; Tomlin, J. “Optimization of HPLC and GC Separations Using Response Surfaces,” J. Chem. Educ. 1991, 68, 162–168. • Krawcyzk, T.; Shupska, R.; Baj, S. “Applications of Chemiluminescence in the Teaching of Experimental Design,” J. Chem. Educ. 2015, 92, 317–321. • Leggett, D. L. “Instrumental Simplex Optimization,” J. Chem. Educ. 1983, 60, 707–710. • Oles, P. J. “Fractional Factorial Experimental Design as a Teaching Tool for Quantitative Analysis,” J. Chem. Educ. 1998, 75, 357–359. • Palasota, J. A.; Deming, S.N. “Central Composite Experimental Design,” J. Chem. Educ. 1992, 69, 560–561. • Sangsila, S.; Labinaz, G.; Poland, J. S.; vanLoon, G. W. “An Experiment on Sequential Simplex Optimization of an Atomic Absorption Analysis Procedure,” J. Chem. Educ. 1989, 66, 351–353. • Santos-Delgado, M. J.; Larrea-Tarruella, L. “A Didactic Experience of Statistical Analysis for the De- termination of Glycine in a Nonaqueous Medium using ANOVA and a Computer Program,” J. Chem. Educ. 2004, 81, 97–99. • Shavers, C. L.; Parsons, M. L.; Deming, S. N. “Simplex Optimization of Chemical Systems,” J. Chem Educ. 1979, 56, 307–309. • Stieg, S. “A Low-Noise Simplex Optimization Experiment,” J. Chem. Educ. 1986, 63, 547–548. • Stolzberg, R. J. “Screening and Sequential Experimentation: Simulations and Flame Atomic Absorption Spectrometry Experiments,” J. Chem. Educ. 1997, 74, 216–220. • Van Ryswyk, H.; Van Hecke, G. R. “Attaining Optimal Conditions,” J. Chem. Educ. 1991, 66, 878– 882. The following texts and articles provide an excellent discussion of optimization methods based on searching algorithms and mathematical modeling use factorial designs, including a discussion of the relevant calculations. A few of these sources discuss other types of experimental designs. • Analytical Methods Committee “Experimental design and optimization (1): an introduction to some basic concepts,” AMCTB 24, 2006. • Analytical Methods Committee “Experimental design and optimization (2): handling uncontrolled factors,” AMCTB 26, 2006. • Analytical Methods Committee “Experimental design and optimization (3): some fractional factorial designs,” AMCTB 36, 2009. • Analytical Methods Committee “Experimental design and optimisation (4): Plackett–Burman de- signs,” AMCTB 55, 2013. • Bayne, C. K.; Rubin, I. B. Practical Experimental Designs and Optimization Methods for Chemists, VCH Publishers: Deerfield Beach, FL; 1986. • Bezerra, M. A.; Santelli, R. E.; Oliveira, E. P.; Villar, L. S.; Escaleira, L. A. “Response surface methodology (RSM) as a tool for optimization in analytical chemistry,” Talanta 2008, 76, 965–977. • Box, G. E. P. “Statistical Design in the Study of Analytical Methods,” Analyst 1952, 77, 879–891. • Deming, S. N.; Morgan, S. L. Experimental Design: A Chemometric Approach, Elsevier: Amsterdam, 1987. • Ferreira, S. L. C.; dos Santos, W. N. L.; Quintella, C. M.; Neto, B. 
B.; Bosque-Sendra, J. M. “Doehlert Matrix: A Chemometric Tool for Analytical Chemistry—Review,” Talanta 2004, 63, 1061–1067. • Ferreira, S. L. C.; Bruns, R. E.; Ferreira, H. S.; Matos, G. D.; David, J. M.; Brandão, G. C.; da Silva, E. G. P.; Portugal, L. A.; dos Reis, P. S.; Souza, A. S.; dos Santos, W. N. L. “Box-Behnken Design: An Alternative for the Optimization of Analytical Methods,” Anal. Chim. Acta 2007, 597, 179–186. • Gonzalez, A. G. “Two Level Factorial Experimental Designs Based on Multiple Linear Regression Models: A Tutorial Digest Illustrated by Case Studies,” Anal. Chim. Acta 1998, 360, 227–241. • Goupy, J. “What Kind of Experimental Design for Finding and Checking Robustness of Analytical Methods?” Anal. Chim. Acta 2005, 544, 184–190. • Hendrix, C. D. “What Every Technologist Should Know About Experimental Design,” Chemtech 1979, 9, 167–174. • Hendrix, C. D. “Through the Response Surface with Test Tube and Pipe Wrench,” Chemtech 1980, 10, 488–497. • Leardi, R. “Experimental Design: A Tutorial,” Anal. Chim. Acta 2009, 652, 161–172. • Liang, Y. “Comparison of Optimization Methods,” Chromatography Review 1985, 12(2), 6–9. • Morgan, E. Chemometrics: Experimental Design, John Wiley and Sons: Chichester, 1991. • Walters, F. H.; Morgan, S. L.; Parker, L. P., Jr.; Deming, S. N. Sequential Simplex Optimization, CRC Press: Boca Raton, FL, 1991. The following texts provide additional information about ANOVA calculations, including discussions of two-way analysis of variance. • Graham, R. C. Data Analysis for the Chemical Sciences, VCH Publishers: New York, 1993. • Miller, J. C.; Miller, J. N. Statistics for Analytical Chemistry, Ellis Horwood Limited: Chichester, 1988. The following resources provide additional information on the validation of analytical methods. • Gonzalez, A. G.; Herrador, M. A. “A Practical Guide to Analytical Method Validation, Including Measurement Uncertainty and Accuracy Profiles,” Trends Anal. Chem. 2007, 26, 227–238. • Thompson, M.; Ellison, S. L. R.; Wood, R. “Harmonized Guidelines for Single-Laboratory Validation of Analytical Methods,” Pure Appl. Chem. 2002, 74, 835–855.
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/14%3A_Developing_a_Standard_Method/14.06%3A_Additional_Resources.txt
Chapter Summary One of the goals of analytical chemistry is to develop new analytical methods that are accepted as standard methods. In this chapter we have considered how a standard method is developed, including finding the optimum experimental conditions, verifying that the method produces acceptable precision and accuracy, and validating the method for general use. To optimize a method we try to find the combination of experimental parameters that produces the best result or response. We can visualize this process as being similar to finding the highest point on a mountain. In this analogy, the mountain’s topography corresponds to a response surface, which is a plot of the system’s response as a function of the factors under our control. One method for finding the optimum response is to use a searching algorithm. In a one-factor-at-a-time optimization, we change one factor while holding constant all other factors until there is no further improvement in the response. The process continues with the next factor, cycling through the factors until there is no further improvement in the response. This approach to finding the optimum response often is effective, but usually is not efficient. A searching algorithm that is both effective and efficient is a simplex optimization, the rules of which allow us to change the levels of all factors simultaneously. Another approach to optimizing a method is to develop a mathematical model of the response surface. Such models can be theoretical, in that they are derived from a known chemical and physical relationship between the response and its factors. Alternatively, we can develop an empirical model, which does not have a firm theoretical basis, by fitting an empirical equation to our experimental data. One approach is to use a 2k factorial design in which each factor is tested at both a high level and a low level, and paired with the high level and the low level for all other factors. After optimizing a method it is necessary to demonstrate that it can produce acceptable results. Verifying a method usually includes establishing single-operator characteristics, the blind analysis of standard samples, and determining the method’s ruggedness. Single-operator characteristics include the method’s precision, accuracy, and detection limit when used by a single analyst. To test against possible bias on the part of the analyst, he or she analyzes a set of blind samples in which the analyst does not know the concentration of analyte. Finally, we use ruggedness testing to determine which experimental factors must be carefully controlled to avoid unexpectedly large determinate or indeterminate sources of error. The last step in establishing a standard method is to validate its transferability to other laboratories. An important step in the process of validating a method is collaborative testing, in which a common set of samples is analyzed by different laboratories. In a well-designed collaborative test it is possible to establish limits for the method’s precision and accuracy. 
Key Terms 2k factorial design, analysis of variance, between-sample variance, blind analysis, central composite design, collaborative testing, dependent, effective, efficiency, empirical model, factor, factor level, Fisher’s least significant difference, fixed-size simplex optimization, global optimum, independent, local optimum, one-factor-at-a-time optimization, response, response surface, ruggedness testing, searching algorithm, simplex, standard method, theoretical model, validation, variable-sized simplex optimization, within-sample variance
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/14%3A_Developing_a_Standard_Method/14.07%3A_Chapter_Summary_and_Key_Terms.txt
In Chapter 14 we discussed the process of developing a standard method, including optimizing the experimental procedure, verifying that the method produces acceptable precision and accuracy in the hands of a single analyst, and validating the method for general use by the broader analytical community. Knowing that a method meets suitable standards is important if we are to have confidence in our results. Even so, using a standard method does not guarantee that the result of an analysis is acceptable. In this chapter we introduce the quality assurance procedures used in industry and government labs for the real-time monitoring of routine chemical analyses. • 15.1: The Analytical Perspective Revisited As we noted in Chapter 1, each area of chemistry brings a unique perspective to the broader discipline of chemistry. For analytical chemistry this perspective is as an approach to solving problems. • 15.2: Quality Control Quality control encompasses all activities that bring an analysis into statistical control. The most important facet of quality control is a set of written directives that describe relevant laboratory-specific, technique-specific, sample-specific, method-specific, and protocol-specific operations. • 15.3: Quality Assessment The written directives of a quality control program are a necessary, but not a sufficient, condition for obtaining and maintaining a state of statistical control. Although quality control directives explain how to conduct an analysis, they do not indicate whether the system is under statistical control. This is the role of quality assessment, the second component of a quality assurance program. • 15.4: Evaluating Quality Assurance Data Now we turn our attention to how we incorporate this quality assessment data into a complete quality assurance program. There are two general approaches to developing a quality assurance program: a prescriptive approach, in which we prescribe an exact method of quality assessment, and a performance-based approach in which we can use any form of quality assessment, provided that we can demonstrate an acceptable level of statistical control. • 15.5: Problems End-of-chapter problems to test your understanding of topics in this chapter. • 15.6: Additional Resources A compendium of resources to accompany topics in this chapter. • 15.7: Chapter Summary and Key Terms Summary of chapter's main topics and a list of key terms introduced in the chapter. Thumbnail: Examples of property control charts that show a sequence of results. 15: Quality Assurance As we noted in Chapter 1, each area of chemistry brings a unique perspective to the broader discipline of chemistry. For analytical chemistry this perspective is as an approach to solving problems, one representation of which is shown in Figure 15.1.1 . Figure 15.1.1 is the same as Figure 1.2.1. You may wish to review our earlier discussion of this figure and of the analytical approach to solving problems. If you examine an analytical method it often seems that its development was a straightforward process of moving from a problem to its solution. Unfortunately—or, perhaps, fortunately for those who consider themselves analytical chemists!—developing an analytical method seldom is routine. Even a well-established standard analytical method, carefully followed, can yield poor data. An important feature of the analytical approach in Figure 15.1.1 is the feedback loop that includes steps 2, 3, and 4, in which the outcome of one step may lead us to reevaluate the other steps.
For example, after standardizing a spectrophotometric method for the analysis of iron (step 3), we may find that its sensitivity does not meet our original design criteria (step 2). In response, we might choose a different method, change the original design criteria, or work to improve the sensitivity. The feedback loop in Figure 15.1.1 is maintained by a quality assurance program, whose objective is to control systematic and random sources of error [see, for example (a) Taylor, J. K. Anal. Chem. 1981, 53, 1588A–1596A; (b) Taylor, J. K. Anal. Chem. 1983, 55, 600A–608A; (c) Taylor, J. K. Am. Lab October 1985, 53, 67–75; (d) Nadkarni, R. A. Anal. Chem. 1991, 63, 675A–682A; (e) Valcárcel, M.; Ríos, A. Trends Anal. Chem. 1994, 13, 17–23.] The underlying assumption of a quality assurance program is that results obtained when an analysis is under statistical control are free of bias and are characterized by well-defined confidence intervals. When used properly, a quality assurance program identifies the practices necessary to bring a system into statistical control, allows us to determine if the system remains in statistical control, and suggests a course of corrective action if the system falls out of statistical control. An analysis is in a state of statistical control when it is reproducible and free from bias. The focus of this chapter is on the two principal components of a quality assurance program: quality control and quality assessment. In addition, we will give considerable attention to the use of control charts for monitoring the quality of analytical data.
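Although control charts are not discussed in detail until later in this chapter, the arithmetic behind a property control chart is straightforward: replicate results for a quality control sample define a mean and a standard deviation, and the chart's warning and control limits typically are drawn at two and at three standard deviations around that mean. The following R sketch uses made-up quality control results to illustrate the calculation; the numbers have no significance beyond the example.

qc <- c(2.01, 1.98, 2.05, 1.96, 2.03, 2.00, 1.99, 2.04, 1.97, 2.02)   # hypothetical QC results (ppm)
center <- mean(qc)
s <- sd(qc)
limits <- c(LCL = center - 3*s, LWL = center - 2*s, CL = center,
            UWL = center + 2*s, UCL = center + 3*s)
limits                                         # center line, warning limits, and control limits
plot(qc, type = "b", ylim = range(limits),
     xlab = "sample number", ylab = "result (ppm)")
abline(h = limits, lty = 2)                    # add the limits to the chart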
Quality control encompasses all activities that bring an analysis into statistical control. The most important facet of quality control is a set of written directives that describe relevant laboratory-specific, technique-specific, sample-specific, method-specific, and protocol-specific operations. Good laboratory practices (GLPs) describe the general laboratory operations that we must follow in any analysis. These practices include properly recording data and maintaining records, using chain-of-custody forms for samples, specifying and purifying chemical reagents, preparing commonly used reagents, cleaning and calibrating glassware, training laboratory personnel, and maintaining the laboratory facilities and general laboratory equipment.

For one example of quality control, see Keith, L. H.; Crummett, W.; Deegan, J., Jr.; Libby, R. A.; Taylor, J. K.; Wentler, G. “Principles of Environmental Analysis,” Anal. Chem. 1983, 55, 2210–2218. This article describes guidelines developed by the Subcommittee on Environmental Analytical Chemistry, a subcommittee of the American Chemical Society’s Committee on Environmental Improvement.

Good measurement practices (GMPs) describe those operations specific to a technique. In general, GMPs provide instructions for maintaining, calibrating, and using equipment and instrumentation. For example, a GMP for a titration describes how to calibrate the buret (if required), how to fill the buret with titrant, the correct way to read the volume of titrant in the buret, and the correct way to dispense the titrant.

The directions for analyzing a specific analyte in a specific matrix are described by a standard operations procedure (SOP). The SOP indicates how we process the sample in the laboratory, how we separate the analyte from potential interferents, how we standardize the method, how we measure the analytical signal, how we transform the data into the desired result, and how we use quality assessment tools to maintain quality control. If the laboratory is responsible for sampling, then the SOP also states how we must collect, process, and preserve the sample in the field. An SOP may be developed and used by a single laboratory, or it may be a standard procedure approved by an organization such as the American Society for Testing Materials or the Federal Food and Drug Administration. A typical SOP is provided in the following example.

Example 15.2.1

Provide an SOP for the determination of cadmium in lake sediments using atomic absorption spectroscopy and a normal calibration curve.

Solution

Collect sediment samples using a bottom grab sampler and store them at 4 °C in acid-washed polyethylene bottles during transportation to the laboratory. Dry the samples to constant weight at 105 °C and grind them to a uniform particle size. Extract the cadmium in a 1-g sample of sediment by adding the sediment and 25 mL of 0.5 M HCl to an acid-washed 100-mL polyethylene bottle and shaking for 24 h. After filtering, analyze the sample by atomic absorption spectroscopy using an air–acetylene flame, a wavelength of 228.8 nm, and a slit width of 0.5 nm. Prepare a normal calibration curve using five standards with nominal concentrations of 0.20, 0.50, 1.00, 2.00, and 3.00 ppm. Periodically check the accuracy of the calibration curve by analyzing the 1.00-ppm standard. An accuracy of $\pm$ 10% is considered acceptable.

Although an SOP provides a written procedure, it is not necessary to follow the procedure exactly as long as we are careful to identify any modifications.
On the other hand, we must follow all instructions in a protocol for a specific purpose (PSP)—the most detailed of the written quality control directives—before an agency or a client will accept our results. In many cases the required elements of a PSP are established by the agency that sponsors the analysis. For example, a lab working under contract with the Environmental Protection Agency must develop a PSP that addresses such items as sampling and sample custody, frequency of calibration, schedules for the preventive maintenance of equipment and instrumentation, and management of the quality assurance program. Two additional aspects of a quality control program deserve mention. The first is that the individuals responsible for collecting and analyzing the samples can critically examine and reject individual samples, measurements, and results. For example, when analyzing sediments for cadmium (see the SOP in Example 15.2.1 ) we might choose to screen sediment samples, discarding a sample that contains foreign objects—such as rocks, twigs, or trash—replacing it with an additional sample. If we observe a sudden change in the performance of the atomic absorption spectrometer, we may choose to reanalyze the affected samples. We may also decide to reanalyze a sample if the result of its analysis clearly is unreasonable. By identifying those samples, measurements, and results subject to gross systematic errors, inspection helps control the quality of an analysis. The second additional consideration is the certification of an analyst’s competence to perform the analysis for which he or she is responsible. Before an analyst is allowed to perform a new analytical method, he or she may be required to analyze successfully an independent check sample with acceptable accuracy and precision. The check sample is similar in composition to samples that the analyst will analyze later, with a concentration that is 5 to 50 times that of the method’s detection limit.
The written directives of a quality control program are a necessary, but not a sufficient condition for obtaining and maintaining a state of statistical control. Although quality control directives explain how to conduct an analysis, they do not indicate whether the system is under statistical control. This is the role of quality assessment, the second component of a quality assurance program. The goals of quality assessment are to determine when an analysis has reached a state of statistical control, to detect when an analysis falls out of statistical control, and to suggest possible reasons for this loss of statistical control. For convenience, we divide quality assessment into two categories: internal methods coordinated within the laboratory, and external methods organized and maintained by an outside agency.

Internal Methods of Quality Assessment

The most useful methods for quality assessment are those coordinated by the laboratory, which provide immediate feedback about the analytical method’s state of statistical control. Internal methods of quality assessment include the analysis of duplicate samples, the analysis of blanks, the analysis of standard samples, and spike recoveries.

Analysis of Duplicate Samples

An effective method for determining the precision of an analysis is to analyze duplicate samples. Duplicate samples are obtained by dividing a single gross sample into two parts, although in some cases the duplicate samples are independently collected gross samples. We report the results for the duplicate samples, X1 and X2, by determining the difference, d, or the relative difference, (d)r, between the two samples

$d = X_1 - X_2 \nonumber$

$(d)_r = \frac {d} {(X_1 + X_2)/2} \times 100 \nonumber$

and comparing to an accepted value, such as those in Table 15.3.1 for the analysis of waters and wastewaters. Alternatively, we can estimate the standard deviation using the results for a set of n duplicates

$s = \sqrt{\frac {\sum_{i = 1}^n d_i^2} {2n}} \nonumber$

where di is the difference between the ith pair of duplicates. The degrees of freedom for the standard deviation is the same as the number of duplicate samples. If we combine duplicate samples from several sources, then the precision of the measurement process must be approximately the same for each.

Table 15.3.1: Quality Assessment Limits for the Analysis of Waters and Wastewaters

analyte | $(d)_r$: [analyte] < 20 $\times$ MDL ($\pm$%) | $(d)_r$: [analyte] > 20 $\times$ MDL ($\pm$%) | spike recovery limit (%)
acids | 40 | 20 | 60–140
anions | 25 | 10 | 80–120
bases or neutrals | 40 | 20 | 70–130
carbamate pesticides | 40 | 20 | 50–150
herbicides | 40 | 20 | 40–160
metals | 25 | 10 | 80–120
other inorganics | 25 | 10 | 80–120
volatile organics | 40 | 20 | 70–130

Abbreviation: MDL = method's detection limit
Source: Table 1020.1 in Standard Methods for the Analysis of Water and Wastewater, American Public Health Association: Washington, D. C., 18th Ed. 1992.

Example 15.3.1

To evaluate the precision for the determination of potassium in blood serum, duplicate analyses were performed on six samples, yielding the following results in mg K/L.

Table of potassium tests in blood serum

duplicate | $X_1$ | $X_2$
1 | 160 | 147
2 | 196 | 202
3 | 207 | 196
4 | 185 | 193
5 | 172 | 188
6 | 133 | 119

Estimate the standard deviation for the analysis.
Solution To estimate the standard deviation we first calculate the difference, $d$, and the squared difference, $d^{2}$, for each duplicate. The results of these calculations are summarized in the following table. Table of potassium tests in blood serum duplicate $d=X_{1}-X_{2}$ $d^{2}$ 1 13 169 2 –6 36 3 11 121 4 –8 64 5 –16 256 6 14 196 Finally, we calculate the standard deviation $s=\sqrt{\frac{169+36+121+64+256+196}{2 \times 6}}=8.4 \nonumber$ Exercise 15.3.1 To evaluate the precision of a glucometer—a device a patient uses at home to monitor his or her blood glucose level—duplicate analyses are performed on samples drawn from five individuals, yielding the following results in mg glucose/100 mL. duplicate $X_1$ $X_2$ 1 148.5 149.1 2 96.5 98.8 3 174.9 174.5 4 118.1 118.9 5 72.7 70.4 Estimate the standard deviation for the analysis. Answer To estimate the standard deviation we first calculate the difference, d, and the squared difference, $d^{2}$, for each duplicate. The results of these calculations are summarized in the following table. duplicate $d=X_{1}-X_{2}$ $d^{2}$ 1 –0.6 0.36 2 –2.3 5.29 3 0.4 0.16 4 –0.8 0.64 5 2.3 5.29 Finally, we calculate the standard deviation. $s=\sqrt{\frac{0.36+5.29+0.16+0.64+5.29}{2 \times 5}}=1.08 \nonumber$ Analysis of Blanks We introduced the use of a blank in Chapter 3 as a way to correct the signal for contributions from sources other than the analyte. The most common blank is a method blank in which we take an analyte free sample through the analysis using the same reagents, glassware, and instrumentation. A method blank allows us to identify and to correct systematic errors due to impurities in the reagents, contaminated glassware, and poorly calibrated instrumentation. At a minimum, a new method blank is analyzed whenever we prepare a new reagent, or after we analyze a sample with a high concentration of analyte as residual carryover of analyte may produce a positive determinate error. When we collect samples in the field, additional blanks are needed to correct for potential sampling errors [Keith, L. H. Environmental Sampling and Analysis: A Practical Guide, Lewis Publishers: Chelsea, MI, 1991]. A field blank is an analyte-free sample carried from the laboratory to the sampling site. At the sampling site the blank is transferred to a clean sample container, which exposes it to the local environment. The field blank is then preserved and transported back to the laboratory for analysis. A field blank helps identify systematic errors due to sampling, transport, and analysis. A trip blank is an analyte-free sample carried from the laboratory to the sampling site and back to the laboratory without being opened. A trip blank helps to identify systematic errors due to cross-contamination of volatile organic compounds during transport, handling, storage, and analysis. A method blank also is called a reagent blank. The contamination of reagents over time is a significant concern. The regular use of a method blank compensates for this contamination. Analysis of Standards Another tool for monitoring an analytical method’s state of statistical control is to analyze a standard that contains a known concentration of analyte. A standard reference material (SRM) is the ideal choice, provided that the SRM’s matrix is similar to that of our samples. A variety of SRMs are available from the National Institute of Standards and Technology (NIST). 
If a suitable SRM is not available, then we can use an independently prepared synthetic sample if it is prepared from reagents of known purity. In all cases, the analyte’s experimentally determined concentration in the standard must fall within predetermined limits before the analysis is considered under statistical control. Table 4.2.6 in Chapter 4 provides a summary of SRM 2346, a standard sample of Gingko biloba leaves with certified values for the concentrations of flavonoids, terpene ketones, and toxic elements, such as mercury and lead. Spike Recoveries One of the most important quality assessment tools is the recovery of a known addition, or spike, of analyte to a method blank, a field blank, or a sample. To determine a spike recovery, the blank or sample is split into two portions and a known amount of a standard solution of analyte is added to one portion. The analyte’s concentration is determined for both the spiked, F, and unspiked portions, I, and the percent recovery, %R, is calculated as $\% R=\frac{F-I}{A} \times 100 \nonumber$ where A is the concentration of analyte added to the spiked portion. Example 15.3.2 A spike recovery for the analysis of chloride in well water was performed by adding 5.00 mL of a 250.0 ppm solution of Cl to a 50-mL volumetric flask and diluting to volume with the sample. An unspiked sample was prepared by adding 5.00 mL of distilled water to a separate 50-mL volumetric flask and diluting to volume with the sample. Analysis of the sample and the spiked sample return chloride concentrations of 18.3 ppm and 40.9 ppm, respectively. Determine the spike recovery. Solution To calculate the concentration of the analyte added in the spike, we take into account the effect of dilution. $A=250.0 \mathrm{ppm} \times \frac{5.00 \mathrm{mL}}{50.0 \mathrm{mL}}=25.0 \mathrm{ppm} \nonumber$ Thus, the spike recovery is $\% R=\frac{40.9-18.3}{25.0} \times 100=90.4 \% \nonumber$ Exercise 15.3.2 To test a glucometer, a spike recovery is carried out by measuring the amount of glucose in a sample of a patient’s blood before and after spiking it with a standard solution of glucose. Before spiking the sample the glucose level is 86.7 mg/100 mL and after spiking the sample it is 110.3 mg/100 mL. The spike is prepared by adding 10.0 μL of a 25 000 mg/100mL standard to a 10.0-mL portion of the blood. What is the spike recovery for this sample. Answer Adding a 10.0-μL spike to a 10.0-mL sample is a 1000-fold dilution; thus, the concentration of added glucose is 25.0 mg/100 mL and the spike recovery is $\% R=\frac{110.3-86.7}{25.0} \times 100=94.4 \% \nonumber$ We can use a spike recovery on a method blank and a field blank to evaluate the general performance of an analytical procedure. A known concentration of analyte is added to each blank at a concentration that is 5 to 50 times the method’s detection limit. A systematic error during sampling and transport will result in an unacceptable recovery for the field blank, but not for the method blank. A systematic error in the laboratory, however, affects the recoveries for both the field blank and the method blank. Spike recoveries on a sample are used to detect systematic errors due to the sample’s matrix, or to evaluate the stability of a sample after its collection. Ideally, samples are spiked in the field at a concentration that is 1 to 10 times the analyte’s expected concentration or 5 to 50 times the method’s detection limit, whichever is larger. 
If the recovery for a field spike is unacceptable, then a duplicate sample is spiked in the laboratory and analyzed immediately. If the laboratory spike’s recovery is acceptable, then the poor recovery for the field spike likely is the result of the sample’s deterioration during storage. If the recovery for the laboratory spike also is unacceptable, the most probable cause is a matrix-dependent relationship between the analytical signal and the analyte’s concentration. In this case the sample is analyzed by the method of standard additions. Typical limits for spike recoveries for the analysis of waters and wastewaters are shown in Table 15.3.1 . Figure 15.4.1, which we will discuss in the next section, illustrates the use of spike recoveries as part of a quality assessment program. External Methods of Quality Assessment Internal methods of quality assessment always carry some level of suspicion because there is a potential for bias in their execution and interpretation. For this reason, external methods of quality assessment also play an important role in a quality assurance program. One external method of quality assessment is the certification of a laboratory by a sponsoring agency. Certification of a lab is based on its successful analysis of a set of proficiency standards prepared by the sponsoring agency. For example, laboratories involved in environmental analyses may be required to analyze standard samples prepared by the Environmental Protection Agency. A second example of an external method of quality assessment is a laboratory’s voluntary participation in a collaborative test sponsored by a professional organization, such as the Association of Official Analytical Chemists. Finally, an individual contracting with a laboratory can perform his or her own external quality assessment by submitting blind duplicate samples and blind standards to the laboratory for analysis. If the results for the quality assessment samples are unacceptable, then there is good reason to question the laboratory’s results for other samples. See Chapter 14 for a more detailed description of collaborative testing.
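The calculations in this section are simple enough to script. The short Python sketch below is offered only as an illustration: it reproduces the duplicate-sample standard deviation from Example 15.3.1 and the spike recovery from Example 15.3.2, but the function and variable names are ours, not part of any standard method.

```python
import math

def duplicate_std_dev(pairs):
    """Estimate s from n duplicate pairs: s = sqrt(sum(d_i^2) / (2n))."""
    d_squared = [(x1 - x2) ** 2 for x1, x2 in pairs]
    return math.sqrt(sum(d_squared) / (2 * len(pairs)))

def spike_recovery(spiked, unspiked, added):
    """Percent recovery: %R = (F - I) / A * 100."""
    return (spiked - unspiked) / added * 100

# duplicate K+ results (mg K/L) from Example 15.3.1
pairs = [(160, 147), (196, 202), (207, 196), (185, 193), (172, 188), (133, 119)]
print(round(duplicate_std_dev(pairs), 1))            # 8.4

# chloride spike recovery from Example 15.3.2
added = 250.0 * 5.00 / 50.0                           # dilution of the spike, in ppm
print(round(spike_recovery(40.9, 18.3, added), 1))    # 90.4
```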
In the previous section we described several internal methods of quality assessment that provide quantitative estimates of the systematic errors and the random errors in an analytical method. Now we turn our attention to how we incorporate this quality assessment data into a complete quality assurance program. There are two general approaches to developing a quality assurance program: a prescriptive approach, in which we prescribe an exact method of quality assessment, and a performance-based approach in which we can use any form of quality assessment, provided that we can demonstrate an acceptable level of statistical control [Poppiti, J. Environ. Sci. Technol. 1994, 28, 151A–152A].

Prescriptive Approach

With a prescriptive approach to quality assessment, duplicate samples, blanks, standards, and spike recoveries are measured using a specific protocol. We compare the result of each analysis to a single predetermined limit, taking an appropriate corrective action if the limit is exceeded. Prescriptive approaches to quality assurance are common for programs and laboratories subject to federal regulation. For example, the Food and Drug Administration (FDA) specifies quality assurance practices that must be followed by laboratories that analyze products regulated by the FDA.

Figure 15.4.1 provides a typical example of a prescriptive approach to quality assessment. Two samples, A and B, are collected at the sample site. Sample A is split into two equal-volume samples, A1 and A2. Sample B is also split into two equal-volume samples, one of which, BSF, is spiked in the field with a known amount of analyte. A field blank, DF, also is spiked with the same amount of analyte. All five samples (A1, A2, B, BSF, and DF) are preserved if necessary and transported to the laboratory for analysis.

After returning to the lab, the first sample that is analyzed is the field blank. If its spike recovery is unacceptable—an indication of a systematic error in the field or in the lab—then a laboratory method blank, DL, is prepared and analyzed. If the spike recovery for the method blank is unsatisfactory, then the systematic error originated in the laboratory; this is an error the analyst can find and correct before proceeding with the analysis. An acceptable spike recovery for the method blank, however, indicates that the systematic error occurred in the field or during transport to the laboratory, casting uncertainty on the quality of the samples. The only recourse is to discard the samples and return to the field to collect new samples.

If the field blank is satisfactory, then sample B is analyzed. If the result for sample B is above the method’s detection limit, or if it is within the range of 0.1 to 10 times the amount of analyte spiked into BSF, then a spike recovery for BSF is determined. An unacceptable spike recovery for BSF indicates the presence of a systematic error that involves the sample. To determine the source of the systematic error, a laboratory spike, BSL, is prepared using sample B and analyzed. If the spike recovery for BSL is acceptable, then the systematic error requires a long time to have a noticeable effect on the spike recovery. One possible explanation is that the analyte has not been preserved properly or it has been held beyond the acceptable holding time. An unacceptable spike recovery for BSL suggests an immediate systematic error, such as that due to the influence of the sample’s matrix.
In either case the systematic errors are fatal and must be corrected before the sample is reanalyzed. If the spike recovery for BSF is acceptable, or if the result for sample B is below the method’s detection limit, or outside the range of 0.1 to 10 times the amount of analyte spiked in BSF, then the duplicate samples A1 and A2 are analyzed. The results for A1 and A2 are discarded if the difference between their values is excessive. If the difference between the results for A1 and A2 is within the accepted limits, then the results for samples A1 and B are compared. Because samples collected from the same sampling site at the same time should be identical in composition, the results are discarded if the difference between their values is unsatisfactory and the results accepted if the difference is satisfactory.

The protocol in Figure 15.4.1 requires four to five evaluations of quality assessment data before the result for a single sample is accepted, a process that we must repeat for each analyte and for each sample. Other prescriptive protocols are equally demanding. For example, Figure 3.6.1 in Chapter 3 shows a portion of a quality assurance protocol for the graphite furnace atomic absorption analysis of trace metals in aqueous solutions. This protocol involves the analysis of an initial calibration verification standard and an initial calibration blank, followed by the analysis of samples in groups of ten. Each group of samples is preceded and followed by continuing calibration verification (CCV) and continuing calibration blank (CCB) quality assessment samples. Results for each group of ten samples are accepted only if both sets of CCV and CCB quality assessment samples are acceptable.

The advantage of a prescriptive approach to quality assurance is that all laboratories use a single consistent set of guidelines. A significant disadvantage is that it does not take into account a laboratory’s ability to produce quality results when determining the frequency of collecting and analyzing quality assessment data. A laboratory with a record of producing high quality results is forced to spend more time and money on quality assessment than perhaps is necessary. At the same time, the frequency of quality assessment may be insufficient for a laboratory with a history of producing results of poor quality.

Performance-Based Approach

In a performance-based approach to quality assurance, a laboratory is free to use its experience to determine the best way to gather and monitor quality assessment data. The tools of quality assessment remain the same—duplicate samples, blanks, standards, and spike recoveries—because they provide the necessary information about precision and bias. What a laboratory can control is the frequency with which it analyzes quality assessment samples and the conditions it chooses to signal when an analysis no longer is in a state of statistical control.

The principal tool for performance-based quality assessment is a control chart, which provides a continuous record of quality assessment data. The fundamental assumption is that if an analysis is under statistical control, individual quality assessment results are distributed randomly around a known mean with a known standard deviation. When an analysis moves out of statistical control, the quality assessment data is influenced by additional sources of error, which increases the standard deviation or changes the mean value.
Control charts were developed in the 1920s as a quality assurance tool for the control of manufactured products [Shewhart, W. A. Economic Control of the Quality of Manufactured Products, Macmillan: London, 1931]. Although there are many types of control charts, two are common in quality assessment programs: a property control chart, in which we record single measurements or the means for several replicate measurements, and a precision control chart, in which we record ranges or standard deviations. In either case, the control chart consists of a line that represents the experimental result and two or more boundary lines whose positions are determined by the precision of the measurement process. The position of the data points about the boundary lines determines whether the analysis is in statistical control.

Constructing a Property Control Chart

The simplest property control chart is a sequence of points, each of which represents a single determination of the property we are monitoring. To construct the control chart, we analyze a minimum of 7–15 samples while the system is under statistical control. The center line (CL) of the control chart is the average of these n samples.

$C L=\overline{X}=\frac{\sum_{i=1}^{n} X_{i}}{n} \nonumber$

The more samples in the original control chart, the easier it is to detect when an analysis is beginning to drift out of statistical control. Building a control chart with an initial run of 30 or more samples is not an unusual choice. Boundary lines around the center line are determined by the standard deviation, S, of the n points

$S=\sqrt{\frac{\sum_{i=1}^{n}\left(X_{i}-\overline{X}\right)^{2}}{n-1}} \nonumber$

The upper and lower warning limits (UWL and LWL) and the upper and lower control limits (UCL and LCL) are given by the following equations.

$\begin{aligned} UWL &= CL + 2S \\ LWL &= CL - 2S \\ UCL &= CL + 3S \\ LCL &= CL - 3S \end{aligned} \nonumber$

Why these limits? Examine Table 4.4.2 in Chapter 4 and consider your answer to this question. We will return to this point later in this chapter when we consider how to use a control chart.

Example 15.4.1

Construct a property control chart using the following spike recovery data (all values are for percentage of spike recovered).

sample: 1 2 3 4 5
result: 97.3 98.1 100.3 99.5 100.9
sample: 6 7 8 9 10
result: 98.6 96.9 99.6 101.1 100.4
sample: 11 12 13 14 15
result: 100.0 95.9 98.3 99.2 102.1
sample: 16 17 18 19 20
result: 98.5 101.7 100.4 99.1 100.3

Solution

The mean and the standard deviation for the 20 data points are 99.4% and 1.6%, respectively. Using these values, we find that the UCL is 104.2%, the UWL is 102.6%, the LWL is 96.2%, and the LCL is 94.6%. To construct the control chart, we plot the data points sequentially and draw horizontal lines for the center line and the four boundary lines. The resulting property control chart is shown in Figure 15.4.2.

Exercise 15.4.1

A control chart is a useful method for monitoring a glucometer’s performance over time. One approach is to use the glucometer to measure the glucose level of a standard solution. An initial analysis of the standard yields a mean value of 249.4 mg/100 mL and a standard deviation of 2.5 mg/100 mL. An analysis of the standard over 20 consecutive days gives the following results.

day: 1 2 3 4 5 6 7 8 9 10
result: 248.1 246.0 247.9 249.4 250.9 249.7 250.2 250.3 247.3 245.6
day: 11 12 13 14 15 16 17 18 19 20
result: 246.2 250.8 249.0 254.3 246.1 250.8 248.1 246.7 253.5 251.0

Construct a control chart of the glucometer’s performance.
Answer

The UCL is 256.9, the UWL is 254.4, the CL is 249.4, the LWL is 244.4, and the LCL is 241.9 mg glucose/100 mL. Figure 15.4.3 shows the resulting property control plot.

We also can construct a control chart using the mean for a set of replicate determinations on each sample. The mean for the ith sample is

$\overline{X}_{i}=\frac{\sum_{j=1}^{n_{rep}} X_{i j}}{n_{rep}} \nonumber$

where Xij is the jth replicate and nrep is the number of replicate determinations for each sample. The control chart’s center line is

$CL=\frac{\sum_{i=1}^{n} \overline{X}_{i}}{n} \nonumber$

where n is the number of samples used to construct the control chart. To determine the standard deviation for the warning limits and the control limits, we first calculate the variance for each sample.

$s_{i}^{2}=\frac{\sum_{j=1}^{n_{rep}}\left(X_{i j}-\overline{X}_{i}\right)^{2}}{n_{rep}-1} \nonumber$

The overall standard deviation, S, is the square root of the average variance for the samples used to construct the control plot.

$S=\sqrt{\frac{\sum_{i=1}^{n} s_{i}^{2}}{n}} \nonumber$

The resulting warning and control limits are given by the following four equations.

$\begin{aligned} UWL &= CL + \frac{2S}{\sqrt{n_{rep}}} \\ LWL &= CL - \frac{2S}{\sqrt{n_{rep}}} \\ UCL &= CL + \frac{3S}{\sqrt{n_{rep}}} \\ LCL &= CL - \frac{3S}{\sqrt{n_{rep}}} \end{aligned} \nonumber$

When using means to construct a property control chart, all samples must have the same number of replicates.

Constructing a Precision Control Chart

A precision control chart shows how the precision of an analysis changes over time. The most common measure of precision is the range, R, between the largest and the smallest results for nrep analyses on a sample.

$R=X_{\mathrm{largest}}-X_{\mathrm{smallest}} \nonumber$

To construct the control chart, we analyze a minimum of 15–20 samples while the system is under statistical control. The center line (CL) of the control chart is the average range of these n samples.

$\overline{R}=\frac{\sum_{i=1}^{n} R_{i}}{n} \nonumber$

The upper warning line and the upper control line are given by the following equations

$\begin{aligned} UWL &= f_{UWL} \times \overline{R} \\ UCL &= f_{UCL} \times \overline{R} \end{aligned} \nonumber$

where fUWL and fUCL are statistical factors determined by the number of replicates used to determine the range. Table 15.4.1 provides representative values for fUWL and fUCL. Because the range is greater than or equal to zero, there is no lower control limit and no lower warning limit.

Table 15.4.1: Statistical Factors for the Upper Warning Limit and the Upper Control Limit of a Precision Control Chart

replicates | fUWL | fUCL
2 | 2.512 | 3.267
3 | 2.050 | 2.575
4 | 1.855 | 2.282
5 | 1.743 | 2.115
6 | 1.669 | 2.004

Example 15.4.2

Construct a precision control chart using the following ranges, each determined from a duplicate analysis of a 10.0-ppm calibration standard.

sample: 1 2 3 4 5
result: 0.36 0.09 0.11 0.06 0.25
sample: 6 7 8 9 10
result: 0.15 0.28 0.27 0.03 0.28
sample: 11 12 13 14 15
result: 0.21 0.19 0.06 0.13 0.37
sample: 16 17 18 19 20
result: 0.01 0.19 0.39 0.05 0.05

Solution

The average range for the duplicate samples is 0.176. Because two replicates were used for each point the UWL and UCL are

$\begin{aligned} UWL &= 2.512 \times 0.176 = 0.44 \\ UCL &= 3.267 \times 0.176 = 0.57 \end{aligned} \nonumber$

The resulting precision control chart is shown in Figure 15.4.4.
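For readers who prefer to script these calculations, the following Python sketch reproduces the limits from Example 15.4.1 and Example 15.4.2. It assumes single measurements per point for the property chart, so S is the ordinary sample standard deviation, and duplicate analyses for the precision chart; the function names are illustrative only.

```python
import statistics

def property_limits(results):
    """CL with warning (CL ± 2S) and control (CL ± 3S) limits for a property control chart."""
    cl = statistics.mean(results)
    s = statistics.stdev(results)          # n - 1 in the denominator
    return {"CL": cl, "UWL": cl + 2 * s, "LWL": cl - 2 * s,
            "UCL": cl + 3 * s, "LCL": cl - 3 * s}

def precision_limits(ranges, f_uwl=2.512, f_ucl=3.267):
    """Average range and upper limits for a precision control chart built from duplicates."""
    r_bar = statistics.mean(ranges)
    return {"CL": r_bar, "UWL": f_uwl * r_bar, "UCL": f_ucl * r_bar}

recoveries = [97.3, 98.1, 100.3, 99.5, 100.9, 98.6, 96.9, 99.6, 101.1, 100.4,
              100.0, 95.9, 98.3, 99.2, 102.1, 98.5, 101.7, 100.4, 99.1, 100.3]
ranges = [0.36, 0.09, 0.11, 0.06, 0.25, 0.15, 0.28, 0.27, 0.03, 0.28,
          0.21, 0.19, 0.06, 0.13, 0.37, 0.01, 0.19, 0.39, 0.05, 0.05]

print(property_limits(recoveries))   # CL ≈ 99.4, UWL ≈ 102.6, UCL ≈ 104.2, ...
print(precision_limits(ranges))      # CL ≈ 0.18, UWL ≈ 0.44, UCL ≈ 0.58
```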
The precision control chart in Figure 15.4.4 is strictly valid only for the replicate analysis of identical samples, such as a calibration standard or a standard reference material. Its use for the analysis of nonidentical samples—as often is the case in clinical analyses and environmental analyses—is complicated by the fact that the range usually is not independent of the magnitude of the measurements. For example, Table 15.4.2 shows the relationship between the average range and the concentration of chromium in 91 water samples. The significant difference in the average range for different concentrations of chromium makes it impossible to use a single precision control chart. As shown in Figure 15.4.5, one solution is to prepare separate precision control charts, each of which covers a range of concentrations for which $\overline{R}$ is approximately constant.

Table 15.4.2: Average Range for the Concentration of Chromium in Duplicate Water Samples

[Cr] (ppb) | number of duplicate samples | $\overline{R}$
5 to < 10 | 32 | 0.32
10 to < 25 | 15 | 0.57
25 to < 50 | 16 | 1.12
50 to < 150 | 15 | 3.80
150 to < 500 | 8 | 5.25
> 500 | 5 | 76.0

Source: Environmental Monitoring and Support Laboratory, U. S. Environmental Protection Agency, “Handbook for Analytical Quality Control in Water and Wastewater Laboratories,” March 1979.

Interpreting Control Charts

The purpose of a control chart is to determine if an analysis is in a state of statistical control. We make this determination by examining the location of individual results relative to the warning limits and the control limits, and by examining the distribution of results around the central line. If we assume that the individual results are normally distributed, then the probability of finding a point at any distance from the control limit is determined by the properties of a normal distribution [Mullins, E. Analyst, 1994, 119, 369–375].

We set the upper and the lower control limits for a property control chart to CL $\pm$ 3S because 99.74% of a normally distributed population falls within three standard deviations of the population’s mean. This means that there is only a 0.26% probability of obtaining a result larger than the UCL or smaller than the LCL. When a result exceeds a control limit, the most likely explanation is a systematic error in the analysis or a loss of precision. In either case, we assume that the analysis no longer is in a state of statistical control.

Rule 1. An analysis is no longer under statistical control if any single point exceeds either the UCL or the LCL.

By setting the upper and lower warning limits to CL $\pm$ 2S, we expect that no more than 5% of the results will exceed one of these limits; thus

Rule 2. An analysis is no longer under statistical control if two out of three consecutive points are between the UWL and the UCL or between the LWL and the LCL.

If an analysis is under statistical control, then we expect a random distribution of results around the center line. The presence of an unlikely pattern in the data is another indication that the analysis is no longer under statistical control.

Rule 3. An analysis is no longer under statistical control if seven consecutive results are completely above or completely below the center line.

Rule 4. An analysis is no longer under statistical control if six consecutive results increase (or decrease) in value.

Rule 5. An analysis is no longer under statistical control if 14 consecutive results alternate up and down in value.

Rule 6. An analysis is no longer under statistical control if there is any obvious nonrandom pattern to the results.
Figure 15.4.6 shows three examples of control charts in which the results indicate that an analysis no longer is under statistical control. The same rules apply to precision control charts with the exception that there are no lower warning limits and lower control limits.

Exercise 15.4.2

In Exercise 15.4.1 you created a property control chart for a glucometer. Examine your property control chart and evaluate the glucometer’s performance. Does your conclusion change if the next three results are 255.6, 253.9, and 255.8 mg/100 mL?

Answer

Although the variation in the results appears to be greater for the second 10 samples, the results do not violate any of the six rules. There is no evidence in Figure 15.4.3 that the analysis is out of statistical control. The next three results, in which two of the three results are between the UWL and the UCL, violate the second rule. Because the analysis is no longer under statistical control, we must stop using the glucometer until we determine the source of the problem.

Using Control Charts for Quality Assurance

Control charts play an important role in a performance-based program of quality assurance because they provide an easy-to-interpret picture of the statistical state of an analysis. Quality assessment samples such as blanks, standards, and spike recoveries are monitored with property control charts. A precision control chart is used to monitor duplicate samples.

The first step in using a control chart is to determine the mean value and the standard deviation (or range) for the property being measured while the analysis is under statistical control. These values are established using the same conditions that will be present during subsequent analyses. Preliminary data is collected both throughout the day and over several days to account for short-term and for long-term variability. An initial control chart is prepared using this preliminary data and discrepant points are identified using the rules discussed in the previous section. After eliminating questionable points, the control chart is replotted. Once the control chart is in use, the original limits are adjusted if the number of new data points is at least equivalent to the amount of data used to construct the original control chart. For example, if the original control chart includes 15 points, new limits are calculated after collecting 15 additional points. The 30 points are pooled together to calculate the new limits. A second modification is made after collecting an additional 30 points. Another indication that a control chart needs to be modified is when points rarely exceed the warning limits. In this case the new limits are recalculated using the last 20 points.

Once a control chart is in use, new quality assessment data is added at a rate sufficient to ensure that the analysis remains in statistical control. As with prescriptive approaches to quality assurance, when the analysis falls out of statistical control, all samples analyzed since the last successful verification of statistical control are reanalyzed. The advantage of a performance-based approach to quality assurance is that a laboratory may use its experience, guided by control charts, to determine the frequency for collecting quality assessment samples. When the system is stable, quality assessment samples can be acquired less frequently.
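Because new quality assessment results are added to a control chart continuously, it is convenient to automate the checks against Rules 1–5; Rule 6, which calls for judgment about nonrandom patterns, is left to the analyst. The Python sketch below is one possible implementation, not a standard algorithm. Applied to the glucometer data from Exercise 15.4.1 plus the three follow-up results from Exercise 15.4.2, it flags the same Rule 2 violation noted above.

```python
def check_rules(results, cl, s):
    """Flag violations of control chart Rules 1-5 for a sequence of results."""
    uwl, lwl, ucl, lcl = cl + 2 * s, cl - 2 * s, cl + 3 * s, cl - 3 * s
    violations = []
    for i, x in enumerate(results):
        # Rule 1: any single point beyond a control limit
        if x > ucl or x < lcl:
            violations.append((i, "rule 1"))
        # Rule 2: two of three consecutive points between a warning and a control limit
        if i >= 2:
            window = results[i - 2:i + 1]
            upper = sum(1 for v in window if uwl < v <= ucl)
            lower = sum(1 for v in window if lcl <= v < lwl)
            if upper >= 2 or lower >= 2:
                violations.append((i, "rule 2"))
        # Rule 3: seven consecutive points entirely above or below the center line
        if i >= 6:
            window = results[i - 6:i + 1]
            if all(v > cl for v in window) or all(v < cl for v in window):
                violations.append((i, "rule 3"))
        # Rule 4: six consecutive points that steadily increase or decrease
        if i >= 5:
            w = results[i - 5:i + 1]
            diffs = [b - a for a, b in zip(w, w[1:])]
            if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
                violations.append((i, "rule 4"))
        # Rule 5: fourteen consecutive points that alternate up and down
        if i >= 13:
            w = results[i - 13:i + 1]
            diffs = [b - a for a, b in zip(w, w[1:])]
            if all(d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:])):
                violations.append((i, "rule 5"))
    return violations

# glucometer data from Exercise 15.4.1 plus the three follow-up results
data = [248.1, 246.0, 247.9, 249.4, 250.9, 249.7, 250.2, 250.3, 247.3, 245.6,
        246.2, 250.8, 249.0, 254.3, 246.1, 250.8, 248.1, 246.7, 253.5, 251.0,
        255.6, 253.9, 255.8]
print(check_rules(data, cl=249.4, s=2.5))   # flags a rule 2 violation near the end
```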
1. Make a list of good laboratory practices for the lab that accompanies this course, or another lab if this course does not have an associated laboratory. Explain the rationale for each item on your list.

2. Write directives outlining good measurement practices for (a) a buret, for (b) a pH meter, and for (c) a spectrophotometer.

3. An atomic absorption method for the analysis of lead in an industrial wastewater has a method detection limit of 10 ppb. The relationship between the absorbance and the concentration of lead, as determined from a calibration curve, is

$A=0.349 \times(\text{ppm Pb}) \nonumber$

Analysis of a sample in duplicate gives absorbance values of 0.554 and 0.516. Is the precision between these two duplicates acceptable based on the limits in Table 15.3.1?

4. The following data were obtained for the duplicate analysis of a 5.00 ppm NO3 standard.

sample | $X_1$ (ppm) | $X_2$ (ppm)
1 | 5.02 | 4.90
2 | 5.10 | 5.18
3 | 5.07 | 4.95
4 | 4.96 | 5.01
5 | 4.88 | 4.98
6 | 5.04 | 4.97

Calculate the standard deviation for these duplicate samples. If the maximum limit for the relative standard deviation is 1.5%, are these results acceptable?

5. Gonzalez and colleagues developed a voltammetric method for the determination of tert-butylhydroxyanisole (BHA) in chewing gum [Gonzalez, A.; Ruiz, M. A.; Yanez-Sedeno, P.; Pingarron, J. M. Anal. Chim. Acta 1994, 285, 63–71]. Analysis of a commercial chewing gum gave a result of 0.20 mg/g. To evaluate the accuracy of this result, the authors performed five spike recoveries, adding an amount of BHA equivalent to 0.135 mg/g to each sample. The experimentally determined concentrations of BHA in these samples were reported as 0.342, 0.340, 0.340, 0.324, and 0.322 mg/g. Determine the percent recovery for each sample and the mean percent recovery.

6. A sample is analyzed following the protocol shown in Figure 15.4.1 using a method with a detection limit of 0.05 ppm. The relationship between the analytical signal, Smeas, and the concentration of the analyte in parts per million, CA, as determined from a calibration curve, is

$S_{meas}=0.273 \times C_{A} \nonumber$

Answer the following questions if the limit for a successful spike recovery is ±10%:

(a) A field blank is spiked with the analyte to a concentration of 2.00 ppm and returned to the lab. Analysis of the spiked field blank gives a signal of 0.573. Is the spike recovery for the field blank acceptable?

(b) The analysis of a spiked field blank is unacceptable. To determine the source of the problem, a spiked method blank is prepared by spiking distilled water with the analyte to a concentration of 2.00 ppm. Analysis of the spiked method blank gives a signal of 0.464. Is the source of the problem in the laboratory or in the field?

(c) The analysis for a spiked field sample, BSF, is unacceptable. To determine the source of the problem, the sample is spiked in the laboratory by adding sufficient analyte to increase the concentration by 2.00 ppm. Analysis of the sample before and after the spike gives a signal of 0.456 for B and a signal of 1.03 for BSL. Considering this data, what is the most likely source of the systematic error?

7. The following data were obtained for the repetitive analysis of a stable standard [Standard Methods for the Analysis of Waters and Wastewaters, American Public Health Association: Washington, D. C., 18th Ed., 1992. The data is from Table 1030:I].
sample $X_i$ (ppm) sample $X_i$ (ppm) sample $X_i$ (ppm) 1 35.1 10 35.0 18 36.4 2 33.2 11 31.4 19 32.1 3 33.7 12 35.6 20 38.2 4 35.9 13 30.2 21 33.1 5 33.5 14 32.7 22 34.9 6 34.5 15 31.1 23 36.2 7 34.4 16 34.8 24 34.0 8 34.3 17 34.3 25 33.8 9 31.8 Construct a property control chart for these data and evaluate the state of statistical control. 8. The following data were obtained for the repetitive spike recoveries of field samples [Standard Methods for the Analysis of Waters and Wastewaters, American Public Health Association: Washington, D. C., 18th Ed., 1992. The data is from Table 1030:II]. sample % recovery sample % recovery sample % recovery 1 94.6 10 104.6 18 104.6 2 93.1 11 123.8 19 91.5 3 100.0 12 93.8 20 83.1 4 122.3 13 80.0 21 100.8 5 120.8 14 99.2 22 123.1 6 93.1 15 101.5 23 96.2 7 117.7 16 74.6 24 96.9 8 96.2 17 108.5 25 102.3 9 73.8 Construct a property control chart for these data and evaluate the state of statistical control. 9. The following data were obtained for the duplicate analysis of a stable standard [Standard Methods for the Analysis of Waters and Wastewaters, American Public Health Association: Washington, D. C., 18th Ed., 1992. The data is from Table 1030:I]. sample $X_1$ (ppm) $X_2$ (ppm) sample $X_1$ (ppm) $X_2$ (ppm) 1 50 46 14 36 36 2 37 36 15 47 45 3 22 19 16 16 20 4 17 20 17 18 21 5 32 34 18 26 22 6 46 46 19 35 36 7 26 28 20 36 25 8 26 30 21 49 51 9 61 58 22 33 32 10 44 45 23 40 38 11 40 44 24 16 13 12 36 35 25 39 42 13 29 31
The following experiments introduce aspects of quality assurance and quality control.

• Bell, S. C.; Moore, J. “Integration of Quality Assurance/Quality Control into Quantitative Analysis,” J. Chem. Educ. 1998, 75, 874–877.
• Cancilla, D. A. “Integration of Environmental Analytical Chemistry with Environmental Law: The Development of a Problem-Based Laboratory,” J. Chem. Educ. 2001, 78, 1652–1660.
• Claycomb, G. D.; Venable, F. A. “Selection, Evaluation, and Modification of a Standard Operating Procedure as a Mechanism for Introducing an Undergraduate Student to Chemical Research: A Case Study,” J. Chem. Educ. 2015, 92, 256–262.
• Laquer, F. C. “Quality Control Charts in the Quantitative Analysis Laboratory Using Conductance Measurement,” J. Chem. Educ. 1990, 67, 900–902.
• Marcos, J.; Ríos, A.; Valcárcel, M. “Practicing Quality Control in a Bioanalytical Experiment,” J. Chem. Educ. 1995, 72, 947–949.

The following texts and articles may be consulted for an additional discussion of quality assurance and quality control.

• Amore, F. “Good Analytical Practices,” Anal. Chem. 1979, 51, 1105A–1110A.
• Anderson, J. E. T. “On the development of quality assurance,” TRAC-Trend. Anal. Chem. 2014, 60, 16–24.
• Barnard, Jr. A. J.; Mitchell, R. M.; Wolf, G. E. “Good Analytical Practices in Quality Control,” Anal. Chem. 1978, 50, 1079A–1086A.
• Cairns, T.; Rogers, W. M. “Acceptable Analytical Data for Trace Analysis,” Anal. Chem. 1993, 55, 54A–57A.
• Taylor, J. K. Quality Assurance of Chemical Measurements, Lewis Publishers: Chelsea, MI, 1987.
• Wedlich, R. C.; Libera, A. E.; Pires, A.; Therrien, M. T. “Good Laboratory Practice. Part 1. An Introduction,” J. Chem. Educ. 2013, 90, 854–857.
• Wedlich, R. C.; Libera, A. E.; Pires, A.; Tellarini, C. “Good Laboratory Practice. Part 2. Recording and Retaining Raw Data,” J. Chem. Educ. 2013, 90, 858–861.
• Wedlich, R. C.; Libera, A. E.; Fazzino, L.; Fransen, J. M. “Good Laboratory Practice. Part 3. Implementing Good Laboratory Practice in the Analytical Lab,” J. Chem. Educ. 2013, 90, 862–865.

Additional information about the construction and use of control charts may be found in the following sources.

• Miller, J. C.; Miller, J. N. Statistics for Analytical Chemistry, 2nd Ed., Ellis Horwood Limited: Chichester, 1988.
• Ouchi, G. I. “Creating Control Charts with a Spreadsheet Program,” LC•GC 1993, 11, 416–423.
• Ouchi, G. I. “Creating Control Charts with a Spreadsheet Program,” LC•GC 1997, 15, 336–344.
• Simpson, J. M. “Spreadsheet Statistics,” J. Chem. Educ. 1994, 71, A88–A89.

15.07: Chapter Summary and Key Terms

Summary

Few analyses are so straightforward that high quality results are obtained with ease. Good analytical work requires careful planning and an attention to detail. Creating and maintaining a quality assurance program is one way to help ensure the quality of analytical results. Quality assurance programs usually include elements of quality control and quality assessment.

Quality control encompasses all activities used to bring a system into statistical control. The most important facet of quality control is written documentation, including statements of good laboratory practices, good measurement practices, standard operating procedures, and protocols for a specific purpose. Quality assessment includes the statistical tools used to determine whether an analysis is in a state of statistical control, and, if possible, to suggest why an analysis has drifted out of statistical control.
Among the tools included in quality assessment are the analysis of duplicate samples, the analysis of blanks, the analysis of standards, and the analysis of spike recoveries. Another important quality assessment tool, which provides an ongoing evaluation of an analysis, is a control chart. A control chart plots a property, such as a spike recovery, as a function of time. Results that exceed warning and control limits, or unusual patterns of results indicate that an analysis is no longer under statistical control.

Key Terms

control chart, duplicate samples, field blank, good laboratory practices, good measurement practices, method blank, proficiency standard, protocol for a specific purpose, quality assessment, quality assurance program, quality control, reagent blank, spike recovery, standard operations procedure, statistical control, trip blank
Normality expresses concentration in terms of the equivalents of one chemical species that react stoichiometrically with another chemical species. Note that this definition makes an equivalent, and thus normality, a function of the chemical reaction. Although a solution of H2SO4 has a single molarity, its normality depends on its reaction.

We define the number of equivalents, n, using a reaction unit, which is the part of a chemical species that participates in the chemical reaction. In a precipitation reaction, for example, the reaction unit is the charge of the cation or the anion that participates in the reaction; thus, for the reaction

$\ce{Pb^{2+}}(aq) + 2\ce{I-}(aq) \ce{<=>} \ce{PbI2}(s) \nonumber$

n = 2 for Pb2+ and n = 1 for I-. In an acid–base reaction, the reaction unit is the number of H+ ions that an acid donates or that a base accepts. For the reaction between sulfuric acid and ammonia

$\ce{H2SO4}(aq) + 2\ce{NH3}(aq) \ce{<=>} 2\ce{NH4+}(aq) + \ce{SO4^{2-}}(aq) \nonumber$

n = 2 for H2SO4 because sulfuric acid donates two protons, and n = 1 for NH3 because each ammonia accepts one proton. For a complexation reaction, the reaction unit is the number of electron pairs that the metal accepts or that the ligand donates. In the reaction between Ag+ and NH3

$\ce{Ag+}(aq) + 2\ce{NH3}(aq) \ce{<=>} \ce{Ag(NH3)2+}(aq) \nonumber$

n = 2 for Ag+ because the silver ion accepts two pairs of electrons, and n = 1 for NH3 because each ammonia has one pair of electrons to donate. Finally, in an oxidation–reduction reaction the reaction unit is the number of electrons released by the reducing agent or accepted by the oxidizing agent; thus, for the reaction

$2\ce{Fe^{3+}}(aq) + \ce{Sn^{2+}}(aq) \ce{<=>} \ce{Sn^{4+}}(aq) + 2\ce{Fe^{2+}}(aq) \nonumber$

n = 1 for Fe3+ and n = 2 for Sn2+. Clearly, determining the number of equivalents for a chemical species requires an understanding of how it reacts.

Normality is the number of equivalent weights, EW, per unit volume. An equivalent weight is the ratio of a chemical species’ formula weight, FW, to the number of its equivalents, n.

$EW = \frac {FW} {n} \nonumber$

The following simple relationship exists between normality, N, and molarity, M,

$N = n \times M \nonumber$

16.02: Propagation of Uncertainty

In Chapter 4 we considered the basic mathematical details of a propagation of uncertainty, limiting our treatment to the propagation of measurement error. This treatment is incomplete because it omits other sources of uncertainty that contribute to the overall uncertainty in our results. Consider, for example, Exercise 4.3.1, in which we determined the uncertainty in a standard solution of Cu2+ prepared by dissolving a known mass of Cu wire with HNO3, diluting to volume in a 500-mL volumetric flask, and then diluting a 1-mL portion of this stock solution to volume in a 250-mL volumetric flask. To calculate the overall uncertainty we included the uncertainty in weighing the sample and the uncertainty in using the volumetric glassware. We did not consider other sources of uncertainty, including the purity of the Cu wire, the effect of temperature on the volumetric glassware, and the repeatability of our measurements. In this appendix we take a more detailed look at the propagation of uncertainty, using the standardization of NaOH as an example.

Standardizing a Solution of NaOH

Because solid NaOH is an impure material, we cannot directly prepare a stock solution by weighing a sample of NaOH and diluting to volume.
Instead, we determine the solution’s concentration through a process called a standardization. A fairly typical procedure is to use the NaOH solution to titrate a carefully weighed sample of previously dried potassium hydrogen phthalate, C8H5O4K, which we will write here, in shorthand notation, as KHP. For example, after preparing a nominally 0.1 M solution of NaOH, we place an accurately weighed 0.4-g sample of dried KHP in the reaction vessel of an automated titrator and dissolve it in approximately 50 mL of water (the exact amount of water is not important). The automated titrator adds the NaOH to the KHP solution and records the pH as a function of the volume of NaOH. The resulting titration curve provides us with the volume of NaOH needed to reach the titration's endpoint.

The example below is adapted from Ellison, S. L. R.; Rosslein, M.; Williams, A. EURACHEM/CITAC Guide: Quantifying Uncertainty in Analytical Measurement, 3rd Edition, 2012. See Chapter 5 for further details about standardizations and see Chapter 9 for further details about titrations.

The end point of the titration is the volume of NaOH that corresponds to the stoichiometric reaction between NaOH and KHP.

$\ce{NaOH}(aq) + \ce{C8H5O4K}(aq) \ce{->} \ce{C8H4O4^{2-}}(aq) + \ce{K+}(aq) + \ce{Na+}(aq) + \ce{H2O}(l) \nonumber$

Knowing the mass of KHP and the volume of NaOH needed to reach the endpoint, we use the following equation to calculate the molarity of the NaOH solution.

$C_\ce{NaOH} = \frac {1000 \times m_\ce{KHP} \times P_\ce{KHP}} {FW_\ce{KHP} \times V_\ce{NaOH}} \nonumber$

where CNaOH is the concentration of NaOH (in mol NaOH/L), mKHP is the mass of KHP taken (in g), PKHP is the purity of the KHP (where PKHP = 1 means the KHP is pure and has no impurities), FWKHP is the molar mass of KHP (in g KHP/mol KHP), and VNaOH is the volume of NaOH (in mL). The factor of 1000 simply converts the volume in mL to L.

Identifying and Analyzing Sources of Uncertainty

Although it seems straightforward, identifying sources of uncertainty requires care as it is easy to overlook important sources of uncertainty. One approach is to use a cause-and-effect diagram, also known as an Ishikawa diagram—named for its inventor, Kaoru Ishikawa—or a fish bone diagram. To construct a cause-and-effect diagram, we first draw an arrow that points to the desired result; this is the diagram's trunk. We then add five main branch lines to the trunk, one for each of the four parameters that determine the concentration of NaOH (mKHP, PKHP, FWKHP, and VNaOH) and one for the method's repeatability, R. Next we add additional branches to the main branch for each of these five factors, continuing until we account for all potential sources of uncertainty. Figure 16.2.1 shows the complete cause-and-effect diagram for this analysis.

Before we continue, let's take a closer look at Figure 16.2.1 to make sure that we understand each branch of the diagram. To determine the mass of KHP, mKHP, we make two measurements: taring the balance and weighing the gross sample. Each of these measurements is subject to a calibration uncertainty. When we calibrate a balance, we essentially are creating a calibration curve of the balance's signal as a function of mass. Any calibration curve is subject to an uncertainty in the y-intercept (bias) and an uncertainty in the slope (linearity). We can ignore the calibration bias because it contributes equally to both (mKHP)gross and (mKHP)tare, and because we determine the mass of KHP by difference.
$m_\ce{KHP} = \left( m_\ce{KHP} \right)_\text{gross} - \left( m_\ce{KHP} \right)_\text{tare} \nonumber$

The volume of NaOH, VNaOH, at the end point has three sources of uncertainty. First, an automated titrator uses a piston to deliver NaOH to the reaction vessel, which means the volume of NaOH is subject to an uncertainty in the piston's calibration. Second, because a solution’s volume varies with temperature, there is an additional source of uncertainty due to any fluctuation in the ambient temperature during the analysis. Finally, there is a bias in the titration’s end point if the NaOH reacts with any species other than the KHP.

Repeatability, R, is a measure of how consistently we can repeat the analysis. Each instrument we use—the balance and the automated titrator—contributes to this uncertainty. In addition, our ability to consistently detect the end point also contributes to repeatability. Finally, there are no secondary factors that affect the uncertainty of the KHP's purity, PKHP, or its molar mass, FWKHP.

Estimating the Standard Deviation for Measurements

To complete a propagation of uncertainty we must express each measurement’s uncertainty in the same way, usually as a standard deviation. Measuring the standard deviation for each measurement requires time and is not always practical. Fortunately, most manufacturers provide a tolerance range for glassware and instruments. A 100-mL volumetric flask, for example, has a tolerance of $\pm 0.1 \text{ mL}$ at a temperature of 20 °C. We can convert a tolerance range to a standard deviation using one of the following three approaches.

Assume a Uniform Distribution

Figure 16.2.2a shows a uniform distribution between the limits of $\pm x$, in which each result between the limits is equally likely. A uniform distribution is the choice when the manufacturer provides a tolerance range without specifying a level of confidence and when there is no reason to believe that results near the center of the range are more likely than results at the ends of the range. For a uniform distribution the estimated standard deviation, s, is

$s = \frac {x} {\sqrt{3}} \nonumber$

This is the most conservative estimate of uncertainty as it gives the largest estimate for the standard deviation.

Assume a Triangular Distribution

Figure 16.2.2b shows a triangular distribution between the limits of $\pm x$, in which the most likely result is at the center of the distribution, decreasing linearly toward each limit. A triangular distribution is the choice when the manufacturer provides a tolerance range without specifying a level of confidence and when there is a good reason to believe that results near the center of the range are more likely than results at the ends of the range. For a triangular distribution the estimated standard deviation, s, is

$s = \frac {x} {\sqrt{6}} \nonumber$

This is a less conservative estimate of uncertainty as, for any value of x, the standard deviation is smaller than that for a uniform distribution.

Assume a Normal Distribution

Figure 16.2.2c shows a normal distribution that extends, as it must, beyond the limits of $\pm x$, and which is centered at the mid-point between –x and +x. A normal distribution is the choice when we know the confidence interval for the range. For a normal distribution the estimated standard deviation, s, is

$s = \frac {x} {z} \nonumber$

where z is 1.96 for a 95% confidence interval and 3.00 for a 99.7% confidence interval.
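These three conversions are easy to collect into a single helper function. The Python sketch below is illustrative only; the values in the example calls are the balance, piston, and temperature tolerances used later in this appendix.

```python
import math

def tolerance_to_std_dev(x, distribution="uniform", z=1.96):
    """Convert a +/- x tolerance range to an estimated standard deviation."""
    if distribution == "uniform":      # no reason to favor the center of the range
        return x / math.sqrt(3)
    if distribution == "triangular":   # results near the center are more likely
        return x / math.sqrt(6)
    if distribution == "normal":       # x is a confidence interval; z = 1.96 for 95%
        return x / z
    raise ValueError("unknown distribution")

print(tolerance_to_std_dev(0.15, "uniform"))      # balance linearity: ~0.087 mg
print(tolerance_to_std_dev(0.03, "triangular"))   # 20-mL piston: ~0.012 mL
print(tolerance_to_std_dev(0.012, "normal"))      # temperature effect (95% CI): ~0.006 mL
```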
Completing the Propagation of Uncertainty

Now we are ready to return to our example and determine the uncertainty for the standardization of NaOH. First we establish the uncertainty for each of the five primary sources: the mass of KHP, the volume of NaOH at the end point, the purity of the KHP, the molar mass for KHP, and the titration’s repeatability. Having established these, we can combine them to arrive at the final uncertainty.

Uncertainty in the Mass of KHP

After drying the KHP, we store it in a sealed container to prevent it from readsorbing moisture. To find the mass of KHP we first weigh the container, obtaining a value of 60.5450 g, and then weigh the container after removing a portion of KHP, obtaining a value of 60.1562 g. The mass of KHP, therefore, is 60.5450 – 60.1562 = 0.3888 g, or 388.8 mg. To find the uncertainty in this mass we examine the balance’s calibration certificate, which indicates that its tolerance for linearity is $\pm 0.15 \text{ mg}$. We will assume a uniform distribution because there is no reason to believe that any result within this range is more likely than any other result. Our estimate of the uncertainty for any single measurement of mass, u(m), is

$u(m) = \frac {0.15 \text{ mg}} {\sqrt{3}} = \pm 0.087 \text{ mg} \nonumber$

Because we determine the mass of KHP by subtracting the container’s final mass from its initial mass, the uncertainty in the mass of KHP, u(mKHP), is given by the following propagation of uncertainty.

$u(m_{\text{KHP}}) = \sqrt{\left( 0.087 \text{ mg} \right)^2 + \left( 0.087 \text{ mg} \right)^2} =\pm 0.12 \text{ mg} \nonumber$

Uncertainty in the Volume of NaOH

After we place the sample of KHP in the automated titrator’s reaction vessel and dissolve the KHP with water, we complete the titration and find that it takes 18.64 mL of NaOH to reach the end point. To find the uncertainty in this volume we need to consider, as shown in Figure 16.2.1 , three sources of uncertainty: the automated titrator’s calibration, the ambient temperature, and any bias in determining the end point.

To find the uncertainty from the automated titrator’s calibration we examine the instrument’s certificate, which indicates a range of $\pm 0.03 \text{ mL}$ for a 20-mL piston. Because we expect that an effective manufacturing process is more likely to produce a piston that operates near the center of this range than at the extremes, we will assume a triangular distribution. Our estimate of the uncertainty due to the calibration, u(Vcal), is

$u(V_\text{cal}) = \frac {0.03 \text{ mL}} {\sqrt{6}} = \pm 0.012 \text{ mL} \nonumber$

To determine the uncertainty due to the lack of temperature control, we draw on our prior work in the lab, which has established a temperature variation of $\pm 3 \text{ °C}$ with a confidence level of 95%. To find the uncertainty, we convert the temperature range to a range of volumes using water’s coefficient of expansion

$(2.1 \times 10^{-4} \text{ °C}^{-1}) \times (\pm 3 \text{ °C}) \times 18.64 \text{ mL} = \pm 0.012 \text{ mL} \nonumber$

and then estimate the uncertainty due to temperature, u(Vtemp), as

$u(V_\text{temp}) = \frac {\pm 0.012 \text{ mL}} {1.96} = \pm 0.006 \text{ mL} \nonumber$

Titrations using NaOH are subject to a bias due to the absorption of CO2, which can react with $\ce{OH-}$, as shown here.

$\ce{CO2}(aq) + \ce{2OH-}(aq) \ce{->} \ce{CO3^{2-}}(aq) + \ce{H2O}(l) \nonumber$

If CO2 is present, the volume of NaOH at the end point includes both the NaOH that reacts with the KHP and the NaOH that reacts with CO2.
Rather than trying to estimate this bias, it is easier to bathe the reaction vessel in a stream of argon, which excludes CO2 from the automated titrator’s reaction vessel. Combining the uncertainties for the piston’s calibration and the lab’s temperature fluctuation gives the uncertainty in the volume of NaOH, u(VNaOH), as

$u(V_\ce{NaOH}) = \sqrt{(0.012 \text{ mL})^2 + (0.006 \text{ mL})^2} = \pm 0.013 \text{ mL} \nonumber$

Uncertainty in the Purity of KHP

According to the manufacturer, the purity of KHP is $100 \% \pm 0.05 \%$, or $1.0 \pm 0.0005$. Assuming a rectangular distribution, we report the uncertainty, u(PKHP), as

$u(P_\ce{KHP}) = \frac {\pm 0.0005} {\sqrt{3}} = \pm 0.00029 \nonumber$

Uncertainty in the Molar Mass of KHP

The molar mass of C8H5O4K is 204.2212 g/mol, based on the following atomic weights: 12.0107 for carbon, 1.00794 for hydrogen, 15.9994 for oxygen, and 39.0983 for potassium. Each of these atomic weights has a quoted uncertainty that we can convert to a standard uncertainty assuming a rectangular distribution, as shown here (the details of the calculations are left to you).

carbon: quoted uncertainty $\pm 0.0008$ per atom; standard uncertainty $\pm 0.00046$ per atom; 8 atoms; total uncertainty $\pm 0.00368$
hydrogen: quoted uncertainty $\pm 0.00007$ per atom; standard uncertainty $\pm 0.000040$ per atom; 5 atoms; total uncertainty $\pm 0.00020$
oxygen: quoted uncertainty $\pm 0.0003$ per atom; standard uncertainty $\pm 0.00017$ per atom; 4 atoms; total uncertainty $\pm 0.00068$
potassium: quoted uncertainty $\pm 0.0001$ per atom; standard uncertainty $\pm 0.000058$ per atom; 1 atom; total uncertainty $\pm 0.000058$

Adding together these uncertainties gives the uncertainty in the molar mass, u(FWKHP), as

$u(FW_\ce{KHP}) = \sqrt{(0.00368)^2 + (0.00020)^2 + (0.00068)^2 + (0.000058)^2} = \pm 0.0037 \text{ g/mol} \nonumber$

Uncertainty in the Titration's Repeatability

To estimate the uncertainty due to repeatability we complete five titrations, obtaining the following results for the concentration of NaOH: 0.1021 M, 0.1022 M, 0.1022 M, 0.1021 M, and 0.1021 M. The relative standard deviation, srel, for these titrations is

$s_{rel} = \frac {s} {\overline{X}} = \frac {5.48 \times 10^{-5}} {0.1021} = \pm 0.0005 \nonumber$

If we treat the ideal repeatability as 1.0, then the uncertainty due to repeatability, u(R), is the relative standard deviation, or, in this case, 0.0005.

Combining the Uncertainties

The table below summarizes the five primary sources of uncertainty.

$m_\ce{KHP}$ (mass of KHP): x = 0.3888 g, u(x) = $\pm 0.00012$ g
$V_\ce{NaOH}$ (volume of NaOH at the end point): x = 18.64 mL, u(x) = $\pm 0.013$ mL
$P_\ce{KHP}$ (purity of KHP): x = 1.0, u(x) = $\pm 0.00029$
$FW_\ce{KHP}$ (molar mass of KHP): x = 204.2212 g/mol, u(x) = $\pm 0.0037$ g/mol
$R$ (repeatability): x = 1.0, u(x) = $\pm 0.0005$

As described earlier, we calculate the concentration of NaOH using the following equation, which is slightly modified to include a term for the titration’s repeatability, which, as described above, has a value of 1.0.

$C_\ce{NaOH} = \frac {1000 \times m_\ce{KHP} \times P_\ce{KHP}} {FW_\ce{KHP} \times V_\ce{NaOH}} \times R \nonumber$

Using the values from our table, we find that the concentration of NaOH is

$C_\ce{NaOH} = \frac {1000 \times 0.3888 \times 1.0} {204.2212 \times 18.64} \times 1.0 = 0.1021 \text{ M} \nonumber$

Because the calculation of CNaOH includes only multiplication and division, the uncertainty in the concentration, u(CNaOH), is given by the following propagation of uncertainty.
$\frac {u(C_\ce{NaOH})} {C_\ce{NaOH}} = \frac {u(C_\ce{NaOH})} {0.1021} = \sqrt{\frac {(0.00012)^2} {(0.3888)^2} + \frac {(0.00029)^2} {(1.0)^2} + \frac {(0.0037)^2} {(204.2212)^2} + \frac {(0.013)^2} {(18.64)^2} + \frac {(0.0005)^2} {(1.0)^2}} \nonumber$

Solving for u(CNaOH) gives its value as $\pm 0.00010 \text{ M}$, which is the final uncertainty for the analysis.

Evaluating the Sources of Uncertainty

Figure 16.2.3 shows the relative uncertainty in the concentration of NaOH and the relative uncertainties for each of the five contributions to the total uncertainty. Of these contributions, the most important is the volume of NaOH, and it is here that we should focus our attention if we wish to improve the overall uncertainty for the standardization.
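For readers who wish to check the arithmetic, here is a minimal Python sketch that reproduces the propagation of uncertainty from the values summarized above (the variable names are ours).

```python
import math

# values and standard uncertainties from the summary table above
m_KHP, u_m   = 0.3888,   0.00012   # g
P_KHP, u_P   = 1.0,      0.00029   # purity (dimensionless)
FW_KHP, u_FW = 204.2212, 0.0037    # g/mol
V_NaOH, u_V  = 18.64,    0.013     # mL
R, u_R       = 1.0,      0.0005    # repeatability (dimensionless)

C_NaOH = (1000 * m_KHP * P_KHP) / (FW_KHP * V_NaOH) * R

# for multiplication and division the relative uncertainties add in quadrature
rel_u = math.sqrt((u_m / m_KHP) ** 2 + (u_P / P_KHP) ** 2 +
                  (u_FW / FW_KHP) ** 2 + (u_V / V_NaOH) ** 2 +
                  (u_R / R) ** 2)

print(round(C_NaOH, 4))   # 0.1021 M
print(C_NaOH * rel_u)     # ~9.3e-5 M, or roughly 0.0001 M
```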
Table 16.3.1, at the bottom of this appendix, gives the proportion, P, of the area under a normal distribution curve that lies to the right of a deviation, z

$z = \frac {X -\mu} {\sigma} \nonumber$

where X is the value for which the deviation is defined, $\mu$ is the distribution’s mean value and $\sigma$ is the distribution’s standard deviation. For example, the proportion of the area under a normal distribution to the right of a deviation of 0.04 is 0.4840 (the entry in the first row of the table under the 0.04 column), or 48.40% of the total area (see the area shaded blue in Figure 16.3.1). The proportion of the area to the left of the deviation is 1 – P. For a deviation of 0.04, this is 1 – 0.4840, or 51.60%.

Figure 16.3.1: Normal distribution curve showing the area under the curve for deviations greater than +0.04 (blue) and for deviations less than –0.04 (green).

When the deviation is negative (that is, when X is smaller than $\mu$), the value of z is negative. In this case, the values in the table give the area to the left of z. For example, if z is –0.04, then 48.40% of the area lies to the left of the deviation (see the area shaded green in Figure 16.3.1).

To use the single-sided normal distribution table, sketch the normal distribution curve for your problem and shade the area that corresponds to your answer (for example, see Figure 16.3.2, which is for Example 4.4.2). This divides the normal distribution curve into three regions: the area that corresponds to your answer (shown in blue), the area to the right of this, and the area to the left of this. Calculate the values of z for the limits of the area that corresponds to your answer. Use the table to find the areas to the right and to the left of these deviations. Subtract these values from 100% and, voilà, you have your answer.
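The tabulated proportions can also be generated directly; here is a minimal Python sketch using SciPy (assumed to be installed), where scipy.stats.norm.sf(z) returns the area to the right of z.

```python
from scipy.stats import norm

# area under a standard normal curve to the right of z = +0.04
print(norm.sf(0.04))    # ~0.4840

# for a negative deviation the same value is the area to the left of z
print(norm.cdf(-0.04))  # ~0.4840
```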
Table 16.3.1: Values for a Single-Sided Normal Distribution

z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.4960 0.4920 0.4880 0.4840 0.4801 0.4761 0.4721 0.4681 0.4641
0.1 0.4602 0.4562 0.4522 0.4483 0.4443 0.4404 0.4365 0.4325 0.4286 0.4247
0.2 0.4207 0.4168 0.4129 0.4090 0.4052 0.4013 0.3974 0.3936 0.3897 0.3859
0.3 0.3821 0.3783 0.3745 0.3707 0.3669 0.3632 0.3594 0.3557 0.3520 0.3483
0.4 0.3446 0.3409 0.3372 0.3336 0.3300 0.3264 0.3228 0.3192 0.3156 0.3121
0.5 0.3085 0.3050 0.3015 0.2981 0.2946 0.2912 0.2877 0.2843 0.2810 0.2776
0.6 0.2743 0.2709 0.2676 0.2643 0.2611 0.2578 0.2546 0.2514 0.2483 0.2451
0.7 0.2420 0.2389 0.2358 0.2327 0.2296 0.2266 0.2236 0.2206 0.2177 0.2148
0.8 0.2119 0.2090 0.2061 0.2033 0.2005 0.1977 0.1949 0.1922 0.1894 0.1867
0.9 0.1841 0.1814 0.1788 0.1762 0.1736 0.1711 0.1685 0.1660 0.1635 0.1611
1.0 0.1587 0.1562 0.1539 0.1515 0.1492 0.1469 0.1446 0.1423 0.1401 0.1379
1.1 0.1357 0.1335 0.1314 0.1292 0.1271 0.1251 0.1230 0.1210 0.1190 0.1170
1.2 0.1151 0.1131 0.1112 0.1093 0.1075 0.1056 0.1038 0.1020 0.1003 0.0985
1.3 0.0968 0.0951 0.0934 0.0918 0.0901 0.0885 0.0869 0.0853 0.0838 0.0823
1.4 0.0808 0.0793 0.0778 0.0764 0.0749 0.0735 0.0721 0.0708 0.0694 0.0681
1.5 0.0668 0.0655 0.0643 0.0630 0.0618 0.0606 0.0594 0.0582 0.0571 0.0559
1.6 0.0548 0.0537 0.0526 0.0516 0.0505 0.0495 0.0485 0.0475 0.0465 0.0455
1.7 0.0446 0.0436 0.0427 0.0418 0.0409 0.0401 0.0392 0.0384 0.0375 0.0367
1.8 0.0359 0.0351 0.0344 0.0336 0.0329 0.0322 0.0314 0.0307 0.0301 0.0294
1.9 0.0287 0.0281 0.0274 0.0268 0.0262 0.0256 0.0250 0.0244 0.0239 0.0233
2.0 0.0228 0.0222 0.0217 0.0212 0.0207 0.0202 0.0197 0.0192 0.0188 0.0183
2.1 0.0179 0.0174 0.0170 0.0166 0.0162 0.0158 0.0154 0.0150 0.0146 0.0143
2.2 0.0139 0.0136 0.0132 0.0129 0.0125 0.0122 0.0119 0.0116 0.0113 0.0110
(for z ≥ 2.3 the table provides entries at selected deviations only)
2.3 0.0107 0.0104 0.0102 0.00964 0.00914 0.00866
2.4 0.00820 0.00776 0.00734 0.00695 0.00657
2.5 0.00621 0.00587 0.00554 0.00523 0.00494
2.6 0.00466 0.00440 0.00415 0.00391 0.00368
2.7 0.00347 0.00326 0.00307 0.00289 0.00272
2.8 0.00256 0.00240 0.00226 0.00212 0.00199
2.9 0.00187 0.00175 0.00164 0.00154 0.00144
3.0 0.00135
3.1 0.000968
3.2 0.000687
3.3 0.000483
3.4 0.000337
3.5 0.000233
3.6 0.000159
3.7 0.000108
3.8 0.0000723
3.9 0.0000481
4.0 0.0000317

16.04: Critical Values for t-Test

Assuming we have calculated texp, there are two approaches to interpreting a t-test. In the first approach we choose a value of $\alpha$ for rejecting the null hypothesis and read the value of $t(\alpha,\nu)$ from the table below. If $t_\text{exp} > t(\alpha,\nu)$, we reject the null hypothesis and accept the alternative hypothesis. In the second approach, we find the row in the table below that corresponds to the available degrees of freedom and move across the row to find (or estimate) the $\alpha$ that corresponds to $t_\text{exp} = t(\alpha,\nu)$; this establishes the largest value of $\alpha$ for which we can retain the null hypothesis. Finding, for example, that $\alpha$ is 0.10 means that we retain the null hypothesis at the 90% confidence level, but reject it at the 89% confidence level. The examples in this textbook use the first approach.
Table 16.4.1: Critical Values of t for the t-Test

Values of t for a confidence interval of: 90% 95% 98% 99%
…and an $\alpha$ value of: 0.10 0.05 0.02 0.01

Degrees of Freedom
1 6.314 12.706 31.821 63.657
2 2.920 4.303 6.965 9.925
3 2.353 3.182 4.541 5.841
4 2.132 2.776 3.747 4.604
5 2.015 2.571 3.365 4.032
6 1.943 2.447 3.143 3.707
7 1.895 2.365 2.998 3.499
8 1.860 2.306 2.896 3.355
9 1.833 2.262 2.821 3.250
10 1.812 2.228 2.764 3.169
12 1.782 2.179 2.681 3.055
14 1.761 2.145 2.624 2.977
16 1.746 2.120 2.583 2.921
18 1.734 2.101 2.552 2.878
20 1.725 2.086 2.528 2.845
30 1.697 2.042 2.457 2.750
50 1.676 2.009 2.403 2.678
$\infty$ 1.645 1.960 2.326 2.576

The values in this table are for a two-tailed t-test. For a one-tailed test, divide the $\alpha$ values by 2. For example, the last column has an $\alpha$ value of 0.005 and a confidence interval of 99.5% when conducting a one-tailed t-test.
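For degrees of freedom not listed in the table, a minimal Python sketch using SciPy (assumed to be installed) gives the same two-tailed critical values.

```python
from scipy.stats import t

alpha, nu = 0.05, 4                # significance level and degrees of freedom
t_crit = t.ppf(1 - alpha / 2, nu)  # two-tailed critical value
print(round(t_crit, 3))            # 2.776, matching Table 16.4.1
```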
The following tables provide values for $F(0.05, \nu_\text{num}, \nu_\text{denom})$ for one-tailed and for two-tailed F-tests. To use these tables, we first decide whether the situation calls for a one-tailed or a two-tailed analysis and calculate Fexp

$F_\text{exp} = \frac {s_A^2} {s_B^2} \nonumber$

where $s_A^2$ is greater than $s_B^2$. Next, we compare Fexp to $F(0.05, \nu_\text{num}, \nu_\text{denom})$ and reject the null hypothesis if $F_\text{exp} > F(0.05, \nu_\text{num}, \nu_\text{denom})$. You may replace s with $\sigma$ if you know the population’s standard deviation.

Table 16.5.1: Critical Values of F for a One-Tailed F-Test ($\nu_\text{num}$ across the columns, $\nu_\text{denom}$ down the rows)

$\nu_\text{denom}$ \ $\nu_\text{num}$ 1 2 3 4 5 6 7 8 9 10 15 20 $\infty$
1 161.4 199.5 215.7 224.6 230.2 234.0 236.8 238.9 240.5 241.9 245.9 248.0 254.3
2 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 19.40 19.43 19.45 19.50
3 10.13 9.552 9.277 9.117 9.013 8.941 8.887 8.845 8.812 8.786 8.703 8.660 8.526
4 7.709 6.994 6.591 6.388 6.256 6.163 6.094 6.041 5.999 5.964 5.858 5.803 5.628
5 6.608 5.786 5.409 5.192 5.050 4.950 4.876 4.818 4.772 4.735 4.619 4.558 4.365
6 5.987 5.143 4.757 4.534 4.387 4.284 4.207 4.147 4.099 4.060 3.938 3.874 3.669
7 5.591 4.737 4.347 4.120 3.972 3.866 3.787 3.726 3.677 3.637 3.511 3.445 3.230
8 5.318 4.459 4.066 3.838 3.687 3.581 3.500 3.438 3.388 3.347 3.218 3.150 2.928
9 5.117 4.256 3.863 3.633 3.482 3.374 3.293 3.230 3.179 3.137 3.006 2.936 2.707
10 4.965 4.103 3.708 3.478 3.326 3.217 3.135 3.072 3.020 2.978 2.845 2.774 2.538
11 4.844 3.982 3.587 3.357 3.204 3.095 3.012 2.948 2.896 2.854 2.719 2.646 2.404
12 4.747 3.885 3.490 3.259 3.106 2.996 2.913 2.849 2.796 2.753 2.617 2.544 2.296
13 4.667 3.806 3.411 3.179 3.025 2.915 2.832 2.767 2.714 2.671 2.533 2.459 2.206
14 4.600 3.739 3.344 3.112 2.958 2.848 2.764 2.699 2.646 2.602 2.463 2.388 2.131
15 4.543 3.682 3.287 3.056 2.901 2.790 2.707 2.641 2.588 2.544 2.403 2.328 2.066
16 4.494 3.634 3.239 3.007 2.852 2.741 2.657 2.591 2.538 2.494 2.352 2.276 2.010
17 4.451 3.592 3.197 2.965 2.810 2.699 2.614 2.548 2.494 2.450 2.308 2.230 1.960
18 4.414 3.555 3.160 2.928 2.773 2.661 2.577 2.510 2.456 2.412 2.269 2.191 1.917
19 4.381 3.522 3.127 2.895 2.740 2.628 2.544 2.477 2.423 2.378 2.234 2.155 1.878
20 4.351 3.493 3.098 2.866 2.711 2.599 2.514 2.447 2.393 2.348 2.203 2.124 1.843
$\infty$ 3.842 2.996 2.605 2.372 2.214 2.099 2.010 1.938 1.880 1.831 1.666 1.570 1.000

Table 16.5.2: Critical Values of F for a Two-Tailed F-Test ($\nu_\text{num}$ across the columns, $\nu_\text{denom}$ down the rows)

$\nu_\text{denom}$ \ $\nu_\text{num}$ 1 2 3 4 5 6 7 8 9 10 15 20 $\infty$
1 647.8 799.5 864.2 899.6 921.8 937.1 948.2 956.7 963.3 968.6 984.9 993.1 1018
2 38.51 39.00 39.17 39.25 39.30 39.33 39.36 39.37 39.39 39.40 39.43 39.45 39.50
3 17.44 16.04 15.44 15.10 14.88 14.73 14.62 14.54 14.47 14.42 14.25 14.17 13.90
4 12.22 10.65 9.979 9.605 9.364 9.197 9.074 8.980 8.905 8.844 8.657 8.560 8.257
5 10.01 8.434 7.764 7.388 7.146 6.978 6.853 6.757 6.681 6.619 6.428 6.329 6.015
6 8.813 7.260 6.599 6.227 5.988 5.820 5.695 5.600 5.523 5.461 5.269 5.168 4.894
7 8.073 6.542 5.890 5.523 5.285 5.119 4.995 4.899 4.823 4.761 4.568 4.467 4.142
8 7.571 6.059 5.416 5.053 4.817 4.652 4.529 4.433 4.357 4.295 4.101 3.999 3.670
9 7.209 5.715 5.078 4.718 4.484 4.320 4.197 4.102 4.026 3.964 3.769 3.667 3.333
10 6.937 5.456 4.826 4.468 4.236 4.072 3.950 3.855 3.779 3.717 3.522 3.419 3.080
11 6.724 5.256 4.630 4.275 4.044 3.881 3.759 3.664 3.588 3.526 3.330 3.226 2.883
12 6.544 5.096 4.474 4.121 3.891 3.728 3.607 3.512 3.436 3.374 3.177 3.073 2.725
13 6.414 4.965 4.347 3.996 3.767 3.604 3.483 3.388 3.312 3.250 3.053 2.948 2.596
14 6.298 4.857 4.242 3.892 3.663 3.501 3.380 3.285 3.209 3.147 2.949 2.844 2.487
15 6.200 4.765 4.153 3.804 3.576 3.415 3.293 3.199 3.123 3.060 2.862 2.756 2.395
16 6.115 4.687 4.077 3.729 3.502 3.341 3.219 3.125 3.049 2.986 2.788 2.681 2.316
17 6.042 4.619 4.011 3.665 3.438 3.277 3.156 3.061 2.985 2.922 2.723 2.616 2.247
18 5.978 4.560 3.954 3.608 3.382 3.221 3.100 3.005 2.929 2.866 2.667 2.559 2.187
19 5.922 4.508 3.903 3.559 3.333 3.172 3.051 2.956 2.880 2.817 2.617 2.509 2.133
20 5.871 4.461 3.859 3.515 3.289 3.128 3.007 2.913 2.837 2.774 2.573 2.464 2.085
$\infty$ 5.024 3.689 3.116 2.786 2.567 2.408 2.288 2.192 2.114 2.048 1.833 1.708 1.000
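As with the t-table, these critical values can be reproduced for any degrees of freedom; here is a minimal Python sketch using SciPy (assumed to be installed).

```python
from scipy.stats import f

nu_num, nu_denom = 2, 10
print(round(f.ppf(1 - 0.05, nu_num, nu_denom), 3))      # 4.103, one-tailed F(0.05, 2, 10)
print(round(f.ppf(1 - 0.05 / 2, nu_num, nu_denom), 3))  # 5.456, two-tailed F(0.05, 2, 10)
```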
The following table provides critical values for $Q(\alpha, n)$, where $\alpha$ is the probability of incorrectly rejecting the suspected outlier and $n$ is the number of samples in the data set. There are several versions of Dixon’s Q-Test, each of which calculates a value for Qij where i is the number of suspected outliers on one end of the data set and j is the number of suspected outliers on the opposite end of the data set. The critical values for Q here are for a single outlier, Q10, where

$Q_\text{exp} = Q_{10} = \frac {|\text{outlier's value} - \text{nearest value}|} {\text{largest value} - \text{smallest value}} \nonumber$

The suspected outlier is rejected if Qexp is greater than $Q(\alpha, n)$. For additional information consult Rorabacher, D. B. “Statistical Treatment for Rejection of Deviant Values: Critical Values of Dixon’s ‘Q’ Parameter and Related Subrange Ratios at the 95% Confidence Level,” Anal. Chem. 1991, 63, 139–146.

Table 16.6.1: Critical Values for Dixon's Q-Test

n \ $\alpha$ 0.1 0.05 0.04 0.02 0.01
3 0.941 0.970 0.976 0.988 0.994
4 0.765 0.829 0.846 0.889 0.926
5 0.642 0.710 0.729 0.780 0.821
6 0.560 0.625 0.644 0.698 0.740
7 0.507 0.568 0.586 0.637 0.680
8 0.468 0.526 0.543 0.590 0.634
9 0.437 0.493 0.510 0.555 0.598
10 0.412 0.466 0.483 0.527 0.568

16.07: Critical Values for Grubb's Test

The following table provides critical values for $G(\alpha, n)$, where $\alpha$ is the probability of incorrectly rejecting the suspected outlier and n is the number of samples in the data set. There are several versions of Grubb’s Test, each of which calculates a value for Gij where i is the number of suspected outliers on one end of the data set and j is the number of suspected outliers on the opposite end of the data set. The critical values for G given here are for a single outlier, G10, where

$G_\text{exp} = G_{10} = \frac {|X_{out} - \overline{X}|} {s} \nonumber$

The suspected outlier is rejected if Gexp is greater than $G(\alpha, n)$.

Table 16.7.1: Critical Values for Grubb's Test

n \ $\alpha$ 0.05 0.01
3 1.155 1.155
4 1.481 1.496
5 1.715 1.764
6 1.887 1.973
7 2.020 2.139
8 2.126 2.274
9 2.215 2.387
10 2.290 2.482
11 2.355 2.564
12 2.412 2.636
13 2.462 2.699
14 2.507 2.755
15 2.549 2.806

16.08: Recommended Primary Standards

All compounds are of the highest available purity. Metals are cleaned with dilute acid to remove any surface impurities and rinsed with distilled water. Unless otherwise indicated, compounds are dried to a constant weight at 110 °C. Most of these compounds are soluble in dilute acid (1:1 HCl or 1:1 HNO3), with gentle heating if necessary; some of the compounds are water soluble.
Element Compound FW (g/mol) Comments
aluminum Al metal 26.982
antimony Sb metal 121.760
 \(\ce{KSbOC4H4O6}\) 324.92 prepared by drying \(\ce{KSbC4H4O6 * 1/2H2O}\) at 100 °C and storing in a desiccator
arsenic As metal 74.922
 \(\ce{As2O3}\) 197.84 toxic
barium \(\ce{BaCO3}\) 197.34 dry at 200 °C for 4 h
bismuth Bi metal 208.98
boron \(\ce{H3BO3}\) 61.83 do not dry
bromine KBr 119.01
cadmium Cd metal 112.411
 CdO 128.40
calcium \(\ce{CaCO3}\) 100.09
cerium Ce metal 140.116
 \(\ce{(NH4)2Ce(NO3)6}\) 548.23
cesium \(\ce{Cs2CO3}\) 325.82
 \(\ce{Cs2SO4}\) 361.87
chlorine NaCl 58.44
chromium Cr metal 51.996
 \(\ce{K2Cr2O7}\) 294.19
cobalt Co metal 58.933
copper Cu metal 63.546
 CuO 79.54
fluorine NaF 41.99 do not store solutions in glass containers
iodine KI 166.00
 \(\ce{KIO3}\) 214.00
iron Fe metal 55.845
lead Pb metal 207.2
lithium \(\ce{Li2CO3}\) 73.89
magnesium Mg metal 24.305
manganese Mn metal 54.938
mercury Hg metal 200.59
molybdenum Mo metal 95.94
nickel Ni metal 58.693
phosphorus \(\ce{KH2PO4}\) 136.09
 \(\ce{P2O5}\) 141.94
potassium KCl 74.56
 \(\ce{K2CO3}\) 138.21
 \(\ce{K2Cr2O7}\) 294.19
 \(\ce{KHC8H4O4}\) 204.23
silicon Si metal 28.085
 \(\ce{SiO2}\) 60.08
silver Ag metal 107.868
 \(\ce{AgNO3}\) 169.87
sodium NaCl 58.44
 \(\ce{Na2CO3}\) 106.00
 \(\ce{Na2C2O4}\) 134.00
strontium \(\ce{SrCO3}\) 147.63
sulfur elemental S 32.066
 \(\ce{K2SO4}\) 174.27
 \(\ce{Na2SO4}\) 142.04
tin Sn metal 118.710
titanium Ti metal 47.867
tungsten W metal 183.84
uranium U metal 238.029
 \(\ce{U3O8}\) 842.09
vanadium V metal 50.942
zinc Zn metal 65.39
Sources:
• Smith, B. W.; Parsons, M. L. J. Chem. Educ. 1973, 50, 679–681
• Moody, J. R.; Greenburg, P. R.; Pratt, K. W.; Rains, T. C. Anal. Chem. 1988, 60, 1203A–1218A.
Calibrating a balance does not eliminate all sources of determinate error that might affect the signal. Because of the buoyancy of air, an object always weighs less in air than it does in a vacuum. If there is a difference between the object’s density and the density of the weights used to calibrate the balance, then we can make a correction for buoyancy [Battino, R.; Williamson, A. G. J. Chem. Educ. 1984, 61, 51–52]. An object’s true weight in vacuo, Wv, is related to its weight in air, Wa, by the equation $W_v = W_a \times \left[ 1 + \left( \frac {1} {D_o} - \frac {1} {D_w} \right) \times 0.0012 \right] \label{16.1}$ where Do is the object’s density, Dw is the density of the calibration weight, and 0.0012 is the density of air under normal laboratory conditions (all densities are in units of g/cm3). The greater the difference between Do and Dw the more serious the error in the object’s measured weight. The buoyancy correction for a solid is small and frequently ignored. The correction may be significant, however, for low density liquids and gases. This is particularly important when calibrating glassware. For example, we can calibrate a volumetric pipet by carefully filling the pipet with water to its calibration mark, dispensing the water into a tared beaker, and determining the water’s mass. After correcting for the buoyancy of air, we use the water’s density to calculate the volume dispensed by the pipet. Example 16.9.1 A 10-mL volumetric pipet is calibrated following the procedure outlined above, using a balance calibrated with brass weights with a density of 8.40 g/cm3. At 25oC the pipet dispenses 9.9736 g of water. What is the actual volume dispensed by the pipet and what is the determinate error in this volume if we ignore the buoyancy correction? At 25oC the density of water is 0.997 05 g/cm3. Solution Using Equation \ref{16.1} the water’s true weight is $W_v = 9.9736 \text{ g} \times \left[ 1 + \left( \frac {1} {0.99705} - \frac {1} {8.40} \right) \times 0.0012 \right] = 9.9842 \text{ g} \nonumber$ and the actual volume of water dispensed by the pipet is $\frac {9.9842 \text{ g}} {0.99705 \text{ g/cm}^{3}} = 10.014 \text{ cm}^{3} \nonumber$ If we ignore the buoyancy correction, then we report the pipet’s volume as $\frac {9.9736 \text{ g}} {0.99705 \text{ g/cm}^{3}} = 10.003 \text{ cm}^{3} \nonumber$ introducing a negative determinate error of –0.11%. Exercise 16.9.1 To calibrate a 10-mL pipet a measured volume of water is transferred to a tared flask and weighed, yielding a mass of 9.9814 grams. (a) Calculate, with and without correcting for buoyancy, the volume of water delivered by the pipet. Assume the density of water is 0.99707 g/cm3 and that the density of the weights is 8.40 g/cm3. (b) What is the absolute error and the relative error introduced if we fail to account for the effect of buoyancy? Is this a significant source of determinate error for the calibration of a pipet? Explain. 
Answer

For (a), without accounting for buoyancy, the volume of water is

$\frac {9.9814 \text{ g}} {0.99707 \text{ g/cm}^3} = 10.011 \text{ cm}^3 = 10.011 \text{ mL} \nonumber$

When we correct for buoyancy, however, the water's true weight is

$W_v = 9.9814 \text{ g} \times \left[ 1 + \left( \frac {1} {0.99707 \text{ g/cm}^3} - \frac {1} {8.40 \text{ g/cm}^3} \right) \times 0.0012 \text{ g/cm}^3 \right] = 9.992 \text{ g} \nonumber$

and the volume of water delivered by the pipet is

$\frac {9.992 \text{ g}} {0.99707 \text{ g/cm}^3} = 10.021 \text{ cm}^3 = 10.021 \text{ mL} \nonumber$

For (b), the absolute and relative errors in the volume are

$10.011 \text{ mL} - 10.021 \text{ mL} = -0.010 \text{ mL} \nonumber$

$\frac {- 0.010 \text{ mL}} {10.021 \text{ mL}} \times 100 = -0.10\% \nonumber$

Table 4.2.8 shows us that the standard deviation for the calibration of a 10-mL pipet is on the order of ±0.006 mL. Failing to correct for the effect of buoyancy gives a determinate error of –0.010 mL that is slightly larger than ±0.006 mL, suggesting that it introduces a small, but significant determinate error.

Exercise 16.9.2

Repeat the questions in Exercise 16.9.1 for the case where a mass of 0.2500 g is measured for a solid that has a density of 2.50 g/cm3.

Answer

The sample’s true weight is

$W_v = 0.2500 \text{ g} \times \left[ 1 + \left( \frac {1} {2.50 \text{ g/cm}^3} - \frac {1} {8.40 \text{ g/cm}^3} \right) \times 0.0012 \text{ g/cm}^3 \right] = 0.2501 \text{ g} \nonumber$

In this case the absolute and relative errors in mass are –0.0001 g and –0.040%.

Exercise 16.9.3

Is the failure to correct for buoyancy a constant or proportional source of determinate error?

Answer

The true weight is the product of the weight measured in air and the buoyancy correction factor, which makes this a proportional error. The percentage error introduced when we ignore the buoyancy correction is independent of mass and a function only of the difference between the density of the object being weighed and the density of the calibration weights.

Exercise 16.9.4

What is the minimum density of a substance necessary to keep the buoyancy correction to less than 0.01% when using brass calibration weights with a density of 8.40 g/cm3?

Answer

To determine the minimum density, we note that the buoyancy correction factor equals 1.00 if the density of the calibration weights and the density of the sample are the same. The correction factor is greater than 1.00 if Do is smaller than Dw; thus, the following inequality applies

$\left( \frac {1} {D_o} - \frac {1} {8.40} \right) \times 0.0012 \le (1.00)(0.0001) \nonumber$

Solving for Do shows that the sample’s density must be greater than 4.94 g/cm3 to ensure an error of less than 0.01%.
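For readers who wish to verify these results numerically, here is a minimal Python sketch that applies Equation 16.1 to the pipet calibration in Example 16.9.1 (the function and variable names are ours).

```python
def true_weight(W_air, D_obj, D_wts=8.40, D_air=0.0012):
    # weight in vacuo from the weight in air (all densities in g/cm^3)
    return W_air * (1 + (1 / D_obj - 1 / D_wts) * D_air)

W_air = 9.9736      # g of water dispensed at 25 degrees C
D_water = 0.99705   # g/cm^3

W_vac = true_weight(W_air, D_water)
print(round(W_vac, 4))             # 9.9842 g
print(round(W_vac / D_water, 3))   # 10.014 mL, the buoyancy-corrected volume
print(round(W_air / D_water, 3))   # 10.003 mL if buoyancy is ignored
```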
The following table provides pKsp and Ksp values for selected compounds, organized by the anion. All values are from Martell, A. E.; Smith, R. M. Critical Stability Constants, Vol. 4. Plenum Press: New York, 1976. Unless otherwise stated, values are for 25 oC and zero ionic strength. Bromide ($\ce{Br-}$) pKsp Ksp $\ce{CuBr}$ 8.3 $5. \times10^{-9}$ $\ce{AgBr}$ 12.3 $5.0 \times 10^{-13}$ $\ce{Hg2Br2}$ 22.25 $5.6 \times 10^{-23}$ $\ce{HgBr2} \: (\mu = 0.5 \text{ M})$ 18.9 $1.3 \times 10^{-19}$ $\ce{PbBr2} \: (\mu = 0.5 \text{ M})$ 5.68 $2.1 \times 10^{-6}$ Carbonate ($\ce{CO3^{2-}}$) pKsp Ksp $\ce{MgCO3}$ 7.46 $3.5 \times 10^{-8}$ $\ce{CaCO3}$ (calcite) 8.35 $4.5 \times 10^{-9}$ $\ce{CaCO3}$ (aragonite) 8.22 $6.0 \times 10^{-9}$ $\ce{SrCO3}$ 9.03 $9.3 \times 10^{-10}$ $\ce{BaCO3}$ 8.3 $5.0 \times 10^{-9}$ $\ce{MnCO3}$ 9.3 $5.0 \times 10^{-10}$ $\ce{FeCO3}$ 10.68 $2.1 \times 10^{-11}$ $\ce{CoCO3}$ 9.98 $1.0 \times 10^{-10}$ $\ce{NiCO3}$ 6.87 $1.3 \times 10^{-7}$ $\ce{Ag2CO3}$ 11.09 $8.1 \times 10^{-12}$ $\ce{Hg2CO3}$ 16.05 $8.9 \times 10^{-17}$ $\ce{ZnCO3}$ 10 $1.0 \times 10^{-10}$ $\ce{CdCO3}$ 13.74 $1.8 \times 10^{-14}$ $\ce{PbCO3}$ 13.13 $7.4 \times 10^{-14}$ Chloride ($\ce{Cl-}$) pKsp Ksp $\ce{CuCl}$ 6.73 $1.9 \times 10^{-7}$ $\ce{AgCl}$ 9.74 $1.8\times 10^{-10}$ $\ce{Hg2Cl2}$ 17.91 $1.2 \times 10^{-18}$ $\ce{PbCl2}$ 4.78 $2.0 \times 10^{-19}$ Chromate ($\ce{CrO4^{2-}}$) pKsp Ksp $\ce{BaCrO4}$ 9.67 $2.1 \times 10^{-10}$ $\ce{CuCrO4}$ 5.44 $3.6 \times 10^{-6}$ $\ce{Ag2CrO4}$ 11.92 $1.2 \times 10^{-12}$ $\ce{Hg2CrO4}$ 8.7 $2.0 \times 10^{-9}$ Cyanide ($\ce{CN-}$) pKsp Ksp $\ce{AgCN}$ 15.66 $2.2 \times 10^{-16}$ $\ce{Zn(CN)2} \: (\mu = 3.0 \text{ M})$ 15.5 $3. \times 10^{-16}$ $\ce{Hg2(CN)2}$ 39.3 $5. \times 10^{-40}$ Ferrocyanide ($\ce{Fe(CN)6^{4-}}$) pKsp Ksp $\ce{Zn2[Fe(CN)6]}$ 15.68 $2.1 \times 10^{-16}$ $\ce{Cd2[Fe(CN)6]}$ 17.38 $4.2 \times 10^{-18}$ $\ce{Pb2[Fe(CN)6]}$ 18.02 $9.5 \times 10^{-19}$ Fluoride ($\ce{F-}$) pKsp Ksp $\ce{MgF2}$ 8.18 $6.6 \times 10^{-9}$ $\ce{CaF2}$ 10.41 $3.9 \times 10^{-11}$ $\ce{SrF2}$ 8.54 $2.9 \times 10^{-9}$ $\ce{BaF2}$ 5.76 $1.7 \times 10^{-6}$ $\ce{PbF2}$ 7.44 $3.6 \times 10^{-8}$ Hydroxide ($\ce{OH-}$) pKsp Ksp $\ce{Mg(OH)2}$ 11.15 $7.1 \times 10^{-12}$ $\ce{Ca(OH)2}$ 5.19 $6.5 \times 10^{-6}$ $\ce{Ba(OH)2 * 8 H2O}$ 3.6 $3. \times 10^{-4}$ $\ce{La(OH)3}$ 20.7 $2. \times 10^{-21}$ $\ce{Mn(OH)2}$ 12.8 $1.6 \times 10^{-13}$ $\ce{Fe(OH)2}$ 15.1 $8. \times 10^{-16}$ $\ce{Co(OH)2}$ 14.9 $1.3 \times 10^{-15}$ $\ce{Ni(OH)2}$ 15.2 $6. \times 10^{-16}$ $\ce{Cu(OH)2}$ 19.32 $4.8 \times 10^{-20}$ $\ce{Fe(OH)3}$ 38.8 $1.6 \times 10^{-39}$ $\ce{Co(OH)3} \: (T = 19 \text{°C})$ 44.5 $3. \times 10^{-45}$ $\ce{Ag2O} \: ( + \: \ce{H2O} \ce{<=>} \ce{2 Ag+} + \ce{2 OH-})$ 15.42 $3.8 \times 10^{-16}$ $\ce{Cu2O} \: ( + \: \ce{H2O} \ce{<=>} \ce{2 Cu+} + \ce{2 OH-})$ 29.4 $4.u \times 10^{-30}$ $\ce{Zn(OH)2}$ (amorphous) 15.52 $3.0 \times 10^{-16}$ $\ce{Cd(OH)2 (\beta)}$ 14.35 $4.5 \times 10^{-15}$ $\ce{HgO} \text{ (red)} \: ( + \: \ce{H2O} \ce{<=>} \ce{Hg^{2+}} + \ce{2 OH-})$ 25.44 $3.6 \times 10^{-26}$ $\ce{SnO} \: ( + \: \ce{H2O} \ce{<=>} \ce{Hg^{2+}} + \ce{2 OH-})$ 26.2 $6. \times 10^{-27}$ $\ce{PbO} \text{ (yellow)} \: ( + \: \ce{H2O} \ce{<=>} + \ce{Pb^{2+}} + \ce{2 OH-}$ 15.1 $8. \times 10^{-16}$ $\ce{Al(OH)3 \: (\alpha)}$ 33.5 $3. 
\times 10^{-34}$ Iodate ($\ce{IO3-}$) pKsp Ksp $\ce{Ca(IO3)2}$ 6.15 $7.1 \times 10^{-7}$ $\ce{Ba(IO3)2}$ 8.81 $1.5 \times 10^{-9}$ $\ce{AgIO3}$ 7.51 $3.1 \times 10^{-8}$ $\ce{Hg2(IO3)2}$ 17.89 $1.3 \times 10^{-18}$ $\ce{Zn(IO3)2}$ 5.41 $3.9 \times 10^{-6}$ $\ce{Cd(IO3)2}$ 7.64 $2.3 \times 10^{-8}$ $\ce{Pb(IO3)2}$ 12.61 $2.5 \times 10^{-13}$ Iodide ($\ce{I-}$) pKsp Ksp $\ce{AgI}$ 16.08 $8.3 \times 10^{-17}$ $\ce{Hg2I2}$ 28.33 $4.7 \times 10^{-29}$ $\ce{HgI2} \: (\mu = 0.5 \text{ M})$ 27.95 $1.1 \times 10^{-28}$ $\ce{PbI2}$ 8.1 $7.9 \times 10^{-9}$ Oxalate ($\ce{C2O4^{2-}}$) pKsp Ksp $\ce{CaC2O4} \: (\mu = 0.1 \text{ M, } T = 20 \text{°C})$ 7.9 $1.3 \times 10^{-8}$ $\ce{BaC2O4} \: (\mu = 0.1 \text{ M, } T = 20 \text{°C})$ 6 $1. \times 10^{-6}$ $\ce{SrC2O4} \: (\mu = 0.1 \text{ M, } T = 20 \text{°C})$ 6.4 $4. \times 10^{-7}$ Phosphate ($\ce{PO4^{3-}}$) pKsp Ksp $\ce{Fe3(PO4)2 * 8 H2O}$ 36 $1. \times 10^{-36}$ $\ce{Zn3(PO4)2 * 4 H2O}$ 35.3 $5. \times 10^{-36}$ $\ce{Ag3PO4}$ 17.55 $2.8 \times 10^{-18}$ $\ce{Pb3(PO4)2} \: (T = 38 \text{ °C})$ 43.55 $3.0 \times 10^{-44}$ Sulfate ($\ce{SO4^{2-}}$) pKsp Ksp $\ce{CaSO4}$ 4.62 $2.4 \times 10^{-5}$ $\ce{SrSO4}$ 6.5 $3.2 \times 10^{-7}$ $\ce{BaSO4}$ 9.96 $1.1 \times 10^{-10}$ $\ce{Ag2SO4}$ 4.83 $1.5 \times 10^{-5}$ $\ce{Hg2SO4}$ 6.13 $7.4 \times 10^{-7}$ $\ce{PbSO4}$ 7.79 $1.6 \times 10^{-8}$ Sulfide ($\ce{S^{2-}}$) pKsp Ksp $\ce{MnS} \: (\text{green})$ 13.5 $3. \times 10^{-14}$ $\ce{FeS}$ 18.1 $8. \times 10^{-19}$ $\ce{CoS} \: (\beta)$ 25.6 $3. \times 10^{-26}$ $\ce{NiS} \: (\gamma)$ 26.6 $3. \times 10^{-27}$ $\ce{CuS}$ 36.1 $8. \times 10^{-37}$ $\ce{Cu2S}$ 48.5 $3. \times 10^{-49}$ $\ce{Ag2S}$ 50.1 $8. \times 10^{-51}$ $\ce{ZnS} \: (\alpha)$ 24.7 $2. \times 10^{-25}$ $\ce{CdS}$ 27 $1. \times 10^{-27}$ $\ce{Hg2S} \: (\text{red})$ 53.3 $5. \times 10^{-54}$ $\ce{PbS}$ 27.5 $3. \times 10^{-28}$ Thiocyanate ($\ce{SCN-}$) pKsp Ksp $\ce{CuSCN} \: (\mu = 5.0 \text{ M})$ 13.4 $4.0\times 10^{-14}$ $\ce{AgSCN}$ 11.97 $1.1\times 10^{-12}$ $\ce{Hg2(SCN)2}$ 19.52 $3.0\times 10^{-20}$ $\ce{Hg2(SCN)2} \: (\mu = 1.0 \text{ M})$ 19.56 $2.8\times 10^{-20}$
The following table provides pKa and Ka values for selected weak acids. All values are from Martell, A. E.; Smith, R. M. Critical Stability Constants, Vols. 1–4. Plenum Press: New York, 1976. Unless otherwise stated, values are for 25 oC and for zero ionic strength. Those values in brackets are considered less reliable. Weak acids are arranged alphabetically by the names of the neutral compounds from which they are derived. In some cases—such as acetic acid—the compound is the weak acid. In other cases—such as for the ammonium ion—the neutral compound is the conjugate base. Chemical formulas or structural formulas are shown for the fully protonated weak acid. Successive acid dissociation constants are provided for polyprotic weak acids; where there is ambiguity, the specific acidic proton is identified. To find the Kb value for a conjugate weak base, recall that $K_\text{a} \times K_\text{b} = K_\text{w} \nonumber$ for a conjugate weak acid, HA, and its conjugate weak base, A. compound conjugate acid pKa Ka acetic acid $\ce{CH3COOH}$ 4.757 $1.75 \times 10^{-5}$ adipic acid 4.42 5.42 $3.8 \times 10^{-5}$ $3.8 \times 10^{-6}$ alanine 2.348 ($\ce{COOH}$) 9.867 ($\ce{NH3}$) $4.49 \times 10^{-3}$ $1.36 \times 10^{-10}$ aminobenzene 4.601 $2.51 \times 10^{-5}$ 4-aminobenzene sulfonic acid 3.232 $5.86 \times 10^{-4}$ 2-aminobenzoic acid 2.08 ($\ce{COOH}$) 4.96 ($\ce{NH3}$) $8.3 \times 10^{-3}$ $1.1 \times 10^{-5}$ 2-aminophenol ($T = 20 \text{°C}$) 4.78 ($\ce{NH3}$) 9.97 (OH) $1.7 \times 10^{-5}$ $1.05 \times 10^{-10}$ ammonia $\ce{NH4+}$ 9.244 $5.70 \times 10^{-10}$ arginine 1.823 (COOH) 8.991 ($\ce{NH3}$) [12.48] ($\ce{NH2}$) $1.50 \times 10^{-2}$ $1.02 \times 10^{-9}$ [$3.3 \times 10^{-13}$] arsenic acid $\ce{H3AsO4}$ 2.24 6.96 11.50 $5.8 \times 10^{-3}$ $1.1 \times 10^{-7}$ $3.2 \times 10^{-12}$ asparagine ($\mu = 0.1 \text{ M}$) 2.14 (COOH) 8.72 ($\ce{NH3}$) $7.2 \times 10^{-3}$ $1.9 \times 10^{-9}$ aspartic acid 1.990 ($\alpha$-COOH) 3.900 ($\beta$-COOH) 10.002 ($\ce{NH3}$) $1.02 \times 10^{-2}$ $1.26 \times 10^{-4}$ $9.95 \times 10^{-11}$ benzoic acid 4.202 $6.28 \times 10^{-5}$ benzylamine 9.35 $4.5 \times 10^{-10}$ boric acid ($pK_\text{a2}, pK_\text{a3: } T = 20 \text{°C}$) $\ce{H3BO3}$ 9.236 [12.74] [13.80] $5.81 \times 10^{-10}$ [$1.82 \times 10^{-13}$] [$1.58 \times 10^{-14}$] carbonic acid $\ce{H2CO3}$ 6.352 10.329 $4.45 \times 10^{-7}$ $4.69 \times 10^{-11}$ catechol 9.40 12.8 $4.0 \times 10^{-10}$ $1.6 \times 10^{-13}$ chloracetic acid $\ce{ClCH2COOH}$ 2.865 $1.36 \times 10^{-3}$ chromic acid ($pK_\text{a1: } T = 20 \text{°C}$) $\ce{H2CrO4}$ –0.2 6.51 1.6 $3.1 \times 10^{-7}$ citric acid 3.128 (COOH) 4.761 (COOH) 6.396 (COOH) $7.45 \times 10^{-4}$ $1.73 \times 10^{-5}$ $4.02 \times 10^{-7}$ cupferron ($\mu = 0.1 \text{ M}$) 4.16 $6.9 \times 10^{-5}$ cysteine [1.71] (COOH) 8.36 (SH) 10.77 ($\ce{NH3}$) [$1.9 \times 10^{-2}$] $4.4 \times 10^{-9}$ $1.7 \times 10^{-11}$ dichloracetic acid $\ce{Cl2CHCOOH}$ 1.30 $5.0 \times 10^{-2}$ diethylamine $\ce{(CH3CH2)2NH2+}$ 10.933 $1.17 \times 10^{-11}$ dimethylamine $\ce{(CH3)2NH2+}$ 10.774 $1.68 \times 10^{-11}$ dimethylglyoxime 10.66 12.0 $2.2 \times 10^{-11}$ $1. 
\times 10^{-12}$ ethylamine $\ce{CH3CH2NH3+}$ 10.636 $2.31 \times 10^{-11}$ ethylenediamine $\ce{+H3NCH2CH2NH3+}$ 6.848 9.928 $1.42 \times 10^{-7}$ $1.18 \times 10^{-10}$ ethylenediaminetetracetic acid (EDTA) ($\mu = 0.1 \text{ M}$) 0.0 (COOH) 1.5 (COOH) 2.0 (COOH) 2.66 (COOH) 6.16 (NH) 10.24 (NH) 1.0 $3.2 \times 10^{-2}$ $1.0 \times 10^{-2}$ $2.2 \times 10^{-3}$ $6.9 \times 10^{-7}$ $5.8 \times 10^{-11}$ formic acid $\ce{HCOOH}$ 3.745 $1.80 \times 10^{-4}$ fumaric acid 3.053 4.494 $8.85 \times 10^{-4}$ $3.21 \times 10^{-5}$ glutamic acid 2.33 ($\alpha$-COOH) 4.42 ($\lambda$-COOH) 9.95 ($\ce{NH3}$) $5.9 \times 10^{-3}$ $3.8\times 10^{-5}$ $1.12 \times 10^{-10}$ glutamine 2.17 (COOH) 9.01 ($\ce{NH3}$) $6.8 \times 10^{-3}$ $9.8 \times 10^{-10}$ glycine 2.350 (COOH) 9.778 ($\ce{NH3}$) $4.47 \times 10^{-3}$ $1.67 \times 10^{-10}$ glycolic acid $\ce{HOOCH2COOH}$ 3.881 (COOH) $1.48 \times 10^{-4}$ histidine ($\mu = 0.1 \text{ M}$) 1.7 (COOH) 6.02 (NH) 9.08 ($\ce{NH3}$) $2. \times 10^{-2}$ $9.5 \times 10^{-7}$ $8.3 \times 10^{-10}$ hydrogen cyanide $\ce{HCN}$ 9.21 $6.2 \times 10^{-10}$ hydrogen fluroride $\ce{HF}$ 3.17 $6.8 \times 10^{-4}$ hydrogen peroxide $\ce{H2O2}$ 11.65 $2.2 \times 10^{-12}$ hydrogen sulfide $\ce{H2S}$ 7.02 13.9 $9.5 \times 10^{-8}$ $1.3 \times 10^{-14}$ hydrogen thiocyanate $\ce{HSCN}$ 0.9 $1.3 \times 10^{-1}$ 8-hydroxyquinoline 4.9 (NH) 9.81 (OH) $1.2 \times 10^{-5}$ $1.6 \times 10^{-10}$ hydroxylamine $\ce{HONH3+}$ 5.96 $1.1 \times 10^{-6}$ hypobromous acid $\ce{HOBr}$ 8.63 $2.3 \times 10^{-9}$ hypochlorous acid $\ce{HOCl}$ 7.53 $3.0\times 10^{-8}$ hypoiodous acid $\ce{HOI}$ 10.64 $2.3 \times 10^{-11}$ iodic acid $\ce{HIO3}$ 0.77 $1.7 \times 10^{-1}$ isoleucine 2.319 (COOH) 9.754 ($\ce{NH3}$) $4.8 \times 10^{-3}$ $1.76 \times 10^{-10}$ leucine 2.329 (COOH) 9.747 ($\ce{NH3}$) $4.69 \times 10^{-3}$ $1.79 \times 10^{-10}$ lysine ($\mu = 0.1 \text{ M}$) 2.04 (COOH) 9.08 ($\alpha \text{-} \ce{NH3}$) 10.69 ($\epsilon \text{-} \ce{NH3}$) $9.1 \times 10^{-3}$ $8.3 \times 10^{-10}$ $2.0 \times 10^{-11}$ maleic acid 1.910 6.332 $1.23 \times 10^{-2}$ $4.66 \times 10^{-7}$ malic acid 3.459 (COOH) 5.097 (COOH) $3.48 \times 10^{-4}$ $8.00 \times 10^{-6}$ malonic acid $\ce{HOOCCH2COOH}$ 2.847 5.696 $1.42 \times 10^{-3}$ $2.01 \times 10^{-6}$ methionine ($\mu = 0.1 \text{ M}$) 2.20 (COOH) 9.05 ($\ce{NH3}$) $6.3 \times 10^{-3}$ $8.9 \times 10^{-10}$ methylamine $\ce{CH3NH3+}$ 10.64 $2.3 \times 10^{-11}$ 2-methylanaline 4.447 $3.57 \times 10^{-5}$ 4-methylanaline 5.084 $8.24 \times 10^{-6}$ 2-methylphenol 10.28 $5.2 \times 10^{-11}$ 4-methylphenol 10.26 $5.5 \times 10^{-11}$ nitrilotriacetic acid ($T = 20 \text{°C}), pK_\text{a1: } \mu = 0.1 \text{ M}$) 1.1 (COOH) 1.650 (COOH) 2.940 (COOH) 10.334 ($\ce{NH3}$) $8. 
\times 10^{-2}$ $2.24 \times 10^{-2}$ $1.15 \times 10^{-3}$ $4.63 \times 10^{-11}$ 2-nitrobenzoic acid 2.179 $6.62 \times 10^{-3}$ 3-nitrobenzoic acid 3.449 $3.56 \times 10^{-4}$ 4-nitrobenzoic acid 3.442 $3.61 \times 10^{-4}$ 2-nitrophenol 7.21 $6.2 \times 10^{-8}$ 3-nitrophenol 8.39 $4.1 \times 10^{-9}$ 4-nitrophenol 7.15 $7.1 \times 10^{-8}$ nitrous acid $\ce{HNO2}$ 3.15 $7.1 \times 10^{-4}$ oxalic acid $\ce{H2C2O4}$ 1.252 4.266 $5.60 \times 10^{-2}$ $5.42 \times 10^{-5}$ 1,10-phenanthroline 4.86 $1.38 \times 10^{-5}$ phenol 9.98 $1.05 \times 10^{-10}$ phenylalanine 2.20 (COOH) 9.31 ($\ce{NH3}$) $6.3 \times 10^{-3}$ $4.9 \times 10^{-10}$ phosphoric acid $\ce{H3PO4}$ 2.148 7.199 12.35 $7.11 \times 10^{-3}$ $6.32 \times 10^{-8}$ $4.5 \times 10^{-13}$ phthalic acid 2.950 5.408 $1.12 \times 10^{-3}$ $3.91 \times 10^{-6}$ piperdine 11.123 $7.53 \times 10^{-12}$ proline 1.952 (COOH) 10.650 (NH) $1.12 \times 10^{-2}$ $2.29 \times 10^{-11}$ propanoic acid $\ce{CH3CH2COOH}$ 4.874 $1.34 \times 10^{-5}$ propylamine $\ce{CH3CH2CH2NH3+}$ 10.566 $2.72 \times 10^{-11}$ pyridine 5.229 $5.90 \times 10^{-6}$ resorcinol 9.30 11.06 $5.0 \times 10^{-10}$ $8.7 \times 10^{-12}$ salicylic acid 2.97 (COOH) 13.74 (OH) $1.1 \times 10^{-3}$ $1.8 \times 10^{-14}$ serine 2.187 (COOH) 9.209 ($\ce{NH3}$) $6.50 \times 10^{-3}$ $6.18 \times 10^{-10}$ succinic acid 4.207 5.636 $6.21 \times 10^{-5}$ $2.31 \times 10^{-6}$ sulfuric acid $\ce{H2SO4}$ strong 1.99 $1.0 \times 10^{-2}$ sulfurous acid $\ce{H2SO3}$ 1.91 7.18 $1.2 \times 10^{-2}$ $6.6 \times 10^{-8}$ D-tartaric acid 3.036 (COOH) 4.366 (COOH) $9.20 \times 10^{-4}$ $4.31 \times 10^{-5}$ threonine 2.088 (COOH) 9.100 ($\ce{NH3}$) $8.17 \times 10^{-3}$ $7.94 \times 10^{-10}$ thiosulfuric acid $\ce{H2S2O3}$ 0.6 1.6 $3. \times 10^{-1}$ $3. \times 10^{-2}$ trichloracetic acid ($\mu = 0.1 \text{ M}$) $\ce{Cl3CCOOH}$ 0.66 $2.2 \times 10^{-1}$ triethanolamine $\ce{(HOCH2CH2)3NH+}$ 7.762 $1.73 \times 10^{-8}$ triethylamine $\ce{(CH3CH2)3NH+}$ 10.715 $1.93 \times 10^{-11}$ trimethylamine $\ce{(CH3)3NH+}$ 9.800 $1.58 \times 10^{-10}$ tris(hydroxymethyl)amino methane (TRIS or THAM) $\ce{(HOCH2)3CNH3+}$ 8.075 $8.41 \times 10^{-9}$ tryptophan ($\mu = 0.1 \text{ M}$) 2.35 (COOH) 9.33 ($\ce{NH3}$) $4.5 \times 10^{-3}$ $4.7 \times 10^{-10}$ tyrosine ($pK_\text{a1: } \mu = 0.1 \text{ M}$) 2.17 (COOH) 9.19 ($\ce{NH3}$) 10.47 (OH) $6.8 \times 10^{-3}$ $6.5 \times 10^{-10}$ $3.4 \times 10^{-11}$ valine 2.286 (COOH) 9.718 ($\ce{NH3}$) $5.18 \times 10^{-3}$ $1.91 \times 10^{-10}$
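For example, using the Ka for acetic acid from this table and Kw = $1.00 \times 10^{-14}$ at 25 °C, the Kb for the acetate ion is

$K_\text{b} = \frac {K_\text{w}} {K_\text{a}} = \frac {1.00 \times 10^{-14}} {1.75 \times 10^{-5}} = 5.7 \times 10^{-10} \nonumber$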
The following table provides $K_i$ and $\beta_i$ values for selected metal–ligand complexes, arranged by the ligand. All values are from Martell, A. E.; Smith, R. M. Critical Stability Constants, Vols. 1–4. Plenum Press: New York, 1976. Unless otherwise stated, values are for 25 oC and zero ionic strength. Those values in brackets are considered less reliable. Acetate $\ce{CH3COO-}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Mg2+ 1.27 Ca2+ 1.18 Ba2+ 1.07 Mn2+ 1.40 Fe2+ 1.40 Co2+ 1.46 Ni2+ 1.43 Cu2+ 2.22 1.41 Ag+ 0.73 –0.09 Zn2+ 1.57 Cd2+ 1.93 1.22 –0.89 Pb2+ 2.68 1.40 Ammonia $\ce{NH3}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Ag+ 3.31 3.91 Co2+ (T = 20 °C) 1.99 1.51 0.93 0.64 0.06 –0.73 Ni2+ 2.72 2.17 1.66 1.12 0.67 –0.03 Cu2+ 4.04 3.43 2.80 1.48 Zn2+ 2.21 2.29 2.36 2.03 Cd2+ 2.55 2.01 1.34 0.84 Chloride $\ce{Cl-}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Cu2+ 0.40 Fe3+ 1.48 0.65 Ag+ ($\mu = 5.0 \text{ M}$) 3.70 1.92 0.78 –0.3 Zn2+ 0.43 0.18 –0.11 –0.3 Cd2+ 1.98 1.62 –0.2 –0.7 Pb2+ 1.59 0.21 –0.1 –0.3 Cyanide $\ce{CN-}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Fe2+ 35.4 ($\beta_6$) Fe3+ 43.6 ($\beta_6$) Ag+ 20.48 ($\beta_2$) 0.92 Zn2+ 11.07 ($\beta_2$) 4.98 3.57 Cd2+ 6.01 5.11 4.53 2.27 Hg2+ 17.00 15.75 3.56 2.66 Ni2+ 30.22 ($\beta_4$) Ethylenediamine $\ce{H2NCH2CH2NH2}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Ni2+ 7.38 6.18 4.11 Cu2+ 10.48 9.07 Ag+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 4.700 3.00 Zn2+ 5.66 4.98 3.25 Cd2+ 5.41 4.50 2.78 EDTA log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Mg2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 8.79 Ca2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 10.69 Ba2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 7.86 Bi3+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 27.8 Co2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 16.31 Ni2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 18.62 Cu2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 18.80 Cr3+ (T = 20 °C, $\mu = 0.1 \text{ M}$) [23.4] Fe3+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 25.1 Ag+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 7.32 Zn2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 16.50 Cd2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 16.46 Hg2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 21.7 Pb2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 18.04 Al3+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 16.3 Fluoride $\ce{F-}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Al3+ 6.11 5.01 3.88 3.0 1.4 0.4 Hydroxide $\ce{OH-}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Al3+ 9.01 [9.69] [8.3] 6.0 Co2+ 4.3 4.1 1.3 0.5 Fe2+ 4.5 ]2.9] 2,6 –0.4 Fe3+ 11.81 10.5 12.1 Ni2+ 4.1 3.9 3. 
Pb2+ 6.3 4.6 3.0 Zn2+ 5.0 [6.1] 2.5 [1.2] Iodide $\ce{I-}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Ag+ 6.58 [5.12] [1.4] Cd2+ (T = 18 °C) 2.28 1.64 1.08 1.0 Pb2+ 1.92 1.28 0.7 0.6 Nitriloacetate log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Mg2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 5.41 Ca2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 6.41 Ba2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 4.82 Mn2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 7.44 Fe2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 8.33 Co2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 10.38 Ni2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 11.53 Cu2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 12.96 Fe3+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 15.9 Zn2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 10.67 Cd2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 9.83 Pb2+ (T = 20 °C, $\mu = 0.1 \text{ M}$) 11.39 Oxalate $\ce{C2O4^{2-}}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Ca2+ ($\mu = 1 \text{ M}$) 1.66 1.03 Fe2+ ($\mu = 1 \text{ M}$) 3.05 2.10 Co2+ 4.72 2.28 Ni2+ 5.16 Cu2+ 6.23 4.04 Fe3+ ($\mu = 0.5 \text{ M}$) 7.53 6.11 4.83 Zn2+ 4.87 2.78 1,10-phenanthroline log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Fe2+ 20.7 ($\beta_3$) Mn2+ ($\mu = 0.1 \text{ M}$) 4.0 3.3 3.0 Cu2+ ($\mu = 0.1 \text{ M}$) 7.08 6.64 6.08 Ni2+ 8.6 8.1 7.6 Fe3+ 13.8 ($\beta_3$) Ag+ ($\mu = 0.1 \text{ M}$) 5.02 7.04 Zn2+ 6.2 [5.9] [5.2] Thiosulfate $\ce{S2O3^{2-}}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Ag+ (T = 20 °C) 8.82 4.85 0.53 Thiocyanate $\ce{SCN-}$ log $K_1$ log $K_2$ log $K_3$ log $K_4$ log $K_5$ log $K_6$ Mn2+ 1.23 Fe2+ 1.31 Co2+ 1.71 Ni2+ 1.76 Cu2+ 2.33 Fe3+ 3.02 Ag+ 4.8 3.43 1.27 0.2 Zn2+ 1.33 0.58 0.09 –0.4 Cd2+ 1.89 0.89 0.02 –0.5 Hg2+ 17.26 ($\beta_2$) 2.71 1.83
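Because an overall formation constant, $\beta_n$, is the product of the stepwise formation constants $K_1$ through $K_n$, its logarithm is the sum of the individual log K values. For example, for the first two ammonia complexes of Cu2+ in this table

$\log \beta_2 = \log K_1 + \log K_2 = 4.04 + 3.43 = 7.47 \nonumber$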
The following table provides Eo and Eo′ values for selected reduction reactions. Values are from the following sources (primarily the first two):

• Bard, A. J.; Parsons, R.; Jordan, J., eds. Standard Potentials in Aqueous Solution, Dekker: New York, 1985
• Milazzo, G.; Caroli, S.; Sharma, V. K. Tables of Standard Electrode Potentials, Wiley: London, 1978
• Swift, E. H.; Butler, E. A. Quantitative Measurements and Chemical Equilibria, Freeman: New York, 1972
• Bratsch, S. G. “Standard Electrode Potentials and Temperature Coefficients in Water at 298.15 K,” J. Phys. Chem. Ref. Data, 1989, 18, 1–21
• Latimer, W. M. Oxidation Potentials, 2nd Ed., Prentice-Hall: Englewood Cliffs, NJ, 1952

Solids, gases, and liquids are identified; all other species are aqueous. Reduction reactions in acidic solution are written using H+ in place of H3O+. You may rewrite a reaction by replacing H+ with H3O+ and adding to the opposite side of the reaction one molecule of H2O per H+; thus

H3AsO4 + 2H+ + 2e– $\rightleftharpoons$ HAsO2 + 2H2O

becomes

H3AsO4 + 2H3O+ + 2e– $\rightleftharpoons$ HAsO2 + 4H2O

Conditions for formal potentials (Eo′) are listed next to the potential. For most of the reduction half-reactions gathered here, there are minor differences in values provided by the references above. In most cases, these differences are small and will not affect calculations. In a few cases the differences are not insignificant and the user may find discrepancies in calculations. For example, Bard, Parsons, and Jordan report an Eo value of –1.285 V for

$\text{Zn(OH)}_4^{2-} + 2e^- \rightleftharpoons \text{Zn}(s) + 4\text{OH}^-\nonumber$

while Milazzo, Caroli, and Sharma report the value as –1.214 V, Swift reports the value as –1.22 V, Bratsch reports the value as –1.199 V, and Latimer reports the value as –1.216 V.
Aluminum E (V) Eo ́ (V) $\text{Al}^{3+} + 3e^- \rightleftharpoons \text{Al}(s)$ –1.676 $\text{Al(OH)}_4^- + 3e^- \rightleftharpoons \text{Al}(s) + 4\text{OH}^-$ –2.310 $\text{AlF}_6^{3-} + 3e^- \rightleftharpoons \text{Al}(s) + 6\text{F}^-$ –2.07 Antimony E (V) Eo ́ (V) $\text{Sb} + 3\text{H}^+ + 3e^- \rightleftharpoons \text{SbH}_3(g)$ –0.510 $\text{Sb}_2\text{O}_5 + 6\text{H}^+ + 4e^- \rightleftharpoons 2\text{SbO}^+ + 3\text{H}_2\text{O}(l)$ 0.605 $\text{SbO}^+ + 2\text{H}^+ + 3e^- \rightleftharpoons \text{Sb}(s) + \text{H}_2\text{O}(l)$ 0.212 Arsenic E (V) Eo ́ (V) $\text{As}(s) + 3\text{H}^+ + 3e^- \rightleftharpoons \text{AsH}_3(g)$ –0.225 $\text{H}_3\text{AsO}_4 + 2\text{H}^+ + 2e^- \rightleftharpoons \text{HAsO}_2 + 2\text{H}_2\text{O}(l)$ 0.560 $\text{HAsO}_2 + 3\text{H}^+ + 3e^- \rightleftharpoons \text{As}(s) + 2\text{H}_2\text{O}(l)$ 0.240 Barium E (V) Eo ́ (V) $\text{Ba}^{2+} + 2e^- \rightleftharpoons \text{Ba}(s)$ –2.92 $\text{BaO}(s) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{Ba}(s) + \text{H}_2\text{O}(l)$ –2.166 Beryllium E (V) Eo ́ (V) $\text{Be}^{2+} + 2e^- \rightleftharpoons \text{Be}(s)$ –1.99 Bismuth E (V) Eo ́ (V) $\text{Bi}^{3+} + 3e^- \rightleftharpoons \text{Bi}(s)$ 0.317 $\text{BiCl}_4^- + 3e^- \rightleftharpoons \text{Bi}(s) + 4\text{Cl}^-$ 0.199 Boron E (V) Eo ́ (V) $\text{B(OH)}_3 + 3\text{H}^+ + 3e^- \rightleftharpoons \text{B}(s) + 3\text{H}_2\text{O}(l)$ –0.890 $\text{B(OH)}_4^- + 3e^- \rightleftharpoons \text{B}(s) + 4\text{OH}^-$ –1.811 Bromine E (V) Eo ́ (V) $\text{Br}_2(l) + 2e^- \rightleftharpoons 2\text{Br}^-$ 1.087 $\text{HOBr} + \text{H}^+ + 2e^- \rightleftharpoons \text{Br}^- + \text{H}_2\text{O}(l)$ 1.341 $\text{HOBr} + \text{H}^+ + e^- \rightleftharpoons \frac{1}{2} \text{Br}_2 + \text{H}_2\text{O}(l)$ 1.604 $\text{BrO}^- + \text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{Br}^- + 2\text{OH}^-$ 0.76 in 1 M NaOH $\text{BrO}_3^- +6\text{H}^+ + 5e^- \rightleftharpoons \frac{1}{2} \text{Br}_2(l) + 3\text{H}_2\text{O}(l)$ 1.5 $\text{BrO}_3^- + 6\text{H}^+ +6e^- \rightleftharpoons \text{Br}^- + 3\text{H}_2\text{O}(l)$ 1.478 Cadmium E (V) Eo ́ (V) $\text{Cd}^{2+} + 2e^- \rightleftharpoons \text{Cd}(s)$ –0.4030 $\text{Cd(CN)}_4^{2-} + 2e^- \rightleftharpoons \text{Cd}(s) + 4\text{CN}^-$ –0.943 $\text{Cd(NH}_3)_4^{2+} + 2e^- \rightleftharpoons \text{Cd}(s) + 4\text{NH}_3$ –0.622 Calcium E (V) Eo ́ (V) $\text{Ca}^{2+} + 2e^- \rightleftharpoons \text{Ca}(s)$ –2.84 Carbon E (V) Eo ́ (V) $\text{CO}_2(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{CO}(g) + \text{H}_2\text{O}(l)$ –0.106 $\text{CO}_2(g) + 2\text{H}^+ +2e^- \rightleftharpoons \text{HCO}_2\text{H}$ –0.20 $2\text{CO}_2(g) + 2\text{H}^+ +2e^- \rightleftharpoons \text{H}_2\text{C}_2\text{O}_4$ –0.481 $\text{HCHO} + 2\text{H}^+ + 2e^- \rightleftharpoons \text{CH}_3\text{OH}$ 0.2323 Cerium E (V) Eo ́ (V) $\text{Ce}^{3+} + 3e^- \rightleftharpoons \text{Ce}(s)$ –2.336 $\text{Ce}^{4+} + e^- \rightleftharpoons \text{Ce}^{3+}$ 1.72 1.70 in 1 M HClO4 1.44 in 1 M H2SO4 1.61 in 1 M HNO3 1.28 in 1 M HCl Chlorine E (V) Eo ́ (V) $\text{Cl}_2(g) + 2e^- \rightleftharpoons 2\text{Cl}^-$ 1.396 $\text{ClO}^- + \text{H}_2\text{O}(l) + e^- \rightleftharpoons \frac{1}{2} \text{Cl}_2(g) + 2\text{OH}^-$ 0.421 in 1 M NaOH $\text{ClO}^- + \text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{Cl}^- + 2\text{OH}^-$ 0.890 in 1 M NaOH $\text{HClO}_2 + 2\text{H}^+ + 2e^- \rightleftharpoons \text{HOCl} + \text{H}_2\text{O}(l)$ 1.64 Chlorine E (V) Eo ́ (V) $\text{ClO}_3^- + 2\text{H}^+ + e^- \rightleftharpoons 
\text{ClO}_2(g) + \text{H}_2\text{O}(l)$ 1.175 $\text{ClO}_3^- + 3\text{H}^+ + 2e^- \rightleftharpoons \text{HClO}_2 + \text{H}_2\text{O}(l)$ 1.181 $\text{ClO}_4^- + 2\text{H}^+ +2e^- \rightleftharpoons \text{ClO}_3^- + \text{H}_2\text{O}(l)$ 1.201 Chromium E (V) Eo ́ (V) $\text{Cr}^{3+} + 3e^- \rightleftharpoons \text{Cr}(s)$ –0.424 $\text{Cr}^{2+} + 2e^- \rightleftharpoons \text{Cr}(s)$ –0.90 $\text{Cr}_2\text{O}_7^{2-} + 14\text{H}^+ + 6e^- \rightleftharpoons 2\text{Cr}^{3+} + 7\text{H}_2\text{O}(l)$ 1.36 $\text{CrO}_4^{2-} + 4\text{H}_2\text{O}(l) + 3e^- \rightleftharpoons \text{Cr(OH)}_4^- + 4\text{OH}^-$ –0.13 in 1 M NaOH Cobalt E (V) Eo ́ (V) $\text{Co}^{2+} + 2e^- \rightleftharpoons \text{Co}(s)$ –0.277 $\text{Co}^{3+} + 3e^- \rightleftharpoons \text{Co}(s)$ 1.92 $\text{Co(NH}_3)_6^{3+} + e^- \rightleftharpoons \text{Co(NH}_3)_6^{2+}$ 0.1 $\text{Co(OH)}_3(s) + e^- \rightleftharpoons \text{Co(OH)}_2(s) + \text{OH}^-$ 0.17 $\text{Co(OH)}_2(s) + 2e^- \rightleftharpoons \text{Co}(s) + 2\text{OH}^-$ –0.746 Copper E (V) Eo ́ (V) $\text{Cu}^+ + e^- \rightleftharpoons \text{Cu}(s)$ 0.520 $\text{Cu}^{2+} + e^- \rightleftharpoons \text{Cu}^+$ 0.159 $\text{Cu}^{2+} + 2e^- \rightleftharpoons \text{Cu}(s)$ 0.3419 $\text{Cu}^{2+} + \text{I}^- + e^- \rightleftharpoons \text{CuI}(s)$ 0.86 $\text{Cu}^{2+} + \text{Cl}^- + e^- \rightleftharpoons \text{CuCl}(s)$ 0.559 Fluorine E (V) Eo ́ (V) $\text{F}_2(g) + 2\text{H}^+ + 2e^- \rightleftharpoons 2\text{HF}(g)$ 3.053 $\text{F}_2(g) + 2e^- \rightleftharpoons 2\text{F}^-$ 2.87 Gallium E (V) Eo ́ (V) $\text{Ga}^{3+} + 3e^- \rightleftharpoons \text{Ga}(s)$ –0.529 Gold E (V) Eo ́ (V) $\text{Au}^+ + e^- \rightleftharpoons \text{Au}(s)$ 1.83 $\text{Au}^{3+} + 2e^- \rightleftharpoons \text{Au}^+$ 1.36 $\text{Au}^{3+} + 3e^- \rightleftharpoons \text{Au}(s)$ 1.52 $\text{AuCl}_4^- + 3e^- \rightleftharpoons \text{Au}(s) + 4\text{Cl}^-$ 1.002 Hydrogen E (V) Eo ́ (V) $2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2 (g)$ 0.00000 $\text{H}_2\text{O}(l) + e^- \rightleftharpoons \frac{1}{2} \text{H}_2(g) + \text{OH}^-$ –0.828 Iodine E (V) Eo ́ (V) $\text{I}_2(s) + 2e^- \rightleftharpoons 2\text{I}^-$ 0.5355 Iodine E (V) Eo ́ (V) $\text{I}_3^- + 2e^- \rightleftharpoons 3\text{I}^-$ 0.536 $\text{HIO} + \text{H}^+ + 2e^- \rightleftharpoons \text{I}^- + \text{H}_2\text{O}(l)$ 0.985 $\text{IO}_3^- + 6\text{H}^+ + 5e^- \rightleftharpoons \frac{1}{2} \text{I}_2(s) + 3\text{H}_2\text{O}(l)$ 1.195 $\text{IO}_3^- + 3\text{H}_2\text{O}(l) + 6e^- \rightleftharpoons \text{I}^- +6\text{OH}^-$ 0.257 Iron E (V) Eo ́ (V) $\text{Fe}^{2+} + 2e^- \rightleftharpoons \text{Fe}(s)$ –0.44 $\text{Fe}^{3+} + 3e^- \rightleftharpoons \text{Fe}(s)$ –0.037 $\text{Fe}^{3+} + e^- \rightleftharpoons \text{Fe}^{2+}$ 0.771 0.70 in 1 M HCl 0.767 in 1 M HClO4 0.746 in 1 M HNO3 0.68 in 1 M H2SO4 0.44 in 0.3 M H3PO4 $\text{Fe(CN)}_6^{3-} + e^- \rightleftharpoons \text{Fe(CN)}_6^{4-}$ 0.356 $\text{Fe(phen)}_3^{3+} + e^- \rightleftharpoons \text{Fe(phen)}_3^{2+}$ 1.147 Lanthanum E (V) Eo ́ (V) $\text{La}^{3+} + 3e^- \rightleftharpoons \text{La}(s)$ –2.38 Lead E (V) Eo ́ (V) $\text{Pb}^{2+} + 2e^- \rightleftharpoons \text{Pb}(s)$ –0.126 $\text{PbO}_2(s) + 4\text{H}^+ + 2e^- \rightleftharpoons \text{Pb}^{2+} + 2\text{H}_2\text{O}(l)$ 1.46 $\text{PbO}_2(s) + \text{SO}_4^{2-} + 4\text{H}^+ + 2e^- \rightleftharpoons \text{PbSO}_4(s) + 2\text{H}_2\text{O}(l)$ 1.690 $\text{PbSO}_4(s) + 2e^- \rightleftharpoons \text{Pb}(s) + \text{SO}_4^{2-}$ –0.356 Lithium E (V) Eo ́ (V) $\text{Li}^+ + e^- \rightleftharpoons 
\text{Li}(s)$ –3.040 Magnesium E (V) Eo ́ (V) $\text{Mg}^{2+} + 2e^- \rightleftharpoons \text{Mg}(s)$ –2.356 $\text{Mg(OH)}_2(s) + 2e^- \rightleftharpoons \text{Mg}(s) + 2\text{OH}^-$ –2.687 Manganese E (V) Eo ́ (V) $\text{Mn}^{2+} + 2e^- \rightleftharpoons \text{Mn}(s)$ –1.17 $\text{Mn}^{3+} + e^- \rightleftharpoons \text{Mn}^{2+}$ 1.5 $\text{MnO}_2(s) + 4\text{H}^+ + 2e^- \rightleftharpoons \text{Mn}^{2+} + 2\text{H}_2\text{O}(l)$ 1.23 $\text{MnO}_4^- + 4\text{H}^+ +3e^- \rightleftharpoons \text{MnO}_2(s) + 2\text{H}_2\text{O}(l)$ 1.70 $\text{MnO}_4^- + 8\text{H}^+ + 5e^- \rightleftharpoons \text{Mn}^{2+} + 4\text{H}_2\text{O}(l)$ 1.51 $\text{MnO}_4^- + 2\text{H}_2\text{O}(l) + 3e^- \rightleftharpoons \text{MnO}_2(s) + 4\text{OH}^-$ 0.60 Mercury E (V) Eo ́ (V) $\text{Hg}^{2+} + 2e^- \rightleftharpoons \text{Hg}(l)$ 0.8535 $2\text{Hg}^{2+} +2e^- \rightleftharpoons \text{Hg}_2^{2+}$ 0.911 Mercury E (V) Eo ́ (V) $\text{Hg}_2^{2+} + 2e^- \rightleftharpoons 2\text{Hg}(l)$ 0.7960 $\text{Hg}_2\text{Cl}_2(s) + 2e^- \rightleftharpoons 2\text{Hg}(l) + 2\text{Cl}^-$ 0.2682 $\text{HgO}(s) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{Hg}(l) + \text{H}_2\text{O}(l)$ 0.926 $\text{Hg}_2\text{Br}_2(s) + 2e^- \rightleftharpoons 2\text{Hg}(l) + 2\text{Br}^-$ 1.392 $\text{Hg}_2\text{I}_2(s) + 2e^- \rightleftharpoons 2\text{Hg}(l) + 2\text{I}^-$ –0.0405 Molybdenum E (V) Eo ́ (V) $\text{Mo}^{3+} + 3e^- \rightleftharpoons \text{Mo}(s)$ –0.2 $\text{MoO}_2(s) + 4\text{H}^+ + 4e^- \rightleftharpoons \text{Mo}(s) + 2\text{H}_2\text{O}(l)$ –0.152 $\text{MoO}_4^{2-} + 4\text{H}_2\text{O}(l) + 6e^- \rightleftharpoons \text{Mo}(s) + 8\text{OH}^-$ –0.913 Nickel E (V) Eo ́ (V) $\text{Ni}^{2+} + 2e^- \rightleftharpoons \text{Ni}(s)$ –0.257 $\text{Ni(OH)}_2(s) + 2e^- \rightleftharpoons \text{Ni}(s) + 2\text{OH}^-$ –0.72 $\text{Ni(NH}_3)_6^{2+} + 2e^- \rightleftharpoons \text{Ni}(s) + 6\text{NH}_3$ –0.49 Nitrogen E (V) Eo ́ (V) $\text{N}_2(g) + 5\text{H}^+ + 4e^- \rightleftharpoons \text{N}_2\text{H}_5^+$ –0.23 $\text{N}_2\text{O}(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{N}_2(g) + \text{H}_2\text{O}(l)$ 1.77 $2\text{NO}(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{N}_2\text{O}(g) + \text{H}_2\text{O}(l)$ 1.59 $\text{HNO}_2 + \text{H}^+ + e^- \rightleftharpoons \text{NO}(g) + \text{H}_2\text{O}(l)$ 0.996 $2\text{HNO}_2 + 4\text{H}^+ + 4e^- \rightleftharpoons \text{N}_2\text{O}(g) + 3\text{H}_2\text{O}(l)$ 1.297 $\text{NO}_3^- + 3\text{H}^+ + 2e^- \rightleftharpoons \text{HNO}_2 + \text{H}_2\text{O}(l)$ 0.94 Oxygen E (V) Eo ́ (V) $\text{O}_2(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{O}_2$ 0.695 $\text{O}_2(g) + 4\text{H}^+ + 4e^- \rightleftharpoons 2\text{H}_2\text{O}(l)$ 1.229 $\text{H}_2\text{O}_2 + 2\text{H}^+ + 2e^- \rightleftharpoons 2\text{H}_2\text{O}(l)$ 1.763 $\text{O}_2(g) + 2\text{H}_2\text{O}(l) + 4e^- \rightleftharpoons 4\text{OH}^-$ 0.401 $\text{O}_3(g) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{O}_2(g) + \text{H}_2\text{O}(l)$ 2.07 Phosphorous E (V) Eo ́ (V) $\text{P}(s, white) + 3\text{H}^+ + 3e^- \rightleftharpoons \text{PH}_3(g)$ –0.063 $\text{H}_3\text{PO}_3 + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_3\text{PO}_2 + \text{H}_2\text{O}(l)$ –0.499 $\text{H}_3\text{PO}_4 + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_3\text{PO}_3 + \text{H}_2\text{O}(l)$ –0.276 Platinum E (V) Eo ́ (V) $\text{Pt}^{2+} + 2e^- \rightleftharpoons \text{Pt}(s)$ 1.188 $\text{PtCl}_4^{2-} + 2e^- \rightleftharpoons \text{Pt}(s) + 4\text{Cl}^-$ 0.758 Potasium E (V) Eo ́ (V) $\text{K}^+ + e^- 
\rightleftharpoons \text{K}(s)$ –2.924 Ruthenium E (V) Eo ́ (V) $\text{Ru}^{3+} + 3e^- \rightleftharpoons \text{Ru}(s)$ 0.249 $\text{RuO}_2(s) + 4\text{H}^+ + 4e^- \rightleftharpoons \text{Ru}(s) + 2\text{H}_2\text{O}(l)$ 0.68 $\text{Ru(NH}_3)_6^{3+} + e^- \rightleftharpoons \text{Ru(NH}_3)_6^{2+}$ 0.10 $\text{Ru(CN)}_6^{3-} + e^- \rightleftharpoons \text{Ru(CN)}_6^{4-}$ 0.86 Selenium E (V) Eo ́ (V) $\text{Se}(s) + 2e^- \rightleftharpoons \text{Se}^{2-}$ –0.67 in 1 M NaOH $\text{Se}(s) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{Se}(g)$ –0.115 $\text{H}_2\text{SeO}_3 + 4\text{H}^+ + 4e^- \rightleftharpoons \text{Se}(s) + 3\text{H}_2\text{O}(l)$ 0.74 $\text{SeO}_4^{3-} + 4\text{H}^+ + e^- \rightleftharpoons \text{H}_2\text{SeO}_3 + \text{H}_2\text{O}(l)$ 1.151 Silicon E (V) Eo ́ (V) $\text{SiF}_6^{2-} + 4e^- \rightleftharpoons \text{Si}(s) + 6\text{F}^-$ –1.37 $\text{SiO}_2(s) + 4\text{H}^+ + 4e^- \rightleftharpoons \text{Si}(s) + 2\text{H}_2\text{O}(l)$ –0.909 $\text{SiO}_2(s) + 8\text{H}^+ + 8e^- \rightleftharpoons \text{SiH}_4(g) + 2\text{H}_2\text{O}(l)$ –0.516 Silver E (V) Eo ́ (V) $\text{Ag}^+ + e^- \rightleftharpoons \text{Ag}(s)$ 0.7996 $\text{AgBr}(s) + e^- \rightleftharpoons \text{Ag}(s) + \text{Br}^-$ 0.071 $\text{Ag}_2\text{C}_2\text{O}_4(s) + 2e^- \rightleftharpoons 2\text{Ag}(s) + \text{C}_2\text{O}_4^{2-}$ 0.47 $\text{AgCl}(s) + e^- \rightleftharpoons \text{Ag}(s) + \text{Cl}^-$ 0.2223 $\text{AgI}(s) + e^- \rightleftharpoons \text{Ag}(s) + \text{I}^-$ –0.152 $\text{Ag}_2\text{S}(s) + 2e^- \rightleftharpoons 2\text{Ag}(s) + \text{S}^{2-}$ –0.71 $\text{Ag(NH}_3)_2^+ + e^- \rightleftharpoons \text{Ag}(s) + 2\text{NH}_3$ –0.373 Sodium E (V) Eo ́ (V) $\text{Na}^+ + e^- \rightleftharpoons \text{Na}(s)$ –2.713 Strontium E (V) Eo ́ (V) $\text{Sr}^{2+} + 2e^- \rightleftharpoons \text{Sr}(s)$ –2.89 Sulfur E (V) Eo ́ (V) $\text{S}(s) + 2e^- \rightleftharpoons \text{S}^{2-}$ –0.407 $\text{S}(s) + 2\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{S}(g)$ 0.144 $\text{S}_2\text{O}_6^{2-} + 4\text{H}^+ + 2e^- \rightleftharpoons 2\text{H}_2\text{SO}_3$ 0.569 $\text{S}_2\text{O}_8^{2-} + 2e^- \rightleftharpoons 2\text{SO}_4^{2-}$ 1.96 $\text{S}_4\text{O}_6^{2-} + 2e^- \rightleftharpoons 2\text{S}_2\text{O}_3^{2-}$ 0.080 $2\text{SO}_3^{2-} + 2\text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{S}_2\text{O}_4^{2-} + 4\text{OH}^-$ –1.13 $2\text{SO}_3^{2-} + 3\text{H}_2\text{O}(l) + 4e^- \rightleftharpoons \text{S}_2\text{O}_3^{2-} + 6\text{OH}^-$ –0.576 in 1 M NaOH $2\text{SO}_4^{2-} + 4\text{H}^+ + 2e^- \rightleftharpoons \text{S}_2\text{O}_6^{2-} + 2\text{H}_2\text{O}(l)$ –0.25 $\text{SO}_4^{2-} + \text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{SO}_3^{2-} + 2\text{OH}^-$ –0.936 $\text{SO}_4^{2-} + 4\text{H}^+ + 2e^- \rightleftharpoons \text{H}_2\text{SO}_3 + \text{H}_2\text{O}(l)$ 0.172 Thallium E (V) Eo ́ (V) $\text{Tl}^{3+} + 2e^- \rightleftharpoons \text{Tl}^+$ 1.25 in 1 M HClO4 0.77 in 1 M HCl $\text{Tl}^{3+} + 3e^- \rightleftharpoons \text{Tl}(s)$ 0.742 Tin E (V) Eo ́ (V) $\text{Sn}^{2+} + 2e^- \rightleftharpoons \text{Sn}(s)$ –0.19 in 1 M HCl $\text{Sn}^{4+} + 2e^- \rightleftharpoons \text{Sn}^{2+}$ 0.154 0.139 in 1 M HCl Titanium E (V) Eo ́ (V) $\text{Ti}^{2+} + 2e^- \rightleftharpoons \text{Ti}(s)$ –0.163 $\text{Ti}^{3+} + e^- \rightleftharpoons \text{Ti}^{2+}$ –0.37 Tungsten E (V) Eo ́ (V) $\text{WO}_2(s) + 4\text{H}^+ + 4e^- \rightleftharpoons \text{W}(s) + 2\text{H}_2\text{O}(l)$ –0.119 $\text{WO}_3(s) + 6\text{H}^+ + 6e^- \rightleftharpoons \text{W}(s) + 
3\text{H}_2\text{O}(l)$ –0.090 Uranium E (V) Eo ́ (V) $\text{U}^{3+} + 3e^- \rightleftharpoons \text{U}(s)$ –1.66 $\text{U}^{4+} + e^- \rightleftharpoons \text{U}^{3+}$ –0.52 $\text{UO}_2^+ + 4\text{H}^+ + e^- \rightleftharpoons \text{U}^{4+} + 2\text{H}_2\text{O}(l)$ 0.27 $\text{UO}_2^{2+} + e^- \rightleftharpoons \text{UO}_2^+$ 0.16 $\text{UO}_2^{2+} + 4\text{H}^+ + 2e^- \rightleftharpoons \text{U}^{4+} + 2\text{H}_2\text{O}(l)$ 0.327 Vanadium E (V) Eo ́ (V) $\text{V}^{2+} + 2e^- \rightleftharpoons \text{V}(s)$ –1.13 $\text{V}^{3+} + e^- \rightleftharpoons \text{V}^{2+}$ –0.255 $\text{VO}^{2+} + 2\text{H}^+ + e^- \rightleftharpoons \text{V}^{3+} + \text{H}_2\text{O}(l)$ 0.337 $\text{VO}_2^{+} + 2\text{H}^+ + e^- \rightleftharpoons \text{VO}^{2+} + \text{H}_2\text{O}(l)$ 1.000 Zinc E (V) Eo ́ (V) $\text{Zn}^{2+} + 2e^- \rightleftharpoons \text{Zn}(s)$ –0.7618 $\text{Zn(OH)}_4^{2-} + 2e^- \rightleftharpoons \text{Zn}(s) + 4\text{OH}^-$ –1.285 $\text{Zn(NH}_3)_4^{2+} + 2e^- \rightleftharpoons \text{Zn}(s) + 4\text{NH}_3$ –1.04 $\text{Zn(CN)}_4^{2-} + 2e^- \rightleftharpoons \text{Zn}(s) + 4\text{CN}^-$ –1.34
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/16%3A_Appendix/16.13%3A_Standard_Reduction_Potentials.txt
The following table provides a list of random numbers in which the digits 0 through 9 appear with approximately equal frequency. Numbers are arranged in groups of five to make the table easier to view. This arrangement is arbitrary, and you can treat the table as a sequence of random individual digits (1, 2, 1, 3, 7, 4... going down the first column of digits on the left side of the table), as a sequence of three-digit numbers (111, 212, 104, 367, 739... using the first three columns of digits on the left side of the table), or in any other similar manner. Let’s use the table to pick 10 random numbers between 1 and 50. To do so, we choose a random starting point, perhaps by dropping a pencil onto the table. For this exercise, we will assume that the starting point is the fifth row of the third column, or 12032 (the entry in the fifth row and third column of the table below). Because the numbers must be between 1 and 50, we will use the last two digits, ignoring all two-digit numbers less than 01 or greater than 50. Proceeding down the third column, and moving to the top of the fourth column if necessary, gives the following 10 random numbers: 32, 01, 05, 16, 15, 38, 24, 10, 26, 14. These random numbers (1000 total digits) are a small subset of values from the publication Million Random Digits (Rand Corporation, 2001) and used with permission. Information about the publication, and a link to a text file containing the million random digits, is available at http://www.rand.org/pubs/monograph_reports/MR1418/.

11164 36318 75061 37674 26320 75100 10431 20418 19228 91792
21215 91791 76831 58678 87054 31687 93205 43685 19732 08468
10438 44482 66558 37649 08882 90870 12462 41810 01806 02977
36792 26236 33266 66583 60881 97395 20461 36742 02852 50564
73944 04773 12032 51414 82384 38370 00249 80709 72605 67497
49563 12872 14063 93104 78483 72717 68714 18048 25005 04151
64208 48237 41701 73117 33242 42314 83049 21933 92813 04763
51486 72875 38605 29341 80749 80151 33835 52602 79147 08868
99756 26360 64516 17971 48478 09610 04638 17141 09227 10606
71325 55217 13015 72907 00431 45117 33827 92873 02953 85474
65285 97198 12138 53010 95601 15838 16805 61004 43516 17020
17264 57327 38224 29301 31381 38109 34976 65692 98566 29550
95639 99754 31199 92558 68368 04985 51092 37780 40261 14479
61555 76404 86210 11808 12841 45147 97438 60022 12645 62000
78137 98768 04689 87130 79225 08153 84967 64539 79493 74917
62490 99215 84987 28759 19177 14733 24550 28067 68894 38490
24216 63444 21283 07044 92729 37284 13211 37485 10415 36457
16975 95428 33226 55903 31605 43817 22250 03918 46999 98501
59138 39542 71168 57609 91510 77904 74244 50940 31553 62562
29478 59652 50414 31966 87912 87514 12944 49862 96566 48825

16.15: Polarographic Half-Wave Potentials
The following table provides E1/2 values for selected reduction reactions. Values are from Dean, J. A. Analytical Chemistry Handbook, McGraw-Hill: New York, 1995. Element \(E_{1/2}\) (volts vs.
SCE) Matrix \(\ce{Al^{3+}}(aq) + \ce{3 e-} \ce{<=>} \ce{Al}(s)\) –0.5 0.2 M acetate (pH 4.5–4.7) \(\ce{Cd^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Cd}(s)\) –0.6 0.1 M KCl 0.050 M H2SO4 1 M HNO3 \(\ce{Cr^{3+}}(aq) + \ce{3 e-} \ce{<=>} \ce{Cr}(s)\) –0.35 \((+3 \ce{->} +2)\) –1.70 \((+2 \ce{->} 0)\) 1 M NH4Cl plus 1 M NH3 1 M NH4+/NH3 buffer (pH 8–9) \(\ce{Co^{3+}}(aq)+ \ce{3 e-} \ce{<=>} \ce{Co}(s)\) –0.5 \((+3 \ce{->} +2)\) –1.3 \((+2 \ce{->} 0)\) 1 M NH4Cl plus 1 M NH3 \(\ce{Co^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Co}(s)\) –1.03 1 M KSCN \(\ce{Cu^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Cu}(s)\) 0.04 –0.22 0.1 M KSCN 0.1 M NH4ClO4 1 M Na2SO4 0.5 M potassium citrate (pH 7.5) \(\ce{Fe^{3+}}(aq) + \ce{3 e-} \ce{<=>} \ce{Fe}(s)\) –0.17 \((+3 \ce{->} +2)\) –1.52 \((+2 \ce{->} 0)\) 0.5 M sodium tartrate (pH 5.8) \(\ce{Fe^{3+}}(aq) + \ce{e-} \ce{<=>} \ce{Fe^{2+}}(aq)\) –0.27 0.2 M Na2C2O4 (pH < 7.9) \(\ce{Pb^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Pb}(s)\) –0.405 –0.435 1 M HNO3 1 M KCl \(\ce{Mn^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Mn}(s)\) –1.65 1 M NH4Cl plus 1 M NH3 \(\ce{Ni^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Ni}(s)\) –0.70 –1.09 1 M KSCN 1 M NH4Cl plus 1 M NH3 \(\ce{Zn^{2+}}(aq) + \ce{2 e-} \ce{<=>} \ce{Zn}(s)\) –0.995 –1.33 0.1 M KCl 1 M NH4Cl plus 1 M NH3
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/16%3A_Appendix/16.14%3A_Random_Number_Table.txt
In 1949, Lyman Craig introduced an improved method for separating analytes with similar distribution ratios [Craig, L. C. J. Biol. Chem. 1944, 155, 519–534]. The technique, which is known as a countercurrent liquid–liquid extraction, is outlined in Figure 16.16.1 and discussed in detail below. In contrast to a sequential liquid–liquid extraction, in which we repeatedly extract the sample containing the analyte, a countercurrent extraction uses a serial extraction of both the sample and the extracting phases. Although countercurrent separations are no longer common—chromatographic separations are far more efficient in terms of resolution, time, and ease of use—the theory behind a countercurrent extraction remains useful as an introduction to the theory of chromatographic separations. To track the progress of a countercurrent liquid-liquid extraction we need to adopt a labeling convention. As shown in Figure 16.16.1 , in each step of a countercurrent extraction we first complete the extraction and then transfer the upper phase to a new tube that contains a portion of the fresh lower phase. Steps are labeled sequentially beginning with zero. Extractions take place in a series of tubes that also are labeled sequentially, starting with zero. The upper and lower phases in each tube are identified by a letter and number, with the letters U and L representing, respectively, the upper phase and the lower phase, and the number indicating the step in the countercurrent extraction in which the phase was first introduced. For example, U0 is the upper phase introduced at step 0 (during the first extraction), and L2 is the lower phase introduced at step 2 (during the third extraction). Finally, the partitioning of analyte in any extraction tube results in a fraction p remaining in the upper phase, and a fraction q remaining in the lower phase. Values of q are calculated using Equation \ref{16.1}, which is identical to Equation 7.7.6 in Chapter 7. $(q_\text{aq})_1 = \frac {(\text{mol aq})_1} {(\text{mol aq})_0} = \frac {V_\text{aq}} {D V_\text{org} + V_\text{aq}} \label{16.1}$ The fraction p, of course is equal to 1 – q. Typically Vaq and Vorg are equal in a countercurrent extraction, although this is not a requirement. Let’s assume that the analyte we wish to isolate is present in an aqueous phase of 1 M HCl, and that the organic phase is benzene. Because benzene has the smaller density, it is the upper phase, and 1 M HCl is the lower phase. To begin the countercurrent extraction we place the aqueous sample that contains the analyte in tube 0 along with an equal volume of benzene. As shown in Figure $1\text{a}$, before the extraction all the analyte is present in phase L0. When the extraction is complete, as shown in Figure $1\text{b}$, a fraction p of the analyte is present in phase U0, and a fraction q is in phase L0. This completes step 0 of the countercurrent extraction. If we stop here, there is no difference between a simple liquid–liquid extraction and a countercurrent extraction. After completing step 0, we remove phase U0 and add a fresh portion of benzene, U1, to tube 0 (see Figure $1\text{c}$). This, too, is identical to a simple liquid-liquid extraction. Here is where the power of the countercurrent extraction begins—instead of setting aside the phase U0, we place it in tube 1 along with a portion of analyte-free aqueous 1 M HCl as phase L1 (see Figure $1\text{c}$). Tube 0 now contains a fraction q of the analyte, and tube 1 contains a fraction p of the analyte. 
Completing the extraction in tube 0 results in a fraction p of its contents remaining in the upper phase, and a fraction q remaining in the lower phase. Thus, phases U1 and L0 now contain, respectively, fractions $pq$ and $q^2$ of the original amount of analyte. Following the same logic, it is easy to show that the phases U0 and L1 in tube 1 contain, respectively, fractions $p^2$ and $pq$ of analyte. This completes step 1 of the extraction (see Figure $1\text{d}$). As shown in the remainder of Figure 16.16.1, the countercurrent extraction continues with this cycle of phase transfers and extractions. In a countercurrent liquid–liquid extraction, the lower phase in each tube remains in place, and the upper phase moves from tube 0 to successively higher numbered tubes. We recognize this difference in the movement of the two phases by referring to the lower phase as a stationary phase and the upper phase as a mobile phase. With each transfer some of the analyte in tube r moves to tube $r + 1$, while a portion of the analyte in tube $r - 1$ moves to tube r. Analyte introduced at tube 0 moves with the mobile phase, but at a rate that is slower than the mobile phase because, at each step, a portion of the analyte transfers into the stationary phase. An analyte that preferentially extracts into the stationary phase spends proportionally less time in the mobile phase and moves at a slower rate. As the number of steps increases, analytes with different values of q eventually separate into completely different sets of extraction tubes. We can judge the effectiveness of a countercurrent extraction using a histogram that shows the fraction of analyte present in each tube. To determine the total amount of analyte in an extraction tube we add together the fraction of analyte present in the tube’s upper and lower phases following each transfer. For example, at the beginning of step 3 (see Figure $1\text{g}$) the upper and lower phases of tube 1 contain fractions $pq^2$ and $2pq^2$ of the analyte, respectively; thus, the total fraction of analyte in the tube is $3pq^2$. Table 16.16.1 summarizes this for the steps outlined in Figure 16.16.1. A typical histogram, calculated assuming distribution ratios of 5.0 for analyte A and 0.5 for analyte B, is shown in Figure 16.16.2. Although four steps is not enough to separate the analytes in this instance, it is clear that if we extend the countercurrent extraction to additional tubes, we will eventually separate the analytes.

Table 16.16.1. Fraction of Analyte Remaining in Tube r After Extraction Step n for a Countercurrent Extraction

tube r:        0        1          2          3
step n = 0:    1
step n = 1:    $q$      $p$
step n = 2:    $q^2$    $2pq$      $p^2$
step n = 3:    $q^3$    $3pq^2$    $3p^2q$    $p^3$

Figure 16.16.1 and Table 16.16.1 show how an analyte’s distribution changes during the first four steps of a countercurrent extraction. Now we consider how we can generalize these results to calculate the amount of analyte in any tube, at any step during the extraction. You may recognize the pattern of entries in Table 16.16.1 as following the binomial distribution $f(r, n) = \frac {n!} {(n - r)! r!} p^{r} q^{n - r} \label{16.2}$ where f(r, n) is the fraction of analyte present in tube r at step n of the countercurrent extraction, with the upper phase containing a fraction $p \times f(r, n)$ of analyte and the lower phase containing a fraction $q \times f(r, n)$ of the analyte. Example 16.16.1 The countercurrent extraction shown in Figure 16.16.2 is carried out through step 30.
Calculate the fraction of analytes A and B in tubes 5, 10, 15, 20, 25, and 30. Solution To calculate the fraction, q, for each analyte in the lower phase we use Equation \ref{16.1}. Because the volumes of the lower and upper phases are equal, we get $q_\text{A} = \frac {1} {D_\text{A} + 1} = \frac {1} {5 + 1} = 0.167 \quad \quad q_\text{B} = \frac {1} {D_\text{B} + 1} = \frac {1} {0.5 + 1} = 0.667 \nonumber$ Because we know that $p + q = 1$, we also know that pA is 0.833 and that pB is 0.333. For analyte A, the fractions in tubes 5, 10, 15, 20, 25, and 30 after the 30th step are $f(5,30) = \frac {30!} {(30 - 5)! 5!} (0.833)^{5} (0.167)^{30 - 5} = 2.1 \times 10^{-15} \approx 0 \nonumber$ $f(10,30) = \frac {30!} {(30 - 10)! 10!} (0.833)^{10} (0.167)^{30 - 10} = 1.4 \times 10^{-9} \approx 0 \nonumber$ $f(15,30) = \frac {30!} {(30 - 15)! 15!} (0.833)^{15} (0.167)^{30 - 15} = 2.2 \times 10^{-5} \approx 0 \nonumber$ $f(20,30) = \frac {30!} {(30 - 20)! 20!} (0.833)^{20} (0.167)^{30 - 20} = 0.013 \nonumber$ $f(25,30) = \frac {30!} {(30 - 25)! 25!} (0.833)^{25} (0.167)^{30 - 25} = 0.192 \nonumber$ $f(30,30) = \frac {30!} {(30 - 30)! 30!} (0.833)^{30} (0.167)^{30 - 30} = 0.004 \nonumber$ The fraction of analyte B in tubes 5, 10, 15, 20, 25, and 30 is calculated in the same way, yielding respective values of 0.023, 0.153, 0.025, 0, 0, and 0. Figure 16.16.3, which provides the complete histogram for the distribution of analytes A and B, shows that 30 steps is sufficient to separate the two analytes. Constructing a histogram using Equation \ref{16.2} is tedious, particularly when the number of steps is large. Because the fraction of analyte in most tubes is approximately zero, we can simplify the histogram’s construction by solving Equation \ref{16.2} only for those tubes containing an amount of analyte that exceeds a threshold value. For a binomial distribution, we can use the mean and standard deviation to determine which tubes contain a significant fraction of analyte. The properties of a binomial distribution were covered in Chapter 4, with the mean, $\mu$, and the standard deviation, $\sigma$, given as $\mu = np \nonumber$ $\sigma = \sqrt{np(1 - p)} = \sqrt{npq} \nonumber$ Furthermore, if both np and nq are greater than 5, then a binomial distribution closely approximates a normal distribution and we can use the properties of a normal distribution to determine the location of the analyte and its recovery [see Mark, H.; Workman, J. Spectroscopy 1990, 5(3), 55–56]. Example 16.16.2 Two analytes, A and B, with distribution ratios of 9 and 4, respectively, are separated using a countercurrent extraction in which the volumes of the upper and lower phases are equal. After 100 steps determine the 99% confidence interval for the location of each analyte. Solution The fraction, q, of each analyte that remains in the lower phase is calculated using Equation \ref{16.1}. Because the volumes of the lower and upper phases are equal, we find that $q_\text{A} = \frac {1} {D_\text{A} + 1} = \frac {1} {9 + 1} = 0.10 \quad \quad q_\text{B} = \frac {1} {D_\text{B} + 1} = \frac {1} {4 + 1} = 0.20 \nonumber$ Because we know that $p + q = 1$, we also know that pA is 0.90 and pB is 0.80.
After 100 steps, the mean and the standard deviation for the distribution of analytes A and B are $\mu_\text{A} = np_\text{A} = (100)(0.90) = 90 \text{ and } \sigma_\text{A} = \sqrt{np_\text{A}q_\text{A}} = \sqrt{(100)(0.90)(0.10)} = 3 \nonumber$ $\mu_\text{B} = np_\text{B} = (100)(0.80) = 80 \text{ and } \sigma_\text{B} = \sqrt{np_\text{B}q_\text{B}} = \sqrt{(100)(0.80)(0.20)} = 4 \nonumber$ Given that npA, npB, nqA, and nqB are all greater than 5, we can assume that the distribution of analytes follows a normal distribution and that the confidence interval for the tubes containing each analyte is $r = \mu \pm z \sigma \nonumber$ where r is the tube’s number and the value of z is determined by the desired significance level. For a 99% confidence interval the value of z is 2.58 (see Appendix 4); thus, $r_\text{A} = 90 \pm (2.58)(3) = 90 \pm 8 \nonumber$ $r_\text{B} = 80 \pm (2.58)(4) = 80 \pm 10 \nonumber$ Because the two confidence intervals overlap, a complete separation of the two analytes is not possible using a 100-step countercurrent extraction. The complete distribution of the analytes is shown in Figure 16.16.4. Example 16.16.3 For the countercurrent extraction in Example 16.16.2, calculate the recovery and the separation factor for analyte A if the contents of tubes 85–99 are pooled together. Solution From Example 16.16.2 we know that after 100 steps of the countercurrent extraction, analyte A is normally distributed about tube 90 with a standard deviation of 3. To determine the fraction of analyte A in tubes 85–99, we use the single-sided normal distribution in Appendix 3 to determine the fraction of analyte in tubes 0–84, and in tube 100. The fraction of analyte A in tube 100 is determined by calculating the deviation z $z = \frac {r - \mu} {\sigma} = \frac {99 - 90} {3} = 3 \nonumber$ and using the table in Appendix 3 to determine the corresponding fraction. For z = 3 this corresponds to 0.135% of analyte A. To determine the fraction of analyte A in tubes 0–84 we again calculate the deviation $z = \frac {r - \mu} {\sigma} = \frac {85 - 90} {3} = -1.67 \nonumber$ From Appendix 3 we find that 4.75% of analyte A is present in tubes 0–84. Analyte A’s recovery, therefore, is $100\% - 4.75\% - 0.135\% \approx 95\% \nonumber$ To calculate the separation factor we determine the recovery of analyte B in tubes 85–99 using the same general approach as for analyte A, finding that approximately 89.4% of analyte B remains in tubes 0–84 and that essentially no analyte B is in tube 100. The recovery for B, therefore, is $100\% - 89.4\% - 0\% \approx 10.6\% \nonumber$ and the separation factor is $S_\text{B/A} = \frac {R_\text{B}} {R_\text{A}} = \frac {10.6} {95} = 0.112 \nonumber$
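Because the fractions f(r, n) follow a binomial distribution, the calculations in Examples 16.16.1 and 16.16.2 are easy to check in R, the language used later in this text. The sketch below is not part of the original examples; it uses R's built-in `dbinom()` function to generate the fractions and the normal approximation, with variable names chosen here for clarity.

```r
# a minimal sketch (not from the original examples): reproduce the binomial
# fractions f(r, n) for analytes A and B after 30 steps using dbinom()
n = 30                            # number of extraction steps
D_A = 5.0                         # distribution ratio for analyte A
D_B = 0.5                         # distribution ratio for analyte B
q_A = 1/(D_A + 1); p_A = 1 - q_A  # fractions in the lower and upper phases
q_B = 1/(D_B + 1); p_B = 1 - q_B
r = 0:n                           # tube numbers
f_A = dbinom(r, size = n, prob = p_A)  # fraction of A in each tube
f_B = dbinom(r, size = n, prob = p_B)  # fraction of B in each tube
f_A[r %in% c(20, 25, 30)]         # returns approximately 0.013, 0.192, 0.004
# normal approximation from Example 16.16.2 (n = 100, p_A = 0.90)
mu_A = 100 * 0.90
sigma_A = sqrt(100 * 0.90 * 0.10)
mu_A + c(-1, 1) * 2.58 * sigma_A  # 99% interval, approximately tubes 82 to 98
```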
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/16%3A_Appendix/16.16%3A_Countercurrent_Separations.txt
A reaction’s equilibrium position defines the extent to which the reaction can occur. For example, we expect a reaction with a large equilibrium constant, such as the dissociation of HCl in water $\ce{HCl}(aq) + \ce{H2O}(l) \ce{->} \ce{H3O+}(aq) + \ce{Cl-}(aq) \nonumber$ to proceed nearly to completion. A large equilibrium constant, however, does not guarantee that a reaction will reach its equilibrium position. Many reactions with large equilibrium constants, such as the reduction of $\ce{MnO4-}$ by $\ce{H2O}$ $\ce{4 MnO4-}(aq) + \ce{2 H2O}(l) \ce{->} \ce{4 MnO2}(s) + \ce{3 O2}(g) + \ce{4 OH-}(aq) \nonumber$ do not occur to an appreciable extent. The study of the rate at which a chemical reaction approaches its equilibrium position is called kinetics. Chemical Reaction Rates A study of a reaction’s kinetics begins with the measurement of its reaction rate. Consider, for example, the general reaction shown below, involving the aqueous solutes A, B, C, and D, with stoichiometries of a, b, c, and d. $a \ce{A} + b \ce{B} \ce{<=>} c \ce{C} + d \ce{D} \label{16.1}$ The rate, or velocity, at which this reaction approaches its equilibrium position is determined by following the change in concentration of one reactant or one product as a function of time. For example, if we monitor the concentration of reactant A, we express the rate as $R = - \frac {d[\ce{A}]} {dt} \label{16.2}$ where R is the measured rate expressed as a change in concentration of A as a function of time. Because a reactant’s concentration decreases with time, we include a negative sign so that the rate has a positive value. We also can determine the rate by following the change in concentration of a product as a function of time, which we express as $R^{\prime} = + \frac {d[\ce{C}]} {dt} \label{16.3}$ Rates determined by monitoring different species do not necessarily have the same value. The rate R in Equation \ref{16.2} and the rate $R^{\prime}$ in Equation \ref{16.3} have the same value only if the stoichiometric coefficients of A and C in reaction \ref{16.1} are identical. In general, the relationship between the rates R and $R^{\prime}$ is $R = \frac {a} {c} \times R^{\prime} \nonumber$ The Rate Law A rate law describes how a reaction’s rate is affected by the concentration of each species in the reaction mixture. The rate law for Reaction \ref{16.1} takes the general form of $R = k[\ce{A}]^{\alpha} [\ce{B}]^{\beta} [\ce{C}]^{\gamma} [\ce{D}]^{\delta} [\ce{E}]^{\epsilon} ... \label{16.4}$ where k is the rate constant, and $\alpha$, $\beta$, $\gamma$, $\delta$, and $\epsilon$ are the reaction orders of the reaction for each species present in the reaction. There are several important points about the rate law in Equation \ref{16.4}. First, a reaction’s rate may depend on the concentrations of both reactants and products, as well as the concentration of a species that does not appear in the reaction’s overall stoichiometry. Species E in Equation \ref{16.4}, for example, may be a catalyst that does not appear in the reaction’s overall stoichiometry, but which increases the reaction’s rate. Second, the reaction order for a given species is not necessarily the same as its stoichiometry in the chemical reaction. Reaction orders may be positive, negative, or zero, and may take integer or non-integer values. Finally, the reaction’s overall reaction order is the sum of the individual reaction orders for each species. Thus, the overall reaction order for Equation \ref{16.4} is $\alpha + \beta +\gamma + \delta + \epsilon$. 
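To make the definition of a reaction rate concrete, the short R sketch below, which is not part of the original text, estimates R = -d[A]/dt from a set of concentration-time data using finite differences; the numerical values are invented for illustration only.

```r
# a minimal sketch (not from the original text): estimate the rate
# R = -d[A]/dt from hypothetical concentration-time data by finite differences
t = c(0, 10, 20, 30, 40)                   # time, s (invented values)
A = c(0.100, 0.082, 0.067, 0.055, 0.045)   # [A], M (invented values)
R = -diff(A)/diff(t)                       # average rate over each interval
t_mid = head(t, -1) + diff(t)/2            # midpoint of each time interval
data.frame(t_mid, R)                       # rate as a function of time
```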
Kinetic Analysis of Selected Reactions In this section we review the application of kinetics to several simple chemical reactions, focusing on how we can use the integrated form of the rate law to determine reaction orders. In addition, we consider how we can determine the rate law for a more complex system. First-Order Reactions The simplest case we can treat is a first-order reaction in which the reaction’s rate depends on the concentration of only one species. The simplest example of a first-order reaction is an irreversible thermal decomposition of a single reactant, which we represent as $\ce{A} \ce{->} \text{products} \label{16.5}$ with a rate law of $R = - \frac {d[\ce{A}]} {dt} = k[\ce{A}] \label{16.6}$ The simplest way to demonstrate that a reaction is first-order in A is to double the concentration of A and note the effect on the reaction’s rate. If the observed rate doubles, then the reaction is first-order in A. Alternatively, we can derive a relationship between the concentration of A and time by rearranging Equation \ref{16.6} and integrating. $\frac {d[\ce{A}]} {[\ce{A}]} = -kdt \nonumber$ $\int_{[\ce{A}]_0}^{[\ce{A}]_t} \frac {d[\ce{A}]} {[\ce{A}]} = - k \int_{0}^{t}dt \label{16.7}$ Evaluating the integrals in Equation \ref{16.7} and rearranging $\ln \frac {[\ce{A}]_t} {[\ce{A}]_0} = -kt \label{16.8}$ $\ln [\ce{A}]_t = \ln [\ce{A}]_0 - kt \label{16.9}$ shows that for a first-order reaction, a plot of $\ln[\ce{A}]_t$ versus time is linear with a slope of –k and a y-intercept of $\ln[\ce{A}]_0$. Equation \ref{16.8} and Equation \ref{16.9} are known as integrated forms of the rate law. Reaction \ref{16.5} is not the only possible form of a first-order reaction. For example, the reaction $\ce{A} + \ce{B} \ce{->} \text{products} \label{16.10}$ will follow first-order kinetics if the reaction is first-order in A and if the concentration of B does not affect the reaction’s rate, which may happen if the reaction’s mechanism involves at least two steps. Imagine that in the first step, A slowly converts to an intermediate species, C, which reacts rapidly with the remaining reactant, B, in one or more steps, to form the products. $\ce{A} \ce{->} \ce{C} (\text{slow}) \nonumber$ $\ce{C} + \ce{B} \ce{->} \text{products} \nonumber$ Because a reaction’s rate depends only on those species in the slowest step—usually called the rate-determining step—and any preceding steps, species B will not appear in the rate law. Second-Order Reactions The simplest reaction demonstrating second-order behavior is $\ce{2 A} \ce{->} \text{products} \nonumber$ for which the rate law is $R = - \frac {d[\ce{A}]} {dt} = k[\ce{A}]^2 \nonumber$ Proceeding as we did earlier for a first-order reaction, we can easily derive the integrated form of the rate law. $\frac {d[\ce{A}]} {[\ce{A}]^2} = -kdt \nonumber$ $\int_{[\ce{A}]_0}^{[\ce{A}]_t} \frac {d[\ce{A}]} {[\ce{A}]^2} = -k \int_0^t dt \nonumber$ $\frac {1} {[\ce{A}]_t} = kt + \frac {1} {[\ce{A}]_0} \nonumber$ For a second-order reaction, therefore, a plot of ([A]t)–1 versus t is linear with a slope of k and a y-intercept of ([A]0)–1. Alternatively, we can show that a reaction is second-order in A by observing the effect on the rate when we change the concentration of A. In this case, doubling the concentration of A produces a four-fold increase in the reaction’s rate.
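A quick way to see why the integrated rate laws are useful is to simulate data and examine the two diagnostic plots described above. The R sketch below is our own illustration, with an invented rate constant and initial concentration: for a first-order decay the plot of ln[A]t versus t is linear with a slope of -k, while the second-order plot shows curvature.

```r
# a minimal sketch (not from the original text): simulate a first-order decay
# and confirm that ln[A]_t versus t is linear with a slope of -k
k = 0.05                      # invented rate constant, min^-1
A0 = 0.10                     # invented initial concentration, M
t = seq(0, 60, by = 5)        # time, min
A = A0 * exp(-k * t)          # integrated first-order rate law
first_order = lm(log(A) ~ t)  # fit ln[A]_t versus t
coef(first_order)             # intercept is ln[A]_0 and slope is -k
plot(t, 1/A)                  # the second-order plot, 1/[A]_t versus t, is curved
```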
Example 16.17.1 The following data were obtained during a kinetic study of the hydration of p-methoxyphenylacetylene by measuring the relative amounts of reactants and products by NMR [data from Kaufman, D.; Sterner, C.; Masek, B.; Svenningsen, R.; Samuelson, G. J. Chem. Educ. 1982, 59, 885–886]. Determine whether the reaction is first-order or second-order in p-methoxyphenylacetylene.

time (min)   % p-methoxyphenylacetylene
67           85.9
161          70.0
241          57.6
381          40.7
479          32.4
545          27.7
604          24

Solution To determine the reaction’s order we plot ln(% p-methoxyphenylacetylene) versus time for a first-order reaction, and (% p-methoxyphenylacetylene)–1 versus time for a second-order reaction (see below). Because a straight-line fit to the first-order plot matches the data nicely, we conclude that the reaction is first-order in p-methoxyphenylacetylene. Note that when we plot the data using the equation for a second-order reaction, the data show curvature that does not fit the straight-line model. Pseudo-Order Reactions and the Method of Initial Rates Unfortunately, most reactions of importance in analytical chemistry do not follow the simple first-order or second-order rate laws discussed above. We are more likely to encounter the second-order rate law given in Equation \ref{16.11} than the simpler second-order rate law, $R = k[\ce{A}]^2$, shown above. $R = k [\ce{A}] [\ce{B}] \label{16.11}$ Demonstrating that a reaction obeys the rate law in Equation \ref{16.11} is complicated by the lack of a simple integrated form of the rate law. Often we can simplify the kinetics by carrying out the analysis under conditions where the concentrations of all species but one are so large that their concentrations effectively remain constant during the reaction. For example, if the concentration of B is selected such that $[\ce{B}] >> [\ce{A}]$, then Equation \ref{16.11} simplifies to $R = k^{\prime} [\ce{A}] \nonumber$ where the rate constant $k^{\prime}$ is equal to k[B]. Under these conditions, the reaction appears to follow first-order kinetics in A; for this reason we identify the reaction as pseudo-first-order in A. We can verify the reaction order for A using either the integrated rate law or by observing the effect on the reaction’s rate of changing the concentration of A. To find the reaction order for B, we repeat the process under conditions where $[\ce{A}] >> [\ce{B}]$. A variation on the use of pseudo-ordered reactions is the initial rate method. In this approach we run a series of experiments in which we change, one at a time, the concentration of each species that might affect the reaction’s rate and measure the resulting initial rate. Comparing the reaction’s initial rate for two experiments in which only the concentration of one species is different allows us to determine the reaction order for that species. The application of this method is outlined in the following example. Example 16.17.2 The following data were collected during a kinetic study of the iodination of acetone by measuring the concentration of unreacted I2 in solution [data from Birk, J. P.; Walters, D. L. J. Chem. Educ. 1992, 69, 585–587].
experiment number   $[\ce{C3H6O}]$ (M)   $[\ce{H3O+}]$ (M)   $[\ce{I2}]$ (M)   Rate (M s–1)
1   1.33    0.0404   $6.65 \times 10^{-3}$   $1.78 \times 10^{-6}$
2   1.33    0.0809   $6.65 \times 10^{-3}$   $3.89 \times 10^{-6}$
3   1.33    0.162    $6.65 \times 10^{-3}$   $8.11 \times 10^{-6}$
4   1.33    0.323    $6.65 \times 10^{-3}$   $1.66 \times 10^{-5}$
5   0.167   0.323    $6.65 \times 10^{-3}$   $1.64 \times 10^{-6}$
6   0.333   0.323    $6.65 \times 10^{-3}$   $3.76 \times 10^{-6}$
7   0.667   0.323    $6.65 \times 10^{-3}$   $7.55 \times 10^{-6}$
8   0.333   0.323    $3.32 \times 10^{-3}$   $3.57 \times 10^{-6}$

Solution The order of the rate law with respect to the three reactants is determined by comparing the rates of two experiments in which there is a change in concentration for only one of the reactants. For example, in Experiments 1 and 2, only the $[\ce{H3O+}]$ changes; as doubling the $[\ce{H3O+}]$ doubles the rate, we know that the reaction is first-order in $\ce{H3O+}$. Working in the same manner, Experiments 6 and 7 show that the reaction is also first-order with respect to $[\ce{C3H6O}]$, and Experiments 6 and 8 show that the rate of the reaction is independent of the $[\ce{I2}]$. Thus, the rate law is $R = k [\ce{C3H6O}] [\ce{H3O+}] \nonumber$ To determine the value of the rate constant, we substitute the rate, the $[\ce{C3H6O}]$, and the $[\ce{H3O+}]$ for each experiment into the rate law and solve for k. Using the data from Experiment 1, for example, gives a rate constant of $3.31 \times 10^{-5} \text{ M}^{-1} \text{ s}^{-1}$. The average rate constant for the eight experiments is $3.49 \times 10^{-5} \text{ M}^{-1} \text{ s}^{-1}$.
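The initial-rate comparisons in Example 16.17.2 also are easy to carry out in R, the language used later in this text. The sketch below is our own check on the example's arithmetic: it divides each experiment's rate by the product of the two concentrations that appear in the rate law to recover the individual rate constants and their average.

```r
# a minimal sketch (not from the original example): recover k for each
# experiment in Example 16.17.2 using the rate law R = k[C3H6O][H3O+]
acetone   = c(1.33, 1.33, 1.33, 1.33, 0.167, 0.333, 0.667, 0.333)        # M
hydronium = c(0.0404, 0.0809, 0.162, 0.323, 0.323, 0.323, 0.323, 0.323)  # M
rate = c(1.78e-6, 3.89e-6, 8.11e-6, 1.66e-5,
         1.64e-6, 3.76e-6, 7.55e-6, 3.57e-6)                             # M/s
k = rate/(acetone * hydronium)   # individual rate constants, M^-1 s^-1
signif(k, 3)                     # experiment 1 gives approximately 3.31e-5
mean(k)                          # approximately 3.49e-5 M^-1 s^-1
```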
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/16%3A_Appendix/16.17%3A_Review_of_Chemical_Kinetics.txt
The atomic weight of any isotope of an element is referenced to 12C, which is assigned an exact atomic weight of 12. The atomic weight of an element, therefore, is calculated using the atomic weights of its isotopes and the known abundance of those isotopes. For some elements the isotopic abundance varies slightly from material- to-material such that the element’s atomic weight in any specific material falls within a range of possible value; this is the case for carbon, for which the range of atomic masses is reported as [12.0096, 12.0116]. For such elements, a conventional, or representative atomic weight often is reported, chosen such that it falls within the range with an uncertainty of $\pm 1$ in the last reported digit; in the case of carbon, for example, the representative atomic weight is 12.011. The atomic weights reported here—most to five significant figures, but a few to just three or four significant figures—are taken from the IUPAC technical report (“Atomic Weights of the Elements 2011,” Pure Appl.Chem. 2013, 85, 1047–1078). Values in ( ) are uncertainties in the last significant figure quoted and values in [ ] are the mass number for the longest lived isotope for elements that have no stable isotopes. The atomic weights for the elements B, Br, C, Cl, H, Li, Mg, N, O, Si, S, Tl are representative values. At. No. Symbol Name At. Wt. At. No. Symbol Name At. Wt. 1 H hydrogen 1.008 60 Nd neodymium 144.24 2 He helium 4.0026 61 Pm promethium [145] 3 Li lithium 6.94 62 Sm samarium 150.36(2) 4 Be beryllium 9.0122 63 Eu europium 151.96 5 B boron 10.81 64 Gd gadolinium 157.25(3) 6 C carbon 12.011 65 Tb terbium 158.93 7 N nitrogen 14.007 66 Dy dysprosium 162.50 8 O oxygen 15.999 67 Ho holmium 164.93 9 F fluorine 18.998 68 Er erbium 167.26 10 Ne neon 20.180 69 Tm thulium 168.93 11 Na sodium 22.990 70 Yb ytterbium 173.05 12 Mg magnesium 24.305 71 Lu lutetium 174.97 13 Al aluminum 26.982 72 Hf halfnium 178.49(2) 14 Si silicon 28.085 73 Ta tantalum 180.95 15 P phosphorous 30.974 74 W tungsten 183.84 16 S sulfur 32.06 75 Re rhenium 186.21 17 Cl chlorine 35.45 76 Os osmium 190.23(3) 18 Ar argon 39.948 77 Ir iridium 192.22 19 K potassium 39.098 78 Pt platinum 195.08 20 Ca calcium 40.078(4) 79 Au gold 196.97 21 Sc scandium 44.956 80 Hg mercury 200.59 22 Ti titanium 47.867 81 Tl thallium 204.38 23 V vanadium 50.942 82 Pb lead 207.2 24 Cr chromium 51.996 83 Bi bismuth 208.98 25 Mn manganese 54.938 84 Po polonium [209] 26 Fe iron 55.845(2) 85 At astatine [210] 27 Co cobalt 58.933 86 Rn radon [222] 28 Ni nickel 58.693 87 Fr francium [223] 29 Cu copper 63.546(3) 88 Ra radium [226] 30 Zn zinc 65.38(2) 89 Ac actinium [227] 31 Ga gallium 69.723 90 T thoriium 232.04 32 Ge germanium 72.630 91 Pa protactinium 231.04 33 As arsenic 74.922 92 U uranium 238.03 34 Se selenium 78.96(3) 93 Np neptunium [237] 35 Br bromine 79.904 94 Pu plutonium [244] 36 Kr krypton 83.798(2) 95 Am americium [243] 37 Rb rubidium 85.468 96 Cm curium [247] 38 Sr strontium 87.62 97 Bk berkelium [247] 39 Y yttrium 88.906 98 Cf californium [251] 40 Zr zirconium 91.224(2) 99 Es einsteinium [252] 41 Nb niobium 92.906(2) 100 Fm fermium [257] 42 Mo molybdenum 95.96(2) 101 Md mendelevium [258] 43 Tc technetium [97] 102 No nobelium [259] 44 Ru ruthenium 101.07(2) 103 Lr lawrencium [262] 45 Rh rhodium 102.91 104 Rf futherfordium [267] 46 Pa palladium 106.42 105 Db dubnium [270] 47 Ag silver 107.87 106 Sg seaborgiuim [271] 48 Cd cadmium 112.41 107 Bh bohrium [270] 49 In indium 114.82 108 Hs hassium [277] 50 Sn tin 118.71 109 Mt meitnerium [276] 
51 Sb antimony 121.76 110 Ds darmstadium [281] 52 Te tellurium 127.60(3) 111 Rg roentgenium [282] 53 I iodine 126.90 112 Cn copernicium [285] 54 Xe xenon 131.29 113 Uut ununtrium [285] 55 Cs cesium 132.91 114 Fl flerovium [289] 56 Ba barium 137.33 115 Uup ununpentium [289] 57 La lanthanum 138.91 116 Lv livermorium [293] 58 Ce cerium 140.12 117 Uus ununseptium [294] 59 Pr praseodymium 140.91 118 Uno ununoctium [294]
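The opening paragraph of this appendix describes how an element's atomic weight follows from the masses and abundances of its isotopes. As a brief illustration, which is not part of the original appendix, the R snippet below computes a value for carbon from the isotopic mass of 13C and assumed, approximate natural abundances; the abundance values are typical figures chosen here for illustration only.

```r
# a minimal sketch (not from the original appendix): estimate carbon's
# representative atomic weight from isotopic masses and assumed abundances
isotope_mass = c(12.000000, 13.003355)  # 12C (exactly 12 by definition) and 13C
abundance = c(0.9893, 0.0107)           # assumed fractional abundances
sum(isotope_mass * abundance)           # approximately 12.011
```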
textbooks/chem/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/16%3A_Appendix/16.18%3A_Atomic_Weights_of_the_Elements.txt
As we move through this textbook, we will make frequent use of the statistical programming language R, accessing the program through the RStudio Desktop interface, which provides a useful environment for managing files and for writing code. There are many programs you can use in place of R and RStudio: some, such as Python, are free, and others, such as SPSS or Matlab, are commercial packages. We will use R and RStudio for four reasons: 1. Both R and RStudio are available at no cost. 2. As a programming language, R is designed specifically for the analysis of data; this is one of its great strengths. 3. The base installation of R comes with most of the tools we need, including tools for visualizing data. 4. When we need additional tools, packages of functions built by other users are available to us. To ensure that this textbook is not tied too directly to R—and, therefore, accessible to anyone interested in learning about chemometrics—each chapter begins with a general treatment of a chemometric topic that is software-independent, followed by specific examples of how to implement the topic using R. 01: R and RStudio Installing R and RStudio You can download and install R from the R-Project website. On the left side of the page, click on the link to CRAN under the title “Downloads.” Scroll through the list of CRAN mirror sites and click on the link to a site located near you. Versions are available for Mac OS, for Windows, and for Linux. Follow the directions for your operating system. You can download and install the RStudio Desktop Interface from the RStudio website. Click on the Download button for the free version of RStudio Desktop. From the list of available installers, click on the link that is appropriate for your operating system and follow the directions. Navigating RStudio When you launch RStudio, the program opens with the four panes as shown in Figure \(1\) (although some panes may be minimized). Beginning in the lower left corner and moving clockwise, these panes are • the Console, which provides access to R; this is where you can directly enter commands as you work on problems. • the Source Pane, which provides access to a variety of different types of documents, including script files, which end with an extension of .R (more on these later). The source pane also provides a way to submit code to the console by highlighting the code and clicking on the Run button; this usually is a more efficient way to work. • the Environment & History Pane, which provides access to your data and the functions you create while using R. • the Files, Plots, Packages, Help & Viewer Pane, which provides access to your computer's file structure, to help files for R commands, to a list of R packages available to you (packages provide access to additional commands beyond those available to you when you first launch R; more on this in later chapters), to plots that you create, and to an internal web-like browser. As you work with R, take time to examine each pane so that you become comfortable with them. For example, Figure \(1\) shows my RStudio screen after I highlighted lines 15–21 in the script file "figures_11.R" and clicked Run, sending the lines of code to the console where R processed them to create the figure in the lower right pane.
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/01%3A_R_and_RStudio/1.01%3A_Installing_and_Accessing_R_and_RStudio.txt
Communicating With R The symbol `>` in the console is the command prompt, which indicates that R is awaiting your instructions. When you type a command in the console and hit Enter or Return, R executes the command and displays any appropriate output in the console; thus, this command adds the numbers 1 and 3 `1 + 3` and returns the number 4 as an answer. `[1] 4` Note The text above is a code block that contains the line of code to enter into the console and the output generated by R. The command prompt (`>`) is not included here so that you can, if you wish, copy and paste the code into R; if you are copying and pasting the code, do not include the output or R will return an error message. Note that the output here is preceded by the number 1 in brackets, which is the id number of the first value returned on that line. This is all well and good, but it is even less useful than a calculator because we cannot operate further on the result. If we assign this calculation to an object using an assignment operator, then the result of the calculation remains available to us. There are two common leftward assignment operators in R: an arrow that points from right-to-left, `<-`, which means the value on the right is assigned to the object on the left, and an equals sign, `=`. Most style guides for R favor `<-` over `=`, but as `=` is the more common option in most other programming languages—such as Python, C++, and Matlab—we will use it here. If we assign our calculation to the object `answer `then the result of the calculation is assigned to the object but not returned to us. To see an object’s value we can look for it in RStudio’s Environment Panel or enter the object’s name as a command in the Console, as shown here. `answer = 1 + 3` `answer` `[1] 4` Note that an object’s name is case-sensitive so answer and Answer are different objects. `Answer = 2 + 4` `Answer` `[1] 6` Note There are just a few limitations to the names you can assign to objects: they can include letters (both upper and lower case), numbers, dots (`.`), or underscores (`_`), but not spaces. A name can begin with a letter or with a dot followed by a letter (but not a dot followed by a number). Here are some examples of valid names answerone answer_one answer1 answerOne answer.one and examples of invalid names 1stanswer answer* first answer You will find it helpful to use names that remind you of the object's meaning and that are not overly long. My personal preference is to use all lowercase letters, to use a descriptive noun, and to separate words using an underscore as I find that these choices make my code easier to read. When I find it useful to use the same base name for several objects of different types, then I may append a two or three letter designation to the name similar to the extensions that designate, for example, a spreadsheet stored as a .csv file. For example, when I use R to run a linear regression based on Beer's law, I may store the concentrations and absorbances of my standards in a data frame (see below for a description of data frames) with a name such as zinc.df and store the output of the linear model (see Chapter 8 for a discussion of linear models) in an object with a name such as zinc.lm. Objects for Storing Data In the code above, `answer` and `Answer` are objects that store a single numerical value. There are several different types of objects we can use to store data, including vectors, data frames, matrices and arrays, and lists. 
Vectors A vector is an ordered collection of elements of the same type, which may be numerical values, integer values, logical values, or character strings. Note that ordered does not imply that the values are arranged from smallest-to-largest or from largest-to-smallest, or in alphabetical order; it simply means the vector’s elements are stored in the order in which we enter them into the object. The length of a vector is the number of elements it holds. The objects `answer` and `Answer`, for example, are vectors with lengths of 1. `length(answer) ` `[1] 1` Most of the vectors we will use include multiple elements. One way to create a vector with multiple elements is to use the concatenation function, `c( )`. Note In the code blocks below and elsewhere, any text that follows a hashtag, #, is a comment that explains what the line of code is accomplishing; comments are not executable code, so R simply ignores them. For example, we can create a vector of numerical values, `v00 = c(1.1, 2.2, 3.3) ` `v00` `[1] 1.1 2.2 3.3` or a vector of integers, `v01 = c(1, 2, 3)` `v01` `[1] 1 2 3` or a vector of logical values, `v02 = c(TRUE, TRUE, FALSE) # we also could enter this as c(T, T, F)` `v02` `[1] TRUE TRUE FALSE` or a vector of character strings `v03 = c("alpha", "bravo", "charley")` `v03` `[1] "alpha" "bravo" "charley"` You can view an object’s structure by examining it in the Environment Panel or by using R’s structure command, `str( ) `which, for example, identifies vector the `v02` as a logical vector with an index for its entries of 1, 2, and 3, and with values of TRUE, TRUE, and FALSE. `str(v02)` `logi [1:3] TRUE TRUE FALSE` We can use a vector’s index to correct errors, to add additional values, or to create a new vector using already existing vectors. Note that the number within the square brackets, `[ ]`, identifies the element in the vector of interest. For example, the correct spelling for the third element in `v03` is charlie, not charley; we can correct this using the following line of code. `v03[3] = "charlie" # correct the vector's third value` `v03` `[1] "alpha" "bravo" "charlie"` We can also use the square bracket to add a new element to an existing vector, `v00[4] = 4.4 # add a fourth element to the existing vector, increasing its length` `v00` `[1] 1.1 2.2 3.3 4.4` or to create a new vector using elements from other vectors. `v04 = c(v01[1], v02[2], v03[3])` `v04` `[1] "1" "TRUE" "charlie"` Note the the elements of `v04` are character strings even though `v01` contains integers and `v02` contains logical values. This is because the elements of a vector must be of the same type, so R coerces them to a common type, in this case a vector of character strings. Here are several ways to create a vector when its entries follow a defined sequence, `seq( )`, or use a repetitive pattern, `rep( )`. `v05 = seq(from = 0, to = 20, by = 4)` `v05` `[1] 0 4 8 12 16 20` `v06 = seq(0, 10, 2) # R assumes the values are provided in the order from, to, and by` `v06` `[1] 0 2 4 6 8 10` `v07 = rep(1:4, times = 2) # repeats the pattern 1, 2, 3, 4 twice` `v07` `[1] 1 2 3 4 1 2 3 4` `v08 = rep(1:4, each = 2) # repeats each element in the string twice before proceeding to next element` `v08` `[1] 1 1 2 2 3 3 4 4` Note Note that `1:4` is equivalent to `c(1, 2, 3, 4)` or `seq(1, 4, 1)`. In R it often is the case that there are multiple ways to accomplish the same thing! Finally, we can complete mathematical operations using vectors, make logical inquiries of vectors, and create sub-samples of vectors. 
`v09 = v08 - v07 # subtract two vectors, which must be of equal length` `v09` `[1] 0 -1 -1 -2 2 1 1 0` `v10 = (v09 == 0) # returns TRUE for each element in v10 that equals zero` `v10` `[1] TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE` `v11 = which(v09 < 1) # returns the index for each elements in v09 that is less than 1` `v11` `[1] 1 2 3 4 8` `v12 = v09[!v09 < 1] # returns values for elements in v09 whose values are not less than 1` `v12` `[1] 2 1 1` Data Frames A data frame is a collection of vectors—all equal in length but not necessarily of a single type of element—arranged with the vectors as the data frame's columns. `df01 = data.frame(v07, v08, v09, v10)` `df01` `v07 v08 v09 v10 ` `1 1 1 0 TRUE ` `2 2 1 -1 FALSE ` `3 3 2 -1 FALSE ` `4 4 2 -2 FALSE ` `5 1 3 2 FALSE ` `6 2 3 1 FALSE ` `7 3 4 1 FALSE ` `8 4 4 0 TRUE` We can access the elements in a data frame using the data frame's index, which takes the form [row number(s), column number(s}], where `[` is the bracket operator. `df02 = df01[1, ] # returns all elements in the data frame's first row` `df02 ` `v07 v08 v09 v10 ` `1 1 1 0 TRUE ` `df03 = df01[ , 3:4] # returns all elements in the data frame's third and fourth columns` `df03 ` `v09 v10 ` `1 0 TRUE` `2 -1 FALSE ` `3 -1 FALSE ` `4 -2 FALSE ` `5 2 FALSE ` `6 1 FALSE ` `7 1 FALSE ` `8 0 TRUE ` `df04 = df01[4, 3] # returns the element in the data frame's fourth row and third column` `df04 ` `[1] -2` We can also extract a single column from a data frame using the dollar sign (`\$`) operator to designate the column's name `df05 = df01\$v08` `df05` `[1] 1 1 2 2 3 3 4 4` Note If you look carefully at the output above you will see that extracting a single row or multiple columns using the`[ `operator returns a new data frame. Extracting a single element from a data frame using the bracket operator, or a single column using the`\$`operator returns a vector. Matrices and Arrays A matrix is similar to a data frame, but every element in a matrix is of the same type, usually numerical. `m01 = matrix(1:10, nrow = 5) # places numbers 1:10 in matrix with five rows, filing by column` `m01 ` `[,1] [,2] ` `[1,] 1 6 ` `[2,] 2 7 ` `[3,] 3 8 ` `[4,] 4 9 ` `[5,] 5 10` `m02 = matrix(1:10, ncol = 5) # places numbers 1:10 in matrix with five columns, filling by row` `m02` `[,1] [,2] [,3] [,4] [,5] ` `[1,] 1 3 5 7 9 ` `[2,] 2 4 6 8 10` A matrix has two dimensions and an array has three or more dimensions. Lists A list is an object that holds other objects, even if those objects are of different types. `li01 = list(v00, df01, m01)` `li01` `[[1]] ` `[1] 1.1 2.2 3.3 4.4 ` `[[2]] ` `v07 v08 v09 v10 ` `1 1 1 0 TRUE ` `2 2 1 -1 FALSE ` `3 3 2 -1 FALSE ` `4 4 2 -2 FALSE ` `5 1 3 2 FALSE ` `6 2 3 1 FALSE ` `7 3 4 1 FALSE ` `8 4 4 0 TRUE` `[[3]] ` `[,1] [,2] ` `[1,] 1 6 ` `[2,] 2 7 ` `[3,] 3 8 ` `[4,] 4 9 ` `[5,] 5 10` Note that the double bracket, such as`[[1]]`, identifies an object in the list and that we can extract values from this list using this notation. `li01[[1]] # extract first object stored in the list` `[1] 1.1 2.2 3.3 4.4` `li01[[1]][1] # extract the first value of the first object stored in the list` `[1] 1.1` Script Files Although you can enter commands directly into RStudio’s Console Panel and execute them, you will find it much easier to write your commands in a script file and send them to the console line-by-line, as groups of two or more lines, or all at once by sourcing the file. You will make errors as you enter code. 
When your error is in one line of a multi-line script, you can fix the error and then rerun the script at once without the need to retype each line directly into the console. To open a script file, select File: New File: R Script from the main menu. To save your script file, which will have .R as an extension, select File: Save from the main menu and navigate to the folder where you wish to save the file. As an exercise, try entering the following sequence of commands in a script file ```x1 = runif(1000) # a vector of 1000 values drawn at random from a uniform distribution x2 = runif(1000) # another vector of 1000 values drawn at random from a uniform distribution y1 = rnorm(1000) # a vector of 1000 values drawn at random from a normal distribution y2 = rnorm(1000) # another vector of 1000 values drawn at random from a normal distribution old.par = par(mfrow = c(2,2)) # create a 2 x 2 grid for plots plot(x1, x2) # create a scatterplot of two vectors plot(y1, y2) plot(x1, y1) plot(x2, y2) par(old.par) # restore the initial plot conditions (more on this later)``` save it as `test_script.R`and then click the Source button; you should see the following plot appear in the Plot tab. Loading a Data File and Saving a Data File Although creating a small vector, data frame, matrix, array, or list is easy, creating one with hundreds of elements or creating dozens of individual data objects is tedious at best; thus, the ability to load data saved during an earlier session, or the ability to read in a spreadsheet file is helpful. To read in a spreadsheet file saved in .csv format (comma separated values), we use R's `read.csv()` function, which takes the general form `read.csv(file)` where `file` provides the absolute path to the file. This is easiest to manage if you navigate to the folder where your .csv file is stored using RStudio's file pane and then set it as the working directory by clicking on More and selecting Set As Working Directory. Download the file "element_data.csv" using this link and then store the file in a folder on your computer. Navigate to this folder and set it as your working directory. Enter the following line of code `elements = read.csv(file = "element_data.csv")` to read the file's data into a data frame named `elements` . To view the data frame's structure we use the `head() `function to display the first six rows of data. `head(elements)` `name symbol at_no at_wt mp bp phase electronegativity electron_affinity ` `1 Hydrogen H 1 1.007940 14.01 20.28 Gas 2.20 72.8 ` `2 Helium He 2 4.002602 NA 4.22 Gas NA 0.0 ` `3 Lithium Li 3 6.941000 453.69 1615.15 Solid 0.98 59.6 ` `4 Beryllium Be 4 9.012182 1560.15 2743.15 Solid 1.57 0.0 ` `5 Boron B 5 10.811000 2348.15 4273.15 Solid 2.04 26.7 ` `6 Carbon C 6 12.010700 3823.15 4300.15 Solid 2.55 153.9 ` `block group period at_radius covalent_radius ` `1 s 1 1 5.30e-11 3.70e-11 ` `2 p 18 1 3.10e-11 3.20e-11 ` `3 s 1 2 1.67e-10 1.34e-10 ` `4 s 2 2 1.12e-10 9.00e-11 ` `5 p 13 2 8.70e-11 8.20e-11 ` `6 p 14 2 6.70e-11 7.70e-11` Note that cells in the spreadsheet with missing values appear here as `NA` for not available. The melting points (mp) and boiling points (bp) are in Kelvin, and the electron affinities are in kJ/mol. 
You can save to your working directory the contents of data frame by using the `write.csv()` function; thus, we can save a copy of the data in `elements` using the following line of code `write.csv(elements, file = "element_data_copy.csv")` Another way to save multiple objects is to use the `save()` function to create an .RData file. For example, to save the vectors `v00`, `v01`, and `v02` to a file with the name `vectors.RData`, enter `save(v00, v01, v02, file = "vectors.RData") ` To read in the objects in an .RData file, navigate to the folder that contains the file, click on the file's name and RStudio will ask if you wish to load the file into your session. Using Packages of Functions The base installation of R provides many useful functions for working with data. The advantage of these functions is that they work (always a plus) and they are stable (which means they will continue to work even as R is updated to new versions). For the most part, we will rely on R’s built in functions for these two reasons. When we need capabilities that are not part of R’s base installation, then we must write our own functions or use packages of functions written by others. To install a package of functions, click on the Packages tab in the Files, Plots, Packages, Help & Viewer pane. Click on the button labeled Install, enter the name of the package you wish to install, and click on Install to complete the installation. You only need to install a package once. To use a package that is not part of R’s base installation, you need to bring it into your current session, which you do with the command `library(name of package)` or by clicking on the checkbox next to the name of the package in the list of your installed packages. Once you have loaded the package into your session, it remains available to you until you quit RStudio. Managing Your Environment One nice feature of RStudio is that the Environment Panel provides a list of the objects you create. If your environment becomes too cluttered, you can delete items by switching to the Grid view, clicking on the check-box next to the object(s) you wish to delete, and then clicking on the broom icon. You can remove all items from the List view by simply clicking on the broom icon. Getting Help There are extensive help files for R's functions that you can search for using the Help Panel or by using the `help()` command. A help file shows you the command’s proper syntax, including the types of values you can pass to the command and their default values, if any—more details on this later—and provides you with some examples of how the command is used. R's help files can be difficult to parse at times; you may find it more helpful to simply use a search engine to look for information about "how to use <command> in R." Another good source for finding help with R is stackoverflow. 1.03: Exercises 1. Gather the following information for the first 18 elements in the periodic table and create a vector for each: • name • symbol • atomic number • atomic weight • phase (gas, liquid, solid) • group number (1–18) • row number • atomic radius (in picometers) • electronegativity • first ionization potential (in electron volts) Combine these vectors into a single data frame and save it as a .csv file. In addition, save the data frame and the individual vectors as a single .RData file. You will use these files to complete exercises in some of the chapters that follow.
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/01%3A_R_and_RStudio/1.02%3A_The_Basics_of_Working_With_R.txt
At the heart of any analysis is data. Sometimes our data describes a category and sometimes it is numerical; sometimes our data conveys order and sometimes it does not; sometimes our data has an absolute reference and sometimes it has an arbitrary reference; and sometimes our data takes on discrete values and sometimes it takes on continuous values. Whatever its form, when we gather data our intent is to extract from it information that can help us solve a problem. 02: Types of Data If we are to consider how to describe data, then we need some data with which we can work. Ideally, we want data that is easy to gather and easy to understand. It also is helpful if you can gather similar data on your own so you can repeat what we cover here. A simple system that meets these criteria is to analyze the contents of bags of M&Ms. Although this system may seem trivial, keep in mind that reporting the percentage of yellow M&Ms in a bag is analogous to reporting the concentration of Cu2+ in a sample of an ore or water: both express the amount of an analyte present in a unit of its matrix. At the beginning of this chapter we identified four contrasting ways to describe data: categorical vs. numerical, ordered vs. unordered, absolute reference vs. arbitrary reference, and discrete vs. continuous. To give meaning to these descriptive terms, let’s consider the data in Table $1$, which includes the year the bag was purchased and analyzed, the weight listed on the package, the type of M&Ms, the number of yellow M&Ms in the bag, the percentage of the M&Ms that were red, the total number of M&Ms in the bag and their corresponding ranks. Table $1$. Distribution of Yellow and Red M&Ms in Bags of M&Ms. bag id year weight (oz) type number yellow % red total M&Ms rank (for total) a 2006 1.74 peanut 2 27.8 18 sixth b 2006 1.74 peanut 3 4.35 23 fourth c 2000 0.80 plain 1 22.7 22 fifth d 2000 0.80 plain 5 20.8 24 third e 1994 10.0 plain 56 23.0 331 second f 1994 10.0 plain 63 21.9 333 first The entries in Table $1$ are organized by column and by row. The first row—sometimes called the header row—identifies the variables that make up the data. Each additional row is the record for one sample and each entry in a sample’s record provides information about one of its variables; thus, the data in the table lists the result for each variable and for each sample. Categorical vs. Numerical Data Of the variables included in Table $1$, some are categorical and some are numerical. A categorical variable provides qualitative information that we can use to describe the samples relative to each other, or that we can use to organize the samples into groups (or categories). For the data in Table $1$, bag id, type, and rank are categorical variables. A numerical variable provides quantitative information that we can use in a meaningful calculation; for example, we can use the number of yellow M&Ms and the total number of M&Ms to calculate a new variable that reports the percentage of M&Ms that are yellow. For the data in Table $1$, year, weight (oz), number yellow, % red M&Ms, and total M&Ms are numerical variables. We can also use a numerical variable to assign samples to groups. For example, we can divide the plain M&Ms in Table $1$ into two groups based on the sample’s weight. What makes a numerical variable more interesting, however, is that we can use it to make quantitative comparisons between samples; thus, we can report that there are $14.4 \times$ as many plain M&Ms in a 10-oz. bag as there are in a 0.8-oz. bag. 
$\frac{333 + 331}{24 + 22} = \frac{664}{46} = 14.4 \nonumber$ Although we could classify year as a categorical variable—not an unreasonable choice as it could serve as a useful way to group samples—we list it here as a numerical variable because it can serve as a useful predictive variable in a regression analysis. On the other hand, rank is not a numerical variable—even if we rewrite the ranks as numerals—as there are no meaningful calculations we can complete using this variable. Nominal vs. Ordinal Data Categorical variables are described as nominal or ordinal. A nominal categorical variable does not imply a particular order; an ordinal categorical variable, on the other hand, conveys a meaningful sense of order. For the categorical variables in Table $1$, bag id and type are nominal variables, and rank is an ordinal variable. Ratio vs. Interval Data A numerical variable is described as either ratio or interval depending on whether it has (ratio) or does not have (interval) an absolute reference. Although we can complete meaningful calculations using any numerical variable, the type of calculation we can perform depends on whether or not the variable’s values have an absolute reference. A numerical variable has an absolute reference if it has a meaningful zero—that is, a zero that means a measured quantity of none—against which we reference all other measurements of that variable. For the numerical variables in Table $1$, weight (oz), number yellow, % red, and total M&Ms are ratio variables because each has a meaningful zero; year is an interval variable because its scale is referenced to an arbitrary point in time, 1 BCE, and not to the beginning of time. For a ratio variable, we can make meaningful absolute and relative comparisons between two results, but only meaningful absolute comparisons for an interval variable. For example, consider sample e, which was collected in 1994 and has 331 M&Ms, and sample d, which was collected in 2000 and has 24 M&Ms. We can report a meaningful absolute comparison for both variables: sample e is six years older than sample d and sample e has 307 more M&Ms than sample d. We also can report a meaningful relative comparison for the total number of M&Ms—there are $\frac{331}{24} = 13.8 \times \nonumber$ as many M&Ms in sample e as in sample d—but we cannot report a meaningful relative comparison for year because a sample collected in 2000 is not $\frac{2000}{1994} = 1.003 \times \nonumber$ older than a sample collected in 1994. Discrete vs. Continuous Data Finally, the granularity of a numerical variable provides one more way to describe our data: a numerical variable is either discrete or continuous. A numerical variable is discrete if it can take on only specific values—typically, but not always, an integer value—between its limits; a continuous variable can take on any possible value within its limits. For the numerical data in Table $1$, year, number yellow, and total M&Ms are discrete in that each is limited to integer values. The numerical variables weight (oz) and % red, on the other hand, are continuous variables.
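R can capture the distinction between nominal and ordinal variables directly. The following is a minimal sketch, using the type and rank values from Table $1$, that stores type as an ordinary (nominal) factor and rank as an ordered (ordinal) factor; the object names are arbitrary and we will see factors again in the next section.
```
# a nominal categorical variable: the levels have no implied order
mm_type = factor(c("peanut", "peanut", "plain", "plain", "plain", "plain"))

# an ordinal categorical variable: the levels argument supplies the order
mm_rank = factor(c("sixth", "fourth", "fifth", "third", "second", "first"),
                 levels = c("first", "second", "third", "fourth", "fifth", "sixth"),
                 ordered = TRUE)
```
Printing `mm_rank` lists its levels joined by < signs, which confirms that R treats the order of an ordered factor as meaningful.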
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/02%3A_Types_of_Data/2.01%3A_Ways_to_Describe_Data.txt
The data in Table \(1\) should remind you of a data frame, a way of organizing data in R that we introduced in Chapter 1. Here we will learn how to create a data frame that holds the data in Table \(1\) and learn how we can make use of the data frame. Creating a Data Frame To create a data frame we begin by creating vectors for each of the variables. Note that `letters` is a constant in R that contains the 26 lower case letters of the Roman alphabet: here we are using just the first six letters for the bag ids. ```bag_id = letters[1:6] year = c(2006, 2006, 2000, 2000, 1994, 1994) weight = c(1.74, 1.74, 0.80, 0.80, 10.0, 10.0) type = c("peanut", "peanut", "plain", "plain", "plain", "plain") number_yellow = c(2, 3, 1, 5, 56, 63) percent_red = c(27.8, 4.35, 22.7, 20.8, 23.0, 21.9) total = c(18, 23, 22, 24, 331, 333) rank = c("sixth", "fourth", "fifth", "third", "second", "first")``` To create the data frame, we use R’s `data.frame()` function, passing to it the names of our vectors, each of which must be of the same length. There is an option within this function to treat variables whose values are character strings as factors—another name for a categorical variable—by using the argument `stringsAsFactors = TRUE`. As the default value for this argument depends on your version of R, it is useful to make your choice explicit by including it in your code, as we do here. ```mm_data = data.frame(bag_id, year, weight, type, number_yellow, percent_red, total, rank, stringsAsFactors = TRUE) mm_data``` ` bag_id year weight type number_yellow percent_red total rank ` `1 a 2006 1.74 peanut 2 27.80 18 sixth ` `2 b 2006 1.74 peanut 3 4.35 23 fourth ` `3 c 2000 0.80 plain 1 22.70 22 fifth ` `4 d 2000 0.80 plain 5 20.80 24 third ` `5 e 1994 10.00 plain 56 23.00 331 second ` `6 f 1994 10.00 plain 63 21.90 333 first` If we examine the structure of this data set using R’s `str()` function, we see that bag_id, type, and rank are factors and year, weight, number_yellow, percent_red, and total are numerical variables, assignments that are consistent with our earlier analysis of the data. `str(mm_data)` `'data.frame': 6 obs. of 8 variables: ` `$ bag_id : Factor w/ 6 levels "a","b","c","d",..: 1 2 3 4 5 6 ` `$ year : num 2006 2006 2000 2000 1994 ... ` `$ weight : num 1.74 1.74 0.8 0.8 10 10 ` `$ type : Factor w/ 2 levels "peanut","plain": 1 1 2 2 2 2 ` `$ number_yellow: num 2 3 1 5 56 63 ` `$ percent_red : num 27.8 4.35 22.7 20.8 23 21.9 ` `$ total : num 18 23 22 24 331 333 ` `$ rank : Factor w/ 6 levels "fifth","first",..: 5 3 1 6 4 2` Finally, we can use the function `as.factor()` to have R treat a numerical variable as a categorical variable, as we do here for year. Why we might wish to do this is a topic we will return to in later chapters. `mm_year_as_factor = data.frame(bag_id, as.factor(year), percent_red, total)` `str(mm_year_as_factor)` `'data.frame': 6 obs. of 4 variables: ` `$ bag_id : Factor w/ 6 levels "a","b","c","d",..: 1 2 3 4 5 6 ` `$ as.factor.year.: Factor w/ 3 levels "1994","2000",..: 3 3 2 2 1 1 ` `$ percent_red : num 27.8 4.35 22.7 20.8 23 21.9 ` `$ total : num 18 23 22 24 331 333` Creating a New Data Frame by Subsetting an Existing Data Frame In Chapter 1.2 we learned how to retrieve individual rows or columns from a data frame and assign them to a new object. Here we learn how to use R’s more flexible `subset()` function to accomplish the same thing. Here, for example, we retrieve only the data for plain M&Ms.
`plain_mm = subset(mm_data, type == "plain")` `plain_mm` `bag_id year weight type number_yellow percent_red total rank ` `3 c 2000 0.8 plain 1 22.7 22 fifth ` `4 d 2000 0.8 plain 5 20.8 24 third ` `5 e 1994 10.0 plain 56 23.0 331 second ` `6 f 1994 10.0 plain 63 21.9 333 first` Note that `type == "plain"` uses a relational operator to choose only those rows in which the variable `type` has the value `plain`. Here is a list of relational operators: Table \(2\). Relational Operators in R. operator usage meaning < x < y x is less than y > x > y x is greater than y <= x <= y x is less than or equal to y >= x >= y x is greater than or equal to y == x == y x is exactly equal to y != x != y x is not equal to y We can combine two or more conditions using the logical & operator. `mm_plain10 = subset(mm_data, (weight == 10.0 & type == "plain"))` `mm_plain10` ` bag_id year weight type number_yellow percent_red total rank ` `5 e 1994 10 plain 56 23.0 331 second ` `6 f 1994 10 plain 63 21.9 333 first` We also can narrow the number of variables returned using the `subset()` function’s `select` argument. In this example we exclude samples collected before the year 2000 and return only the year, the number of yellow M&Ms, and the percentage of red M&Ms. `mm_20xx = subset(mm_data, year >= 2000, select = c(year, number_yellow, percent_red))` `mm_20xx` ` year number_yellow percent_red ` `1 2006 2 27.80 ` `2 2006 3 4.35 ` `3 2000 1 22.70 ` `4 2000 5 20.80` 2.03: Exercises 1. In Exercise 1 of Chapter 1 you created a data frame with the following information about the first 18 elements. • name • symbol • atomic number • atomic weight • phase (gas, liquid, solid) • group number (1–18) • row number • atomic radius (in picometers) • electronegativity • first ionization potential (in electron volts) (a) Setting aside name and symbol, which of the remaining variables are categorical or numerical? (b) For those variables that are categorical, which are nominal and which are ordinal? (c) For those variables that are numerical, which are ratio and which are interval? (d) For those variables that are numerical, which are discrete and which are continuous? 2. Use this link to download and save the spreadsheet marlybone_2018.csv. The data in this file gives the daily average level of NOX (the combined concentrations of NO and of NO2) in µg/m3 and the daily average temperature in °C as recorded in 2018 at a roadside monitoring station located on Marylebone Road in Westminster, which is near Regent's Park, Madame Tussaud's Wax Museum, and Baker Street, the "home" of Sherlock Holmes. The data is made available by London Air, a website managed by King's College in London that reports results from the continuous monitoring of air quality at hundreds of sites spread throughout the greater London area. As in most long-term monitoring projects, some data is missing for various reasons, such as equipment failure; these values appear in the spreadsheet as empty cells. If you wish, you can visit the London Air web site here. (a) Use the `read.csv()` function to bring the data into R as a data frame and examine the dataset's structure using the `head()` function. (b) Add a new column to the data frame that contains the running day number (January 1st is day 1 and December 31st is day 365). (c) Use the `subset()` function to create separate data frames for each month. (d) Save all of your data frames in a single `.RData` file so that it is available to you when working problems in other chapters. 3.
Use this link to access a case study on data analysis and complete the five investigations included in Part I: Ways to Describe Data.
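Exercise 2(b) asks you to add a new column to a data frame, something the examples in this chapter do not show explicitly. Here is a minimal sketch using the `mm_data` data frame created earlier in this chapter; the new column names are arbitrary.
```
# add a column computed from existing columns
mm_data$percent_yellow = 100 * mm_data$number_yellow/mm_data$total

# add a simple running index (1, 2, 3, ...) with one value per row
mm_data$obs_number = seq_len(nrow(mm_data))

head(mm_data)
```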
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/02%3A_Types_of_Data/2.02%3A_Using_R_to_Organize_and_Manipulate_Data.txt
The old saying that "a picture is worth a 1000 words" may not be universally true, but it true when it comes to the analysis of data. A good visualization of data, for example, allows us to see patterns and relationships that are less evident when we look at data arranged in a table, and it provides a powerful way to tell our data's story. One of R's significant strengths as a statistical programming language is the ease with which we can generate useful visualizations. 03: Visualizing Data Suppose we want to study the composition of 1.69-oz (47.9-g) packages of plain M&Ms. We obtain 30 bags of M&Ms (ten from each of three stores) and remove the M&Ms from each bag one-by-one, recording the number of blue, brown, green, orange, red, and yellow M&Ms. We also record the number of yellow M&Ms in the first five candies drawn from each bag, and record the actual net weight of the M&Ms in each bag. Table $1$ summarizes the data collected on these samples. The bag id identifies the order in which the bags were opened and analyzed. Table $1$. Analysis of Plain M&Ms in 47.9 g Bags. bag store blue brown green orange red yellow yellow_first_five net_weight 1 CVS 3 18 1 5 7 23 2 49.287 2 CVS 3 14 9 7 8 15 0 48.870 3 Target 4 14 5 10 10 16 1 51.250 4 Kroger 3 13 5 4 15 16 0 48.692 5 Kroger 3 16 5 7 8 18 1 48.777 6 Kroger 2 12 6 10 17 7 1 46.405 7 CVS 13 11 2 8 6 17 1 49.693 8 CVS 13 12 7 10 7 8 2 49.391 9 Kroger 6 17 5 4 8 16 1 48.196 10 Kroger 8 13 2 5 10 17 1 47.326 11 Target 9 20 1 4 12 13 3 50.974 12 Target 11 12 0 8 4 23 0 50.081 13 CVS 3 15 4 6 14 13 2 47.841 14 Kroger 4 17 5 6 14 10 2 48.377 15 Kroger 9 13 3 8 14 8 0 47.004 16 CVS 8 15 1 10 9 15 1 50.037 17 CVS 10 11 5 10 7 13 2 48.599 18 Kroger 1 17 6 7 11 14 1 48.625 19 Target 7 17 2 8 4 18 1 48.395 20 Kroger 9 13 1 8 7 22 1 51.730 21 Target 7 17 0 15 4 15 3 50.405 22 CVS 12 14 4 11 9 5 2 47.305 23 Target 9 19 0 5 12 12 0 49.477 24 Target 5 13 3 4 15 16 0 48.027 25 CVS 7 13 0 4 15 16 2 48.212 26 Target 6 15 1 13 10 14 1 51.682 27 CVS 5 17 6 4 8 19 1 50.802 28 Kroger 1 21 6 5 10 14 0 49.055 29 Target 4 12 6 5 13 14 2 46.577 30 Target 15 8 9 6 10 8 1 48.317 Having collected our data, we next examine it for possible problems, such as missing values (Did we forget to record the number of brown M&Ms in any of our samples?), for errors introduced when we recorded the data (Is the decimal point recorded incorrectly for any of the net weights?), or for unusual results (Is it really the case that this bag has only yellow M&M?). We also examine our data to identify interesting observations that we may wish to explore (It appears that most net weights are greater than the net weight listed on the individual packages. Why might this be? Is the difference significant?) When our data set is small we usually can identify possible problems and interesting observations without much difficulty; however, for a large data set, this becomes a challenge. Instead of trying to examine individual values, we can look at our results visually. While it may be difficult to find a single, odd data point when we have to individually review 1000 samples, it often jumps out when we look at the data using one or more of the approaches we will explore in this chapter. Dot Plots A dot plot displays data for one variable, with each sample’s value plotted on the x-axis. The individual points are organized along the y-axis with the first sample at the bottom and the last sample at the top. 
Figure $1$ shows a dot plot for the number of brown M&Ms in the 30 bags of M&Ms from Table $1$. The distribution of points appears random as there is no correlation between the sample id and the number of brown M&Ms. We would be surprised if we discovered that the points were arranged from the lower-left to the upper-right as this implies that the order in which we open the bags determines whether they have many or few brown M&Ms. Stripcharts A dot plot provides a quick way to give us confidence that our data are free from unusual patterns, but at the cost of space because we use the y-axis to include the sample id as a variable. A stripchart uses the same x-axis as a dot plot, but does not use the y-axis to distinguish between samples. Because all samples with the same number of brown M&Ms will appear in the same place—making it impossible to distinguish them from each other—we stack the points vertically to spread them out, as shown in Figure $2$. Both the dot plot in Figure $1$ and the stripchart in Figure $2$ suggest that there is a smaller density of points at the lower limit and the upper limit of our results. We see, for example, that there is just one bag each with 8, 16, 18, 19, 20, and 21 brown M&Ms, but there are six bags each with 13 and 17 brown M&Ms. Because a stripchart does not use the y-axis to provide meaningful categorical information, we can easily display several stripcharts at once. Figure $3$ shows this for the data in Table $1$. Instead of stacking the individual points, we jitter them by applying a small, random offset to each point. Among the things we learn from this stripchart are that only brown and yellow M&Ms have counts of greater than 20 and that only blue and green M&Ms have counts of three or fewer M&Ms. Box and Whisker Plots The stripchart in Figure $3$ is easy for us to examine because the number of samples, 30 bags, and the number of M&Ms per bag are sufficiently small that we can see the individual points. As the density of points becomes greater, a stripchart becomes less useful. A box and whisker plot provides a similar view but focuses on the data in terms of the range of values that encompass the middle 50% of the data. Figure $4$ shows the box and whisker plot for brown M&Ms using the data in Table $1$. The 30 individual samples are superimposed as a stripchart. The central box divides the x-axis into three regions: bags with fewer than 13 brown M&Ms (seven samples), bags with between 13 and 17 brown M&Ms (19 samples), and bags with more than 17 brown M&Ms (four samples). The box's limits are set so that it includes at least the middle 50% of our data. In this case, the box contains 19 of the 30 samples (63% of the bags) because moving either end of the box toward the middle results in a box that includes less than 50% of the samples. The difference between the box's upper limit (17) and its lower limit (13) is called the interquartile range (IQR), which for this data is 4. The thick line in the box is the median, or middle value (more on this and the IQR in the next chapter). The dashed lines at either end of the box are called whiskers, and they extend to the largest or the smallest result that is within $\pm 1.5 \times \text{IQR}$ of the box's right or left edge, respectively. Because a box and whisker plot does not use the y-axis to provide meaningful categorical information, we can easily display several plots in the same frame. Figure $5$ shows this for the data in Table $1$.
Note that when a value falls outside of a whisker, as is the case here for yellow M&Ms, it is flagged by displaying it as an open circle. One use of a box and whisker plot is to examine the distribution of the individual samples, particularly with respect to symmetry. With the exception of the single sample that falls outside of the whiskers, the distribution of yellow M&Ms appears symmetrical: the median is near the center of the box and the whiskers extend equally in both directions. The distribution of the orange M&Ms is asymmetrical: half of the samples have 4–7 M&Ms (just four possible outcomes) and half have 7–15 M&Ms (nine possible outcomes), suggesting that the distribution is skewed toward higher numbers of orange M&Ms (see Chapter 5 for more information about the distribution of samples). Figure $6$ shows box-and-whisker plots for yellow M&Ms grouped according to the store where the bags of M&Ms were purchased. Although the box and whisker plots are quite different in terms of the relative sizes of the boxes and the relative length of the whiskers, the dot plots suggest that the distribution of the underlying data is relatively similar in that most bags contain 12–18 yellow M&Ms and just a few bags deviate from these limits. These observations are reassuring because we do not expect the choice of store to affect the composition of bags of M&Ms. If we saw evidence that the choice of store affected our results, then we would look more closely at the bags themselves for evidence of a poorly controlled variable, such as type (Did we accidentally purchase bags of peanut butter M&Ms from one store?) or the product’s lot number (Did the manufacturer change the composition of colors between lots?). Bar Plots Although a dot plot, a stripchart and a box-and-whisker plot provide some qualitative evidence of how a variable’s values are distributed—we will have more to say about the distribution of data in Chapter 5—they are less useful when we need a more quantitative picture of the distribution. For this we can use a bar plot that displays a count of each discrete outcome. Figure $7$ shows bar plots for orange and for yellow M&Ms using the data in Table $1$. Here we see that the most common number of orange M&Ms per bag is four, which is also the smallest number of orange M&Ms per bag, and that there is a general decrease in the number of bags as the number of orange M&M per bag increases. For the yellow M&Ms, the most common number of M&Ms per bag is 16, which falls near the middle of the range of yellow M&Ms. Histograms A bar plot is a useful way to look at the distribution of discrete results, such as the counts of orange or yellow M&Ms, but it is not useful for continuous data where each result is unique. A histogram, in which we display the number of results that fall within a sequence of equally spaced bins, provides a view that is similar to that of a bar plot but that works with continuous data. Figure $8$, for example, shows a histogram for the net weights of the 30 bags of M&Ms in Table $1$. Individual values are shown by the vertical hash marks at the bottom of the histogram.
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/03%3A_Visualizing_Data/3.01%3A_Types_of_Visualizations.txt
One of the strengths of R is the ease with which you can plot data and the quality of the plots you can create. R has two pre-installed graphing packages: one is the `graphics` package, which is available to you when you launch R, and the second is the `lattice` package that you can bring into your session by running `library(lattice)` in the console—and there are many additional graphics packages, such as `ggplot2`, developed by others. As our interest in this textbook is making R quickly and easily accessible, we will rely on R’s base graphics. See this chapter's resources for a list of other graphing packages. Note This section uses the M&M data in Table 1 of Chapter 3.1. You can download a copy of the data as a .csv spreadsheet using this link, and save it in your working directory. Bringing Your Data Into R Before we can create a visualization, we need to make our data available to R. The code below uses the `read.csv()` function to read in the file `MandM.csv` as a data frame with the name `mm_data`. The text `"MandM.csv"` assumes the file is located in your working directory. `mm_data = read.csv("MandM.csv")` Creating a Dot Plot Using R To create a dot plot in R we use the function `dotchart(x, ...)` where `x` is the object that holds our data, typically a vector or a single column from a data frame, and `...` is a list of optional arguments that affects what we see. In the example below, `pch` sets the plotting symbol (19 is a solid circle), `col` is the color assigned to the plotting symbol, `labels` identifies the samples by name along the y-axis, `xlab` assigns a label to the x-axis, `ylab` assigns a label to the y-axis, and `cex` controls the size of the labels and points. See the last section of this chapter for a more general introduction to creating and displaying plots using R’s base graphics. `dotchart(mm_data$brown, pch = 19, col = "brown", labels = mm_data$bag, xlab = "number of brown M&Ms", ylab = "bag id", cex = 0.5)` Creating a Stripchart Using R To create a stripchart in R we use the function `stripchart(x, ...)` where `x` is the object that holds our data, typically a vector or a column from a data frame, and `...` is a list of optional arguments that affects what we see. In the example below, `pch` sets the plotting symbol (19 is a solid circle), `col` is the color assigned to the plotting symbol, `method` defines how points with the same value for x are displayed on the y-axis, in this case stacking them one above the other by an amount defined by an `offset`, and `cex` controls the size of the individual data points. `stripchart(mm_data$brown, pch = 19, col = "brown", method = "stack", offset = 0.5, cex = 0.6, xlab = "number of brown M&Ms")` Because a stripchart does not use the y-axis to provide information, we can easily display several stripcharts at once, as shown in the following example, where we use `mm_data[3:8]` to identify the data for each stripchart and `col` to assign a color to each stripchart. Instead of stacking the individual points, they are jittered by applying a small, random offset to each point using `jitter`. The parameter `las` forces the labels to be displayed horizontally (`las = 0` aligns labels parallel to the axis, `las = 1` aligns labels horizontally, `las = 2` aligns labels perpendicular to the axis, and `las = 3` aligns labels vertically).
`stripchart(mm_data[3:8], pch = 19, cex = 0.5, xlab = "number of M&MS", col = c("blue", "brown", "green", "orange", "red", "yellow"), method = "jitter", jitter = 0.2, las = 1)` Creating a Box-and-Whisker Plot Using R To create a box-and-whisker plot in R we use the function `boxplot(x, ...)` where `x` is the object that holds our data, typically a vector or a column from a data frame, and `...` is a list of optional arguments that affects what we see. In the example below, the option `horizontal = TRUE` overrides the default, which is to display a vertical boxplot, and `range` specifies the length of the whisker as a multiple of the IQR. In this example, we also show the individual values using `stripchart()` with the option `add = TRUE` to overlay the stripchart on the boxplot. `boxplot(mm_data$brown, horizontal = TRUE, range = 1.5, xlab = "number of brown M&Ms")` `stripchart(mm_data$brown, method = "jitter", jitter = 0.2, add = TRUE, col = "brown", pch = 19)` Because a box and whisker plot does not use the y-axis to provide information, we can easily display several plots at once, as shown in the following example, where we use `mm_data[3:8]` to identify the data for each plot and `col` to assign a color to each plot. `boxplot(mm_data[3:8], xlab = "number of M&MS", las = 1, horizontal = TRUE, col = c("blue", "brown", "green", "orange", "red", "yellow"))` In the example below, the code `mm_data$yellow ~ mm_data$store` is a formula, which takes the general form of y as a function of x; in this case, it uses the data in the column named `store` to divide the data into three groups. The option `outline = FALSE` in the `boxplot()` function suppresses the function’s default to plot an open circle for each sample that lies outside of the whiskers; by doing this we avoid plotting these points twice. `boxplot(mm_data$yellow ~ mm_data$store, horizontal = TRUE, las = 1, col = "yellow", outline = FALSE, xlab = "number of yellow M&Ms")` `stripchart(mm_data$yellow ~ mm_data$store, add = TRUE, pch = 19, method = "jitter", jitter = 0.2)` Note See Chapter 8.5 for a discussion of the use of formulas in R. Creating a Bar Plot Using R To create a bar plot in R we use the function `barplot(x, ...)` where `x` is the object that holds our data, typically a vector or a column from a data frame, and `...` is a list of optional arguments that affects what we see. Unlike the previous plots, we cannot pass to `barplot()` our raw data that consists of the number of orange M&Ms in each bag. Instead, we have to provide the data in the form of a table that gives the number of bags that contain 0, 1, 2, . . . up to the maximum number of orange M&Ms in any bag; we accomplish this using the `tabulate()` function. Because `tabulate()` only counts the frequency of positive integers, it will ignore any bags that do not have any orange M&Ms; adding one to the number of orange M&Ms in each bag by using `mm_data$orange + 1` ensures that every bag is counted. The argument `names.arg` allows us to provide categorical labels for the x-axis (and correct for the fact that we increased each value by 1). ```orange_table = tabulate(mm_data$orange + 1) barplot(orange_table, col = "orange", names.arg = seq(0, max(mm_data$orange), 1), xlab = "number of orange M&Ms", ylab = "number of bags")``` Creating a Histogram Using R To create a histogram in R we use the function `hist(x, ...)` where `x` is the object that holds our data, typically a vector or a column from a data frame, and `...` is a list of optional arguments that affects what we see.
In the example below, the option `main = NULL` suppresses the placing of a title above the plot, which otherwise is included by default. The option `right = TRUE` means the right-most value of a bin is included in that bin. Finally, although a histogram shows how individual values are distributed, it does not show the individual values themselves. The `rug(x)` function adds tick marks along the x-axis that show each individual value. `hist(mm_data$net_weight, col = "lightblue", xlab = "net weight of M&Ms (g)", right = TRUE, main = NULL)` `rug(mm_data$net_weight, lwd = 1.5)` By default, R uses an algorithm to determine how to set the size of bins. As shown in the following example, we can use the option `breaks` to specify the values of x where one bin ends and the next bin begins. `hist(mm_data$net_weight, col = "lightblue", xlab = "net weight of M&Ms (g)", breaks = seq(46, 52, 0.5), right = TRUE, main = NULL)` `rug(mm_data$net_weight, lwd = 1.5)`
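None of the examples above show how to save a plot for use outside of RStudio. Although you can use the Export button in RStudio's Plots pane, you also can do this from the console. Here is a minimal sketch that uses base R's `png()` graphics device; the file name and the dimensions are arbitrary choices.
```
# open a png graphics device; width and height are in pixels by default
png("net_weight_histogram.png", width = 600, height = 400)

# plotting commands issued now are written to the file instead of the Plots pane
hist(mm_data$net_weight, col = "lightblue", xlab = "net weight of M&Ms (g)", right = TRUE, main = NULL)
rug(mm_data$net_weight, lwd = 1.5)

# close the device to finish writing the file
dev.off()
```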
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/03%3A_Visualizing_Data/3.02%3A_Using_R_to_Visualize_Data.txt
As we saw in the last section, the functions to create dot charts, stripcharts, boxplots, barplots, and histograms have arguments that we can use to alter the appearance of the function’s output. For example, here is the full list of arguments available when we use` dotchart()`that control what the plot shows. `dotchart(x, labels = NULL, groups = NULL, gdata = NULL, cex = par("cex"), pt.cex = cex, pch = 21, gpch = 21, bg = par("bg"), color = par("fg"), gcolor = par("fg"), lcolor = "gray", xlim = range(x[is.finite(x)]), main = NULL, xlab = NULL, ylab = NULL, ...)` Each of the arguments has a default value, which means we need not specify the value for an argument unless we wish to change its value, as we did when we set` pch `to 19. The final argument of `...` indicates that we can change any of a long list of graphical parameters that control what we see when we use dotchart. Creating a Simple Scatterplot Using R One of the most common, and most important, visualizations in analytical chemistry is a scatterplot in which we are interested in the relationship, if any, between two measurement by plotting the values for one variable along the x-axis and the values for the other variable along y-axis. For this exercise, we will use some data from the Puget Sound Data Hoard that gives the mass and the diameter for 816 M&Ms obtained from a 14.0-oz bag of plain M&Ms, a 12.7-oz bag of peanut M&Ms, and a 12.7-oz bag of peanut butter M&Ms. Let’s read the data into R and store it in a data frame with the name `psmm_data`. You can download a copy of the data using this link saving it in your working directory. `psmm_data = read.csv("data/PugetSoundM&MData.csv")` We might expect that as the diameter of an M&M increases so will the mass of the M&M. We might also expect that the relationship between diameter and mass may depend on whether the M&Ms are plain, peanut, or peanut butter. So that we can access data for each type of M&M, let’s use the `which()` function to create vectors that designate the row numbers for each of the three types of M&Ms. `pb_id = which(psmm_data\$type == "peanut butter") ` `plain_id = which(psmm_data\$type == "plain") ` `peanut_id = which(psmm_data\$type == "peanut")` Typically we are interested in how one variable affects the other variable. We call the former the independent variable and place it on the x-axis and we call the latter the dependent variable and place it on the y-axis. Here we will use diameter as the independent variable and mass as the dependent variable. To create a scatterplot for the plain M&Ms we use the function` plot(x, y)` where` x` is the data to plot on the x-axis and` y `is the data to plot on the y-axis. `plot(x = psmm_data\$diameter[plain_id], y = psmm_data\$mass[plain_id])` Customizing a Plot Created Using R Although our scatterplot shows that the mass of a plain M&M increases as its diameter increases, it is not a particularly attractive plot. In addition to specifying x and y, the plot function allows us to pass additional arguments to customize our plot; here are some of these optional arguments: type = “option. This argument specifies how points are displayed; there are a number of options, but the most useful are “p” for points (this is the default), “l” for lines without points, “b” for both points and lines that do not touch the points, “o” for points and lines that pass through the points, “h” for histogram-like vertical lines, and “s” for stair steps; use “n” if you wish to suppress the points. pch = number. 
This argument selects the symbol used to plot the data, with the number assigned to each symbol shown below. The default option is 1, or an open circle. Symbols 15–20 are filled using the color of the symbol’s boundary, and symbols 21–25 can take a background color that is different from the symbol’s boundary. See later in this document for more details about setting colors. The figure below shows the different options. ```# code from http://www.sthda.com/english/wiki/r-...available-in-r oldPar = par() par(font = 2, mar = c(0.5, 0, 0, 0)) y = rev(c(rep(1, 6),rep(2, 5), rep(3, 5), rep(4, 5), rep(5, 5))) x = c(rep(1:5, 5), 6) plot(x, y, pch = 0:25, cex = 1.5, ylim = c(1, 5.5), xlim = c(1, 6.5), axes = FALSE, xlab = "", ylab = "", bg = "blue") text(x, y, labels = 0:25, pos = 3) par(mar = oldPar\$mar, font = oldPar\$font)``` lty = number. This argument specifies the type of line to draw; the options are 1 for a solid line (this is the default), 2 for a dashed line, 3 for a dotted line, 4 for a dot-dash line, 5 for a long-dash line, and 6 for a two-dash line. lwd = number. This argument sets the width of the line. The default is 1 and any other entry simply scales the width relative to the default; thus `lwd = 2` doubles the width and `lwd = 0.5` cuts the width in half. bty = “option. This argument specifies the type of box to draw around the plot; the options are “o” to draw all four sides (this is the default), “l” to draw on the left side and the bottom side only, “7” to draw on the top side and the right side only, “c” to draw all but the right side, “u” to draw all but the top side, “]” to draw all but the left side, and “n” to omit all four sides. axes = logical. This argument indicates whether the axes are drawn (TRUE) or not drawn (FALSE); the default is TRUE. xlim = c(begin, end). This argument sets the limits for the x-axis, overriding the default limits set by the `plot()` command. ylim = c(begin, end). This argument sets the limits for the y-axis, overriding the default limits set by the `plot()` command. xlab = “text. This argument specifies the label for the x-axis, overriding the default label set by the `plot()` command. ylab = “text. This argument specifies the label for the y-axis, overriding the default label set by the `plot()` command. main = “text. This argument specifies the main title, which is placed above the plot, overriding the default title set by the `plot()` command. sub = “text. This argument specifies the subtitle, which is placed below the plot, overriding the default subtitle set by the `plot()` command. cex = number. This argument controls the relative size of the symbols used to plot points. The default is 1 and any other entry simply scales the size relative to the default; thus `cex = 2` doubles the size and `cex = 0.5` cuts the size in half. cex.axis = number. This argument controls the relative size of the text used for the scale on both axes; see the entry above for cex for more details. cex.lab = number. This argument controls the relative size of the text used for the label on both axes; see the entry above for cex for more details. cex.main = number. This argument controls the relative size of the text used for the plot’s main title; see the entry above for cex for more details. cex.sub = number. This argument controls the relative size of the text used for the plot’s subtitle; see the entry above for cex for more details. col = number or “string. This argument controls the color of the symbols used to plot points. 
There are 657 available colors, for which the default is “black” or 24. You can see a list of colors (number and text string) by typing `colors()` in the console. col.axis = number or “string. This argument controls the color of the text used for the scale on both axes; see the entry above for col for more details. col.lab = number or “string. This argument controls the color of the text used for the label on both axes; see the entry above for col for more details. col.main = number or “string. This argument controls the color of the text used for the plot’s main title; see the entry above for col for more details. col.sub = number or “string. This argument controls the color of the text used for the plot’s subtitle; see the entry above for col for more details. bg = number or “string. This argument sets the background color for the plot symbols 21–25; see the entries above for pch and for col for more details. Let’s use some of these arguments to improve our scatterplot by adding some color to and adjusting the size of the symbols used to plot the data, and by adding a title and some more informative labels for the two axes. `plot(x = psmm_data\$diameter[plain_id], y = psmm_data\$mass[plain_id], xlab = "diameter of M&Ms", ylab = "mass of M&Ms", main = "Diameter and Mass of Plain M&Ms", pch = 19, cex = 0.5, col = "blue")` Modifying an Existing Plot Created Using R We can modify an existing plot in a number of useful ways, such as adding a new set of data, adding a reference line, adding a legend, adding text, and adding a set of grid lines; here are some of the things we can do: points(x, y, . . . ). This command is identical to the `plot()` command, but overlays the new points on the current plot instead of first erasing the previous plot. Note: the `points()` command can not re-scale the axes; thus, you must ensure that your original plot—created using the `plot()` command—has x-axis and y-axis limits that meet your needs. abline(h = number, . . . ). This command adds a horizontal line at `y = number` with the line’s color, type, and size set using the optional arguments. abline(v = number, . . . ). This command adds a vertical line at `x = number` with the line’s color, type, and size set using the optional arguments. abline(b = number, a = number, . . . ). This command adds a diagonal line defined by a slope (b) and a y-intercept (a); the line’s color, type, and size are set using the optional arguments. As we will see in Chapter 8, this is a useful command for displaying the results of a linear regression. legend(location, legend, . . . ). This command adds a legend to the current plot. The location is specified in one of two ways: • by giving the x and y coordinates for the legend’s upper-left corner using `x = number` and `y = number`) • by using location = “keyword” where the keyword is one of “topleft”, “top”, “topright”, “right”, “bottomright”, “bottom”, “bottomleft”, or “left”; the optional argument `inset = ``number`moves the legend in from the margin when using a keyword (it takes a value from 0 to 1 as a fraction of the plot’s area; the default is 0) The legend is added as a vector of character strings (one for each item in the legend), and any accompanying formatting, such as plot symbols, lines, or colors, are passed along as vectors of the same length; look carefully at the example at the end of this section to see how this command works. text(location, label, . . . ). This command adds the text given by “label” to the current plot. 
The location is specified by providing values for x and y using `x = number` and `y = number`. By default, the text is centered at its location; to set the text so that it is left-justified (which is easier to work with), add the argument `adj = c(0, NA)`. grid(col, lty, lwd). This command adds a set of grid lines to the plot using the color, line type, and line width defined by “col”, “lty”, and “lwd”, respectively. Here is an example of a figure in which we show how the diameter and mass vary as a function of the type of M&Ms, add a legend, add a grid, and add some text that identifies the source of the data. Note the use of the functions` max `and` min `to identify the limits needed to display results for all of the data. `# determine minimum and maximum values for diameter and mass so that we can ` `# set limits for the x-axis and y-axis that will allow plotting of all data ` `xmax = max(psmm_data\$diameter)` `xmin = min(psmm_data\$diameter)` `ymax = max(psmm_data\$mass) ` `ymin = min(psmm_data\$mass)` `# create the initial plot using data for plain M&Ms, xlim and ylim values ` `# ensure plot window will allow plotting of all ``data` `plot(x = psmm_data\$diameter[plain_id], y = psmm_data\$mass[plain_id], xlab = "diameter of M&Ms", ylab = "mass of M&Ms", main = "Diameter and Mass of M&Ms", pch = 19, cex = 0.65, col = "red", xlim = c(xmin, xmax), ylim = c(ymin, ymax))` `# add the data for the peanut and peanut butter M&Ms using points()` `points(x = psmm_data\$diameter[peanut_id], y = psmm_data\$mass[peanut_id], pch = 18, col = "brown", cex = 0.65)` `points(x = psmm_data\$diameter[pb_id], y = psmm_data\$mass[pb_id], pch = 17, col = "blue", cex = 0.65)` `# add a legend, gird, and explanatory text` `legend(x = "topleft", legend = c("plain", "peanut", "peanut butter"), col = c("red", "brown", "blue"), pch = c(19, 18, 17), bty = "n")` `grid(col = "gray")` `text(x = 16.5, y = 1, label = "data from University of Puget Sound Data Hoard", cex = 0.5)` Our new plot shows that the individual M&Ms are reasonably well separated from each other in the space created by the variables diameter and mass, although a few M&Ms encroach into the space occupied by other types of M&Ms. We also see that the distribution of plain M&Ms is much more compact than for peanut and peanut butter M&Ms, which makes sense given the likely variability in the size of individual peanuts and the softer consistency of peanut butter.
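The example above uses points(), legend(), grid(), and text(), but not abline(). As a final illustration, here is a minimal sketch, assuming the plot created above is still the active plot, that adds dashed reference lines at the mean diameter and the mean mass of the plain M&Ms.
```
# add a dashed horizontal line at the mean mass of the plain M&Ms
abline(h = mean(psmm_data$mass[plain_id]), lty = 2, lwd = 1.5, col = "red")

# add a dashed vertical line at the mean diameter of the plain M&Ms
abline(v = mean(psmm_data$diameter[plain_id]), lty = 2, lwd = 1.5, col = "red")
```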
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/03%3A_Visualizing_Data/3.03%3A_Creating_Plots_From_Scratch_in_R_Using_Base_Graphics.txt
1. When copper metal and powdered sulfur are placed in a crucible and ignited, the product is a sulfide with an empirical formula of CuxS. The value of x is determined by weighing the Cu and the S before ignition and finding the mass of CuxS when the reaction is complete (any excess sulfur leaves as SO2). The following table shows the Cu/S ratios from 62 such experiments (note that the values are organized from smallest-to-largest by rows). A copy of the data is available as a .csv file with data organized in a single column. 1.764 1.838 1.865 1.866 1.872 1.877 1.890 1.891 1.891 1.897 1.899 1.900 1.906 1.908 1.910 1.911 1.916 1.919 1.920 1.922 1.927 1.931 1.935 1.936 1.936 1.937 1.939 1.939 1.940 1.941 1.941 1.942 1.943 1.948 1.953 1.955 1.957 1.957 1.957 1.959 1.962 1.963 1.963 1.963 1.966 1.968 1.969 1.973 1.975 1.976 1.977 1.981 1.981 1.988 1.993 1.993 1.995 1.995 1.995 2.017 2.029 2.042 (a) Construct a boxplot for this data and comment on your results. (b) Construct a histogram and comment on your results. 2. Mizutani, Yabuki and Asai developed an electrochemical method for analyzing l-malate. As part of their study they analyzed a series of beverages using both their method and a standard spectrophotometric procedure based on a clinical kit purchased from Boerhinger Scientific. The following table summarizes their results. All values are in ppm. Sample Electrode Spectrophotometric Apple Juice 1 34.0 33.4 Apple Juice 2 22.6 28.4 Apple Juice 3 29.7 29.5 Apple Juice 4 24.9 24.8 Grape Juice 1 17.8 18.3 Grape Juice 2 14.8 15.4 Mixed Fruit Juice 1 8.6 8.5 Mixed Fruit Juice 2 31.4 31.9 White Wine 1 10.8 11.5 White Wine 2 17.3 17.6 White Wine 3 15.7 15.4 White Wine 4 18.4 18.3 Construct a scatterplot of this data, placing values for the electrochemical method on the x-axis and values for the spectrophotometric method on the y-axis. Use different symbols for the four types of beverages. The data in this problem are from Mizutani, F.; Yabuki, S.; Asai, M. Anal. Chim. Acta 1991, 245,145–150. A copy of the data is available as a .csv file. 3. Ten laboratories were asked to determine an analyte’s concentration of in three standard test samples. Following are the results, in μg/ml. Laboratory Sample 1 Sample 2 Sample 3 1 22.6 13.6 16.0 2 23.0 14.2 15.9 3 21.5 13.9 16.9 4 21.9 13.9 16.9 5 21.3 13.5 16.7 6 22.1 13.5 17.4 7 23.1 13.5 17.5 8 21.7 13.5 16.8 9 22.2 12.9 17.2 10 21.7 13.8 16.7 (a) Construct a single plot that contains separate stripcharts for each of the three samples. (b) Construct a single plot that contains separate boxplots for each of the three samples. The data in this problem are adapted from Steiner, E. H. “Planning and Analysis of Results of Collaborative Tests,” in Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists: Washington, D. C., 1975. A copy of the data is available as a .csv file. 4. Real-time quantitative PCR is an analytical method for determining trace amounts of DNA. During the analysis, each cycle doubles the amount of DNA. A probe species that fluoresces in the presence of DNA is added to the reaction mixture and the increase in fluorescence is monitored during the cycling. The cycle threshold, Ct, is the cycle when the fluorescence exceeds a threshold value. The data in the following table shows Ct values for three samples using real-time quantitative PCR. Each sample was analyzed 18 times. 
Sample X Sample Y Sample Z 24.24 25.14 24.41 28.06 22.97 23.43 23.97 24.57 27.21 27.77 22.93 23.66 24.44 24.49 27.02 28.74 22.95 28.79 24.79 24.68 26.81 28.35 23.12 23.77 23.92 24.45 26.64 28.80 23.59 23.98 24.53 24.48 27.63 27.99 23.37 23.56 24.95 24.30 28.42 28.21 24.17 22.80 24.76 24.60 25.16 28.00 23.48 23.29 25.18 24.57 28.53 28.21 23.80 23.86 Use two or more methods to analyze this data visually and write a brief report on your conclusions. The data in this problem is from Burns, M. J.; Nixon, G. J.; Foy, C. A.; Harris, N. BMC Biotechnol. 2005, 5:31 (open access publication). A copy of the data is available as a .csv file. 5. The file problem3_5.csv contains data for 1061 United States pennies organized into three columns: the year the penny was minted, the penny's mass (to four decimal places), and the location where the penny was minted (D = Denver and P = Philadelphia). Subset the data by year into three groups • pennies minted before 1982 • pennies minted during 1982 • pennies minted after 1982 Plot separate histograms for the masses of the pennies in each group and comment on your results. The data in this problem was collected by Jordan Katz at Denison University and is available at the Analytical Sciences Digital Library's Active Learning website. 6. Use the element data you created in Exercise 1.3.1 to create several visualizations of your choosing. At least one of your visualizations should be a scatterplot and one should be a boxplot. 7. Use the data set you created in Exercise 2.3.2 on the daily average NOX concentrations and daily average temperatures recorded at a roadside monitoring station located on Marylebone Road in Westminster. Use this data to prepare a scatterplot that shows the daily average NOX concentrations for January on the y-axis and the daily average temperature for January on the x-axis. Add to this plot a second scatterplot that shows the daily average NOX concentrations for July on the y-axis and the daily average temperature for July on the x-axis. Comment on your results. 8. Use this link to access a case study on data analysis and complete the nine investigations included in Part II: Ways to Visualize Data.
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/03%3A_Visualizing_Data/3.04%3A_Exercises.txt
In Chapter 3 we used data collected from 30 bags of M&Ms to explore ways to visualize data. Although a good visualization is a powerful tool for quickly examining our data qualitatively, inevitably we will need to be able to describe our data quantitatively as well. In this chapter we will consider ways to summarize our data using one or more statistical measures. 04: Summarizing Data In Chapter 3 we used data collected from 30 bags of M&Ms to explore different ways to visualize data. In this chapter we consider several ways to summarize data using the net weights of the same bags of M&Ms. Here is the raw data. Table $1$: Net Weights for 30 Bags of M&Ms. 49.287 48.870 51.250 48.692 48.777 46.405 49.693 49.391 48.196 47.326 50.974 50.081 47.841 48.377 47.004 50.037 48.599 48.625 48.395 51.730 50.405 47.305 49.477 48.027 48.212 51.682 50.802 49.055 46.577 48.317 Without completing any calculations, what conclusions can we make by just looking at this data? Here are a few: • All net weights are greater than 46 g and less than 52 g. • As we see in Figure $1$, a box-and-whisker plot (overlaid with a stripchart) and a histogram suggest that the distribution of the net weights is reasonably symmetric. • The absence of any points beyond the whiskers of the box-and-whisker plot suggests that there are no unusually large or unusually small net weights. Both visualizations provide a good qualitative picture of the data, suggesting that the individual results are scattered around some central value with more results closer to that central value than at a distance from it. Neither visualization, however, describes the data quantitatively. What we need is a convenient way to summarize the data by reporting where the data is centered and how varied the individual results are around that center. Where is the Center? There are two common ways to report the center of a data set: the mean and the median. The mean, $\overline{Y}$, is the numerical average obtained by adding together the results for all n observations and dividing by the number of observations $\overline{Y} = \frac{ \sum_{i = 1}^n Y_{i} } {n} = \frac{49.287 + 48.870 + \cdots + 48.317} {30} = 48.980 \text{ g} \nonumber$ The median, $\widetilde{Y}$, is the middle value after we order our observations from smallest-to-largest, as we show here for our data. Table $2$: The data from Table $1$ Sorted From Smallest-to-Largest in Value. 46.405 46.577 47.004 47.305 47.326 47.841 48.027 48.196 48.212 48.317 48.377 48.395 48.599 48.625 48.692 48.777 48.870 49.055 49.287 49.391 49.477 49.693 50.037 50.081 50.405 50.802 50.974 51.250 51.682 51.730 If we have an odd number of samples, then the median is simply the middle value, or $\widetilde{Y} = Y_{\frac{n + 1}{2}} \nonumber$ where n is the number of samples. If, as is the case here, n is even, then $\widetilde{Y} = \frac {Y_{\frac{n}{2}} + Y_{\frac{n}{2}+1}} {2} = \frac {48.692 + 48.777}{2} = 48.734 \text{ g} \nonumber$ When our data has a symmetrical distribution, as we believe is the case here, then the mean and the median will have similar values. What is the Variation of the Data About the Center? There are five common measures of the variation of data about its center: the variance, the standard deviation, the range, the interquartile range, and the median absolute deviation.
The variance, s2, is an average squared deviation of the individual observations relative to the mean $s^{2} = \frac { \sum_{i = 1}^n \big(Y_{i} - \overline{Y} \big)^{2} } {n - 1} = \frac { \big(49.287 - 48.980\big)^{2} + \cdots + \big(48.317 - 48.980\big)^{2} } {30 - 1} = 2.052 \nonumber$ and the standard deviation, s, is the square root of the variance, which gives it the same units as the mean. $s = \sqrt{\frac { \sum_{i = 1}^n \big(Y_{i} - \overline{Y} \big)^{2} } {n - 1}} = \sqrt{\frac { \big(49.287 - 48.980\big)^{2} + \cdots + \big(48.317 - 48.980\big)^{2} } {30 - 1}} = 1.432 \nonumber$ The range, w, is the difference between the largest and the smallest value in our data set. $w = 51.730 \text{ g} - 46.405 \text{ g} = 5.325 \text{ g} \nonumber$ The interquartile range, IQR, is the difference between the median of the bottom 25% of observations and the median of the top 25% of observations; that is, it provides a measure of the range of values that spans the middle 50% of observations. There is no single, standard formula for calculating the IQR, and different algorithms yield slightly different results. We will adopt the algorithm described here: 1. Divide the sorted data set in half; if there is an odd number of values, then remove the median for the complete data set. For our data, the lower half is Table $3$: The Lower Half of the Data in Table $2$. 46.405 46.577 47.004 47.305 47.326 47.841 48.027 48.196 48.212 48.317 48.377 48.395 48.599 48.625 48.692 and the upper half is Table $4$: The Upper Half of the Data in Table $2$. 48.777 48.870 49.055 49.287 49.391 49.477 49.693 50.037 50.081 50.405 50.802 50.974 51.250 51.682 51.730 2. Find FL, the median for the lower half of the data, which for our data is 48.196 g. 3. Find FU , the median for the upper half of the data, which for our data is 50.037 g. 4. The IQR is the difference between FU and FL. $F_{U} - F_{L} = 50.037 \text{ g} - 48.196 \text{ g} = 1.841 \text{ g} \nonumber$ The median absolute deviation, MAD, is the median of the absolute deviations of each observation from the median of all observations. To find the MAD for our set of 30 net weights, we first subtract the median from each sample in Table $1$. Table $5$: The Results of Subtracting the Median From Each Value in Table $1$. 0.5525 0.1355 2.5155 -0.0425 0.0425 -2.3295 0.9585 0.6565 -0.5385 -1.4085 2.2395 1.3465 -0.8935 -0.3575 -1.7305 1.3025 -0.1355 -0.1095 -0.3395 2.9955 1.6705 -1.4295 0.7425 -0.7075 -0.5225 2.9475 2.0675 0.3205 -2.1575 -0.4175 Next we take the absolute value of each difference and sort them from smallest-to-largest. Table $6$: The Data in Table $5$ After Taking the Absolute Value. 0.0425 0.0425 0.1095 0.1355 0.1355 0.3205 0.3395 0.3575 0.4175 0.5225 0.5385 0.5525 0.6565 0.7075 0.7425 0.8935 0.9585 1.3025 1.3465 1.4085 1.4295 1.6705 1.7305 2.0675 2.1575 2.2395 2.3295 2.5155 2.9475 2.9955 Finally, we report the median for these sorted values as $\frac{0.7425 + 0.8935}{2} = 0.818 \nonumber$ Robust vs. Non-Robust Measures of The Center and Variation About the Center A good question to ask is why we might desire more than one way to report the center of our data and the variation in our data about the center. Suppose that the result for the last of our 30 samples was reported as 483.17 instead of 48.317. Whether this is an accidental shifting of the decimal point or a true result is not relevant to us here; what matters is its effect on what we report. Here is a summary of the effect of this one value on each of our ways of summarizing our data. 
Table $7$: Effect on Summary Statistics of Changing Last Value in Table $1$ From 48.317 g to 483.17 g. statistic original data new data mean 48.980 63.475 median 48.734 48.824 variance 2.052 6285.938 standard deviation 1.433 79.280 range 5.325 436.765 IQR 1.841 1.885 MAD 0.818 0.926 Note that the mean, the variance, the standard deviation, and the range are very sensitive to the change in the last result, but the median, the IQR, and the MAD are not. The median, the IQR, and the MAD are considered robust statistics because they are less sensitive to an unusual result; the others are, of course, non-robust statistics. Both types of statistics have value to us, a point we will return to from time-to-time.
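To see this effect for yourself, the short R sketch below repeats the comparison on a small scale; the vector weights and its values are hypothetical and are used only for illustration.
# a small hypothetical data set used only for illustration
weights = c(48.3, 49.1, 47.8, 50.2, 48.9)
mean(weights)
median(weights)
# introduce a single erroneous result by shifting the decimal point of the last value
weights[5] = 489.0
mean(weights) # the mean shifts substantially
median(weights) # the median barely changes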
One of R’s strengths is its `stats` package, which provides access to a rich body of tools for analyzing data. The package is part of R’s base installation and is available whenever you use R without the need to use library() to make it available. Almost all of the statistical functions we will use in this textbook are included in the `stats` package. Bringing Your Data Into R This section uses the M&M data in Table 1 of Chapter 3.1. You can download a copy of the data as a .csv spreadsheet using this link. Before we can summarize our data, we need to make it available to R. The code below uses the `read.csv()` function to read in the data from the file `MandM.csv` as a data frame. The text `"MandM.csv"` assumes the file is located in your working directory. `mm_data = read.csv("MandM.csv")` Finding the Central Tendency of Data Using R To report the mean of a data set we use the function `mean(x)` where `x` is the object that holds our data, typically a vector or a single column from a data frame. An important argument to this, and to many other functions, is how to handle missing or NA values. The default is to keep them, which causes the function to return NA when we try to calculate the mean. This is a reasonable default as it requires us to make note of the missing values and to set `na.rm = TRUE` if we wish to remove them from the calculation. As our vector of data is not missing any values, we do not need to include `na.rm = TRUE` here, but we do so to illustrate its importance. `mean(mm_data$net_weight, na.rm = TRUE)` `[1] 48.9803` To report the median of a data set we use the function `median(x)` where `x` is the object that holds our data, typically a vector or a single column from a data frame. `median(mm_data$net_weight, na.rm = TRUE)` `[1] 48.7345` Finding the Spread of Data Using R To report the variance of a data set we use the function `var(x)` where `x` is the object that holds our data, typically a vector or a single column from a data frame. `var(mm_data$net_weight, na.rm = TRUE)` `[1] 2.052068` To report the standard deviation we use the function `sd(x)` where `x` is the object that holds our data, typically a vector or a single column from a data frame. `sd(mm_data$net_weight, na.rm = TRUE)` `[1] 1.432504` To report the range we have to be creative as R’s `range()` function does not directly report the range. Instead, it returns the minimum as its first value and the maximum as its second value, which we can extract using the bracket operator and then use to compute the range. `range(mm_data$net_weight, na.rm = TRUE)[2] - range(mm_data$net_weight, na.rm = TRUE)[1]` `[1] 5.325` Another approach for calculating the range is to use R's `max()` and `min()` functions. `max(mm_data$net_weight) - min(mm_data$net_weight)` `[1] 5.325` To report the interquartile range we use the function `IQR(x)` where `x` is the object that holds our data, typically a vector or a single column from a data frame. The function has nine different algorithms for calculating the IQR, identified using `type` as an argument. To obtain an IQR equivalent to that generated by R’s `boxplot()` function, we use `type = 5` for an even number of values and `type = 7` for an odd number of values. `IQR(mm_data$net_weight, na.rm = TRUE, type = 5)` `[1] 1.841` To find the median absolute deviation we use the function `mad(x)` where `x` is the object that holds our data, typically a vector or a single column from a data frame.
The function includes a scaling constant, the default value for which does not match our description for calculating the MAD; the argument` constant = 1 `gives a result that is consistent with our description of the MAD. `mad(mm_data\$net_weight, na.rm = TRUE, constant = 1) ` `[1] 0.818` 4.03: Exercises 1. The following masses were recorded for 12 different U.S. quarters (all values given in grams): 5.683 5.549 5.548 5.552 5.620 5.536 5.539 5.684 5.551 5.552 5.554 5.632 Report the mean, median, variance, standard deviation, range, IQR, and MAD for this data. 2. A determination of acetaminophen in 10 separate tablets of Excedrin Extra Strength Pain Reliever gives the following results (in mg). The data in this problem are from Simonian, M. H.; Dinh, S.; Fray, L. A. Spectroscopy 1993, 8(6), 37–47. 224.3 240.4 246.3 239.4 253.1 261.7 229.4 255.5 235.5 249.7 Report the mean, median, variance, standard deviation, range, IQR, and MAD for this data. 3. Salem and Galan developed a new method to determine the amount of morphine hydrochloride in tablets. An analysis of tablets with different nominal dosages gave the following results (in mg/tablet). The data in this problem are from Salem, I. I.; Galan, A. C. Anal. Chim. Acta 1993, 283, 334–337. 100-mg tablets 60-mg tablets 30-mg tablets 10-mg tablets 99.17 54.21 28.51 9.06 94.31 55.62 26.25 8.83 95.92 57.40 25.92 9.08 94.55 57.51 28.62 93.83 52.59 24.93 For each dosage, report the mean, median, variance, standard deviation, range, IQR, and MAD for this data. 4. Use the data set you create in Exercise 2.32 for the daily roadside monitoring of NOX concentrations and air temperatures along Marlybone Road. Report the mean, median, variance, standard deviation, range, IQR, and MAD for the NOX concentrations in January. Examine a boxplot of the data and not that two values are flagged. Remove these values and recalculate the mean, median, variance, standard deviation, range, IQR, and MAD for this data. Compare these results to those calculated using all of the data and comment on your results. 5. Use this link to access a case study on data analysis and complete the three investigations included in Part III: Ways to Summarize Data.
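As a starting point for these exercises, here is a minimal sketch of a helper function that gathers the seven summary statistics used in this chapter; the function name summarize_data is our own and is not part of R.
# report the seven summary statistics for a numeric vector x
summarize_data = function(x) {
  c(mean = mean(x, na.rm = TRUE),
    median = median(x, na.rm = TRUE),
    variance = var(x, na.rm = TRUE),
    std_dev = sd(x, na.rm = TRUE),
    range = max(x, na.rm = TRUE) - min(x, na.rm = TRUE),
    IQR = IQR(x, na.rm = TRUE, type = 5), # type = 5 follows this chapter's suggestion for an even number of values
    MAD = mad(x, na.rm = TRUE, constant = 1))
}
# example using the 12 quarters from the first exercise
quarters = c(5.683, 5.549, 5.548, 5.552, 5.620, 5.536, 5.539, 5.684, 5.551, 5.552, 5.554, 5.632)
summarize_data(quarters)
For a data set with an odd number of values, change the IQR's type argument to 7, as suggested earlier in this chapter.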
When we measure something, such as the percentage of yellow M&Ms in a bag of M&Ms, we expect two things: • that there is an underlying “true” value that our measurements should approximate, and • that the results of individual measurements will show some variation about that "true" value Visualizations of data—such as dot plots, stripcharts, box-and-whisker plots, bar plots, histograms, and scatterplots—often suggest there is an underlying structure to our data. For example, we saw in Chapter 3 that the distribution of yellow M&Ms in bags of M&Ms is more or less symmetrical around its median, while the distribution of orange M&Ms is skewed toward higher values. This underlying structure, or distribution, of our data is important because it affects how we choose to analyze our data. In this chapter we will take a closer look at several ways in which data are distributed. 05: The Distribution of Data Before we consider different types of distributions, let's define some key terms. You may wish, as well, to review the discussion of different types of data in Chapter 2. Populations and Samples A population includes every possible measurement we could make on a system, while a sample is the subset of a population on which we actually make measurements. These definitions are fluid. A single bag of M&Ms is a population if we are interested only in that specific bag, but it is but one sample from a box that contains a gross (144) of individual bags. That box, itself, can be a population, or it can be one sample from a much larger production lot. And so on. Discrete Distributions and Continuous Distributions In a discrete distribution the possible results take on a limited set of specific values that are independent of how we make our measurements. When we determine the number of yellow M&Ms in a bag, the results are limited to integer values. We may find 13 yellow M&Ms or 24 yellow M&Ms, but we cannot obtain a result of 15.43 yellow M&Ms. For a continuous distribution the result of a measurement can take on any possible value between a lower limit and an upper limit, even though our measuring device has a limited precision; thus, when we weigh a bag of M&Ms on a three-digit balance and obtain a result of 49.287 g we know that its true mass is greater than 49.2865... g and less than 49.2875... g. 5.02: Theoretical Models for the Distribution of Data There are four important types of distributions that we will consider in this chapter: the uniform distribution, the binomial distribution, the Poisson distribution, and the normal, or Gaussian, distribution. In Chapter 3 and Chapter 4 we used the analysis of bags of M&Ms to explore ways to visualize data and to summarize data. Here we will use the same data set to explore the distribution of data. Uniform Distribution In a uniform distribution, all outcomes are equally probable. Suppose the population of M&Ms has a uniform distribution. If this is the case, then, with six colors, we expect each color to appear with a probability of 1/6 or 16.7%. Figure $1$ shows a comparison of the theoretical results if we draw 1699 M&Ms—the total number of M&Ms in our sample of 30 bags—from a population with a uniform distribution (on the left) to the actual distribution of the 1699 M&Ms in our sample (on the right). It seems unlikely that the population of M&Ms has a uniform distribution of colors!
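To see what a uniform distribution of colors implies, the short sketch below simulates drawing 1699 M&Ms from a population in which each of the six colors is equally likely; the color names are assumptions used only for illustration.
# simulate drawing 1699 M&Ms from a population with a uniform distribution of six colors
colors = c("blue", "brown", "green", "orange", "red", "yellow")
draws = sample(colors, size = 1699, replace = TRUE)
# observed counts for the simulated draw
table(draws)
# expected count for each color under a uniform distribution
1699/6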
Binomial Distribution A binomial distribution shows the probability of obtaining a particular result in a fixed number of trials, where the odds of that result happening in a single trial are known. Mathematically, a binomial distribution is defined by the equation $P(X, N) = \frac {N!} {X! (N - X)!} \times p^{X} \times (1 - p)^{N - X} \nonumber$ where P(X,N) is the probability that the event happens X times in N trials, and where p is the probability that the event happens in a single trial. The binomial distribution has a theoretical mean, $\mu$, and a theoretical variance, $\sigma^2$, of $\mu = Np \quad \quad \quad \sigma^2 = Np(1 - p) \nonumber$ Figure $2$ compares the expected binomial distribution for drawing 0, 1, 2, 3, 4, or 5 yellow M&Ms in the first five M&Ms—assuming that the probability of drawing a yellow M&M is 435/1699, the ratio of the number of yellow M&Ms and the total number of M&Ms—to the actual distribution of results. The similarity between the theoretical and the actual results seems evident; in Chapter 6 we will consider ways to test this claim. Poisson Distribution The binomial distribution is useful if we wish to model the probability of finding a fixed number of yellow M&Ms in a sample of M&Ms of fixed size—such as the first five M&Ms that we draw from a bag—but not the probability of finding a fixed number of yellow M&Ms in a single bag because there is some variability in the total number of M&Ms per bag. A Poisson distribution gives the probability that a given number of events will occur in a fixed interval in time or space if the event has a known average rate and if each new event is independent of the preceding event. Mathematically a Poisson distribution is defined by the equation $P(X, \lambda) = \frac {e^{-\lambda} \lambda^X} {X !} \nonumber$ where $P(X, \lambda)$ is the probability that an event happens X times given the event’s average rate, $\lambda$. The Poisson distribution has a theoretical mean, $\mu$, and a theoretical variance, $\sigma^2$, that are each equal to $\lambda$. The bar plot in Figure $3$ shows the actual distribution of green M&Ms in 35 small bags of M&Ms (as reported by M. A. Xu-Friedman “Illustrating concepts of quantal analysis with an intuitive classroom model,” Adv. Physiol. Educ. 2013, 37, 112–116). Superimposed on the bar plot is the theoretical Poisson distribution based on their reported average rate of 3.4 green M&Ms per bag. The similarity between the theoretical and the actual results seems evident; in Chapter 6 we will consider ways to test this claim. Normal Distribution A uniform distribution, a binomial distribution, and a Poisson distribution predict the probability of a discrete event, such as the probability of finding exactly two green M&Ms in the next bag of M&Ms that we open. Not all of the data we collect is discrete. The net weights of bags of M&Ms is an example of continuous data as the mass of an individual bag is not restricted to a discrete set of allowed values. In many cases we can model continuous data using a normal (or Gaussian) distribution, which gives the probability of obtaining a particular outcome, P(x), from a population with a known mean, $\mu$, and a known variance, $\sigma^2$. 
Mathematically a normal distribution is defined by the equation $P(x) = \frac {1} {\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2/(2 \sigma^2)} \nonumber$ Figure $4$ shows the expected normal distribution for the net weights of our sample of 30 bags of M&Ms if we assume that their mean, $\overline{X}$, of 48.98 g and standard deviation, s, of 1.433 g are good predictors of the population’s mean, $\mu$, and standard deviation, $\sigma$. Given the small sample of 30 bags, the agreement between the model and the data seems reasonable. 5.03: The Central Limit Theorem Suppose we have a population for which one of its properties has a uniform distribution where every result between 0 and 1 is equally probable. If we analyze 10,000 samples we should not be surprised to find that the distribution of these 10000 results looks uniform, as shown by the histogram on the left side of Figure $1$. If we collect 1000 pooled samples—each of which consists of 10 individual samples for a total of 10,000 individual samples—and report the average results for these 1000 pooled samples, we see something interesting as their distribution, as shown by the histogram on the right, looks remarkably like a normal distribution. When we draw single samples from a uniform distribution, each possible outcome is equally likely, which is why we see the distribution on the left. When we draw a pooled sample that consists of 10 individual samples, however, the average values are more likely to be near the middle of the distribution’s range, as we see on the right, because the pooled sample likely includes values drawn from both the lower half and the upper half of the uniform distribution. This tendency for a normal distribution to emerge when we pool samples is known as the central limit theorem. As shown in Figure $2$, we see a similar effect with populations that follow a binomial distribution or a Poisson distribution. You might reasonably ask whether the central limit theorem is important as it is unlikely that we will complete 1000 analyses, each of which is the average of 10 individual trials. This is deceiving. When we acquire a sample of soil, for example, it consists of many individual particles each of which is an individual sample of the soil. Our analysis of this sample, therefore, is the mean for a large number of individual soil particles. Because of this, the central limit theorem is relevant.
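If you want to see the central limit theorem emerge for yourself, here is a minimal sketch of the simulation described above, using runif() to draw the individual samples from a uniform distribution between 0 and 1.
# each pooled sample is the mean of 10 individual samples drawn from a uniform distribution
pooled_means = replicate(1000, mean(runif(10, min = 0, max = 1)))
# the individual samples are uniformly distributed, but their means cluster near 0.5
hist(pooled_means, col = "lightblue", xlab = "mean of 10 individual samples", main = NULL)
Increasing the number of individual samples in each pooled sample narrows the distribution of the means even further.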
The base installation of R includes a variety of functions for working with uniform distributions, binomial distributions, Poisson distributions, and normal distributions. These functions come in four forms that take the general form xdist where dist is the type of distribution (unif for a uniform distribution, binom for a binomial distribution, pois for a Poisson distribution, and norm for a normal distribution), and where x defines the information we extract from the distribution. For example, the function dunif() returns the probability of obtaining a specific value drawn from a uniform distribution, the function pbinom() returns the probability of obtaining a result less than or equal to a defined value from a binomial distribution, the function qpois() returns the upper boundary that includes a defined percentage of results from a Poisson distribution, and the function rnorm() returns results drawn at random from a normal distribution. Modeling a Uniform Distribution Using R When you purchase a Class A 10.00-mL volumetric pipet it comes with a tolerance of ±0.02 mL, which is the manufacturer’s way of saying that the pipet’s true volume is no less than 9.98 mL and no greater than 10.02 mL. Suppose a manufacturer produces 10,000 pipets; how many might we expect to have a volume between 9.990 mL and 9.992 mL? A uniform distribution is the choice when the manufacturer provides a tolerance range without specifying a level of confidence and when there is no reason to believe that results near the center of the range are more likely than results at the ends of the range. To simulate a uniform distribution we use R’s runif(n, min, max) function, which returns n random values drawn from a uniform distribution defined by its minimum (min) and its maximum (max) limits. The result is shown in Figure $1$, where the dots, added using the points() function, show the theoretical uniform distribution at the midpoint of each of the histogram’s bins. # create vector of volumes for 10000 pipets drawn at random from uniform distribution pipet = runif(10000, 9.98, 10.02) # create histogram using 20 bins of size 0.002 mL pipet_hist = hist(pipet, breaks = seq(9.98, 10.02, 0.002), col = c("blue", "lightblue"), ylab = "number of pipets", xlab = "volume of pipet (mL)", main = NULL) # overlay points showing expected values for uniform distribution points(pipet_hist$mids, rep(10000/20, 20), pch = 19) Saving the histogram to the object pipet_hist allows us to retrieve the number of pipets in each of the histogram’s intervals; thus, there are 476 pipets with volumes between 9.990 mL and 9.992 mL, which is the sixth bar from the left edge of Figure $1$. pipet_hist$counts[6] [1] 476 Modeling a Binomial Distribution Using R Carbon has two stable, non-radioactive isotopes, 12C and 13C, with relative isotopic abundances of, respectively, 98.89% and 1.11%. Suppose we are working with cholesterol, C27H44O, which has 27 atoms of carbon. We can use the binomial distribution to model the expected distribution for the number of atoms of 13C in 1000 cholesterol molecules. To simulate the distribution we use R’s rbinom(n, size, prob) function, which returns n random values drawn from a binomial distribution defined by the size of our sample, which is the number of possible carbon atoms, and the isotopic abundance of 13C, which is its prob, or probability. The result is shown in Figure $2$, where the dots, added using the points() function, show the theoretical binomial distribution.
These theoretical values are calculated using the dbinom() function. The bar plot is assigned to the object chol_bar to provide access to the values of x when plotting the points. # create vector with 1000 values drawn at random from binomial distribution cholesterol = rbinom(1000, 27, 0.0111) # create bar plot of results; table(cholesterol) determines the number of cholesterol # molecules with 0, 1, 2... atoms of carbon-13; dividing by 1000 gives probability chol_bar = barplot(table(cholesterol)/1000, col = "lightblue", ylim = c(0,1), xlab = "number of atoms of carbon-13", ylab = "probability") # theoretical results for binomial distribution of carbon-13 in cholesterol chol_binom = dbinom(seq(0,27,1), 27, 0.0111) # overlay theoretical results for binomial distribution points(x = chol_bar, y = chol_binom[1:length(chol_bar)], cex = 1.25, pch = 19) Modeling a Poisson Distribution Using R One measure of the quality of water in lakes used for recreational purposes is a fecal coliform test. In a typical test a sample of water is passed through a membrane filter, which is then placed on a medium to encourage growth of the bacteria and incubated for 24 hours at 44.5°C. The number of colonies of bacteria is reported. Suppose a lake has a natural background level of 5 colonies per 50 mL of water tested and must be closed for swimming if it exceeds 10 colonies per 50 mL of water tested. We can use a Poisson distribution to determine, over the course of a year of daily testing, the probability that a test will exceed this limit even though the lake’s true fecal coliform count remains at its natural background level. To simulate the distribution we use R’s rpois(n, lambda) function, which returns n random values drawn from a Poisson distribution defined by lambda, which is its average incidence. Because we are interested in modeling out a year, n is set to 365 days. The result is shown in Figure $3$, where the dots, added using the points() function, show the theoretical Poisson distribution. These theoretical values are calculated using the dpois() function. The bar plot is assigned to the object coliform_bar to provide access to the values of x when plotting the points. # create vector of results drawn at random from Poisson distribution coliforms = rpois(365,5) # create table of simulated results coliform_table = table(coliforms) # create bar plot; ylim ensures there is some space above the plot's highest bar coliform_bar = barplot(coliform_table, ylim = c(0, 1.2 * max(coliform_table)), col = "lightblue") # theoretical results for Poisson distribution d_coliforms = dpois(seq(0,length(coliform_bar) - 1), 5) * 365 # overlay theoretical results for Poisson distribution points(coliform_bar, d_coliforms, pch = 19) To find the number of times our simulated results exceed the limit of 10 coliform colonies per 50 mL we use R’s which() function to identify within coliforms the values that are greater than 10 coliforms[which(coliforms > 10)] finding that this happens 2 times over the course of a year. To find the theoretical probability that a single test will exceed the limit of 10 colonies per 50 mL of water, we use R’s ppois(q, lambda) function, where q is the value we wish to test, which returns the cumulative probability of obtaining a result less than or equal to q on any day; over the course of 365 days (1 - ppois(10,5))*365 [1] 4.998773 we expect that on 5 days the fecal coliform count will exceed the limit of 10.
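As a small extension of this example, we can estimate the probability that the limit is exceeded on at least one day during the year, assuming the daily tests are independent; this calculation is ours, not part of the original example.
# probability that a single day's test exceeds 10 colonies per 50 mL
p_day = 1 - ppois(10, 5)
# probability of at least one exceedance in 365 independent daily tests
1 - (1 - p_day)^365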
Modeling a Normal Distribution Using R If we place copper metal and an excess of powdered sulfur in a crucible and ignite it, copper sulfide forms with an empirical formula of CuxS. The value of x is determined by weighing the Cu and the S before ignition and finding the mass of CuxS when the reaction is complete (any excess sulfur leaves as the gas SO2). The following are the Cu/S ratios from 62 such experiments, of which just 3 are greater than 2. Because of the central limit theorem, we can use a normal distribution to model the data. Table $1$: Experimental Cu/S Ratios When Igniting Cu(s) and S(s). 1.764 1.838 1.890 1.891 1.906 1.908 1.920 1.922 1.936 1.937 1.941 1.942 1.957 1.957 1.963 1.963 1.975 1.976 1.993 1.993 2.029 2.042 1.866 1.872 1.891 1.897 1.899 1.910 1.911 1.916 1.927 1.931 1.935 1.939 1.939 1.940 1.943 1.948 1.953 1.957 1.959 1.962 1.966 1.968 1.969 1.977 1.981 1.981 1.995 1.995 1.865 1.995 1.877 1.900 1.919 1.936 1.941 1.955 1.963 1.973 1.988 2.017 Figure $4$ shows the distribution of the experimental results as a histogram overlaid with the theoretical normal distribution calculated assuming that $\mu$ is equal to the mean of the 62 samples and that $\sigma$ is equal to the standard deviation of the 62 samples. Both the experimental data and theoretical normal distribution suggest that most values of x are between 1.85 and 2.03. # enter the data into a vector with the name cuxs cuxs = c(1.764, 1.920, 1.957, 1.993, 1.891, 1.927, 1.943, 1.966, 1.995, 1.919, 1.988, 1.838, 1.922, 1.957, 1.993, 1.897, 1.931, 1.948, 1.968, 1.995, 1.936, 2.017, 1.890, 1.936, 1.963, 2.029, 1.899, 1.935, 1.953, 1.969, 1.865, 1.941, 1.891, 1.937, 1.963, 2.042, 1.910, 1.939, 1.957, 1.977, 1.995, 1.955, 1.906, 1.941, 1.975, 1.866, 1.911, 1.939, 1.959, 1.981, 1.877, 1.963, 1.908, 1.942, 1.976, 1.872, 1.916, 1.940, 1.962, 1.981, 1.900, 1.973) # sequence of ratios over which to display experimental results and theoretical distribution x = seq(1.7,2.2,0.02) # create histogram for experimental results cuxs_hist = hist(cuxs, breaks = x, col = c("blue", "lightblue"), xlab = "value for x", ylab = "frequency", main = NULL) # calculate theoretical results for normal distribution using the mean and the standard deviation # for the 62 samples as predictors for mu and sigma cuxs_theo = dnorm(cuxs_hist$mids, mean = mean(cuxs), sd = sd(cuxs)) # overlay results for theoretical normal distribution points(cuxs_hist$mids, cuxs_theo, pch = 19) 5.05: Exercises Behavioral and ecological factors influence dispersion. Uniform patterns of dispersion are generally a result of interactions between individuals like competition and territoriality. 1. In ecology a uniform distribution of an organism may result when the organism exhibits territorial behavior that keeps individual organisms spaced apart from each other. In one study, a portion of a field was divided into a $20 \times 20$ grid and a count made of the number of organisms in each unit of the grid, giving the results seen below. number of organisms in plot frequency 2 58 3 51 4 60 5 64 6 54 7 52 8 61 Create a plot similar to that in Figure 5.4.1 and comment on your results. 2. Chlorine has two isotopes, 35Cl (75.8% abundance) and 37Cl (24.2% abundance). Create a plot similar to that in Figure 5.4.2 for the molecule PCB 77, a chlorinated compound with the formula C12H6Cl4, and comment on your results. 3. A radioactive decay process has a background level of 3 emissions per minute and follows a Poisson distribution.
The number of emissions per minute was monitored for one hour, giving the following results. emissions per minute frequency of event 0 3 1 9 2 13 3 16 4 9 5 5 6 3 7 1 8 1 9 0 10 0 Use this data to create a plot similar to that in Figure 5.4.3 and comment on your results. 4. Using the penny data from Exercise 3.4.5, create a plot similar to that in Figure 5.4.4 using all pennies minted after 1982 and comment on your results. 5. Use this link to access a case study on data analysis and complete the first four investigations included in Part IV: Ways to Model Data.
In Chapter 5 we examined four ways in which the individual samples we collect and analyze are distributed about a central value: a uniform distribution, a binomial distribution, a Poisson distribution, and a normal distribution. We also learned that regardless of how individual samples are distributed, the distribution of averages for multiple samples often follows a normal distribution. This tendency for a normal distribution to emerge when we report averages for multiple samples is known as the central limit theorem. In this chapter we look more closely at the normal distribution—examining some of its properties—and consider how we can use these properties to say something more meaningful about our data than simply reporting a mean and a standard deviation. 06: Uncertainty of Data Mathematically a normal distribution is defined by the equation $P(x) = \frac {1} {\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2/(2 \sigma^2)} \nonumber$ where $P(x)$ is the probability of obtaining a result, $x$, from a population with a known mean, $\mu$, and a known standard deviation, $\sigma$. Figure $1$ shows the normal distribution curves for $\mu = 0$ with standard deviations of 5, 10, and 20. Because the equation for a normal distribution depends solely on the population’s mean, $\mu$, and its standard deviation, $\sigma$, the probability that a sample drawn from a population has a value between any two arbitrary limits is the same for all populations. For example, Figure $2$ shows that 68.26% of all samples drawn from a normally distributed population have values within the range $\mu \pm 1\sigma$, and only 0.14% have values greater than $\mu + 3\sigma$. This feature of a normal distribution—that the area under the curve is the same for all values of $\sigma$—allows us to create a probability table (see Appendix 1) based on the relative deviation, $z$, between a limit, x, and the mean, $\mu$. $z = \frac {x - \mu} {\sigma} \nonumber$ The value of $z$ gives the area under the curve between that limit and the distribution’s closest tail, as shown in Figure $3$. Example $1$ Suppose we know that $\mu$ is 5.5833 ppb Pb and that $\sigma$ is 0.0558 ppb Pb for a particular standard reference material (SRM). What is the probability that we will obtain a result that is greater than 5.650 ppb if we analyze a single, random sample drawn from the SRM? Solution Figure $4$ shows the normal distribution curve given values of 5.5833 ppb Pb for $\mu$ and of 0.0558 ppb Pb for $\sigma$. The shaded area in the figure is the probability of obtaining a sample with a concentration of Pb greater than 5.650 ppb. To determine the probability, we first calculate $z$ $z = \frac {x - \mu} {\sigma} = \frac {5.650 - 5.5833} {0.0558} = 1.195 \nonumber$ Next, we look up the probability in Appendix 1 for this value of $z$, which is the average of 0.1170 (for $z = 1.19$) and 0.1151 (for $z = 1.20$), or a probability of 0.1160; thus, we expect that 11.60% of samples will provide a result greater than 5.650 ppb Pb. Example $2$ Example $1$ considers a single limit—the probability that a result exceeds a single value. But what if we want to determine the probability that a sample has between 5.580 ppb Pb and 5.625 ppb Pb? Solution In this case we are interested in the shaded area shown in Figure $5$.
First, we calculate $z$ for the upper limit $z = \frac {5.625 - 5.5833} {0.0558} = 0.747 \nonumber$ and then we calculate $z$ for the lower limit $z = \frac {5.580 - 5.5833} {0.0558} = -0.059 \nonumber$ Then, we look up the probability in Appendix 1 that a result will exceed our upper limit of 5.625, which is 0.2275, or 22.75%, and the probability that a result will be less than our lower limit of 5.580, which is 0.4765, or 47.65%. The total unshaded area is 70.4% of the total area, so the shaded area corresponds to a probability of $100.00 - 22.75 - 47.65 = 100.00 - 70.40 = 29.6 \% \nonumber$
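If you prefer to check these results in R rather than in Appendix 1, the pnorm() function, which we use more fully later in this chapter, returns the same probabilities; a brief sketch:
# probability of a result greater than 5.650 ppb; compare with the 11.60% in Example 1
pnorm(5.650, mean = 5.5833, sd = 0.0558, lower.tail = FALSE)
# probability of a result between 5.580 ppb and 5.625 ppb; compare with the 29.6% in Example 2
pnorm(5.625, mean = 5.5833, sd = 0.0558) - pnorm(5.580, mean = 5.5833, sd = 0.0558)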
In the previous section, we learned how to predict the probability of obtaining a particular outcome if our data are normally distributed with a known $\mu$ and a known $\sigma$. For example, we estimated that 11.60% of samples drawn at random from a standard reference material will have a concentration of Pb greater than 5.650 ppb given a $\mu$ of 5.5833 ppb and a $\sigma$ of 0.0558 ppb. In essence, we determined how many standard deviations 5.650 is from $\mu$ and used this to define the probability given the standard area under a normal distribution curve. We can look at this in a different way by asking the following question: If we collect a single sample at random from a population with a known $\mu$ and a known $\sigma$, within what range of values might we reasonably expect to find the sample’s result 95% of the time? Rearranging the equation $z = \frac {x - \mu} {\sigma} \nonumber$ and solving for $x$ gives $x = \mu \pm z \sigma = 5.5833 \pm (1.96)(0.0558) = 5.5833 \pm 0.1094 \nonumber$ where a $z$ of 1.96 corresponds to 95% of the area under the curve; we call this a 95% confidence interval for a single sample. It generally is a poor idea to draw a conclusion from the result of a single experiment; instead, we usually collect several samples and ask the question this way: If we collect $n$ random samples from a population with a known $\mu$ and a known $\sigma$, within what range of values might we reasonably expect to find the mean of these samples 95% of the time? We might reasonably expect that the standard deviation for the mean of several samples is smaller than the standard deviation for a set of individual samples; indeed it is and it is given as $\sigma_{\bar{x}} = \frac {\sigma} {\sqrt{n}} \nonumber$ where $\frac {\sigma} {\sqrt{n}}$ is called the standard error of the mean. For example, if we collect three samples from the standard reference material described above, then we expect that the mean for these three samples will fall within a range $\bar{x} = \mu \pm z \sigma_{\bar{X}} = \mu \pm \frac {z \sigma} {\sqrt{n}} = 5.5833 \pm \frac{(1.96)(0.0558)} {\sqrt{3}} = 5.5833 \pm 0.0631 \nonumber$ that is $\pm 0.0631$ ppb around $\mu$, a range that is smaller than that of $\pm 0.1094$ ppb when we analyze individual samples. Note that the relative value to us of increasing the sample’s size diminishes as $n$ increases because of the square root term, as shown in Figure $1$. Our treatment thus far assumes we know $\mu$ and $\sigma$ for the parent population, but we rarely know these values; instead, we examine samples drawn from the parent population and ask the following question: Given the sample’s mean, $\bar{x}$, and its standard deviation, $s$, what is our best estimate of the population’s mean, $\mu$, and its standard deviation, $\sigma$. To make this estimate, we replace the population’s standard deviation, $\sigma$, with the standard deviation, $s$, for our samples, replace the population’s mean, $\mu$, with the mean, $\bar{x}$, for our samples, replace $z$ with $t$, where the value of $t$ depends on the number of samples, $n$ $\bar{x} = \mu \pm \frac{ts}{\sqrt{n}} \nonumber$ and then rearrange the equation to solve for $\mu$. $\mu = \bar{x} \pm \frac {ts} {\sqrt{n}} \nonumber$ We call this a confidence interval. 
Values for $t$ are available in tables (see Appendix 2) and depend on the probability level, $\alpha$, where $(1 - \alpha) \times 100$ is the confidence level, and the degrees of freedom, $n - 1$; note that for any probability level, $t \longrightarrow z$ as $n \longrightarrow \infty$. We need to give special attention to what this confidence interval means and to what it does not mean: • It does not mean that there is a 95% probability that the population’s mean is in the range $\mu = \bar{x} \pm \frac {ts} {\sqrt{n}}$ because our measurements may be biased or the normal distribution may be inappropriate for our system. • It does provide our best estimate of the population’s mean, $\mu$, given our analysis of $n$ samples drawn at random from the parent population; a different sample, however, will give a different confidence interval and, therefore, a different estimate for $\mu$.
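The dependence of the confidence interval's width on $\sqrt{n}$ is easy to explore; the short sketch below uses R's qnorm() function, which appears again later in this chapter, and the standard reference material's $\sigma$ of 0.0558 ppb from earlier in this section.
# half-width of the 95% confidence interval, z*sigma/sqrt(n), for several sample sizes
n = c(1, 3, 5, 10, 20)
abs(qnorm(0.025)) * 0.0558/sqrt(n)
The first two values reproduce the ±0.1094 ppb and ±0.0631 ppb intervals calculated above; the remaining values show the diminishing return of analyzing additional samples.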
Given a mean and a standard deviation, we can use R’s dnorm() function to plot the corresponding normal distribution dnorm(x, mean, sd) where mean is the value for $\mu$, sd is the value for $\sigma$, and x is a vector of values that spans the range of x-axis values we want to plot. # define the mean and the standard deviation mu = 12 sigma = 2 # create vector for values of x that span a sufficient range of # standard deviations on either side of the mean; here we use values # for x that are four standard deviations on either side of the mean x = seq(4, 20, 0.01) # use dnorm() to calculate probabilities for each x y = dnorm(x, mean = mu, sd = sigma) # plot normal distribution curve plot(x, y, type = "l", lwd = 2, col = "blue", ylab = "probability", xlab = "x") To annotate the normal distribution curve to show an area of interest to us, we use R’s polygon() function, as illustrated here for the normal distribution curve in Figure $1$, showing the area that includes values between 8 and 15. # define the mean and the standard deviation mu = 12 sigma = 2 # create vector for values of x that span a sufficient range of # standard deviations on either side of the mean; here we use values # for x that are four standard deviations on either side of the mean x = seq(4, 20, 0.01) # use dnorm() to calculate probabilities for each x y = dnorm(x, mean = mu, sd = sigma) # plot normal distribution curve; the options xaxs = "i" and yaxs = "i" # force the axes to begin and end at the limits of the data plot(x, y, type = "l", lwd = 2, col = "ivory4", ylab = "probability", xlab = "x", xaxs = "i", yaxs = "i") # create vector for values of x between a lower limit of 8 and an upper limit of 15 lowlim = 8 uplim = 15 dx = seq(lowlim, uplim, 0.01) # use polygon to fill in area; x and y are vectors of x,y coordinates # that define the shape that is then filled using the desired color polygon(x = c(lowlim, dx, uplim), y = c(0, dnorm(dx, mean = 12, sd = 2), 0), border = NA, col = "ivory4") To find the probability of obtaining a value within the shaded area, we use R’s pnorm() command pnorm(q, mean, sd, lower.tail) where q is a limit of interest, mean is the value for $\mu$, sd is the value for $\sigma$, and lower.tail is a logical value that indicates whether we return the probability for values below the limit (lower.tail = TRUE) or for values above the limit (lower.tail = FALSE). For example, to find the probability of obtaining a result between 8 and 15, given $\mu = 12$ and $\sigma = 2$, we use the following lines of code. # find probability of obtaining a result greater than 15 prob_greater15 = pnorm(15, mean = 12, sd = 2, lower.tail = FALSE) # find probability of obtaining a result less than 8 prob_less8 = pnorm(8, mean = 12, sd = 2, lower.tail = TRUE) # find probability of obtaining a result between 8 and 15 prob_between = 1 - prob_greater15 - prob_less8 # display results prob_greater15 [1] 0.0668072 prob_less8 [1] 0.02275013 prob_between [1] 0.9104427 Thus, 91.04% of values fall between the limits of 8 and 15.
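The same probability also is available directly as the difference between two cumulative probabilities, which avoids the intermediate objects:
pnorm(15, mean = 12, sd = 2) - pnorm(8, mean = 12, sd = 2)
[1] 0.9104427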
The confidence interval for a population’s mean, $\mu$, given an experimental mean, $\bar{x}$, for $n$ samples is defined as $\mu = \bar{x} \pm \frac {z \sigma} {\sqrt{n}} \nonumber$ if we know the population's standard deviation, $\sigma$, and as $\mu = \bar{x} \pm \frac {t s} {\sqrt{n}} \nonumber$ if we assume that the sample's standard deviation, $s$, is a reasonable predictor of the population's standard deviation. To find values for $z$ we use R's qnorm() function, which takes the form qnorm(p) where p is the probability on one side of the normal distribution curve that a result is not included within the confidence interval. For a 95% confidence interval, $p = 0.05/2 = 0.025$ because the total probability of 0.05 is equally divided between both sides of the normal distribution. To find $t$ we use R's qt() function, which takes the form qt(p, df) where p is defined as above and where df is the degrees of freedom or $n - 1$. For example, if we have a mean of $\bar{x} = 12$ for 10 samples with a known standard deviation of $\sigma = 2$, then for the 95% confidence interval the value of $z$ and the resulting confidence interval are # for a 95% confidence interval, alpha is 0.05 and the probability, p, on either end of the distribution is 0.025; # the value of z is positive on one side of the normal distribution and negative on the other side; # as we are interested in just the magnitude, not the sign, we use the abs() function to return the absolute value z = qnorm(0.025) conf_int_pop = abs(z * 2/sqrt(10)) conf_int_pop [1] 1.23959 Adding and subtracting this value from the mean defines the confidence interval, which, in this case, is $12 \pm 1.2$. If we have a mean of $\bar{x} = 12$ for 10 samples with an experimental standard deviation of $s = 2$, then for the 95% confidence interval the value of $t$ and the resulting confidence interval are t = qt(p = 0.025, 9) conf_int_samp = abs(t * 2/sqrt(10)) conf_int_samp [1] 1.430714 Adding and subtracting this value from the mean defines the confidence interval, which, in this case, is $12 \pm 1.4$. 6.05: Exercises 1. Berglund and Wichardt investigated the quantitative determination of Cr in high-alloy steels using a potentiometric titration of Cr(VI). Before the titration, samples of the steel were dissolved in acid and the chromium oxidized to Cr(VI) using peroxydisulfate. Shown here are the results (as %w/w Cr) for the analysis of a reference steel as reported in Berglund, B.; Wichardt, C. Anal. Chim. Acta 1990, 236, 399–410. 16.968 16.922 16.840 16.883 16.887 16.977 16.857 16.728 Calculate the mean, the standard deviation, and the 95% confidence interval about the mean. What does this confidence interval mean? 2. In Exercise 4.3.2 you determined the mean and the variance for the amount of acetaminophen in 10 separate tablets of Excedrin Extra Strength Pain Reliever; the results (in mg) are repeated here. The data in this problem are from Simonian, M. H.; Dinh, S.; Fray, L. A. Spectroscopy 1993, 8(6), 37–47. 224.3 240.4 246.3 239.4 253.1 261.7 229.4 255.5 235.5 249.7 Assuming that $\overline{X}$ and $s^2$ are good approximations for $\mu$ and for $\sigma^2$, and that the population is normally distributed, what percentage of the tablets are expected to contain more than the standard amount of 250 mg acetaminophen per tablet? 3. In Exercise 4.3.3 you determined the mean and the standard deviation for the amount of morphine hydrochloride in each of four different nominal dosage levels using data from Salem, I. I.; Galan, A. C. Anal. Chim. Acta 1993, 283, 334–337.
All results are in mg/tablet. 100-mg tablets 60-mg tablets 30-mg tablets 10-mg tablets 99.17 54.21 28.51 9.06 94.31 55.62 26.25 8.83 95.92 57.40 25.92 9.08 94.55 57.51 28.62 93.83 52.59 24.93 For each dosage level, and assuming that $\overline{X}$ and $s^2$ are good approximations for $\mu$ and for $\sigma^2$, and that the population is normally distributed, what percentage of tablets contain more than the nominal amount of morphine hydrochloride per tablet? 4. Use this link to access a case study on data analysis and complete the last three investigations included in Part IV: Ways to Model Data and the first three investigations included in Part V: Ways to Draw Conclusions from Data.
A confidence interval is a useful way to report the result of an analysis because it sets limits on the expected result. In the absence of determinate error, or bias, a confidence interval based on a sample’s mean indicates the range of values in which we expect to find the population’s mean. When we report a 95% confidence interval for the mass of a penny as 3.117 g ± 0.047 g, for example, we are stating that there is only a 5% probability that the penny’s expected mass is less than 3.070 g or more than 3.164 g. Because a confidence interval is a statement of probability, it allows us to consider comparative questions, such as these: “Are the results for a newly developed method to determine cholesterol in blood significantly different from those obtained using a standard method?” “Is there a significant variation in the composition of rainwater collected at different sites downwind from a coal-burning utility plant?” In this chapter we introduce a general approach that uses experimental data to ask and answer such questions, an approach we call significance testing. The reliability of significance testing recently has received much attention—see Nuzzo, R. “Scientific Method: Statistical Errors,” Nature, 2014, 506, 150–152 for a general discussion of the issues—so it is appropriate to begin this chapter by noting the need to ensure that our data and our research question are compatible so that we do not read more into a statistical analysis than our data allows; see Leek, J. T.; Peng, R. D. “What is the Question? Science, 2015, 347, 1314-1315 for a useful discussion of six common research questions. In the context of analytical chemistry, significance testing often accompanies an exploratory data analysis "Is there a reason to suspect that there is a difference between these two analytical methods when applied to a common sample?" or an inferential data analysis. "Is there a reason to suspect that there is a relationship between these two independent measurements?" A statistically significant result for these types of analytical research questions generally leads to the design of additional experiments that are better suited to making predictions or to explaining an underlying causal relationship. A significance test is the first step toward building a greater understanding of an analytical problem, not the final answer to that problem! 07: Testing the Significance of Data Let’s consider the following problem. To determine if a medication is effective in lowering blood glucose concentrations, we collect two sets of blood samples from a patient. We collect one set of samples immediately before we administer the medication, and we collect the second set of samples several hours later. After we analyze the samples, we report their respective means and variances. How do we decide if the medication was successful in lowering the patient’s concentration of blood glucose? One way to answer this question is to construct a normal distribution curve for each sample, and to compare the two curves to each other. Three possible outcomes are shown in Figure $1$. In Figure $\PageIndex{1a}$, there is a complete separation of the two normal distribution curves, which suggests the two samples are significantly different from each other. In Figure $\PageIndex{1b}$, the normal distribution curves for the two samples almost completely overlap each other, which suggests the difference between the samples is insignificant. Figure $\PageIndex{1c}$, however, presents us with a dilemma. 
Although the means for the two samples seem different, the overlap of their normal distribution curves suggests that a significant number of possible outcomes could belong to either distribution. In this case the best we can do is to make a statement about the probability that the samples are significantly different from each other. The process by which we determine the probability that there is a significant difference between two samples is called significance testing or hypothesis testing. Before we discuss specific examples let's first establish a general approach to conducting and interpreting a significance test. Constructing a Significance Test The purpose of a significance test is to determine whether the difference between two or more results is sufficiently large that we are comfortable stating that the difference cannot be explained by indeterminate errors. The first step in constructing a significance test is to state the problem as a yes or no question, such as “Is this medication effective at lowering a patient’s blood glucose levels?” A null hypothesis and an alternative hypothesis define the two possible answers to our yes or no question. The null hypothesis, H0, is that indeterminate errors are sufficient to explain any differences between our results. The alternative hypothesis, HA, is that the differences in our results are too great to be explained by random error and that they must be determinate in nature. We test the null hypothesis, which we either retain or reject. If we reject the null hypothesis, then we must accept the alternative hypothesis and conclude that the difference is significant. Failing to reject a null hypothesis is not the same as accepting it. We retain a null hypothesis because we have insufficient evidence to prove it incorrect. It is impossible to prove that a null hypothesis is true. This is an important point and one that is easy to forget. To appreciate this point let’s use this data for the mass of 100 circulating United States pennies. Table $1$. Masses for a Sample of 100 Circulating U. S. Pennies Penny Weight (g) Penny Weight (g) Penny Weight (g) Penny Weight (g) 1 3.126 26 3.073 51 3.101 76 3.086 2 3.140 27 3.084 52 3.049 77 3.123 3 3.092 28 3.148 53 3.082 78 3.115 4 3.095 29 3.047 54 3.142 79 3.055 5 3.080 30 3.121 55 3.082 80 3.057 6 3.065 31 3.116 56 3.066 81 3.097 7 3.117 32 3.005 57 3.128 82 3.066 8 3.034 33 3.115 58 3.112 83 3.113 9 3.126 34 3.103 59 3.085 84 3.102 10 3.057 35 3.086 60 3.086 85 3.033 11 3.053 36 3.103 61 3.084 86 3.112 12 3.099 37 3.049 62 3.104 87 3.103 13 3.065 38 2.998 63 3.107 88 3.198 14 3.059 39 3.063 64 3.093 89 3.103 15 3.068 40 3.055 65 3.126 90 3.126 16 3.060 41 3.181 66 3.138 91 3.111 17 3.078 42 3.108 67 3.131 92 3.126 18 3.125 43 3.114 68 3.120 93 3.052 19 3.090 44 3.121 69 3.100 94 3.113 20 3.100 45 3.105 70 3.099 95 3.085 21 3.055 46 3.078 71 3.097 96 3.117 22 3.105 47 3.147 72 3.091 97 3.142 23 3.063 48 3.104 73 3.077 98 3.031 24 3.083 49 3.146 74 3.178 99 3.083 25 3.065 50 3.095 75 3.054 100 3.104 After looking at the data we might propose the following null and alternative hypotheses. H0: The mass of a circulating U.S. penny is between 2.900 g and 3.200 g HA: The mass of a circulating U.S. penny may be less than 2.900 g or more than 3.200 g To test the null hypothesis we find a penny and determine its mass. If the penny’s mass is 2.512 g then we can reject the null hypothesis and accept the alternative hypothesis. Suppose that the penny’s mass is 3.162 g. 
Although this result increases our confidence in the null hypothesis, it does not prove that the null hypothesis is correct because the next penny we sample might weigh less than 2.900 g or more than 3.200 g. After we state the null and the alternative hypotheses, the second step is to choose a confidence level for the analysis. The confidence level defines the probability that we will incorrectly reject the null hypothesis when it is, in fact, true. We can express this as our confidence that we are correct in rejecting the null hypothesis (e.g. 95%), or as the probability that we are incorrect in rejecting the null hypothesis. For the latter, the confidence level is given as $\alpha$, where $\alpha = 1 - \frac {\text{confidence interval (%)}} {100} \nonumber$ For a 95% confidence level, $\alpha$ is 0.05. The third step is to calculate an appropriate test statistic and to compare it to a critical value. The test statistic’s critical value defines a breakpoint between values that lead us to reject or to retain the null hypothesis, which is the fourth, and final, step of a significance test. As we will see in the sections that follow, how we calculate the test statistic depends on what we are comparing. The four steps for a statistical analysis of data using a significance test: 1. Pose a question, and state the null hypothesis, H0, and the alternative hypothesis, HA. 2. Choose a confidence level for the statistical analysis. 3. Calculate an appropriate test statistic and compare it to a critical value. 4. Either retain the null hypothesis, or reject it and accept the alternative hypothesis. One-Tailed and Two-tailed Significance Tests Suppose we want to evaluate the accuracy of a new analytical method. We might use the method to analyze a Standard Reference Material that contains a known concentration of analyte, $\mu$. We analyze the standard several times, obtaining a mean value, $\overline{X}$, for the analyte’s concentration. Our null hypothesis is that there is no difference between $\overline{X}$ and $\mu$ $H_0 \text{: } \overline{X} = \mu \nonumber$ If we conduct the significance test at $\alpha = 0.05$, then we retain the null hypothesis if a 95% confidence interval around $\overline{X}$ contains $\mu$. If the alternative hypothesis is $H_\text{A} \text{: } \overline{X} \neq \mu \nonumber$ then we reject the null hypothesis and accept the alternative hypothesis if $\mu$ lies in the shaded areas at either end of the sample’s probability distribution curve (Figure $\PageIndex{2a}$). Each of the shaded areas accounts for 2.5% of the area under the probability distribution curve, for a total of 5%. This is a two-tailed significance test because we reject the null hypothesis for values of $\mu$ at either extreme of the sample’s probability distribution curve. We can write the alternative hypothesis in two additional ways $H_\text{A} \text{: } \overline{X} > \mu \nonumber$ $H_\text{A} \text{: } \overline{X} < \mu \nonumber$ rejecting the null hypothesis if $\mu$ falls within the shaded areas shown in Figure $\PageIndex{2b}$ or Figure $\PageIndex{2c}$, respectively. In each case the shaded area represents 5% of the area under the probability distribution curve. These are examples of a one-tailed significance test. For a fixed confidence level, a two-tailed significance test is the more conservative test because rejecting the null hypothesis requires a larger difference between the results we are comparing. 
In most situations we have no particular reason to expect that one result must be larger (or must be smaller) than the other result. This is the case, for example, when we evaluate the accuracy of a new analytical method. A two-tailed significance test, therefore, usually is the appropriate choice. We reserve a one-tailed significance test for a situation where we specifically are interested in whether one result is larger (or smaller) than the other result. For example, a one-tailed significance test is appropriate if we are evaluating a medication’s ability to lower blood glucose levels. In this case we are interested only in whether the glucose levels after we administer the medication are less than the glucose levels before we initiated treatment. If a patient’s blood glucose level is greater after we administer the medication, then we know the answer—the medication did not work—and we do not need to conduct a statistical analysis. Errors in Significance Testing Because a significance test relies on probability, its interpretation is subject to error. In a significance test, $\alpha$ defines the probability of rejecting a null hypothesis that is true. When we conduct a significance test at $\alpha = 0.05$, there is a 5% probability that we will incorrectly reject the null hypothesis. This is known as a type 1 error, and its risk is always equivalent to $\alpha$. A type 1 error in a two-tailed or a one-tailed significance tests corresponds to the shaded areas under the probability distribution curves in Figure $2$. A second type of error occurs when we retain a null hypothesis even though it is false. This is a type 2 error, and the probability of its occurrence is $\beta$. Unfortunately, in most cases we cannot calculate or estimate the value for $\beta$. The probability of a type 2 error, however, is inversely proportional to the probability of a type 1 error. Minimizing a type 1 error by decreasing $\alpha$ increases the likelihood of a type 2 error. When we choose a value for $\alpha$ we must compromise between these two types of error. Most of the examples in this text use a 95% confidence level ($\alpha = 0.05$) because this usually is a reasonable compromise between type 1 and type 2 errors for analytical work. It is not unusual, however, to use a more stringent (e.g. $\alpha = 0.01$) or a more lenient (e.g. $\alpha = 0.10$) confidence level when the situation calls for it.
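One way to appreciate the compromise between the two types of error is to look at how the critical value grows as we decrease $\alpha$; the sketch below uses R's qt() function and assumes, only for illustration, a two-tailed test with 10 degrees of freedom.
# two-tailed critical values of t for 10 degrees of freedom at three confidence levels
alpha = c(0.10, 0.05, 0.01)
qt(alpha/2, df = 10, lower.tail = FALSE)
A larger critical value makes it harder to reject the null hypothesis, which reduces the chance of a type 1 error but increases the chance of a type 2 error.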
A normal distribution is the most common distribution for the data we collect. Because the area between any two limits of a normal distribution curve is well defined, it is straightforward to construct and evaluate significance tests. Note You can review the properties of a normal distribution in Chapter 5 and Chapter 6. Comparing $\overline{X}$ to $\mu$ One way to validate a new analytical method is to analyze a sample that contains a known amount of analyte, $\mu$. To judge the method’s accuracy we analyze several portions of the sample, determine the average amount of analyte in the sample, $\overline{X}$, and use a significance test to compare $\overline{X}$ to $\mu$. The null hypothesis is that the difference between $\overline{X}$ and $\mu$ is explained by indeterminate errors that affect our determination of $\overline{X}$. The alternative hypothesis is that the difference between $\overline{X}$ and $\mu$ is too large to be explained by indeterminate error. $H_0 \text{: } \overline{X} = \mu \nonumber$ $H_A \text{: } \overline{X} \neq \mu \nonumber$ The test statistic is texp, which we substitute into the confidence interval for $\mu$ $\mu = \overline{X} \pm \frac {t_\text{exp} s} {\sqrt{n}} \nonumber$ Rearranging this equation and solving for $t_\text{exp}$ $t_\text{exp} = \frac {|\mu - \overline{X}| \sqrt{n}} {s} \nonumber$ gives the value for $t_\text{exp}$ when $\mu$ is at either the right edge or the left edge of the sample's confidence interval (Figure $\PageIndex{1a}$). To determine if we should retain or reject the null hypothesis, we compare the value of texp to a critical value, $t(\alpha, \nu)$, where $\alpha$ is the confidence level and $\nu$ is the degrees of freedom for the sample. The critical value $t(\alpha, \nu)$ defines the largest confidence interval explained by indeterminate error. If $t_\text{exp} > t(\alpha, \nu)$, then our sample’s confidence interval is greater than that explained by indeterminate errors (Figure $1$b). In this case, we reject the null hypothesis and accept the alternative hypothesis. If $t_\text{exp} \leq t(\alpha, \nu)$, then our sample’s confidence interval is smaller than that explained by indeterminate error, and we retain the null hypothesis (Figure $1$c). Example $1$ provides a typical application of this significance test, which is known as a t-test of $\overline{X}$ to $\mu$. You will find values for $t(\alpha, \nu)$ in Appendix 2. Example $1$ Before determining the amount of Na2CO3 in a sample, you decide to check your procedure by analyzing a standard sample that is 98.76% w/w Na2CO3. Five replicate determinations of the %w/w Na2CO3 in the standard give the following results $98.71 \% \quad 98.59 \% \quad 98.62 \% \quad 98.44 \% \quad 98.58 \%$ Using $\alpha = 0.05$, is there any evidence that the analysis is giving inaccurate results? Solution The mean and standard deviation for the five trials are $\overline{X} = 98.59 \quad \quad \quad s = 0.0973 \nonumber$ Because there is no reason to believe that the results for the standard must be larger or smaller than $\mu$, a two-tailed t-test is appropriate. The null hypothesis and alternative hypothesis are $H_0 \text{: } \overline{X} = \mu \quad \quad \quad H_\text{A} \text{: } \overline{X} \neq \mu \nonumber$ The test statistic, texp, is $t_\text{exp} = \frac {|\mu - \overline{X}|\sqrt{n}} {s} = \frac {|98.76 - 98.59| \sqrt{5}} {0.0973} = 3.91 \nonumber$ The critical value for t(0.05, 4) from Appendix 2 is 2.78.
Since texp is greater than t(0.05, 4), we reject the null hypothesis and accept the alternative hypothesis. At the 95% confidence level the difference between $\overline{X}$ and $\mu$ is too large to be explained by indeterminate sources of error, which suggests there is a determinate source of error that affects the analysis. Note There is another way to interpret the result of this t-test. Knowing that texp is 3.91 and that there are 4 degrees of freedom, we use Appendix 2 to estimate the value of $\alpha$ that corresponds to a t($\alpha$, 4) of 3.91. From Appendix 2, t(0.02, 4) is 3.75 and t(0.01, 4) is 4.60. Although we can reject the null hypothesis at the 98% confidence level, we cannot reject it at the 99% confidence level. For a discussion of the advantages of this approach, see J. A. C. Sterne and G. D. Smith “Sifting the evidence—what’s wrong with significance tests?” BMJ 2001, 322, 226–231. Earlier we made the point that we must exercise caution when we interpret the result of a statistical analysis. We will keep returning to this point because it is an important one. Having determined that a result is inaccurate, as we did in Example $1$, the next step is to identify and to correct the error. Before we expend time and money on this, however, we first should critically examine our data. For example, the smaller the value of s, the larger the value of texp. If the standard deviation for our analysis is unrealistically small, then the probability of a type 1 error, in which we incorrectly reject the null hypothesis, increases. Including a few additional replicate analyses of the standard and reevaluating the t-test may strengthen our evidence for a determinate error, or it may show us that there is no evidence for a determinate error. Comparing $s^2$ to $\sigma^2$ If we regularly analyze a particular sample, we may be able to establish an expected variance, $\sigma^2$, for the analysis. This often is the case, for example, in a clinical lab that analyzes hundreds of blood samples each day. A few replicate analyses of a single sample give a sample variance, s2, whose value may or may not differ significantly from $\sigma^2$. We can use an F-test to evaluate whether a difference between s2 and $\sigma^2$ is significant. The null hypothesis is $H_0 \text{: } s^2 = \sigma^2$ and the alternative hypothesis is $H_\text{A} \text{: } s^2 \neq \sigma^2$. The test statistic for evaluating the null hypothesis is Fexp, which is given as either $F_\text{exp} = \frac {s^2} {\sigma^2} \text{ if } s^2 > \sigma^2 \text{ or } F_\text{exp} = \frac {\sigma^2} {s^2} \text{ if } \sigma^2 > s^2 \nonumber$ depending on whether s2 is larger or smaller than $\sigma^2$. This way of defining Fexp ensures that its value is always greater than or equal to one. If the null hypothesis is true, then Fexp should equal one; however, because of indeterminate errors, Fexp usually is greater than one. A critical value, $F(\alpha, \nu_\text{num}, \nu_\text{den})$, is the largest value of Fexp that we can attribute to indeterminate error given the specified significance level, $\alpha$, and the degrees of freedom for the variance in the numerator, $\nu_\text{num}$, and the variance in the denominator, $\nu_\text{den}$. The degrees of freedom for s2 is n – 1, where n is the number of replicates used to determine the sample’s variance, and the degrees of freedom for $\sigma^2$ is defined as infinity, $\infty$. Critical values of F for $\alpha = 0.05$ are listed in Appendix 3 for both one-tailed and two-tailed F-tests. 
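Although R's built-in functions for significance testing are not introduced until Section 7.5, and R has no dedicated function for comparing $s^2$ to $\sigma^2$, the calculation described above is easy to carry out directly. The following minimal sketch uses a small set of hypothetical replicate results; the vector x and the value of sigma2 are assumptions chosen only for illustration. Note that qf() accepts Inf as the degrees of freedom for $\sigma^2$.
# minimal sketch: comparing a sample's variance to a known population variance
x = c(249, 252, 254, 248, 251)   # hypothetical replicate results
sigma2 = 25                      # hypothetical known population variance
s2 = var(x)                      # sample variance
Fexp = max(s2, sigma2)/min(s2, sigma2)   # defined so that Fexp >= 1
df_s2 = length(x) - 1            # degrees of freedom for s2; sigma2 has Inf degrees of freedom
# two-tailed critical value for alpha = 0.05; the order of the degrees of freedom
# matches whichever variance appears in the numerator of Fexp
Fcrit = if (s2 > sigma2) qf(1 - 0.05/2, df_s2, Inf) else qf(1 - 0.05/2, Inf, df_s2)
Fexp > Fcrit   # TRUE means we reject the null hypothesis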
Example $2$ A manufacturer’s process for analyzing aspirin tablets has a known variance of 25. A sample of 10 aspirin tablets is selected and analyzed for the amount of aspirin, yielding the following results in mg aspirin/tablet. $254 \quad 249 \quad 252 \quad 252 \quad 249 \quad 249 \quad 250 \quad 247 \quad 251 \quad 252$ Determine whether there is evidence of a significant difference between the sample’s variance and the expected variance at $\alpha = 0.05$. Solution The variance for the sample of 10 tablets is 4.3. The null hypothesis and alternative hypotheses are $H_0 \text{: } s^2 = \sigma^2 \quad \quad \quad H_\text{A} \text{: } s^2 \neq \sigma^2 \nonumber$ and the value for Fexp is $F_\text{exp} = \frac {\sigma^2} {s^2} = \frac {25} {4.3} = 5.8 \nonumber$ The critical value for F(0.05, $\infty$, 9) from Appendix 3 is 3.333. Since Fexp is greater than F(0.05, $\infty$, 9), we reject the null hypothesis and accept the alternative hypothesis that there is a significant difference between the sample’s variance and the expected variance. One explanation for the difference might be that the aspirin tablets were not selected randomly. Comparing Variances for Two Samples We can extend the F-test to compare the variances for two samples, A and B, by rewriting our equation for Fexp as $F_\text{exp} = \frac {s_A^2} {s_B^2} \nonumber$ defining A and B so that the value of Fexp is greater than or equal to 1. Example $3$ The table below shows results for two experiments to determine the mass of a circulating U.S. penny. Determine whether there is a difference in the variances of these analyses at $\alpha = 0.05$. First Experiment Second Experiment Penny Mass (g) Penny Mass (g) 1 3.080 1 3.052 2 3.094 2 3.141 3 3.107 3 3.083 4 3.056 4 3.083 5 3.112 5 3.048 6 3.174 7 3.198 Solution The standard deviations for the two experiments are 0.051 for the first experiment (A) and 0.037 for the second experiment (B). The null and alternative hypotheses are $H_0 \text{: } s_A^2 = s_B^2 \quad \quad \quad H_\text{A} \text{: } s_A^2 \neq s_B^2 \nonumber$ and the value of Fexp is $F_\text{exp} = \frac {s_A^2} {s_B^2} = \frac {(0.051)^2} {(0.037)^2} = \frac {0.00260} {0.00137} = 1.90 \nonumber$ From Appendix 3 the critical value for F(0.05, 6, 4) is 9.197. Because Fexp < F(0.05, 6, 4), we retain the null hypothesis. There is no evidence at $\alpha = 0.05$ to suggest that the difference in variances is significant. Comparing Means for Two Samples Three factors influence the result of an analysis: the method, the sample, and the analyst. We can study the influence of these factors by conducting experiments in which we change one factor while holding constant the other factors. For example, to compare two analytical methods we can have the same analyst apply each method to the same sample and then examine the resulting means. In a similar fashion, we can design experiments to compare two analysts or to compare two samples. Before we consider the significance tests for comparing the means of two samples, we need to understand the difference between unpaired data and paired data. This is a critical distinction and learning to distinguish between these two types of data is important. Here are two simple examples that highlight the difference between unpaired data and paired data. In each example the goal is to compare two balances by weighing pennies. • Example 1: We collect 10 pennies and weigh each penny on each balance. This is an example of paired data because we use the same 10 pennies to evaluate each balance. 
• Example 2: We collect 10 pennies and divide them into two groups of five pennies each. We weigh the pennies in the first group on one balance and we weigh the second group of pennies on the other balance. Note that no penny is weighed on both balances. This is an example of unpaired data because we evaluate each balance using a different sample of pennies. In both examples the samples of 10 pennies were drawn from the same population; the difference is how we sampled that population. We will learn why this distinction is important when we review the significance test for paired data; first, however, we present the significance test for unpaired data. Note One simple test for determining whether data are paired or unpaired is to look at the size of each sample. If the samples are of different size, then the data must be unpaired. The converse is not true. If two samples are of equal size, they may be paired or unpaired. Unpaired Data Consider two analyses, A and B, with means of $\overline{X}_A$ and $\overline{X}_B$, and standard deviations of sA and sB. The confidence intervals for $\mu_A$ and for $\mu_B$ are $\mu_A = \overline{X}_A \pm \frac {t s_A} {\sqrt{n_A}} \nonumber$ $\mu_B = \overline{X}_B \pm \frac {t s_B} {\sqrt{n_B}} \nonumber$ where nA and nB are the sample sizes for A and for B. Our null hypothesis, $H_0 \text{: } \mu_A = \mu_B$, is that any difference between $\mu_A$ and $\mu_B$ is the result of indeterminate errors that affect the analyses. The alternative hypothesis, $H_A \text{: } \mu_A \neq \mu_B$, is that the difference between $\mu_A$and $\mu_B$ is too large to be explained by indeterminate error. To derive an equation for texp, we assume that $\mu_A$ equals $\mu_B$, and combine the equations for the two confidence intervals $\overline{X}_A \pm \frac {t_\text{exp} s_A} {\sqrt{n_A}} = \overline{X}_B \pm \frac {t_\text{exp} s_B} {\sqrt{n_B}} \nonumber$ Solving for $|\overline{X}_A - \overline{X}_B|$ and using a propagation of uncertainty, gives $|\overline{X}_A - \overline{X}_B| = t_\text{exp} \times \sqrt{\frac {s_A^2} {n_A} + \frac {s_B^2} {n_B}} \nonumber$ Finally, we solve for texp $t_\text{exp} = \frac {|\overline{X}_A - \overline{X}_B|} {\sqrt{\frac {s_A^2} {n_A} + \frac {s_B^2} {n_B}}} \nonumber$ and compare it to a critical value, $t(\alpha, \nu)$, where $\alpha$ is the probability of a type 1 error, and $\nu$ is the degrees of freedom. Thus far our development of this t-test is similar to that for comparing $\overline{X}$ to $\mu$, and yet we do not have enough information to evaluate the t-test. Do you see the problem? With two independent sets of data it is unclear how many degrees of freedom we have. Suppose that the variances $s_A^2$ and $s_B^2$ provide estimates of the same $\sigma^2$. In this case we can replace $s_A^2$ and $s_B^2$ with a pooled variance, $s_\text{pool}^2$, that is a better estimate for the variance. Thus, our equation for $t_\text{exp}$ becomes $t_\text{exp} = \frac {|\overline{X}_A - \overline{X}_B|} {s_\text{pool} \times \sqrt{\frac {1} {n_A} + \frac {1} {n_B}}} = \frac {|\overline{X}_A - \overline{X}_B|} {s_\text{pool}} \times \sqrt{\frac {n_A n_B} {n_A + n_B}} \nonumber$ where spool, the pooled standard deviation, is $s_\text{pool} = \sqrt{\frac {(n_A - 1) s_A^2 + (n_B - 1)s_B^2} {n_A + n_B - 2}} \nonumber$ The denominator of this equation shows us that the degrees of freedom for a pooled standard deviation is $n_A + n_B - 2$, which also is the degrees of freedom for the t-test. 
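The pooled calculation outlined above is easy to reproduce in R using the penny data from Example 3. The sketch below is not a substitute for R's t.test() function, which we will meet in Section 7.5, but it shows where the numbers come from; the vector names sample1 and sample2 are chosen to match those used later in this chapter.
# minimal sketch: unpaired t-test using a pooled standard deviation (penny data from Example 3)
sample1 = c(3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198)
sample2 = c(3.052, 3.141, 3.083, 3.083, 3.048)
nA = length(sample1)
nB = length(sample2)
s_pool = sqrt(((nA - 1) * var(sample1) + (nB - 1) * var(sample2))/(nA + nB - 2))
t_exp = abs(mean(sample1) - mean(sample2))/s_pool * sqrt(nA * nB/(nA + nB))
t_crit = qt(1 - 0.05/2, nA + nB - 2)   # two-tailed critical value for alpha = 0.05
c(t_exp = t_exp, t_crit = t_crit)      # compare t_exp to t_crit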
Note that we lose two degrees of freedom because the calculations for $s_A^2$ and $s_B^2$ require the prior calculation of $\overline{X}_A$ and $\overline{X}_B$. Note So how do you determine if it is okay to pool the variances? Use an F-test. If $s_A^2$ and $s_B^2$ are significantly different, then we cannot pool the variances and instead calculate texp using the equation derived earlier that does not use $s_\text{pool}$. In this case, we find the degrees of freedom using the following imposing equation. $\nu = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {\left( \frac {s_A^2} {n_A} \right)^2} {n_A + 1} + \frac {\left( \frac {s_B^2} {n_B} \right)^2} {n_B + 1}} - 2 \nonumber$ Because the degrees of freedom must be an integer, we round to the nearest integer the value of $\nu$ obtained from this equation. Note The equation above for the degrees of freedom is from Miller, J.C.; Miller, J.N. Statistics for Analytical Chemistry, 2nd Ed., Ellis Horwood: Chichester, UK, 1988. In the 6th Edition, the authors note that several different equations have been suggested for the number of degrees of freedom for t when sA and sB differ, reflecting the fact that the determination of degrees of freedom is an approximation. An alternative equation—which is used by statistical software packages, such as R, Minitab, and Excel—is $\nu = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {\left( \frac {s_A^2} {n_A} \right)^2} {n_A - 1} + \frac {\left( \frac {s_B^2} {n_B} \right)^2} {n_B - 1}} = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {s_A^4} {n_A^2(n_A - 1)} + \frac {s_B^4} {n_B^2(n_B - 1)}} \nonumber$ For typical problems in analytical chemistry, the calculated degrees of freedom is reasonably insensitive to the choice of equation. Regardless of how we calculate texp, we reject the null hypothesis if texp is greater than $t(\alpha, \nu)$ and retain the null hypothesis if texp is less than or equal to $t(\alpha, \nu)$. Example $4$ Example $3$ provides results for two experiments to determine the mass of a circulating U.S. penny. Determine whether there is a difference in the means of these analyses at $\alpha = 0.05$. Solution First we use an F-test to determine whether we can pool the variances. We completed this analysis in Example $3$, finding no evidence of a significant difference, which means we can pool the standard deviations, obtaining $s_\text{pool} = \sqrt{\frac {(7 - 1)(0.051)^2 + (5 - 1)(0.037)^2} {7 + 5 - 2}} = 0.0459 \nonumber$ with 10 degrees of freedom. To compare the means we use the following null hypothesis and alternative hypotheses $H_0 \text{: } \mu_A = \mu_B \quad \quad \quad H_A \text{: } \mu_A \neq \mu_B \nonumber$ Because we are using the pooled standard deviation, we calculate texp as $t_\text{exp} = \frac {|3.117 - 3.081|} {0.0459} \times \sqrt{\frac {7 \times 5} {7 + 5}} = 1.34 \nonumber$ The critical value for t(0.05, 10), from Appendix 2, is 2.23. Because texp is less than t(0.05, 10) we retain the null hypothesis. For $\alpha = 0.05$ we do not have evidence that the two sets of pennies are significantly different. Example $5$ One method for determining the %w/w Na2CO3 in soda ash is to use an acid–base titration. When two analysts analyze the same sample of soda ash they obtain the results shown here. 
Analyst A: $86.82 \% \quad 87.04 \% \quad 86.93 \% \quad 87.01 \% \quad 86.20 \% \quad 87.00 \%$ Analyst B: $81.01 \% \quad 86.15 \% \quad 81.73 \% \quad 83.19 \% \quad 80.27 \% \quad 83.93 \%$ Determine whether the difference in the mean values is significant at $\alpha = 0.05$. Solution We begin by reporting the mean and standard deviation for each analyst. $\overline{X}_A = 86.83\% \quad \quad s_A = 0.32\% \nonumber$ $\overline{X}_B = 82.71\% \quad \quad s_B = 2.16\% \nonumber$ To determine whether we can use a pooled standard deviation, we first complete an F-test using the following null and alternative hypotheses. $H_0 \text{: } s_A^2 = s_B^2 \quad \quad \quad H_A \text{: } s_A^2 \neq s_B^2 \nonumber$ Calculating Fexp, we obtain a value of $F_\text{exp} = \frac {(2.16)^2} {(0.32)^2} = 45.6 \nonumber$ Because Fexp is larger than the critical value of 7.15 for F(0.05, 5, 5) from Appendix 3, we reject the null hypothesis and accept the alternative hypothesis that there is a significant difference between the variances; thus, we cannot calculate a pooled standard deviation. To compare the means for the two analysts we use the following null and alternative hypotheses. $H_0 \text{: } \overline{X}_A = \overline{X}_B \quad \quad \quad H_A \text{: } \overline{X}_A \neq \overline{X}_B \nonumber$ Because we cannot pool the standard deviations, we calculate texp as $t_\text{exp} = \frac {|86.83 - 82.71|} {\sqrt{\frac {(0.32)^2} {6} + \frac {(2.16)^2} {6}}} = 4.62 \nonumber$ and calculate the degrees of freedom as $\nu = \frac {\left( \frac {(0.32)^2} {6} + \frac {(2.16)^2} {6} \right)^2} {\frac {\left( \frac {(0.32)^2} {6} \right)^2} {6 + 1} + \frac {\left( \frac {(2.16)^2} {6} \right)^2} {6 + 1}} - 2 = 5.3 \approx 5 \nonumber$ From Appendix 2, the critical value for t(0.05, 5) is 2.57. Because texp is greater than t(0.05, 5) we reject the null hypothesis and accept the alternative hypothesis that the means for the two analysts are significantly different at $\alpha = 0.05$. Paired Data Suppose we are evaluating a new method for monitoring blood glucose concentrations in patients. An important part of evaluating a new method is to compare it to an established method. What is the best way to gather data for this study? Because the variation in the blood glucose levels amongst patients is large we may be unable to detect a small, but significant difference between the methods if we use different patients to gather data for each method. Using paired data, in which we analyze each patient’s blood using both methods, prevents a large variance within a population from adversely affecting a t-test of means. Note Typical blood glucose levels for most non-diabetic individuals range between 80–120 mg/dL (4.4–6.7 mM), rising to as high as 140 mg/dL (7.8 mM) shortly after eating. Higher levels are common for individuals who are pre-diabetic or diabetic. When we use paired data we first calculate the individual differences, di, between each sample's paired results. Using these individual differences, we then calculate the average difference, $\overline{d}$, and the standard deviation of the differences, sd. The null hypothesis, $H_0 \text{: } d = 0$, is that there is no difference between the two samples, and the alternative hypothesis, $H_A \text{: } d \neq 0$, is that the difference between the two samples is significant. 
The test statistic, texp, is derived from a confidence interval around $\overline{d}$ $t_\text{exp} = \frac {|\overline{d}| \sqrt{n}} {s_d} \nonumber$ where n is the number of paired samples. As is true for other forms of the t-test, we compare texp to $t(\alpha, \nu)$, where the degrees of freedom, $\nu$, is n – 1. If texp is greater than $t(\alpha, \nu)$, then we reject the null hypothesis and accept the alternative hypothesis. We retain the null hypothesis if texp is less than or equal to $t(\alpha, \nu)$. This is known as a paired t-test. Example $6$ Marecek et al. developed a new electrochemical method for the rapid determination of the concentration of the antibiotic monensin in fermentation vats [Marecek, V.; Janchenova, H.; Brezina, M.; Betti, M. Anal. Chim. Acta 1991, 244, 15–19]. The standard method for the analysis is a test for microbiological activity, which is both difficult to complete and time-consuming. Samples were collected from the fermentation vats at various times during production and analyzed for the concentration of monensin using both methods. The results, in parts per thousand (ppt), are reported in the following table. Sample Microbiological Electrochemical 1 129.5 132.3 2 89.6 91.0 3 76.6 73.6 4 52.2 58.2 5 110.8 104.2 6 50.4 49.9 7 72.4 82.1 8 141.4 154.1 9 75.0 73.4 10 34.1 38.1 11 60.3 60.1 Is there a significant difference between the methods at $\alpha = 0.05$? Solution Acquiring samples over an extended period of time introduces a substantial time-dependent change in the concentration of monensin. Because the variation in concentration between samples is so large, we use a paired t-test with the following null and alternative hypotheses. $H_0 \text{: } \overline{d} = 0 \quad \quad \quad H_A \text{: } \overline{d} \neq 0 \nonumber$ Defining the difference between the methods as $d_i = (X_\text{elect})_i - (X_\text{micro})_i \nonumber$ we calculate the difference for each sample. sample 1 2 3 4 5 6 7 8 9 10 11 $d_i$ 2.8 1.4 –3.0 6.0 –6.6 –0.5 9.7 12.7 –1.6 4.0 –0.2 The mean and the standard deviation for the differences are, respectively, 2.25 ppt and 5.63 ppt. The value of texp is $t_\text{exp} = \frac {|2.25| \sqrt{11}} {5.63} = 1.33 \nonumber$ which is smaller than the critical value of 2.23 for t(0.05, 10) from Appendix 2. We retain the null hypothesis and find no evidence for a significant difference in the methods at $\alpha = 0.05$. One important requirement for a paired t-test is that the determinate and the indeterminate errors that affect the analysis must be independent of the analyte’s concentration. If this is not the case, then a sample with an unusually high concentration of analyte will have an unusually large di. Including this sample in the calculation of $\overline{d}$ and sd gives a biased estimate for the expected mean and standard deviation. This rarely is a problem for samples that span a limited range of analyte concentrations, such as those in Example $4$ or Exercise $6$. When paired data span a wide range of concentrations, however, the magnitude of the determinate and indeterminate sources of error may not be independent of the analyte’s concentration; when this is true, a paired t-test may give misleading results because the paired data with the largest absolute determinate and indeterminate errors will dominate $\overline{d}$. In this situation a regression analysis, which is the subject of the next chapter, is a more appropriate method for comparing the data. 
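As a check on the calculation in Example 6, the paired t-test is easy to reproduce directly in R. The sketch below simply mirrors the equations above; the vector names anticipate those used with t.test() in Section 7.5.
# minimal sketch: paired t-test for the monensin data in Example 6
microbiological = c(129.5, 89.6, 76.6, 52.2, 110.8, 50.4, 72.4, 141.4, 75.0, 34.1, 60.3)
electrochemical = c(132.3, 91.0, 73.6, 58.2, 104.2, 49.9, 82.1, 154.1, 73.4, 38.1, 60.1)
d = electrochemical - microbiological   # individual differences
n = length(d)
t_exp = abs(mean(d)) * sqrt(n)/sd(d)
t_crit = qt(1 - 0.05/2, n - 1)          # two-tailed critical value for alpha = 0.05
c(t_exp = t_exp, t_crit = t_crit)       # compare t_exp to t_crit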
Note The importance of distinguishing between paired and unpaired data is worth examining more closely. The following is data from some work I completed with a colleague in which we were looking at the concentration of Zn in Lake Erie at the air-water interface and the sediment-water interface. sample site ppm Zn at air-water interface ppm Zn at the sediment-water interface 1 0.430 0.415 2 0.266 0.238 3 0.457 0.390 4 0.531 0.410 5 0.707 0.605 6 0.716 0.609 The mean and the standard deviation for the ppm Zn at the air-water interface are 0.5178 ppm and 0.1732 ppm, and the mean and the standard deviation for the ppm Zn at the sediment-water interface are 0.4445 ppm and 0.1418 ppm. We can use these values to draw normal distributions for both by letting the means and the standard deviations for the samples, $\overline{X}$ and $s$, serve as estimates for the means and the standard deviations for the population, $\mu$ and $\sigma$. As we see in the following figure the two distributions overlap strongly, suggesting that a t-test of their means is not likely to find evidence of a difference. And yet, we also see that for each site, the concentration of Zn at the sediment-water interface is less than that at the air-water interface. In this case, the variation in the concentration of Zn between individual sites is sufficiently large that it masks our ability to see the difference between the two interfaces. If we take the differences between the air-water and sediment-water interfaces, we have values of 0.015, 0.028, 0.067, 0.121, 0.102, and 0.107 ppm Zn, with a mean of 0.07333 ppm Zn and a standard deviation of 0.04410 ppm Zn. Superimposing all three normal distributions shows clearly that most of the normal distribution for the differences lies above zero, suggesting that a t-test might show evidence that the difference is significant. Outliers In Chapter 7.1 we examined a data set consisting of the masses of 100 circulating United States pennies. Table $1$ provides one more data set. Do you notice anything unusual in this data? Of the 100 pennies included in the earlier table, no penny has a mass of less than 3 g. In this table, however, the mass of one penny is less than 3 g. We might ask whether this penny’s mass is so different from the other pennies that it is in error. Table $1$. Mass (g) for Additional Sample of Circulating U. S. Pennies 3.067 2.514 3.094 3.049 3.048 3.109 3.039 3.079 3.102 A measurement that is not consistent with other measurements is called an outlier. An outlier might exist for many reasons: the outlier might belong to a different population (Is this a Canadian penny?), or the outlier might be a contaminated or an otherwise altered sample (Is the penny damaged or unusually dirty?), or the outlier may result from an error in the analysis (Did we forget to tare the balance?). Regardless of its source, the presence of an outlier compromises any meaningful analysis of our data. There are many significance tests that we can use to identify a potential outlier, three of which we present here. Dixon's Q-Test One of the most common significance tests for identifying an outlier is Dixon’s Q-test. The null hypothesis is that there are no outliers, and the alternative hypothesis is that there is an outlier. The Q-test compares the gap between the suspected outlier and its nearest numerical neighbor to the range of the entire data set (Figure $2$). 
The test statistic, Qexp, is $Q_\text{exp} = \frac {\text{gap}} {\text{range}} = \frac {|\text{outlier's value} - \text{nearest value}|} {\text{largest value} - \text{smallest value}} \nonumber$ This equation is appropriate for evaluating a single outlier. Other forms of Dixon’s Q-test allow its extension to detecting multiple outliers [Rorabacher, D. B. Anal. Chem. 1991, 63, 139–146]. The value of Qexp is compared to a critical value, $Q(\alpha, n)$, where $\alpha$ is the probability that we will reject a valid data point (a type 1 error) and n is the total number of data points. To protect against rejecting a valid data point, usually we apply the more conservative two-tailed Q-test, even though the possible outlier is the smallest or the largest value in the data set. If Qexp is greater than $Q(\alpha, n)$, then we reject the null hypothesis and may exclude the outlier. We retain the possible outlier when Qexp is less than or equal to $Q(\alpha, n)$. Table $2$ provides values for $Q(\alpha, n)$ for a data set that has 3–10 values. A more extensive table is in Appendix 4. Values for $Q(\alpha, n)$ assume an underlying normal distribution. Table $2$: Dixon's Q-Test n Q(0.05, n) 3 0.970 4 0.829 5 0.710 6 0.625 7 0.568 8 0.526 9 0.493 10 0.466 Grubb's Test Although Dixon’s Q-test is a common method for evaluating outliers, it is no longer favored by the International Standards Organization (ISO), which recommends Grubb’s test. There are several versions of Grubb’s test depending on the number of potential outliers. Here we will consider the case where there is a single suspected outlier. Note For details on this recommendation, see ISO 5725-2, “Accuracy (trueness and precision) of measurement methods and results–Part 2: basic methods for the determination of repeatability and reproducibility of a standard measurement method,” 1994. The test statistic for Grubb’s test, Gexp, is the distance between the sample’s mean, $\overline{X}$, and the potential outlier, $X_\text{out}$, in terms of the sample’s standard deviation, s. $G_\text{exp} = \frac {|X_\text{out} - \overline{X}|} {s} \nonumber$ We compare the value of Gexp to a critical value $G(\alpha, n)$, where $\alpha$ is the probability that we will reject a valid data point and n is the number of data points in the sample. If Gexp is greater than $G(\alpha, n)$, then we may reject the data point as an outlier, otherwise we retain the data point as part of the sample. Table $3$ provides values for G(0.05, n) for a sample containing 3–10 values. A more extensive table is in Appendix 5. Values for $G(\alpha, n)$ assume an underlying normal distribution. Table $3$: Grubb's Test n G(0.05, n) 3 1.115 4 1.481 5 1.715 6 1.887 7 2.020 8 2.126 9 2.215 10 2.290 Chauvenet's Criterion Our final method for identifying an outlier is Chauvenet’s criterion. Unlike Dixon’s Q-Test and Grubb’s test, you can apply this method to any distribution as long as you know how to calculate the probability for a particular outcome. Chauvenet’s criterion states that we can reject a data point if the probability of obtaining the data point’s value is less than $(2n)^{-1}$, where n is the size of the sample. For example, if n = 10, a result with a probability of less than $(2 \times 10)^{-1}$, or 0.05, is considered an outlier. 
To calculate a potential outlier’s probability we first calculate its standardized deviation, z $z = \frac {|X_\text{out} - \overline{X}|} {s} \nonumber$ where $X_\text{out}$ is the potential outlier, $\overline{X}$ is the sample’s mean and s is the sample’s standard deviation. Note that this equation is identical to the equation for Gexp in the Grubb’s test. For a normal distribution, we can find the probability of obtaining a value of z using the probability table in Appendix 1. Example $7$ Table $1$ contains the masses for nine circulating United States pennies. One entry, 2.514 g, appears to be an outlier. Determine if this penny is an outlier using a Q-test, Grubb’s test, and Chauvenet’s criterion. For the Q-test and Grubb’s test, let $\alpha = 0.05$. Solution For the Q-test the value for $Q_\text{exp}$ is $Q_\text{exp} = \frac {|2.514 - 3.039|} {3.109 - 2.514} = 0.882 \nonumber$ From Table $2$, the critical value for Q(0.05, 9) is 0.493. Because Qexp is greater than Q(0.05, 9), we can assume the penny with a mass of 2.514 g likely is an outlier. For Grubb’s test we first need the mean and the standard deviation, which are 3.011 g and 0.188 g, respectively. The value for Gexp is $G_\text{exp} = \frac {|2.514 - 3.011|} {0.188} = 2.64 \nonumber$ Using Table $3$, we find that the critical value for G(0.05, 9) is 2.215. Because Gexp is greater than G(0.05, 9), we can assume that the penny with a mass of 2.514 g likely is an outlier. For Chauvenet’s criterion, the critical probability is $(2 \times 9)^{-1}$, or 0.0556. The value of z is the same as Gexp, or 2.64. Using Appendix 1, the probability for z = 2.64 is 0.00415. Because the probability of obtaining a mass of 2.514 g is less than the critical probability, we can assume the penny with a mass of 2.514 g likely is an outlier. You should exercise caution when using a significance test for outliers because there is a chance you will reject a valid result. In addition, you should avoid rejecting an outlier if it leads to a precision that is much better than expected based on a propagation of uncertainty. Given these concerns it is not surprising that some statisticians caution against the removal of outliers [Deming, W. E. Statistical Analysis of Data; Wiley: New York, 1943 (republished by Dover: New York, 1961); p. 171]. Note You also can adopt a more stringent requirement for rejecting data. When using Grubb’s test, for example, the ISO 5725 guidelines suggest retaining a value if the probability for rejecting it is greater than $\alpha = 0.05$, and flagging a value as a “straggler” if the probability for rejecting it is between $\alpha = 0.05$ and $\alpha = 0.01$. A “straggler” is retained unless there is a compelling reason for its rejection. The guidelines recommend using $\alpha = 0.01$ as the minimum criterion for rejecting a possible outlier. On the other hand, testing for outliers can provide useful information if we try to understand the source of the suspected outlier. For example, the outlier in Table $1$ represents a significant change in the mass of a penny (an approximately 17% decrease in mass), which is the result of a change in the composition of the U.S. penny. In 1982 the composition of a U.S. penny changed from a brass alloy that was 95% w/w Cu and 5% w/w Zn (with a nominal mass of 3.1 g), to a pure zinc core covered with copper (with a nominal mass of 2.5 g) [Richardson, T. H. J. Chem. Educ. 1991, 68, 310–311]. The pennies in Table $1$, therefore, were drawn from different populations.
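Section 7.5 introduces functions from the outliers package for Dixon's Q-test and Grubb's test; Chauvenet's criterion has no equivalent function there, but all three test statistics are easy to calculate directly. The following minimal sketch reproduces the values in Example 7, assuming the suspected outlier is the smallest value in the data set; the vector name penny matches the code used later in this chapter.
# minimal sketch: Qexp, Gexp, and Chauvenet's probability for the penny data in Table 1
penny = c(3.067, 2.514, 3.094, 3.049, 3.048, 3.109, 3.039, 3.079, 3.102)
x_sort = sort(penny)
q_exp = (x_sort[2] - x_sort[1])/(max(penny) - min(penny))   # gap/range for the smallest value
g_exp = abs(min(penny) - mean(penny))/sd(penny)             # same as z for Chauvenet's criterion
prob = pnorm(g_exp, lower.tail = FALSE)                     # probability of a larger deviation
crit_prob = 1/(2 * length(penny))                           # Chauvenet's critical probability, (2n)^-1
c(q_exp = q_exp, g_exp = g_exp, prob = prob, crit_prob = crit_prob)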
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.02%3A_Significance_Tests_for_Normal_Distributions.txt
Consider the following data, which shows the stability of a reagent under different conditions for storing samples; all values are percent recoveries, so a result of 100 indicates that the reagent's concentration remains unchanged and that there was no degradation. trial/treatment A (total dark) B (subdued light) C (full light) 1 101 100 90 2 101 99 92 3 104 101 94 To determine if light has a significant effect on the reagent’s stability, we might choose to perform a series of t–tests, comparing all possible mean values; in this case we need three such tests: • compare A to B • compare A to C • compare B to C Each such test has a probability of a type I error of $\alpha_{test}$. The total probability of a type I error across k tests, $\alpha_{total}$, is $\alpha_{total} = 1 - (1 - \alpha_{test})^{k} \nonumber$ For three such tests using $\alpha = 0.05$, we have $\alpha_{total} = 1 - (1 - 0.05)^{3} = 0.143 \nonumber$ or a 14.3% probability of a type I error. The relationship between the number of conditions, n, and the number of tests, k, is $k = \frac {n(n-1)} {2} \nonumber$ which means that k grows quickly as n increases, as shown in Figure $1$, and that the magnitude of a type I error increases quickly as well, as seen in Figure $2$. We can compensate for this problem by decreasing $\alpha_{test}$ for each independent test so that $\alpha_{total}$ is equal to our desired probability; thus, for $n = 3$ we have $k = 3$, and to achieve an $\alpha_{total}$ of 0.05 each individual value of $\alpha_{test}$ must be $\alpha_{test} = 1 - (1 - 0.05)^{1/3} = 0.017 \nonumber$ Values of $\alpha_{test}$ decrease quickly, as seen in Figure $3$. The problem here is that we are searching for a significant difference on a pair-wise basis without any evidence that the overall variation in the data across all conditions (also known as treatments) is sufficiently large that it cannot be explained by experimental uncertainty (that is, random error) only. One way to determine if there is a systematic error in the data set, without identifying the source of the systematic error, is to compare the variation within each treatment to the variation between the treatments. We assume that the variation within each treatment reflects uncertainty in the analytical method (random errors) and that the variation between the treatments includes both the method’s uncertainty and any systematic errors in the individual treatments. If the variation between the treatments is significantly greater than the variation within the treatments, then a systematic error seems likely. We call this process an analysis of variance, or ANOVA; for one independent variable (the amount of light in this case), it is a one-way analysis of variance. The basic details of a one-way ANOVA calculation are as follows: Step 1: Treat the data as one large data set and calculate its mean and its variance, which we call the global mean, $\bar{\bar{x}}$, and the global variance, $\bar{\bar{s^{2}}}$. $\bar{\bar{x}} = \frac { \sum_{i=1}^h \sum_{j=1}^{n_{i}} x_{ij} } {N} \nonumber$ $\bar{\bar{s^{2}}} = \frac { \sum_{i=1}^h \sum_{j=1}^{n_{i}} (x_{ij} - \bar{\bar{x}})^{2} } {N - 1} \nonumber$ where $h$ is the number of treatments, $n_{i}$ is the number of replicates for the $i^{th}$ treatment, and $N$ is the total number of measurements. Step 2: Calculate the within-sample variance, $s_{w}^{2}$, using the mean for each treatment, $\bar{x}_{i}$, and the replicates for that treatment. 
$s_{w}^{2} = \frac { \sum_{i=1}^h \sum_{j=1}^{n_{i}} (x_{ij} - \bar{x}_{i})^{2} } {N - h} \nonumber$ Step 3: Calculate the between-sample variance, $s_{b}^{2}$, using the means for each treatment and the global mean $s_{b}^{2} = \frac { \sum_{i=1}^h \sum_{j=1}^{n_{i}} (\bar{x}_{i} - \bar{\bar{x}})^2 } {h - 1} = \frac {\sum_{i=1}^h n_{i} (\bar{x}_{i} - \bar{\bar{x}})^2 } {h - 1} \nonumber$ Step 4: If there is a significant difference between the treatments, then $s_{b}^{2}$ should be significantly greater than $s_{w}^{2}$, which we evaluate using a one-tailed $F$-test where $H_{0}: s_{b}^{2} = s_{w}^{2} \nonumber$ $H_{A}: s_{b}^{2} > s_{w}^{2} \nonumber$ Step 5: If there is a significant difference, then we estimate $\sigma_{rand}^{2}$ and $\sigma_{systematic}^{2}$ as $s_{w}^{2} \approx \sigma_{rand}^{2} \nonumber$ $s_{b}^{2} \approx \sigma_{rand}^{2} + \bar{n}\sigma_{systematic}^{2} \nonumber$ where $\bar{n}$ is the average number of replicates per treatment. This seems like a lot of work, but we can simplify the calculations by noting that $SS_{total} = \sum_{i=1}^h \sum_{j=1}^{n_{i}} (x_{ij} - \bar{\bar{x}})^{2} = \bar{\bar{s^{2}}}(N - 1) \nonumber$ $SS_{w} = \sum_{i=1}^h \sum_{j=1}^{n_{i}} (x_{ij} - \bar{x}_{i})^{2} \nonumber$ $SS_{b} = \sum_{i=1}^h n_{i} (\bar{x}_{i} - \bar{\bar{x}})^2 \nonumber$ $SS_{total} = SS_{w} + SS_{b} \nonumber$ and that $SS_{total}$ and $SS_{b}$ are relatively easy to calculate, where $SS$ is short for sum-of-squares. Table $1$ gathers these equations together. Table $1$. Summary of Calculations Needed to Complete an Analysis of Variance source of variance sum-of-squares degrees of freedom variance between samples $\sum_{i=1}^h n_{i} (\bar{x}_{i} - \bar{\bar{x}})^2$ $h - 1$ $s_{b}^{2} = \frac {SS_{b}} {h - 1}$ within samples $SS_{w} = SS_{total} - SS_{b}$ $N - h$ $s_{w}^{2} = \frac {SS_{w}} {N - h}$ total $\bar{\bar{s^{2}}}(N - 1)$ Example $1$ Chemical reagents have a limited shelf-life. To determine the effect of light on a reagent's stability, a freshly prepared solution is stored for one hour under three different light conditions: total dark, subdued light, and full light. At the end of one hour, each solution was analyzed three times, yielding the following percent recoveries; a recovery of 100% means that the measured concentration is the same as the actual concentration. The null hypothesis is that there is no difference between the different treatments, and the alternative hypothesis is that at least one of the treatments yields a result that is significantly different from the other treatments. trial/condition A (total dark) B (subdued light) C (full light) 1 101 100 90 2 101 99 92 3 104 101 94 Solution First, we treat the data as one large data set of nine values and calculate the global mean, $\bar{\bar{x}}$, and the global variance, $\bar{\bar{s^{2}}}$; these are 98 and 23.75, respectively. We also calculate the mean for each of the three treatments, obtaining a value of 102.0 for treatment A, 100.0 for treatment B, and 92.0 for treatment C. 
Next, we calculate the total sum-of-squares, $SS_{total}$ $\bar{\bar{s^{2}}}(N - 1) = 23.75(9 - 1) = 190.0 \nonumber$ the between sample sum-of-squares, $SS_{b}$ $SS_{b} = \sum_{i=1}^h n_{i} (\bar{x}_{i} - \bar{\bar{x}})^2 = 3(102.0 - 98.0)^2 + 3(100.0 - 98.0)^2 + 3(92.0 - 98.0)^2 = 168.0 \nonumber$ and the within sample sum-of-squares, $SS_{w}$ $SS_{w} = SS_{total} - SS_{b} = 190.0 - 168.0 = 22.0 \nonumber$ The variance between the treatments, $s_b^2$ is $\frac {SS_{b}} {h - 1} = \frac{168}{3 - 1} = 84.0 \nonumber$ and the variance within the treatments, $s_w^2$ is $\frac {SS_{w}} {N - h} = \frac{22.0}{9 - 3} = 3.67 \nonumber$ Finally, we complete an F-test, calculating Fexp $F_{exp} = \frac{s_b^2}{s_w^2} = \frac{84.0}{3.67} = 22.9 \nonumber$ and compare it to the critical value for F(0.05, 2, 6) = 5.143 from Appendix 3. Because Fexp > F(0.05, 2, 6), we reject the null hypothesis and accept the alternative hypothesis that at least one of the treatments yields a result that is significantly different from the other treatments. We can estimate the variance due to random errors as $\sigma_{random}^{2} = s_{w}^{2} = 3.67 \nonumber$ and the variance due to systematic errors as $\sigma_{systematic}^{2} = \frac {s_{b}^{2} - s_{w}^{2}} {\bar{n}} = \frac {84.0 - 3.67} {3} = 26.8 \nonumber$ Having found evidence for a significant difference between the treatments, we can use individual t-tests on pairs of treatments to show that the results for treatment C are significantly different from the other two treatments.
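Section 7.5 shows how to complete this analysis using R's aov() function; the following minimal sketch simply reproduces the sum-of-squares calculations above so that we can verify the hand calculation. The vector names a, b, c, and recovery are chosen to match those used later in this chapter.
# minimal sketch: one-way ANOVA sums-of-squares for the data in Example 1
a = c(101, 101, 104)
b = c(100, 98, 102)
c = c(90, 92, 94)
recovery = c(a, b, c)
N = length(recovery)   # total number of measurements
h = 3                  # number of treatments
n_i = 3                # replicates per treatment
ss_total = var(recovery) * (N - 1)
ss_b = n_i * sum((c(mean(a), mean(b), mean(c)) - mean(recovery))^2)
ss_w = ss_total - ss_b
F_exp = (ss_b/(h - 1))/(ss_w/(N - h))
F_crit = qf(1 - 0.05, h - 1, N - h)   # one-tailed critical value for alpha = 0.05
c(F_exp = F_exp, F_crit = F_crit)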
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.03%3A_Analysis_of_Variance.txt
The significance tests described in Chapter 7.2 assume that we can treat the individual samples as if they are drawn from a population that is normally distributed. Although this often is a reasonable assumption, there are times when it is a poor one, such as when there is a likely outlier that we are not inclined to remove. Non-parametric significance tests allow us to compare data sets, but without making implicit assumptions about our data's distribution. In this section we will consider two non-parametric tests, the Wilcoxon signed rank test, which we can use in place of a paired t-test, and the Wilcoxon rank sum test, which we can use in place of an unpaired t-test. Wilcoxon Signed Rank Test When we use paired data we first calculate the difference, di, between each sample's paired values. We then subtract the expected difference from each di and then sort these adjusted differences from smallest-to-largest without considering the sign. We then assign each difference a rank (1, 2, 3, ...) and add back its sign. If two or more entries have the same absolute difference, then we average their ranks. Finally, we add together the positive ranks and add together the negative ranks. If there is no difference in the two data sets, then we expect that these two sums should be similar in value. If the smaller of these two sums is less than a critical value, then there is reason to believe that the two data sets are significantly different from each other; see Appendix 6 for a table of critical values. Example $1$ Marecek et al. developed a new electrochemical method for the rapid determination of the concentration of the antibiotic monensin in fermentation vats [Marecek, V.; Janchenova, H.; Brezina, M.; Betti, M. Anal. Chim. Acta 1991, 244, 15–19]. The standard method for the analysis is a test for microbiological activity, which is both difficult to complete and time-consuming. Samples were collected from the fermentation vats at various times during production and analyzed for the concentration of monensin using both methods. The results, in parts per thousand (ppt), are reported in the following table. This is the same data as in Example 7.2.6. Sample Microbiological Electrochemical 1 129.5 132.3 2 89.6 91.0 3 76.6 73.6 4 52.2 58.2 5 110.8 104.2 6 50.4 49.9 7 72.4 82.1 8 141.4 154.1 9 75.0 73.4 10 34.1 38.1 11 60.3 60.1 Is there a significant difference between the methods at $\alpha = 0.05$? Solution Defining the difference between the methods as $d_i = (X_\text{elect})_i - (X_\text{micro})_i \nonumber$ we calculate the difference for each sample. sample 1 2 3 4 5 6 7 8 9 10 11 $d_i$ 2.8 1.4 –3.0 6.0 –6.6 –0.5 9.7 12.7 –1.6 4.0 –0.2 Next, we order the individual differences from smallest-to-largest without considering the sign $d_i$ –0.2 –0.5 1.4 –1.6 2.8 –3.0 4.0 6.0 –6.6 9.7 12.7 We then assign each individual difference a rank, retaining the sign; thus signed rank –1 –2 3 –4 5 –6 7 8 –9 10 11 The sum of the negative ranks is 22 and the sum of the positive ranks is 44. The critical value for 11 samples and $\alpha = 0.05$ is 10. As the smaller of our two sums, 22, is greater than 10, there is no evidence to suggest that there is a difference between the two methods. Wilcoxon Rank Sum Test The Wilcoxon rank sum test (also known as the Mann-Whitney U test) is used to compare two unpaired data sets. The values in the two data sets are sorted from smallest-to-largest, maintaining sample identity. After sorting, each value is assigned a rank (1, 2, 3, ...), again, maintaining sample identity. 
If two or more entries have the same value, then their ranks are averaged. Next, we add up the ranks for each sample. If there is no difference in the two data sets, then we expect that, after adjusting for differences in sample size, the two rank sums should be similar in value. To account for differences in the size of each sample, we subtract $\frac{n_i(n_i + 1)}{2} \nonumber$ from each sum where $n_i$ is the size of the sample. If the smaller of the two adjusted rank sums is less than a critical value, then there is reason to believe that the two data sets are significantly different from each other; see Appendix 7 for a table of critical values. Example $2$ To compare two production lots of aspirin tablets, you collect samples from each and analyze them, obtaining the following results (in mg aspirin/tablet). Lot 1: 256, 248, 245, 244, 248, 261 Lot 2: 241, 258, 241, 256, 254 Is there any evidence at $\alpha = 0.05$ that there is a significant difference between these two sets of results? Solution First, we sort the results from smallest-to-largest. To distinguish between the two samples, those from Lot 1 are shown in bold. 241, 241, 244, 245, 248, 248, 254, 256, 256, 258, 261 Next we assign ranks, identifying those samples from Lot 1 by underlining them. 1.5, 1.5, 3, 4, 5.5, 5.5, 7, 8.5, 8.5, 10, 11 The sum of the ranks for Lot 1 is 37.5 and the sum of the ranks for Lot 2 is 28.5. After adjusting for the size of each sample, we have $37.5 - \frac{6(6 + 1)}{2} = 16.5 \nonumber$ for Lot 1 and $28.5 - \frac{(5)(5+1)}{2} = 13.5 \nonumber$ for Lot 2. From Appendix 7, the critical value for $\alpha = 0.05$ is 3. As the smaller of our two adjusted rank sums, 13.5, is greater than 3, there is no evidence to suggest that there is a difference between the two lots.
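The ranking steps described in this section are easy to carry out directly in R, which is a useful check before turning to the wilcox.test() function introduced in the next section. The following minimal sketch reproduces the rank sums for Example 1 and Example 2; the vector names are chosen to match those used later in this chapter.
# minimal sketch: signed ranks for the monensin data in Example 1
microbiological = c(129.5, 89.6, 76.6, 52.2, 110.8, 50.4, 72.4, 141.4, 75.0, 34.1, 60.3)
electrochemical = c(132.3, 91.0, 73.6, 58.2, 104.2, 49.9, 82.1, 154.1, 73.4, 38.1, 60.1)
d = electrochemical - microbiological
signed_rank = sign(d) * rank(abs(d))   # rank() averages the ranks of tied values
c(pos = sum(signed_rank[signed_rank > 0]), neg = abs(sum(signed_rank[signed_rank < 0])))
# minimal sketch: adjusted rank sums for the aspirin data in Example 2
lot1 = c(256, 248, 245, 244, 248, 261)
lot2 = c(241, 258, 241, 256, 254)
all_ranks = rank(c(lot1, lot2))        # ranks of the combined data; ties are averaged
sum1 = sum(all_ranks[1:length(lot1)]) - length(lot1) * (length(lot1) + 1)/2
sum2 = sum(all_ranks[-(1:length(lot1))]) - length(lot2) * (length(lot2) + 1)/2
c(lot1 = sum1, lot2 = sum2)            # compare the smaller value to the critical value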
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.04%3A_Non-Parametric_Significance_Tests.txt
The base installation of R has functions for most of the significance tests covered in Chapter 7.2 - Chapter 7.4. Using R to Compare Variances The R function for comparing variances is var.test(), which takes the following form var.test(x, y, ratio = 1, alternative = c("two.sided", "less", "greater"), conf.level = 0.95, ...) where x and y are numeric vectors that contain the two samples, ratio is the expected ratio for the null hypothesis (which defaults to 1), alternative is a character string that states the alternative hypothesis (which defaults to two-sided or two-tailed), and conf.level gives the size of the confidence interval, which defaults to 0.95, or 95%, or $\alpha = 0.05$. We can use this function to compare the variances of two samples, $s_1^2$ vs $s_2^2$, but not the variance of a sample and the variance for a population, $s^2$ vs $\sigma^2$. Let's use R on the data from Example 7.2.3, which considers two sets of United States pennies. # create vectors to store the data sample1 = c(3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198) sample2 = c(3.052, 3.141, 3.083, 3.083, 3.048) # run two-sided variance test with alpha = 0.05 and null hypothesis that variances are equal var.test(x = sample1, y = sample2, ratio = 1, alternative = "two.sided", conf.level = 0.95) The code above yields the following output F test to compare two variances data: sample1 and sample2 F = 1.8726, num df = 6, denom df = 4, p-value = 0.5661 alternative hypothesis: true ratio of variances is not equal to 1 95 percent confidence interval: 0.2036028 11.6609726 sample estimates: ratio of variances 1.872598 Two parts of this output lead us to retain the null hypothesis of equal variances. First, the reported p-value of 0.5661 is larger than our critical value for $\alpha$ of 0.05, and second, the 95% confidence interval for the ratio of the variances, which runs from 0.204 to 11.7, includes the null hypothesis value of 1. R does not include a function for comparing $s^2$ to $\sigma^2$. Using R to Compare Means The R function for comparing means is t.test(), which takes the following form t.test(x, y = NULL, alternative = c("two.sided", "less", "greater"), mu = 0, paired = FALSE, var.equal = FALSE, conf.level = 0.95, ...) where x is a numeric vector that contains the data for one sample and y is an optional vector that contains data for a second sample, alternative is a character string that states the alternative hypothesis (which defaults to two-tailed), mu is either the population's expected mean or the expected difference in the means of the two samples, paired is a logical value that indicates whether the data is paired, var.equal is a logical value that indicates whether the variances for two samples are treated as equal or unequal (based on a prior var.test()), and conf.level gives the size of the confidence interval (which defaults to 0.95, or 95%, or $\alpha = 0.05$). Using R to Compare $\overline{X}$ to $\mu$ Let's use R on the data from Example 7.2.1, which considers the determination of the $\% \text{Na}_2 \text{CO}_3$ in a standard sample that is known to be 98.76 % w/w $\text{Na}_2 \text{CO}_3$. 
# create vector to store the data na2co3 = c(98.71, 98.59, 98.62, 98.44, 98.58) # run a two-sided t-test, using mu to define the expected mean; because the default values # for paired and var.equal are FALSE, we can omit them here t.test(x = na2co3, alternative = "two.sided", mu = 98.76, conf.level = 0.95) The code above yields the following output One Sample t-test data: na2co3 t = -3.9522, df = 4, p-value = 0.01679 alternative hypothesis: true mean is not equal to 98.76 95 percent confidence interval: 98.46717 98.70883 sample estimates: mean of x 98.588 Two parts of this output lead us to reject the null hypothesis that the experimental mean and the expected mean are the same. First, the reported p-value of 0.01679 is less than our critical value for $\alpha$ of 0.05, and second, the 95% confidence interval for the experimental mean of 98.588, which runs from 98.467 to 98.709, does not include the expected mean of 98.76. Using R to Compare Means for Two Samples When comparing the means for two samples, we have to be careful to consider whether the data is unpaired or paired, and for unpaired data we must determine whether we can pool the variances for the two samples. Unpaired Data Let's use R on the data from Example 7.2.4, which considers two sets of United States pennies. This data is unpaired and, as we showed earlier, there is no evidence to suggest that the variances of the two samples are different. # create vectors to store the data sample1 = c(3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198) sample2 = c(3.052, 3.141, 3.083, 3.083, 3.048) # run a two-sided t-test, setting mu to 0 as the null hypothesis is that the means are the same, and setting var.equal to TRUE t.test(x = sample1, y = sample2, alternative = "two.sided", mu = 0, var.equal = TRUE, conf.level = 0.95) The code above yields the following output Two Sample t-test data: sample1 and sample2 t = 1.3345, df = 10, p-value = 0.2116 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: -0.02403040 0.09580182 sample estimates: mean of x mean of y 3.117286 3.081400 Two parts of this output lead us to retain the null hypothesis of equal means. First, the reported p-value of 0.2116 is greater than our critical value for $\alpha$ of 0.05, and second, the 95% confidence interval for the difference in the experimental means, which runs from -0.0240 to 0.0958, includes the null hypothesis value of 0. Paired Data Let's use R on the data from Example 7.2.6, which compares two methods for determining the concentration of the antibiotic monensin in fermentation vats. # create vectors to store the data microbiological = c(129.5, 89.6, 76.6, 52.2, 110.8, 50.4, 72.4, 141.4, 75.0, 34.1, 60.3) electrochemical = c(132.3, 91.0, 73.6, 58.2, 104.2, 49.9, 82.1, 154.1, 73.4, 38.1, 60.1) # run a two-tailed t-test, setting mu to 0 as the null hypothesis is that the means are the same, and setting paired to TRUE t.test(x = microbiological, y = electrochemical, alternative = "two.sided", mu = 0, paired = TRUE, conf.level = 0.95) The code above yields the following output Paired t-test data: microbiological and electrochemical t = -1.3225, df = 10, p-value = 0.2155 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: -6.028684 1.537775 sample estimates: mean of the differences -2.245455 Two parts of this output lead us to retain the null hypothesis of equal means. 
First, the reported p-value of 0.2155 is greater than our critical value for $\alpha$ of 0.05, and second, the 95% confidence interval for the difference in the experimental means, which runs from -6.03 to 1.54, includes the null hypothesis value of 0. Using R to Detect Outliers The base installation of R does not include tests for outliers, but the outliers package provides functions for Dixon's Q-test and Grubb's test. To install the package, use the following lines of code install.packages("outliers") library(outliers) You only need to install the package once, but you must use library() to make the package available when you begin a new R session. Dixon's Q-Test The R function for Dixon's Q-test is dixon.test(), which takes the following form dixon.test(x, type, two.sided) where x is a numeric vector with the data we are considering, type defines the specific value(s) that we are testing (we will use type = 10, which tests for a single outlier on either end of the ranked data), and two.sided indicates whether we use a one-tailed or two-tailed test (we will use two.sided = FALSE as we are interested in whether the smallest value is too small or the largest value is too large). Let's use R on the data from Example 7.2.7, which considers the masses of a set of United States pennies. penny = c(3.067, 2.514, 3.094, 3.049, 3.048, 3.109, 3.039, 3.079, 3.102) dixon.test(x = penny, two.sided = FALSE, type = 10) The code above yields the following output Dixon test for outliers data: penny Q = 0.88235, p-value < 2.2e-16 alternative hypothesis: lowest value 2.514 is an outlier The reported p-value of less than $2.2 \times 10^{-16}$ is less than our critical value for $\alpha$ of 0.05, which suggests that the penny with a mass of 2.514 g is drawn from a different population than the other pennies. Grubb's Test The R function for Grubb's test is grubbs.test(), which takes the following form grubbs.test(x, type, two.sided) where x is a numeric vector with the data we are considering, type defines the specific value(s) that we are testing (we will use type = 10, which tests for a single outlier on either end of the ranked data), and two.sided indicates whether we use a one-tailed or two-tailed test (we will use two.sided = FALSE as we are interested in whether the smallest value is too small or the largest value is too large). Let's use R on the data from Example 7.2.7, which considers the masses of a set of United States pennies. penny = c(3.067, 2.514, 3.094, 3.049, 3.048, 3.109, 3.039, 3.079, 3.102) grubbs.test(x = penny, two.sided = FALSE, type = 10) The code above yields the following output Grubbs test for one outlier data: penny G = 2.64300, U = 0.01768, p-value = 9.69e-07 alternative hypothesis: lowest value 2.514 is an outlier The reported p-value of $9.69 \times 10^{-7}$ is less than our critical value for $\alpha$ of 0.05, which suggests that the penny with a mass of 2.514 g is drawn from a different population than the other pennies. Using R to Complete Non-Parametric Significance Tests The R function for completing the Wilcoxon signed rank test and the Wilcoxon rank sum test is wilcox.test(), which takes the following form wilcox.test(x, y = NULL, alternative = c("two.sided", "less", "greater"), mu = 0, paired = FALSE, conf.level = 0.95, ...) 
where x is a numeric vector that contains the data for one sample and y is an optional vector that contains data for a second sample, alternative is a character string that states the alternative hypothesis (which defaults to two-tailed), mu is either the population's expected mean or the expected difference in the means of the two samples, paired is a logical value that indicates whether the data is paired, and conf.level gives the size of the confidence interval (which defaults to 0.95, or 95%, or $\alpha = 0.05$). Using R to Complete a Wilcoxon Signed Rank Test Let's use R on the data from Example 7.4.1, which compares two methods for determining the concentration of the antibiotic monensin in fermentation vats. # create vectors to store the data microbiological = c(129.5, 89.6, 76.6, 52.2, 110.8, 50.4, 72.4, 141.4, 75.0, 34.1, 60.3) electrochemical = c(132.3, 91.0, 73.6, 58.2, 104.2, 49.9, 82.1, 154.1, 73.4, 38.1, 60.1) # run a two-tailed wilcoxon signed rank test, setting mu to 0 as the null hypothesis is that # the means are the same and setting paired to TRUE wilcox.test(x = microbiological, y = electrochemical, alternative = "two.sided", mu = 0, paired = TRUE, conf.level = 0.95) The code above yields the following output Wilcoxon signed rank test data: microbiological and electrochemical V = 22, p-value = 0.3652 alternative hypothesis: true location shift is not equal to 0 where the value V is the sum of the ranks for the positive differences, which here also is the smaller of the two rank sums. The reported p-value of 0.3652 is greater than our critical value for $\alpha$ of 0.05, which means we do not have evidence to suggest that there is a difference between the mean values for the two methods. Using R to Complete a Wilcoxon Rank Sum Test Let's use R on the data from Example 7.4.2, which compares the amount of aspirin in tablets from two production lots. # create vectors to store the data lot1 = c(256, 248, 245, 244, 248, 261) lot2 = c(241, 258, 241, 256, 254) # run a two-tailed wilcoxon rank sum test, setting mu to 0 as the null hypothesis is # that the means are the same, and setting paired to FALSE wilcox.test(x = lot1, y = lot2, alternative = "two.sided", mu = 0, paired = FALSE, conf.level = 0.95) The code above yields the following output Wilcoxon rank sum test with continuity correction data: lot1 and lot2 W = 16.5, p-value = 0.8541 alternative hypothesis: true location shift is not equal to 0 Warning message: In wilcox.test.default(x = lot1, y = lot2, alternative = "two.sided", : cannot compute exact p-value with ties where the value W is the rank sum for the first sample (lot1) after adjusting for its size, which here also is the larger of the two adjusted rank sums. The reported p-value of 0.8541 is greater than our critical value for $\alpha$ of 0.05, which means we do not have evidence to suggest that there is a difference between the mean values for the two lots. Note: we can ignore the warning message here as our calculated value for p is very large relative to an $\alpha$ of 0.05. Using R to Complete an Analysis of Variance Let's use the data in Example 7.3.1 to show how to complete an analysis of variance in R. First, we need to create individual numerical vectors for each treatment and then combine these vectors into a single numerical vector, which we will call recovery, that contains the results for each treatment. a = c(101, 101, 104) b = c(100, 98, 102) c = c(90, 92, 94) recovery = c(a, b, c) We also need to create a vector of character strings that identifies the individual treatments for each element in the vector recovery. 
treatment = c(rep("a", 3), rep("b", 3), rep("c", 3)) The R function for completing an analysis of variance is aov(), which takes the following form aov(formula, ...) where formula is a way of telling R to "explain this variable by using that variable." We will examine formulas in more detail in Chapter 8, but in this case the syntax is recovery ~ treatment, which means to model the recovery based on the treatment. In the code below, we assign the output of the aov() function to a variable so that we have access to the results of the analysis of variance aov_output = aov(recovery ~ treatment) which we then view using the summary() function summary(aov_output) Df Sum Sq Mean Sq F value Pr(>F) treatment 2 168 84.00 22.91 0.00155 ** Residuals 6 22 3.67 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Note that what we earlier called the between variance is identified here as the variance due to the treatments, and that what we earlier called the within variance is identified here as the residual variance. As we saw in Example 7.3.1, the value for Fexp is significantly greater than the critical value for F at $\alpha = 0.05$. Having found evidence that there is a significant difference between the treatments, we can use R's TukeyHSD() function to identify the source(s) of that difference (HSD stands for Honest Significant Difference), which takes the general form TukeyHSD(x, conf.level = 0.95, ...) where x is an object that contains the results of an analysis of variance. TukeyHSD(aov_output) Tukey multiple comparisons of means 95% family-wise confidence level Fit: aov(formula = recovery ~ treatment) $treatment diff lwr upr p adj b-a -2 -6.797161 2.797161 0.4554965 c-a -10 -14.797161 -5.202839 0.0016720 c-b -8 -12.797161 -3.202839 0.0052447 The table at the end of the output shows, for each pair of treatments, the difference in their mean values, the lower and the upper limits of the confidence interval for that difference, and the adjusted p-value for the null hypothesis that the two means are identical. In this case, we can see that the results for treatment c are significantly different from the results for treatments a and b. We also can view the results of the TukeyHSD analysis visually by passing it to R's plot() function. plot(TukeyHSD(aov_output))
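If you want to work with the Tukey results programmatically, for example to pull out just the adjusted p-values, you can assign the output of TukeyHSD() to an object first. The short sketch below is not part of the original example, and the object name tukey_output is ours.
# save the Tukey results so that we can extract individual pieces of the output
tukey_output = TukeyHSD(aov_output)
# the result is a list with one element for each factor in the model; here the only
# factor is treatment, stored as a matrix with columns diff, lwr, upr, and p adj
tukey_output$treatment[ , "p adj"]
Only the two comparisons that involve treatment c return adjusted p-values smaller than 0.05, consistent with the table above.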
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.05%3A_Using_R_for_Significance_Testing_and_Analysis_of_Variance.txt
1. Use this link to access a case study on data analysis and complete the last investigation in Part V: Ways to Draw Conclusions from Data. 2. Ketkar and co-workers developed an analytical method to determine trace levels of atmospheric gases. An analysis of a sample that is 40.0 parts per thousand (ppt) 2-chloroethylsulfide gave the following results. 43.3 34.8 31.9 37.8 34.4 31.9 42.1 33.6 35.3 Determine whether there is a significant difference between the experimental mean and the expected value at $\alpha = 0.05$. The data in this problem are from Ketkar, S. N.; Dulak, J. G.; Dheandhanou, S.; Fite, W. L. Anal. Chim. Acta 1991, 245, 267–270. 3. To test a spectrophotometer's accuracy, a solution of 60.06 ppm K2Cr2O7 in 5.0 mM H2SO4 is prepared and analyzed. This solution has an expected absorbance of 0.640 at 350.0 nm in a 1.0-cm cell when using 5.0 mM H2SO4 as a reagent blank. Several aliquots of the solution produce the following absorbance values. 0.639 0.638 0.640 0.639 0.640 0.639 0.638 Determine whether there is a significant difference between the experimental mean and the expected value at $\alpha = 0.01$. 4. Monna and co-workers used radioactive isotopes to date sediments from lakes and estuaries. To verify this method they analyzed a 208Po standard known to have an activity of 77.5 decays/min, obtaining the following results. 77.09 75.37 72.42 76.84 77.84 76.69 78.03 74.96 77.54 76.09 81.12 75.75 Determine whether there is a significant difference between the mean and the expected value at $\alpha = 0.05$. The data in this problem are from Monna, F.; Mathieu, D.; Marques, A. N.; Lancelot, J.; Bernat, M. Anal. Chim. Acta 1996, 330, 107–116. 5. A 2.6540-g sample of an iron ore, which is 53.51% w/w Fe, is dissolved in a small portion of concentrated HCl and diluted to volume in a 250-mL volumetric flask. A spectrophotometric determination of the concentration of Fe in this solution yields results of 5840, 5770, 5650, and 5660 ppm. Determine whether there is a significant difference between the experimental mean and the expected value at $\alpha = 0.05$. 6. Horvat and co-workers used atomic absorption spectroscopy to determine the concentration of Hg in coal fly ash. Of particular interest to the authors was developing an appropriate procedure for digesting samples and releasing the Hg for analysis. As part of their study they tested several reagents for digesting samples. Their results using HNO3 and using a 1 + 3 mixture of HNO3 and HCl are shown here. All concentrations are given as ppb Hg in the sample. HNO3: 161 165 160 167 166 1 + 3 HNO3 – HCl: 159 145 140 147 143 156 Determine whether there is a significant difference between these methods at $\alpha = 0.05$. The data in this problem are from Horvat, M.; Lupsina, V.; Pihlar, B. Anal. Chim. Acta 1991, 243, 71–79. 7. Lord Rayleigh, John William Strutt (1842-1919), was one of the most well-known scientists of the late nineteenth and early twentieth centuries, publishing over 440 papers and receiving the Nobel Prize in 1904 for the discovery of argon. An important turning point in Rayleigh's discovery of Ar was his experimental measurements of the density of N2. Rayleigh approached this experiment in two ways: first, by taking atmospheric air and removing O2 and H2; and second, by chemically producing N2 by decomposing nitrogen-containing compounds (NO, N2O, and NH4NO3) and again removing O2 and H2. The following table shows his results for the density of N2, as published in Proc. Roy. Soc.
1894, LV, 340 (publication 210); all values are the grams of gas at an equivalent volume, pressure, and temperature. atmospheric origin: 2.31017 2.30986 2.31010 2.31001 2.31024 2.31010 2.31028 chemical origin: 2.30143 2.29890 2.29816 2.30182 2.29869 2.29940 2.29849 2.29889 Explain why these data led Rayleigh to look for and to discover Ar. You can read more about this discovery here: Larsen, R. D. J. Chem. Educ. 1990, 67, 925–928. 8. Gács and Ferraroli reported a method for monitoring the concentration of SO2 in air. They compared their method to the standard method by analyzing urban air samples collected from a single location. Samples were collected by drawing air through a collection solution for 6 min. Shown here is a summary of their results, with SO2 concentrations reported in μL/m3. standard method: 21.62 22.20 24.27 23.54 24.25 23.09 21.02 new method: 21.54 20.51 22.31 21.30 24.62 25.72 21.54 Using an appropriate statistical test, determine whether there is any significant difference between the standard method and the new method at $\alpha = 0.05$. The data in this problem are from Gács, I.; Ferraroli, R. Anal. Chim. Acta 1992, 269, 177–185. 9. One way to check the accuracy of a spectrophotometer is to measure absorbances for a series of standard dichromate solutions obtained from the National Institute of Standards and Technology. Absorbances are measured at 257 nm and compared to the accepted values. The results obtained when testing a newly purchased spectrophotometer are shown here. Determine if the tested spectrophotometer is accurate at $\alpha = 0.05$. standard measured absorbance expected absorbance 1 0.2872 0.2871 2 0.5773 0.5760 3 0.8674 0.8677 4 1.1623 1.1608 5 1.4559 1.4565 10. Maskarinec and co-workers investigated the stability of volatile organics in environmental water samples. Of particular interest was establishing the proper conditions to maintain the sample's integrity between its collection and its analysis. Two preservatives were investigated—ascorbic acid and sodium bisulfate—and maximum holding times were determined for a number of volatile organics and water matrices. The following table shows results for the holding time (in days) of nine organic compounds in surface water. compound Ascorbic Acid Sodium Bisulfate methylene chloride 77 62 carbon disulfide 23 54 trichloroethane 52 51 benzene 62 42 1,1,2-trichloroethane 57 53 1,1,2,2-tetrachloroethane 33 85 tetrachloroethene 32 94 chlorobenzene 36 86 Determine whether there is a significant difference in the effectiveness of the two preservatives at $\alpha = 0.10$. The data in this problem are from Maskarinec, M. P.; Johnson, L. H.; Holladay, S. K.; Moody, R. L.; Bayne, C. K.; Jenkins, R. A. Environ. Sci. Technol. 1990, 24, 1665–1670. 11. Karstang and Kvalheim reported a new method, based on X-ray diffraction, to determine the weight percent of kaolinite in complex clay minerals. To test the method, nine samples containing known amounts of kaolinite were prepared and analyzed. The results (as % w/w kaolinite) are shown here. actual 5.0 10.0 20.0 40.0 50.0 60.0 80.0 90.0 95.0 found 6.8 11.7 19.8 40.5 53.6 61.7 78.9 91.7 94.7 Evaluate the accuracy of the method at $\alpha = 0.05$. The data in this problem are from Karstang, T. V.; Kvalheim, O. M. Anal. Chem. 1991, 63, 767–772. 12. Mizutani, Yabuki and Asai developed an electrochemical method for analyzing l-malate.
As part of their study they analyzed a series of beverages using both their method and a standard spectrophotometric procedure based on a clinical kit purchased from Boehringer Scientific. The following table summarizes their results. All values are in ppm. The data in this problem are from Mizutani, F.; Yabuki, S.; Asai, M. Anal. Chim. Acta 1991, 245, 145–150. Sample Electrode Spectrophotometric Apple Juice 1 34.0 33.4 Apple Juice 2 22.6 28.4 Apple Juice 3 29.7 29.5 Apple Juice 4 24.9 24.8 Grape Juice 1 17.8 18.3 Grape Juice 2 14.8 15.4 Mixed Fruit Juice 1 8.6 8.5 Mixed Fruit Juice 2 31.4 31.9 White Wine 1 10.8 11.5 White Wine 2 17.3 17.6 White Wine 3 15.7 15.4 White Wine 4 18.4 18.3 13. Alexiev and colleagues describe an improved photometric method for determining Fe3+ based on its ability to catalyze the oxidation of sulphanilic acid by KIO4. As part of their study, the concentration of Fe3+ in human serum samples was determined by the improved method and the standard method. The results, with concentrations in μmol/L, are shown in the following table. Sample Improved Method Standard Method 1 8.25 8.06 2 9.75 8.84 3 9.75 8.36 4 9.75 8.73 5 10.75 13.13 6 11.25 13.65 7 13.88 13.85 8 14.25 13.43 Determine whether there is a significant difference between the two methods at $\alpha = 0.05$. The data in this problem are from Alexiev, A.; Rubino, S.; Deyanova, M.; Stoyanova, A.; Sicilia, D.; Perez Bendito, D. Anal. Chim. Acta 1994, 295, 211–219. 14. Ten laboratories were asked to determine an analyte's concentration in three standard test samples. Following are the results, in μg/mL. Laboratory Sample 1 Sample 2 Sample 3 1 22.6 13.6 16.0 2 23.0 14.2 15.9 3 21.5 13.9 16.9 4 21.9 13.9 16.9 5 21.3 13.5 16.7 6 22.1 13.5 17.4 7 23.1 13.5 17.5 8 21.7 13.5 16.8 9 22.2 12.9 17.2 10 21.7 13.8 16.7 Determine if there are any potential outliers in Sample 1, Sample 2, or Sample 3. Use all three methods—Dixon's Q-test, Grubbs' test, and Chauvenet's criterion—and compare the results to each other. For Dixon's Q-test and Grubbs' test, use a significance level of $\alpha = 0.05$. The data in this problem are adapted from Steiner, E. H. "Planning and Analysis of Results of Collaborative Tests," in Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists: Washington, D. C., 1975. 15. Use an appropriate non-parametric test to reanalyze the data in some or all of Exercises 7.6.2 to 7.6.14. 16. The importance of between-laboratory variability on the results of an analytical method is determined by having several laboratories analyze the same sample. In one such study, seven laboratories analyzed a sample of homogenized milk for a selected aflatoxin [data from Massart, D. L.; Vandeginste, B. G. M.; Deming, S. N.; Michotte, Y.; Kaufman, L. Chemometrics: A Textbook, Elsevier: Amsterdam, 1988]. The results, in ppb, are summarized below. lab A: 1.6 2.9 3.5 4.5 2.2 lab B: 4.6 2.8 3.0 4.5 3.1 lab C: 1.2 1.9 2.9 1.1 2.9 lab D: 1.5 2.7 3.4 2.0 3.4 lab E: 6.0 3.9 4.3 5.8 4.0 lab F: 6.2 3.8 5.5 4.2 5.3 lab G: 3.3 3.8 5.5 4.9 4.5 (a) Determine if the between-laboratory variability is significantly greater than the within-laboratory variability at $\alpha = 0.05$. If the between-laboratory variability is significant, then determine the source(s) of that variability. (b) Estimate values for $\sigma_{rand}^2$ and for $\sigma_{syst}^2$.
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.06%3A_Exercises.txt
A calibration curve is one of the most important tools in analytical chemistry as it allows us to determine the concentration of an analyte in a sample by measuring the signal it generates when placed in an instrument, such as a spectrophotometer. To determine the analyte's concentration we must know the relationship between the signal we measure , $S$, and the analyte's concentration, $C_A$, which we can write as $S = k_A C_A + S_{blank} \nonumber$ where $k_A$ is the calibration curve's sensitivity and $S_{blank}$ is the signal in the absence of analyte. How do we find the best estimate for this relationship between the signal and the concentration of analyte? When a calibration curve is a straight-line, we represent it using the following mathematical model $y = \beta_0 + \beta_1 x \nonumber$ where y is the analyte’s measured signal, S, and x is the analyte’s known concentration, $C_A$, in a series of standard solutions. The constants $\beta_0$ and $\beta_1$ are, respectively, the calibration curve’s expected y-intercept and its expected slope. Because of uncertainty in our measurements, the best we can do is to estimate values for $\beta_0$ and $\beta_1$, which we represent as b0 and b1. The goal of a linear regression analysis is to determine the best estimates for b0 and b1. 08: Modeling Data The most common method for completing a linear regression makes three assumptions: 1. the difference between our experimental data and the calculated regression line is the result of indeterminate errors that affect y 2. any indeterminate errors that affect y are normally distributed 3. that indeterminate errors in y are independent of the value of x Because we assume that the indeterminate errors are the same for all standards, each standard contributes equally in our estimate of the slope and the y-intercept. For this reason the result is considered an unweighted linear regression. The second assumption generally is true because of the central limit theorem, which we considered in Chapter 5.3. The validity of the two remaining assumptions is less obvious and you should evaluate them before you accept the results of a linear regression. In particular the first assumption is always suspect because there certainly is some indeterminate error in the measurement of x. When we prepare a calibration curve, however, it is not unusual to find that the uncertainty in the signal, S, is significantly greater than the uncertainty in the analyte’s concentration, $C_A$. In such circumstances the first assumption usually is reasonable. How a Linear Regression Works To understand the logic of a linear regression consider the example in Figure $1$, which shows three data points and two possible straight-lines that might reasonably explain the data. How do we decide how well these straight-lines fit the data, and how do we determine which, if either, is the best straight-line? Let’s focus on the solid line in Figure $1$. The equation for this line is $\hat{y} = b_0 + b_1 x \nonumber$ where b0 and b1 are estimates for the y-intercept and the slope, and $\hat{y}$ is the predicted value of y for any value of x. Because we assume that all uncertainty is the result of indeterminate errors in y, the difference between y and $\hat{y}$ for each value of x is the residual error, r, in our mathematical model. $r_i = (y_i - \hat{y}_i) \nonumber$ Figure $2$ shows the residual errors for the three data points. 
The smaller the total residual error, R, which we define as $R = \sum_{i = 1}^{n} (y_i - \hat{y}_i)^2 \nonumber$ the better the fit between the straight-line and the data. In a linear regression analysis, we seek values of b0 and b1 that give the smallest total residual error. Note The reason for squaring the individual residual errors is to prevent a positive residual error from canceling out a negative residual error. You have seen this before in the equations for the sample and population standard deviations introduced in Chapter 4. You also can see from this equation why a linear regression is sometimes called the method of least squares. Finding the Slope and y-Intercept for the Regression Model Although we will not formally develop the mathematical equations for a linear regression analysis, you can find the derivations in many standard statistical texts [ See, for example, Draper, N. R.; Smith, H. Applied Regression Analysis, 3rd ed.; Wiley: New York, 1998]. The resulting equation for the slope, b1, is $b_1 = \frac {n \sum_{i = 1}^{n} x_i y_i - \sum_{i = 1}^{n} x_i \sum_{i = 1}^{n} y_i} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2} \nonumber$ and the equation for the y-intercept, b0, is $b_0 = \frac {\sum_{i = 1}^{n} y_i - b_1 \sum_{i = 1}^{n} x_i} {n} \nonumber$ Although these equations appear formidable, it is necessary only to evaluate the following four summations $\sum_{i = 1}^{n} x_i \quad \sum_{i = 1}^{n} y_i \quad \sum_{i = 1}^{n} x_i y_i \quad \sum_{i = 1}^{n} x_i^2 \nonumber$ Many calculators, spreadsheets, and other statistical software packages are capable of performing a linear regression analysis based on this model; see Section 8.5 for details on completing a linear regression analysis using R. For illustrative purposes the necessary calculations are shown in detail in the following example. Example $1$ Using the calibration data in the following table, determine the relationship between the signal, $y_i$, and the analyte's concentration, $x_i$, using an unweighted linear regression. Solution We begin by setting up a table to help us organize the calculation. $x_i$ $y_i$ $x_i y_i$ $x_i^2$ 0.000 0.00 0.000 0.000 0.100 12.36 1.236 0.010 0.200 24.83 4.966 0.040 0.300 35.91 10.773 0.090 0.400 48.79 19.516 0.160 0.500 60.42 30.210 0.250 Adding the values in each column gives $\sum_{i = 1}^{n} x_i = 1.500 \quad \sum_{i = 1}^{n} y_i = 182.31 \quad \sum_{i = 1}^{n} x_i y_i = 66.701 \quad \sum_{i = 1}^{n} x_i^2 = 0.550 \nonumber$ Substituting these values into the equations for the slope and the y-intercept gives $b_1 = \frac {(6 \times 66.701) - (1.500 \times 182.31)} {(6 \times 0.550) - (1.500)^2} = 120.706 \approx 120.71 \nonumber$ $b_0 = \frac {182.31 - (120.706 \times 1.500)} {6} = 0.209 \approx 0.21 \nonumber$ The relationship between the signal, $S$, and the analyte's concentration, $C_A$, therefore, is $S = 120.71 \times C_A + 0.21 \nonumber$ For now we keep two decimal places to match the number of decimal places in the signal. The resulting calibration curve is shown in Figure $3$. Uncertainty in the Regression Model As we see in Figure $3$, because of indeterminate errors in the signal, the regression line does not pass through the exact center of each data point. The cumulative deviation of our data from the regression line—the total residual error—is proportional to the uncertainty in the regression. 
We call this uncertainty the standard deviation about the regression, sr, which is equal to $s_r = \sqrt{\frac {\sum_{i = 1}^{n} \left( y_i - \hat{y}_i \right)^2} {n - 2}} \nonumber$ where yi is the ith experimental value, and $\hat{y}_i$ is the corresponding value predicted by the regression equation $\hat{y} = b_0 + b_1 x$. Note that the denominator indicates that our regression analysis has n – 2 degrees of freedom—we lose two degrees of freedom because we use two parameters, the slope and the y-intercept, to calculate $\hat{y}_i$. A more useful representation of the uncertainty in our regression analysis is to consider the effect of indeterminate errors on the slope, b1, and the y-intercept, b0, which we express as standard deviations. $s_{b_1} = \sqrt{\frac {n s_r^2} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2}} = \sqrt{\frac {s_r^2} {\sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber$ $s_{b_0} = \sqrt{\frac {s_r^2 \sum_{i = 1}^{n} x_i^2} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2}} = \sqrt{\frac {s_r^2 \sum_{i = 1}^{n} x_i^2} {n \sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber$ We use these standard deviations to establish confidence intervals for the expected slope, $\beta_1$, and the expected y-intercept, $\beta_0$ $\beta_1 = b_1 \pm t s_{b_1} \nonumber$ $\beta_0 = b_0 \pm t s_{b_0} \nonumber$ where we select t for a significance level of $\alpha$ and for n – 2 degrees of freedom. Note that these equations do not contain the factor of $(\sqrt{n})^{-1}$ seen in the confidence intervals for $\mu$ in Chapter 6.2; this is because the confidence interval here is based on a single regression line. Example $2$ Calculate the 95% confidence intervals for the slope and y-intercept from Example $1$. Solution We begin by calculating the standard deviation about the regression. To do this we must calculate the predicted signals, $\hat{y}_i$, using the slope and the y-intercept from Example $1$, and the squares of the residual error, $(y_i - \hat{y}_i)^2$. Using the last standard as an example, we find that the predicted signal is $\hat{y}_6 = b_0 + b_1 x_6 = 0.209 + (120.706 \times 0.500) = 60.562 \nonumber$ and that the square of the residual error is $(y_i - \hat{y}_i)^2 = (60.42 - 60.562)^2 = 0.020164 \approx 0.0202 \nonumber$ The following table displays the results for all six solutions. $x_i$ $y_i$ $\hat{y}_i$ $\left( y_i - \hat{y}_i \right)^2$ 0.000 0.00 0.209 0.0437 0.100 12.36 12.280 0.0064 0.200 24.83 24.350 0.2304 0.300 35.91 36.421 0.2611 0.400 48.79 48.491 0.0894 0.500 60.42 60.562 0.0202 Adding together the data in the last column gives the numerator in the equation for the standard deviation about the regression; thus $s_r = \sqrt{\frac {0.6512} {6 - 2}} = 0.4035 \nonumber$ Next we calculate the standard deviations for the slope and the y-intercept. The values for the summation terms are from Example $1$. $s_{b_1} = \sqrt{\frac {6 \times (0.4035)^2} {(6 \times 0.550) - (1.500)^2}} = 0.965 \nonumber$ $s_{b_0} = \sqrt{\frac {(0.4035)^2 \times 0.550} {(6 \times 0.550) - (1.500)^2}} = 0.292 \nonumber$ Finally, the 95% confidence intervals ($\alpha = 0.05$, 4 degrees of freedom) for the slope and y-intercept are $\beta_1 = b_1 \pm ts_{b_1} = 120.706 \pm (2.78 \times 0.965) = 120.7 \pm 2.7 \nonumber$ $\beta_0 = b_0 \pm ts_{b_0} = 0.209 \pm (2.78 \times 0.292) = 0.2 \pm 0.8 \nonumber$ where t(0.05, 4) from Appendix 2 is 2.78.
The standard deviation about the regression, sr, suggests that the signal, Sstd, is precise to one decimal place. For this reason we report the slope and the y-intercept to a single decimal place. Using the Regression Model to Determine a Value for x Given a Value for y Once we have our regression equation, it is easy to determine the concentration of analyte in a sample. When we use a normal calibration curve, for example, we measure the signal for our sample, Ssamp, and calculate the analyte's concentration, CA, using the regression equation. $C_A = \frac {S_{samp} - b_0} {b_1} \nonumber$ What is less obvious is how to report a confidence interval for CA that expresses the uncertainty in our analysis. To calculate a confidence interval we need to know the standard deviation in the analyte's concentration, $s_{C_A}$, which is given by the following equation $s_{C_A} = \frac {s_r} {b_1} \sqrt{\frac {1} {m} + \frac {1} {n} + \frac {\left( \overline{S}_{samp} - \overline{S}_{std} \right)^2} {(b_1)^2 \sum_{i = 1}^{n} \left( C_{std_i} - \overline{C}_{std} \right)^2}} \nonumber$ where m is the number of replicates we use to establish the sample's average signal, $\overline{S}_{samp}$, n is the number of calibration standards, $\overline{S}_{std}$ is the average signal for the calibration standards, and $C_{std_i}$ and $\overline{C}_{std}$ are the individual and the mean concentrations for the calibration standards. Knowing the value of $s_{C_A}$, the confidence interval for the analyte's concentration is $\mu_{C_A} = C_A \pm t s_{C_A} \nonumber$ where $\mu_{C_A}$ is the expected value of CA in the absence of determinate errors, and where the value of t is based on the desired level of confidence and n – 2 degrees of freedom. A close examination of these equations should convince you that we can decrease the uncertainty in the predicted concentration of analyte, $C_A$, if we increase the number of standards, $n$, if we increase the number of replicate samples that we analyze, $m$, and if the sample's average signal, $\overline{S}_{samp}$, is equal to the average signal for the standards, $\overline{S}_{std}$. When practical, you should plan your calibration curve so that Ssamp falls in the middle of the calibration curve. For more information about these regression equations see (a) Miller, J. N. Analyst 1991, 116, 3–14; (b) Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986, pp. 126-127; (c) Analytical Methods Committee "Uncertainties in concentrations estimated from calibration experiments," AMC Technical Brief, March 2006. Note The equation for the standard deviation in the analyte's concentration is written in terms of a calibration experiment. A more general form of the equation, written in terms of x and y, is given here. $s_{x} = \frac {s_r} {b_1} \sqrt{\frac {1} {m} + \frac {1} {n} + \frac {\left( \overline{Y} - \overline{y} \right)^2} {(b_1)^2 \sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber$ Example $3$ Three replicate analyses of a sample that contains an unknown concentration of analyte yield values for Ssamp of 29.32, 29.16, and 29.51 (arbitrary units). Using the results from Example $1$ and Example $2$, determine the analyte's concentration, CA, and its 95% confidence interval.
Solution The average signal, $\overline{S}_{samp}$, is 29.33, which, using the slope and the y-intercept from Example $1$, gives the analyte's concentration as $C_A = \frac {\overline{S}_{samp} - b_0} {b_1} = \frac {29.33 - 0.209} {120.706} = 0.241 \nonumber$ To calculate the standard deviation for the analyte's concentration we must determine the values for $\overline{S}_{std}$ and for $\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2$. The former is just the average signal for the calibration standards, which, using the data in Table $1$, is 30.385. Calculating $\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2$ looks formidable, but we can simplify its calculation by recognizing that this sum-of-squares is the numerator in a standard deviation equation; thus, $\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2 = (s_{C_{std}})^2 \times (n - 1) \nonumber$ where $s_{C_{std}}$ is the standard deviation for the concentration of analyte in the calibration standards. Using the data in Table $1$ we find that $s_{C_{std}}$ is 0.1871 and $\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2 = (0.1871)^2 \times (6 - 1) = 0.175 \nonumber$ Substituting known values into the equation for $s_{C_A}$ gives $s_{C_A} = \frac {0.4035} {120.706} \sqrt{\frac {1} {3} + \frac {1} {6} + \frac {(29.33 - 30.385)^2} {(120.706)^2 \times 0.175}} = 0.0024 \nonumber$ Finally, the 95% confidence interval for 4 degrees of freedom is $\mu_{C_A} = C_A \pm ts_{C_A} = 0.241 \pm (2.78 \times 0.0024) = 0.241 \pm 0.007 \nonumber$ Figure $4$ shows the calibration curve with curves showing the 95% confidence interval for CA. Evaluating a Regression Model You should never accept the result of a linear regression analysis without evaluating the validity of the model. Perhaps the simplest way to evaluate a regression analysis is to examine the residual errors. As we saw earlier, the residual error for a single calibration standard, ri, is $r_i = (y_i - \hat{y}_i) \nonumber$ If the regression model is valid, then the residual errors should be distributed randomly about an average residual error of zero, with no apparent trend toward either smaller or larger residual errors (Figure $\PageIndex{5a}$). Trends such as those in Figure $\PageIndex{5b}$ and Figure $\PageIndex{5c}$ provide evidence that at least one of the model's assumptions is incorrect. For example, a trend toward larger residual errors at higher concentrations, Figure $\PageIndex{5b}$, suggests that the indeterminate errors affecting the signal are not independent of the analyte's concentration. In Figure $\PageIndex{5c}$, the residual errors are not random, which suggests we cannot model the data using a straight-line relationship. Regression methods for the latter two cases are discussed in the following sections. Example $4$ Use your results from Exercise $1$ to construct a residual plot and explain its significance. Solution To create a residual plot, we need to calculate the residual error for each standard. The following table contains the relevant information. $x_i$ $y_i$ $\hat{y}_i$ $y_i - \hat{y}_i$ 0.000 0.000 0.0015 –0.0015 $1.55 \times 10^{-3}$ 0.050 0.0473 0.0027 $3.16 \times 10^{-3}$ 0.093 0.0949 –0.0019 $4.74 \times 10^{-3}$ 0.143 0.1417 0.0013 $6.34 \times 10^{-3}$ 0.188 0.1890 –0.0010 $7.92 \times 10^{-3}$ 0.236 0.2357 0.0003 The figure below shows a plot of the resulting residual errors. The residual errors appear random, although they do alternate in sign, and they do not show any significant dependence on the analyte's concentration.
Taken together, these observations suggest that our regression model is appropriate.
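Section 8.5 shows how to carry out a full regression analysis in R; as a preview, the short sketch below, which is not part of the original example, reproduces the residual plot for the data in Example $4$ using R's lm() and resid() functions.
x = c(0, 1.55e-3, 3.16e-3, 4.74e-3, 6.34e-3, 7.92e-3)
y = c(0.000, 0.050, 0.093, 0.143, 0.188, 0.236)
fit = lm(y ~ x) # unweighted linear regression of the signal on the concentration
plot(x, resid(fit), pch = 19, xlab = "concentration", ylab = "residual error")
abline(h = 0, lty = 2) # dashed reference line at a residual error of zero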
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.01%3A_Linear_Regression_of_a_Straight-Line_Calibration_Curve.txt
Our treatment of linear regression to this point assumes that any indeterminate errors that affect y are independent of the value of x. If this assumption is false, then we must include the variance for each value of y in our determination of the y-intercept, b0, and the slope, b1; thus $b_0 = \frac {\sum_{i = 1}^{n} w_i y_i - b_1 \sum_{i = 1}^{n} w_i x_i} {n} \nonumber$ $b_1 = \frac {n \sum_{i = 1}^{n} w_i x_i y_i - \sum_{i = 1}^{n} w_i x_i \sum_{i = 1}^{n} w_i y_i} {n \sum_{i = 1}^{n} w_i x_i^2 - \left( \sum_{i = 1}^{n} w_i x_i \right)^2} \nonumber$ where wi is a weighting factor that accounts for the variance in yi $w_i = \frac {n (s_{y_i})^{-2}} {\sum_{i = 1}^{n} (s_{y_i})^{-2}} \nonumber$ and $s_{y_i}$ is the standard deviation for yi. In a weighted linear regression, each xy-pair's contribution to the regression line is inversely proportional to the precision of yi; that is, the more precise the value of y, the greater its contribution to the regression. Example $4$ Shown here are data for an external standardization in which sstd is the standard deviation for three replicate determinations of the signal. This is the same data used in the examples in Section 8.1 with additional information about the standard deviations in the signal. $C_{std}$ (arbitrary units) $S_{std}$ (arbitrary units) $s_{std}$ 0.000 0.00 0.02 0.100 12.36 0.02 0.200 24.83 0.07 0.300 35.91 0.13 0.400 48.79 0.22 0.500 60.42 0.33 Determine the calibration curve's equation using a weighted linear regression. As you work through this example, remember that x corresponds to Cstd, and that y corresponds to Sstd. Solution We begin by setting up a table to aid in calculating the weighting factors. $C_{std}$ (arbitrary units) $S_{std}$ (arbitrary units) $s_{std}$ $(s_{y_i})^{-2}$ $w_i$ 0.000 0.00 0.02 2500.00 2.8339 0.100 12.36 0.02 2500.00 2.8339 0.200 24.83 0.07 204.08 0.2313 0.300 35.91 0.13 59.17 0.0671 0.400 48.79 0.22 20.66 0.0234 0.500 60.42 0.33 9.18 0.0104 Adding together the values in the fourth column gives $\sum_{i = 1}^{n} (s_{y_i})^{-2} = 5293.09 \nonumber$ which we use to calculate the individual weights in the last column. As a check on your calculations, the sum of the individual weights must equal the number of calibration standards, n. The sum of the entries in the last column is 6.0000, so all is well. After we calculate the individual weights, we use a second table to aid in calculating the four summation terms in the equations for the slope, $b_1$, and the y-intercept, $b_0$. $x_i$ $y_i$ $w_i$ $w_i x_i$ $w_i y_i$ $w_i x_i^2$ $w_i x_i y_i$ 0.000 0.00 2.8339 0.0000 0.0000 0.0000 0.0000 0.100 12.36 2.8339 0.2834 35.0270 0.0283 3.5027 0.200 24.83 0.2313 0.0463 5.7432 0.0093 1.1486 0.300 35.91 0.0671 0.0201 2.4096 0.0060 0.7229 0.400 48.79 0.0234 0.0094 1.1417 0.0037 0.4567 0.500 60.42 0.0104 0.0052 0.6284 0.0026 0.3142 Adding the values in the last four columns gives $\sum_{i = 1}^{n} w_i x_i = 0.3644 \quad \sum_{i = 1}^{n} w_i y_i = 44.9499 \quad \sum_{i = 1}^{n} w_i x_i^2 = 0.0499 \quad \sum_{i = 1}^{n} w_i x_i y_i = 6.1451 \nonumber$ which gives the estimated slope and the estimated y-intercept as $b_1 = \frac {(6 \times 6.1451) - (0.3644 \times 44.9499)} {(6 \times 0.0499) - (0.3644)^2} = 122.985 \nonumber$ $b_0 = \frac{44.9499 - (122.985 \times 0.3644)} {6} = 0.0224 \nonumber$ The calibration equation is $S_{std} = 122.98 \times C_{std} + 0.02 \nonumber$ Figure $1$ shows the calibration curve for the weighted regression determined here and the calibration curve for the unweighted regression from Section 8.1.
Although the two calibration curves are very similar, there are slight differences in the slope and in the y-intercept. Most notably, the y-intercept for the weighted linear regression is closer to the expected value of zero. Because the standard deviation for the signal, Sstd, is smaller for smaller concentrations of analyte, Cstd, a weighted linear regression gives more emphasis to these standards, allowing for a better estimate of the y-intercept. Equations for calculating confidence intervals for the slope, the y-intercept, and the concentration of analyte when using a weighted linear regression are not as easy to define as for an unweighted linear regression [Bonate, P. J. Anal. Chem. 1993, 65, 1367–1372]. The confidence interval for the analyte's concentration, however, is at its optimum value when the analyte's signal is near the weighted centroid, yc, of the calibration curve. $y_c = \frac {1} {n} \sum_{i = 1}^{n} w_i y_i \nonumber$ 8.03: Weighted Linear Regression With Errors in Both x and y If we remove our assumption that indeterminate errors affecting a calibration curve are present only in the signal (y), then we also must factor into the regression model the indeterminate errors that affect the analyte's concentration in the calibration standards (x). The solution for the resulting regression line is computationally more involved than that for either the unweighted or weighted regression lines. Although we will not consider the details in this textbook, you should be aware that neglecting the presence of indeterminate errors in x can bias the results of a linear regression. Note See, for example, Analytical Methods Committee, "Fitting a linear functional relationship to data with error in both variables," AMC Technical Brief, March 2002, as well as this chapter's Additional Resources.
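As a quick check on the weighting factors calculated in the weighted regression example above, the following sketch, which is not part of the original text, uses R to reproduce the values of $w_i$ and to confirm that they sum to the number of standards.
s_std = c(0.02, 0.02, 0.07, 0.13, 0.22, 0.33) # standard deviations in the signal
w = length(s_std) * (1/s_std^2) / sum(1/s_std^2) # normalized weights
round(w, 4) # 2.8339 2.8339 0.2313 0.0671 0.0234 0.0104
sum(w) # 6, the number of calibration standards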
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.02%3A_Weighted_Linear_Regression_with_Errors_in_y.txt
A straight-line regression model, despite its apparent complexity, is the simplest functional relationship between two variables. What do we do if our calibration curve is curvilinear—that is, if it is a curved-line instead of a straight-line? One approach is to try transforming the data into a straight-line. Logarithms, exponentials, reciprocals, square roots, and trigonometric functions have been used in this way. A plot of log(y) versus x is a typical example. Such transformations are not without complications, of which the most obvious is that data with a uniform variance in y will not maintain that uniform variance after it is transformed. Note It is worth noting here that the term “linear” does not mean a straight-line. A linear function may contain more than one additive term, but each such term has one and only one adjustable multiplicative parameter. The function $y = ax + bx^2 \nonumber$ is an example of a linear function because the terms x and x2 each include a single multiplicative parameter, a and b, respectively. The function $y = x^b \nonumber$ is nonlinear because b is not a multiplicative parameter; it is, instead, a power. This is why you can use linear regression to fit a polynomial equation to your data. Sometimes it is possible to transform a nonlinear function into a linear function. For example, taking the log of both sides of the nonlinear function above gives a linear function. $\log(y) = b \log(x) \nonumber$ Another approach to developing a linear regression model is to fit a polynomial equation to the data, such as $y = a + b x + c x^2$. You can use linear regression to calculate the parameters a, b, and c, although the equations are different than those for the linear regression of a straight-line. If you cannot fit your data using a single polynomial equation, it may be possible to fit separate polynomial equations to short segments of the calibration curve. The result is a single continuous calibration curve known as a spline function. The use of R for curvilinear regression is included in Chapter 8.5. Note For details about curvilinear regression, see (a) Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986; (b) Deming, S. N.; Morgan, S. L. Experimental Design: A Chemometric Approach, Elsevier: Amsterdam, 1987. The regression models in this chapter apply only to functions that contain a single dependent variable and a single independent variable. One example is the simplest form of Beer's law in which the absorbance, $A$, of a sample at a single wavelength, $\lambda$, depends upon the concentration of a single analyte, $C_A$ $A_{\lambda} = \epsilon_{\lambda, A} b C_A \nonumber$ where $\epsilon_{\lambda, A}$ is the analyte's molar absorptivity at the selected wavelength and $b$ is the pathlength through the sample. In the presence of an interferent, $I$, however, the signal may depend on the concentrations of both the analyte and the interferent $A_{\lambda} = \epsilon_{\lambda, A} b C_A + \epsilon_{\lambda, I} b C_I \nonumber$ where $\epsilon_{\lambda, I}$ is the interferent’s molar absorptivity and CI is the interferent’s concentration. This is an example of multivariable regression, which is covered in more detail in Chapter 9 when we consider the optimization of experiments where there is a single dependent variable and two or more independent variables. Note For more details on Beer's law, see Chapter 10 of Analytical Chemistry 2.1. 
In multivariate regression we have both multiple dependent variables, such as the absorbance of samples at two or more wavelengths, and multiple independent variables, such as the concentrations of two or more analytes in the samples. As discussed in Chapter 0.2, we can represent this using matrix notation $\begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & A & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times c} = \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & \epsilon b & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times n} \times \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & C & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{n \times c} \nonumber$ where there are $r$ wavelengths, $c$ samples, and $n$ analytes. Each column in the $\epsilon b$ matrix holds the $\epsilon b$ values for one of the $n$ analytes at each of the $r$ wavelengths, and each row in the $C$ matrix holds the concentrations of one of the $n$ analytes in each of the $c$ samples. We will consider this approach in more detail in Chapter 11. Note For a nice discussion of the difference between multivariable regression and multivariate regression, see Hidalgo, B.; Goodman, M. "Multivariate or Multivariable Regression," Am. J. Public Health, 2013, 103, 39-40.
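To make the matrix notation concrete, here is a small illustration in R; the numerical values are made up for this sketch and are not data from the text.
# hypothetical values: r = 3 wavelengths, n = 2 analytes, c = 4 samples
eb = matrix(c(125, 55, 240, 180, 320, 95), nrow = 3, byrow = TRUE) # the r x n matrix of eb values
C = matrix(c(0.10, 0.20, 0.30, 0.40, 0.05, 0.10, 0.15, 0.20), nrow = 2, byrow = TRUE) # the n x c matrix of concentrations
A = eb %*% C # matrix multiplication gives the r x c matrix of absorbances
A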
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.04%3A_Curvilinear_and_Multivariate_Regression.txt
In Section 8.1 we used the data in the table below to work through the details of a linear regression analysis where values of $x_i$ are the concentrations of analyte, $C_A$, in a series of standard solutions, and where values of $y_i$ are their measured signals, $S$. Let's use R to model this data using the equation for a straight-line. $y = \beta_0 + \beta_1 x \nonumber$ Table $1$: Calibration Data From Worked Example in Section 8.1. $x_i$ $y_i$ 0.000 0.00 0.100 12.36 0.200 24.83 0.300 35.91 0.400 48.79 0.500 60.42 Entering Data into R To begin, we create two objects, one that contains the concentration of the standards and one that contains their corresponding signals. conc = c(0, 0.1, 0.2, 0.3, 0.4, 0.5) signal = c(0, 12.36, 24.83, 35.91, 48.79, 60.42) Creating a Linear Model in R A linear model in R is defined using the general syntax dependent variable ~ independent variable(s) For example, the syntax for a model with the equation $y = \beta_0 + \beta_1 x$, where $\beta_0$ and $\beta_1$ are the model's adjustable parameters, is $y \sim x$. Table $2$ provides some additional examples where $A$ and $B$ are independent variables, such as the concentrations of two analytes, and $y$ is a dependent variable, such as a measured signal. Table $2$: Syntax for Selected Linear Models in R. model syntax comments on model $y = \beta_a A$ $y \sim 0 + A$ straight-line forced through (0, 0) $y = \beta_0 + \beta_a A$ $y \sim A$ straight-line with a y-intercept $y = \beta_0 + \beta_a A + \beta_b B$ $y \sim A + B$ first-order in A and B $y = \beta_0 + \beta_a A + \beta_b B + \beta_{ab} AB$ $y \sim A * B$ first-order in A and B with AB interaction $y = \beta_0 + \beta_{ab} AB$ $y \sim A:B$ AB interaction only $y = \beta_0 + \beta_a A + \beta_{aa} A^2$ $y \sim A + I(A^2)$ second-order polynomial Note The last formula in this table, $y \sim A + I(A^2)$, includes the I(), or AsIs function. One complication with writing formulas is that they use symbols that have different meanings in formulas than they have in a mathematical equation. For example, take the simple formula $y \sim A + B$ that corresponds to the model $y = \beta_0 + \beta_a A + \beta_b B$. Note that the plus sign here builds a formula that has an intercept and a term for $A$ and a term for $B$. But what if we wanted to build a model that used the sum of $A$ and $B$ as the variable? Wrapping $A+B$ inside of the I() function accomplishes this; thus $y \sim I(A + B)$ builds the model $y = \beta_0 + \beta_{a+b} (A + B)$. To create our model we use the lm() function—where lm stands for linear model—assigning the results to an object so that we can access them later. calcurve = lm(signal ~ conc) Evaluating the Linear Regression Model To evaluate the results of a linear regression we need to examine the data and the regression line, and to review a statistical summary of the model. To examine our data and the regression line, we use the plot() function, first introduced in Chapter 3, which takes the following general form plot(x, y, ...) where x and y are the objects that contain our data and the ... allow for passing optional arguments to control the plot's style. To overlay the regression curve, we use the abline() function abline(object, ...) where object is the object that contains the results of the linear regression model and the ... allow for passing optional arguments to control the line's style.
Entering the commands plot(conc, signal, pch = 19, col = "blue", cex = 2) abline(calcurve, col = "red", lty = 2, lwd = 2) creates the plot shown in Figure $1$. Note The abline() function works only with a straight-line model. To review a statistical summary of the regression model, we use the summary() function. summary(calcurve) The resulting output, which is shown below, contains three sections. Call: lm(formula = signal ~ conc) Residuals: 1 2 3 4 5 6 -0.20857 0.08086 0.48029 -0.51029 0.29914 -0.14143 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.2086 0.2919 0.715 0.514 conc 120.7057 0.9641 125.205 2.44e-08 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.4033 on 4 degrees of freedom Multiple R-squared: 0.9997, Adjusted R-squared: 0.9997 F-statistic: 1.568e+04 on 1 and 4 DF, p-value: 2.441e-08 The first section of this summary lists the residual errors. To examine a plot of the residual errors, use the command plot(calcurve, which = 1) which produces the result shown in Figure $2$. Note that R plots the residuals against the predicted (fitted) values of y instead of against the known values of x, as we did in Section 8.1; the choice of how to plot the residuals is not critical. The line in Figure $2$ is a smoothed fit of the residuals. Note The reason for including the argument which = 1 is not immediately obvious. When you use R's plot() function on an object created using lm(), the default is to create four charts that summarize the model's suitability. The first of these charts is the residual plot; thus, which = 1 limits the output to this plot. The second section of the summary provides estimates for the model's coefficients—the slope, $\beta_1$, and the y-intercept, $\beta_0$—along with their respective standard deviations (Std. Error). The column t value gives the test statistic for each coefficient and the column Pr(>|t|) gives the corresponding p-value for the following t-tests. slope: $H_0 \text{: } \beta_1 = 0 \quad H_A \text{: } \beta_1 \neq 0$ y-intercept: $H_0 \text{: } \beta_0 = 0 \quad H_A \text{: } \beta_0 \neq 0$ The results of these t-tests provide convincing evidence that the slope is not zero and no evidence that the y-intercept differs significantly from zero. The last section of the summary provides the standard deviation about the regression (residual standard error), the square of the correlation coefficient (multiple R-squared), and the result of an F-test on the model's ability to explain the variation in the y values. The value for F-statistic is the result of an F-test of the following null and alternative hypotheses. H0: the regression model does not explain the variation in y HA: the regression model does explain the variation in y The p-value reported with the F-statistic is the probability for retaining the null hypothesis. In this example, the probability is $2.4 \times 10^{-8}$, which is strong evidence for rejecting the null hypothesis and accepting the regression model. As is the case with the correlation coefficient, a small value for the probability is a likely outcome for any calibration curve, even when the model is inappropriate. The probability for retaining the null hypothesis for the data in Figure $3$, for example, is $9.0 \times 10^{-5}$. The correlation coefficient is a measure of the extent to which the regression model explains the variation in y. Values of r range from –1 to +1. The closer the correlation coefficient is to +1 or to –1, the better the model is at explaining the data.
A correlation coefficient of 0 means there is no relationship between x and y. In developing the calculations for linear regression, we did not consider the correlation coefficient. There is a reason for this. For most straight-line calibration curves the correlation coefficient is very close to +1, typically 0.99 or better. There is a tendency, however, to put too much faith in the correlation coefficient's significance, and to assume that an r greater than 0.99 means the linear regression model is appropriate. Figure $3$ provides a useful counterexample. Although the regression line has a correlation coefficient of 0.993, the data clearly is curvilinear. The take-home lesson is simple: do not fall in love with the correlation coefficient! Predicting the Uncertainty in $x$ Given $y$ Although R's base installation does not include a command for predicting the uncertainty in the independent variable, $x$, given a measured value for the dependent variable, $y$, the chemCal package does. To use this package you need to install it by entering the following command. install.packages("chemCal") Once installed, which you need to do just once, you can access the package's functions by using the library() command. library(chemCal) The command for predicting the uncertainty in CA is inverse.predict() and takes the following form for an unweighted linear regression inverse.predict(object, newdata, alpha = value) where object is the object that contains the regression model's results, newdata is an object that contains one or more replicate values for the dependent variable, and value is the numerical value for the significance level. Let's use this command to complete the calibration curve example from Section 8.1 in which we determined the concentration of analyte in a sample using three replicate analyses. First, we create an object that contains the replicate measurements of the signal rep_signal = c(29.32, 29.16, 29.51) and then we complete the computation using the following command inverse.predict(calcurve, rep_signal, alpha = 0.05) which yields the results shown here $Prediction [1] 0.2412597 $Standard Error [1] 0.002363588 $Confidence [1] 0.006562373 $Confidence Limits [1] 0.2346974 0.2478221 The analyte's concentration, CA, is given by the value $Prediction, and its standard deviation, $s_{C_A}$, is shown as $Standard Error. The value for $Confidence is the confidence interval, $\pm t s_{C_A}$, for the analyte's concentration, and $Confidence Limits provides the lower limit and upper limit for the confidence interval for CA. Using R for a Weighted Linear Regression R's command for an unweighted linear regression also allows for a weighted linear regression if we include an additional argument, weights, whose value is an object that contains the weights. lm(y ~ x, weights = object) Let's use this command to complete the weighted linear regression example in Section 8.2. First, we need to create an object that contains the weights, which in R are the reciprocals of the squares of the standard deviations in y, $(s_{y_i})^{-2}$. Using the data from the earlier example, we enter syi = c(0.02, 0.02, 0.07, 0.13, 0.22, 0.33) w = 1/syi^2 to create the object, w, that contains the weights. The commands weighted_calcurve = lm(signal ~ conc, weights = w) summary(weighted_calcurve) generate the following output. Call: lm(formula = signal ~ conc, weights = w) Weighted Residuals: 1 2 3 4 5 6 -2.223 2.571 3.676 -7.129 -1.413 -2.864 Coefficients: Estimate Std.
Error t value Pr(>|t|) (Intercept) 0.04446 0.08542 0.52 0.63 conc 122.64111 0.93590 131.04 2.03e-08 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 4.639 on 4 degrees of freedom Multiple R-squared: 0.9998, Adjusted R-squared: 0.9997 F-statistic: 1.717e+04 on 1 and 4 DF, p-value: 2.034e-08 Any difference between the results shown here and the results in Section 8.2 is the result of round-off errors in our earlier calculations. Note You may have noticed that this way of defining weights is different from that shown in Section 8.2. In deriving equations for a weighted linear regression, you can choose to normalize the sum of the weights to equal the number of points, or you can choose not to—the algorithm in R does not normalize the weights. Using R for a Curvilinear Regression As we see in this example, we can use R to model data that is not in the form of a straight-line by simply adjusting the linear model. Example $1$ Use the data in the table below to explore two models, one using a straight-line, $y = \beta_0 + \beta_1 x$, and one using a second-order polynomial, $y = \beta_0 + \beta_1 x + \beta_2 x^2$. $x_i$ $y_i$ 0.00 0.00 1.00 0.94 2.00 2.15 3.00 3.19 4.00 3.70 5.00 4.21 Solution First, we create objects to store our data. x = c(0, 1.00, 2.00, 3.00, 4.00, 5.00) y = c(0, 0.94, 2.15, 3.19, 3.70, 4.21) Next, we build our linear models for a straight-line and for a curvilinear fit to the data straight_line = lm(y ~ x) curvilinear = lm(y ~ x + I(x^2)) and plot the data and both linear models on the same plot. Because abline() only works for a straight-line, we use our curvilinear model to calculate enough values of x and y to plot the curvilinear model. Note that the coefficients for this model are stored in curvilinear$coefficients with the first value being $\beta_0$, the second value being $\beta_1$, and the third value being $\beta_2$. plot(x, y, pch = 19, col = "blue", ylim = c(0,5), xlab = "x", ylab = "y") abline(straight_line, lwd = 2, col = "blue", lty = 2) x_seq = seq(-0.5, 5.5, 0.01) y_seq = curvilinear$coefficients[1] + curvilinear$coefficients[2] * x_seq + curvilinear$coefficients[3] * x_seq^2 lines(x_seq, y_seq, lwd = 2, col = "red", lty = 3) legend(x = "topleft", legend = c("straight-line", "curvilinear"), col = c("blue", "red"), lty = c(2, 3), lwd = 2, bty = "n") The resulting plot is shown here.
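One way to decide between the two fits, which is not shown in the original example, is to compare the nested models with R's anova() function; the F-test it reports tells us whether adding the I(x^2) term gives a significantly better fit.
# compare the two nested models; a small p-value favors keeping the I(x^2) term
anova(straight_line, curvilinear)
# summary(curvilinear) also reports a t-test on the I(x^2) coefficient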
textbooks/chem/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.05%3A_Using_R_for_a_Linear_Regression_Analysis.txt